
4.2: Entropy


    Entropy is a state function that is often erroneously referred to as the 'state of disorder' of a system. Qualitatively, entropy is simply a measure of how much the energy of atoms and molecules becomes more spread out in a process, and it can be defined in terms of the statistical probabilities of a system or in terms of other thermodynamic quantities. Entropy is also the subject of the Second and Third Laws of thermodynamics, which describe the changes in entropy of the universe with respect to the system and surroundings, and the entropy of substances, respectively.

    Statistical Definition of Entropy

    Entropy is a thermodynamic quantity that is generally used to describe the course of a process: whether it is spontaneous and proceeds in the defined direction, or non-spontaneous and proceeds in the reverse direction instead. To define entropy in a statistical manner, it helps to consider a simple system such as the one in Figure \(\PageIndex{1}\): two hydrogen atoms contained in a volume \( V_1 \).

    Figure \(\PageIndex{1}\). Two hydrogen atoms in a volume \( V_1\)

    Since all the hydrogen atoms are contained within this volume, the probability of finding any one hydrogen atom in \( V_1 \) is 1. However, if we consider half the volume of this box and call it \( V_2 \), the probability of finding any one atom in this new volume is \( \frac{1}{2}\), since it could be either in \( V_2 \) or outside it. By the multiplication rule of probabilities, the probability of finding both atoms in \( V_2 \) is

    \[ \frac{1}{2} \times \frac{1}{2} =\frac{1}{4}.\]

    The probability of finding four atoms in \( V_2 \) would be

    \[ \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2} \times \frac{1}{2}= \frac{1}{16}.\]

    Therefore, the probability of finding \(N\) atoms in this volume is \( \left(\frac{1}{2}\right)^N \). Notice that the probability decreases as we increase the number of atoms.
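    A minimal numeric sketch of this result (the particle counts below are purely illustrative) shows how quickly the probability of finding all \(N\) atoms in half of the box falls off:

```python
# Probability that all N atoms happen to be in half of the box: (1/2)**N
for n_atoms in (1, 2, 4, 10, 100):
    p = 0.5 ** n_atoms
    print(f"N = {n_atoms:>3}: P(all in V2) = {p:.3g}")
```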

    If we started with volume \( V_2 \) and expanded the box to volume \( V_1 \), the atoms would eventually distribute themselves evenly because this is the most probable state. In this way, we can define the direction of spontaneous change as running from states of lower probability to states of higher probability. Therefore, entropy \(S\) can be expressed as

    \[ S=k_B \ln{\Omega} \label{1}\]

    where \(\Omega\) is the probability (the number of equivalent arrangements) and \( k_B \) is a proportionality constant. The logarithm makes sense because entropy is an extensive property that depends on the number of molecules: when \(\Omega\) increases to \( \Omega^2 \), \(S\) increases to \(2S\). Doubling the number of molecules doubles the entropy.
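    A minimal numeric sketch of this extensivity (the \(\Omega\) values below are arbitrary and purely illustrative): for two independent subsystems the \(\Omega\) values multiply, and the logarithm turns that product into a sum of entropies.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

omega_1 = 1e20       # illustrative count of arrangements for subsystem 1
omega_2 = 1e20       # illustrative count of arrangements for subsystem 2

S_1 = k_B * math.log(omega_1)
S_2 = k_B * math.log(omega_2)
S_combined = k_B * math.log(omega_1 * omega_2)  # Omega multiplies for independent systems

print(S_combined, S_1 + S_2)  # the two values agree: S is additive (extensive)
```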

    So far, we have been considering a single state for which to calculate the entropy. For a process, however, we wish to calculate the change in entropy from an initial state to a final state. If the initial state 1 has \( S_1=k_B \ln \Omega_1 \) and the final state 2 has \( S_2=k_B \ln \Omega_2 \), then

    \[ \Delta S=S_2-S_1=k_B \ln \dfrac{\Omega_2}{\Omega_1} \label{2}\]

    using the rule for subtracting logarithms. However, we wish to define \(\Omega\) in terms of a measurable quantity. Considering the expanding gas from above, we know that the probability is proportional to the volume raised to the power of the number of atoms (or molecules), \( \Omega \propto V^{N}\). Therefore,

    \[ \Delta S=S_2-S_1=k_B \ln \left(\dfrac{V_2}{V_1} \right)^N=Nk_B \ln \dfrac{V_2}{V_1} \label{3} \]

    We can express this in terms of moles of gas rather than molecules by writing the Boltzmann constant \( k_{B} \) as \( \frac{R}{N_A} \), where \(R\) is the gas constant and \( N_A \) is Avogadro's number. So for an expansion of an ideal gas at constant temperature,

    \[ \Delta S=\dfrac{N}{N_A} R \ln \dfrac{V_2}{V_1} =nR\ln \dfrac{V_2}{V_1} \label{4}\]

    because \( \frac{N}{N_A}=n \), the number of moles. This is only defined for constant temperature because entropy can change with temperature. Furthermore, since S is a state function, we do not need to specify whether this process is reversible or irreversible.
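    As a quick numerical illustration of the result above (a minimal sketch; the amount of gas and the volume ratio are illustrative choices), the isothermal doubling of the volume of one mole of an ideal gas gives an entropy increase of about \(5.76\ \mathrm{J\,K^{-1}}\):

```python
import math

R = 8.314            # gas constant, J/(mol K)

n = 1.0              # moles of ideal gas (illustrative choice)
V1, V2 = 1.0, 2.0    # initial and final volumes in any consistent units

delta_S = n * R * math.log(V2 / V1)  # ΔS = nR ln(V2/V1)
print(f"ΔS = {delta_S:.2f} J/K")     # ≈ 5.76 J/K for an isothermal doubling
```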

    Thermodynamic Definition of Entropy

    Using the statistical definition of entropy is very helpful for visualizing how processes occur. However, calculating a quantity like \(\Omega\) can be very difficult. Fortunately, entropy can also be expressed in terms of thermodynamic quantities that are easier to measure. Recalling the concept of work from the first law of thermodynamics, the heat \(q\) absorbed by an ideal gas in a reversible, isothermal expansion is

    \[ q_{rev}=nRT\ln \dfrac{V_2}{V_1} \; . \label{5}\]

    If we divide by T, we can obtain the same equation we derived above for \( \Delta S\):

    \[ \Delta S=\dfrac{q_{rev}}{T}=nR\ln \dfrac{V_2}{V_1} \;. \label{6}\]

    We must restrict this definition to a reversible path because entropy is a state function, whereas the heat absorbed depends on the path. An irreversible expansion would result in less heat being absorbed, but the entropy change would stay the same. We are then left with

    \[ \Delta S> \dfrac{q_{irrev}}{T} \]

    for an irreversible process because

    \[ \Delta S=\Delta S_{rev}=\Delta S_{irrev} .\]

    This apparent discrepancy in the entropy change between an irreversible and a reversible process becomes clear when considering the changes in entropy of the surrounding and system, as described in the second law of thermodynamics.
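    To make this concrete, consider the extreme irreversible case of a free expansion into a vacuum (this specific example is an illustration, not taken from the discussion above): no heat is absorbed at all, so \(q_{irrev}/T = 0\), while \(\Delta S = nR\ln(V_2/V_1) > 0\), and therefore \(\Delta S > q_{irrev}/T\). A minimal sketch, assuming one mole of gas doubling its volume at 298 K:

```python
import math

R = 8.314       # J/(mol K)
T = 298.0       # K, constant temperature
n = 1.0         # mol
V1, V2 = 1.0, 2.0

# State-function entropy change (same for any path between the two states)
delta_S = n * R * math.log(V2 / V1)

# Reversible isothermal path: q_rev = nRT ln(V2/V1), so q_rev/T recovers ΔS
q_rev = n * R * T * math.log(V2 / V1)

# Free expansion into vacuum (irreversible): no work done, no heat exchanged
q_irrev = 0.0

print(delta_S, q_rev / T)      # equal, as required for a state function
print(delta_S > q_irrev / T)   # True: ΔS > q_irrev/T
```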

    It is evident from experience that ice melts, iron rusts, and gases mix together. The entropic quantity we have defined is very useful for predicting whether a given reaction will occur. Remember, however, that the rate of a reaction is independent of its spontaneity: a reaction can be spontaneous yet so slow that we effectively never see it happen, such as the conversion of diamond to graphite.

    The Second Law as Energy Dispersion

    Energy of all types (in chemistry, most frequently the kinetic energy of molecules, but also the phase-change/potential energy of molecules in fusion and vaporization, as well as radiation) changes from being localized to becoming more dispersed in space if that energy is not constrained from doing so. The simplest stereotypical example is the expansion illustrated in Figure \(\PageIndex{1}\).

    The initial motional/kinetic energy (and potential energy) of the molecules in the first bulb is unchanged in such an isothermal process, but it becomes more widely distributed in the final larger volume. Further, this concept of energy dispersal equally applies to heating a system: a spreading of molecular energy from the volume of greater-motional energy (“warmer”) molecules in the surroundings to include the additional volume of a system that initially had “cooler” molecules. It is not obvious, but true, that this distribution of energy in greater space is implicit in the Gibbs free energy equation and thus in chemical reactions.

    “Entropy change is the measure of how more widely a specific quantity of molecular energy is dispersed in a process, whether isothermal gas expansion, gas or liquid mixing, reversible heating and phase change, or chemical reactions.” There are two requisites for entropy change.

    1. It is enabled by the above-described increased distribution of molecular energy.
    2. It is actualized if the process makes available a larger number of arrangements for the system’s energy, i.e., a final state that involves the most probable distribution of that energy under the new constraints.

    Thus, “information probability” is only one of the two requisites for entropy change. Some current approaches regarding “information entropy” are either misleading or truly fallacious, if they do not include explicit statements about the essential inclusion of molecular kinetic energy in their treatment of chemical reactions.

    References

    1. This literal greater spreading of molecular energy in 3-D space in an isothermal process is accompanied by occupancy of more quantum states ("energy levels") within each microstate and thus more microstates for the final macrostate (i.e., a larger \(\Omega\) in \(k_B\ln{\Omega}\)). Similarly, in any thermal process, higher-energy quantum states can be significantly occupied, thereby increasing the number of microstates in the product macrostate as measured by the Boltzmann relationship.
    2. Levine, I. N. Physical Chemistry, 6th ed.; 2009; p. 101 (toward the end of "What Is Entropy?").
    3. Chang, Raymond. Physical Chemistry for the Biosciences; University Science Books: Sausalito, CA, 2005.

    Contributors and Attributions

    • Frank L. Lambert, Professor Emeritus, Occidental College
    • Konstantin Malley (UCD)

    Carnot Cycle

    In the early 19th century, steam engines came to play an increasingly important role in industry and transportation. However, a systematic set of theories of the conversion of thermal energy to motive power by steam engines had not yet been developed. Nicolas Léonard Sadi Carnot (1796-1832), a French military engineer, published Reflections on the Motive Power of Fire in 1824. The book proposed a generalized theory of heat engines, as well as an idealized model of a thermodynamic system for a heat engine that is now known as the Carnot cycle. Carnot developed the foundation of the second law of thermodynamics, and is often described as the "Father of thermodynamics."

    The Carnot Cycle

    The Carnot cycle consists of the following four processes:

    1. A reversible isothermal gas expansion. In this process, the ideal gas in the system absorbs an amount of heat \(q_{in}\) from a heat source at a high temperature \(T_{high}\), expands, and does work on the surroundings.
    2. A reversible adiabatic gas expansion. In this process, the system is thermally insulated. The gas continues to expand and do work on the surroundings, which causes the system to cool to a lower temperature, \(T_{low}\).
    3. A reversible isothermal gas compression. In this process, the surroundings do work on the gas at \(T_{low}\), causing the system to release an amount of heat, \(q_{out}\).
    4. A reversible adiabatic gas compression. In this process, the system is thermally insulated. The surroundings continue to do work on the gas, which causes the temperature to rise back to \(T_{high}\).
    Figure \(\PageIndex{1}\): An ideal gas-piston model of the Carnot cycle. (CC BY 4.0; XiSen Hou via Hope College)

    P-V Diagram

    The P-V diagram of the Carnot cycle is shown in Figure \(\PageIndex{2}\). In isothermal processes I and III, ∆U=0 because ∆T=0. In adiabatic processes II and IV, q=0. Work, heat, ∆U, and ∆H of each process in the Carnot cycle are summarized in Table \(\PageIndex{1}\).

    Figure \(\PageIndex{2}\): A P-V diagram of the Carnot Cycle.
    Table \(\PageIndex{1}\): Work, heat, ∆U, and ∆H of each process in the Carnot cycle.

    | Process | w | q | ΔU | ΔH |
    |---|---|---|---|---|
    | I | \(-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)\) | \(nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)\) | 0 | 0 |
    | II | \(n\bar{C_{v}}(T_{low}-T_{high})\) | 0 | \(n\bar{C_{v}}(T_{low}-T_{high})\) | \(n\bar{C_{p}}(T_{low}-T_{high})\) |
    | III | \(-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)\) | \(nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)\) | 0 | 0 |
    | IV | \(n\bar{C_{v}}(T_{high}-T_{low})\) | 0 | \(n\bar{C_{v}}(T_{high}-T_{low})\) | \(n\bar{C_{p}}(T_{high}-T_{low})\) |
    | Full Cycle | \(-nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)\) | \(nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln\left(\dfrac{V_{4}}{V_{3}}\right)\) | 0 | 0 |
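    A hedged numeric check of the table (a minimal sketch: the amount of gas, the two temperatures, and the initial volumes are illustrative choices, and \(\bar{C_v} = \tfrac{3}{2}R\) is assumed for a monatomic ideal gas):

```python
import math

R = 8.314
n = 1.0
Cv = 1.5 * R                 # molar Cv, assumed monatomic ideal gas
T_high, T_low = 500.0, 300.0
V1, V2 = 1.0, 2.0            # isothermal expansion at T_high

# Volumes after the adiabatic steps follow from T*V**(R/Cv) = constant
V3 = V2 * (T_high / T_low) ** (Cv / R)
V4 = V1 * (T_high / T_low) ** (Cv / R)

# Work and heat for each process, with the sign conventions of the table
w_I   = -n * R * T_high * math.log(V2 / V1);  q_I   = -w_I
w_II  =  n * Cv * (T_low - T_high);           q_II  = 0.0
w_III = -n * R * T_low * math.log(V4 / V3);   q_III = -w_III
w_IV  =  n * Cv * (T_high - T_low);           q_IV  = 0.0

w_total = w_I + w_II + w_III + w_IV
q_total = q_I + q_II + q_III + q_IV
print(round(w_total + q_total, 6))  # ΔU over the full cycle = q + w = 0
```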

    T-S Diagram

    The T-S diagram of the Carnot cycle is shown in Figure \(\PageIndex{3}\). In isothermal processes I and III, ∆T=0. In adiabatic processes II and IV, ∆S=0 because dq=0. ∆T and ∆S of each process in the Carnot cycle are shown in Table \(\PageIndex{2}\).

    Figure \(\PageIndex{3}\): A T-S diagram of the Carnot Cycle. (CC BY 4.0; XiSen Hou via Hope College)
    Table \(\PageIndex{2}\): ∆T and ∆S of each process in the Carnot cycle.

    | Process | ΔT | ΔS |
    |---|---|---|
    | I | 0 | \(nR\ln\left(\dfrac{V_{2}}{V_{1}}\right)\) |
    | II | \(T_{low}-T_{high}\) | 0 |
    | III | 0 | \(nR\ln\left(\dfrac{V_{4}}{V_{3}}\right)\) |
    | IV | \(T_{high}-T_{low}\) | 0 |
    | Full Cycle | 0 | 0 |
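    Continuing the same illustrative sketch, the two nonzero entropy changes cancel over the full cycle, confirming that \(S\) is a state function:

```python
import math

R = 8.314
n = 1.0
Cv = 1.5 * R                 # assumed monatomic ideal gas, as before
T_high, T_low = 500.0, 300.0
V1, V2 = 1.0, 2.0
V3 = V2 * (T_high / T_low) ** (Cv / R)
V4 = V1 * (T_high / T_low) ** (Cv / R)

dS_I   = n * R * math.log(V2 / V1)   # isothermal expansion at T_high (positive)
dS_II  = 0.0                         # adiabatic, q = 0
dS_III = n * R * math.log(V4 / V3)   # isothermal compression at T_low (negative)
dS_IV  = 0.0                         # adiabatic, q = 0

print(round(dS_I + dS_II + dS_III + dS_IV, 9))  # 0 over the full cycle
```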

    Efficiency

    The Carnot cycle is the most efficient engine possible, based on the assumptions that there are no incidental wasteful processes such as friction and no conduction of heat between different parts of the engine at different temperatures. The efficiency of the Carnot engine is defined as the ratio of the net work done by the engine to the heat it absorbs from the high-temperature reservoir.

    \[\begin{align*} \text{efficiency} &=\dfrac{\text{net work done by heat engine}}{\text{heat absorbed by heat engine}} =\dfrac{-w_{sys}}{q_{high}} \\[4pt] &=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)+nRT_{low}\ln \left(\dfrac{V_{4}}{V_{3}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)} \end{align*}\]

    Since processes II (2-3) and IV (4-1) are adiabatic,

    \[\left(\dfrac{T_{2}}{T_{3}}\right)^{C_{V}/R}=\dfrac{V_{3}}{V_{2}}\]

    and

    \[\left(\dfrac{T_{1}}{T_{4}}\right)^{C_{V}/R}=\dfrac{V_{4}}{V_{1}}\]

    And since \(T_{1} = T_{2}\) and \(T_{3} = T_{4}\), the left-hand sides of these two relations are equal, so \(\dfrac{V_{3}}{V_{2}}=\dfrac{V_{4}}{V_{1}}\), or equivalently

    \[\dfrac{V_{3}}{V_{4}}=\dfrac{V_{2}}{V_{1}}\]

    Therefore,

    \[\text{efficiency}=\dfrac{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)-nRT_{low}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}{nRT_{high}\ln\left(\dfrac{V_{2}}{V_{1}}\right)}\]

    \[\boxed{\text{efficiency}=\dfrac{T_{high}-T_{low}}{T_{high}}}\]
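    As a hedged numeric check of the boxed result (continuing the illustrative values used above), the efficiency computed from the work and heat matches \((T_{high}-T_{low})/T_{high}\):

```python
import math

R = 8.314
n = 1.0
T_high, T_low = 500.0, 300.0
V1, V2 = 1.0, 2.0                                         # isothermal expansion ratio at T_high

q_high = n * R * T_high * math.log(V2 / V1)               # heat absorbed in process I
w_sys  = -n * R * (T_high - T_low) * math.log(V2 / V1)    # total work on the system over the cycle

efficiency_work = -w_sys / q_high                         # -w_sys / q_high
efficiency_temp = (T_high - T_low) / T_high

print(round(efficiency_work, 6), round(efficiency_temp, 6))  # both 0.4 for these temperatures
```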

    Summary

    The Carnot cycle has the greatest efficiency possible for an engine (although other reversible cycles can achieve the same efficiency), based on the assumptions that there are no incidental wasteful processes such as friction and no conduction of heat between different parts of the engine at different temperatures.

    Problems

    1. You are now operating a Carnot engine at 40% efficiency, which exhausts heat into a heat sink at 298 K. If you want to increase the efficiency of the engine to 65%, to what temperature would you have to raise the heat reservoir?
    2. A Carnot engine absorbed 1.0 kJ of heat at 300 K, and exhausted 400 J of heat at the end of the cycle. What is the temperature at the end of the cycle?
    3. An indoor heater operating on the Carnot cycle is warming the house up at a rate of 30 kJ/s to maintain the indoor temperature at 72 ºF. What is the power operating the heater if the outdoor temperature is 30 ºF?
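    As one hedged illustration, Problem 1 only requires rearranging the boxed efficiency formula; a minimal sketch, assuming the 298 K heat sink is held fixed, solves \( \text{efficiency} = (T_{high}-T_{low})/T_{high} \) for \(T_{high} = T_{low}/(1-\text{efficiency})\):

```python
T_low = 298.0   # K, heat-sink temperature from Problem 1

for efficiency in (0.40, 0.65):
    T_high = T_low / (1.0 - efficiency)   # from efficiency = (T_high - T_low)/T_high
    print(f"efficiency {efficiency:.0%}: T_high ≈ {T_high:.0f} K")
```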


    4.2: Entropy is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
