LECTURE 7

General Relations for a Homogeneous Substance
For simplicity we will omit the averaging bar; macroscopic quantities such as the energy $E$ and the pressure $p$ will be understood to refer to their mean values. A macrostate can be specified by 2 macroscopic variables. For example, if we specify the volume $V$ and the internal energy $E$, then the other macroscopic parameters, such as $p$ and $T$, are determined. We usually like to specify the macroscopic parameters that we can control in the lab or in our simulation, so $E$ and $V$ may not be the most convenient choice. We can pick any 2 macroscopic parameters, such as $p$ and $T$, and then determine the others (e.g., $V$ and $S$). To do so, we need to know how the other macroscopic parameters are related to the ones we specified. Let us now derive these relations. In doing so, we will define a number of useful quantities such as the enthalpy, the Helmholtz free energy, and the Gibbs free energy.

We start with the fundamental thermodynamic relation for a quasi-static infinitesimal process:

\begin{displaymath}
dQ=TdS=dE+pdV
\end{displaymath} (1)

Summary of Maxwell relations and thermodynamic functions


$\displaystyle \left(\frac{\partial T}{\partial V}\right)_S$ $\textstyle =$ $\displaystyle -\left(\frac{\partial p}
{\partial S}\right)_V$ (38)
$\displaystyle \left(\frac{\partial T}{\partial p}\right)_S$ $\textstyle =$ $\displaystyle \left(\frac{\partial V}
{\partial S}\right)_p$ (39)
$\displaystyle \left(\frac{\partial S}{\partial V}\right)_T$ $\textstyle =$ $\displaystyle \left(\frac{\partial p}{\partial T}\right)_V$ (40)
$\displaystyle -\left(\frac{\partial S}{\partial p}\right)_T$ $\textstyle =$ $\displaystyle \left(\frac{\partial V}{\partial T}\right)_p$ (41)

These are Maxwell's relations which we derived starting from the fundamental relation
\begin{displaymath}
dE=TdS-pdV
\end{displaymath} (42)

Notice that we can rewrite this as
\begin{displaymath}
dS=\frac{1}{T}dE+\frac{p}{T}dV
\end{displaymath} (43)

Since the entropy $S$ is a state function, $dS$ is an exact differential and we can write it as:
\begin{displaymath}
dS=\left(\frac{\partial S}{\partial E}\right)_V dE+\left(\frac{\partial S}
{\partial V}\right)_E dV
\end{displaymath} (44)

Comparing coefficients of $dE$ and $dV$, we find
\begin{displaymath}
\frac{1}{T}=\left(\frac{\partial S}{\partial E}\right)_V\;\;\;{\rm and}\;\;\;
p=T\left(\frac{\partial S}{\partial V}\right)_E
\end{displaymath} (45)

These are relations that we derived earlier. Recall
\begin{displaymath}
\beta=\frac{\partial \ln\Omega}{\partial E}
\end{displaymath} (46)

and
\begin{displaymath}
\beta p=\frac{\partial \ln\Omega}{\partial V}
\end{displaymath} (47)

Note that the conjugate pairs of variables

\begin{displaymath}
(T,S)\;\;\;\;{\rm and}\;\;\;\; (p,V)
\end{displaymath} (48)

appear paired in (42) and when one cross multiplies the Maxwell relations. In other words the numerator on one side is conjugate to the denominator on the other side. To obtain the correct sign, note that if the two variables with respect to which one differentiates are the same variables $S$ and $V$ which occur as differentials in (42), then the minus sign that occurs in (42) also occurs in the Maxwell relation. Any one permutation away from these particular variables introduces a change of sign. For example, consider the Maxwell relation (39) with derivatives with respect to $S$ and $p$. Switching from $p$ to $V$ implies one sign change with respect to the minus sign in (42); hence there is a plus sign in (39).
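The Maxwell relations can also be checked mechanically for any concrete model. The sketch below (my own check, assuming a Helmholtz free energy of the monatomic ideal gas form $F=-\nu RT\ln(VT^{3/2})$ plus an irrelevant constant) verifies relation (40):

```python
import sympy as sp

T, V, nu, R = sp.symbols('T V nu R', positive=True)

# Helmholtz free energy of a monatomic ideal gas, up to additive terms
# that do not affect the derivatives below (assumed model)
F = -nu * R * T * sp.log(V * T**sp.Rational(3, 2))

S = -sp.diff(F, T)   # S = -(dF/dT)_V
p = -sp.diff(F, V)   # p = -(dF/dV)_T  ->  nu*R*T/V

# Maxwell relation (40): (dS/dV)_T = (dp/dT)_V
lhs = sp.diff(S, V)
rhs = sp.diff(p, T)
assert sp.simplify(lhs - rhs) == 0
print(sp.simplify(lhs))   # nu*R/V for this model
```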

We also summarize the thermodynamic functions:

$\displaystyle E$ $\textstyle =$ $\displaystyle E(S,V)$ (49)
$\displaystyle H$ $\textstyle =$ $\displaystyle H(S,p)=E+pV$ (50)
$\displaystyle F$ $\textstyle =$ $\displaystyle F(T,V)=E-TS$ (51)
$\displaystyle G$ $\textstyle =$ $\displaystyle G(T,p)=E-TS+pV$ (52)


$\displaystyle dE$ $\textstyle =$ $\displaystyle TdS-pdV$ (53)
$\displaystyle dH$ $\textstyle =$ $\displaystyle TdS+Vdp$ (54)
$\displaystyle dF$ $\textstyle =$ $\displaystyle -SdT-pdV$ (55)
$\displaystyle dG$ $\textstyle =$ $\displaystyle -SdT+Vdp$ (56)

Notice that the conjugate variables are always paired up. The sign changes when one changes the independent variable compared to the fundamental relation (42).

Phase Transitions and the Clausius-Clapeyron Equation
Let me try to give some idea of why these thermodynamic functions are useful. We know from classical mechanics and electromagnetism that energy is a very useful concept because it is conserved, and because systems try to minimize their energy. Now we have ``generalized energies'' such as $H$, $F$, and $G$. Chemists like enthalpy because in the lab it is the pressure, rather than the volume, that is kept constant. Note that $dH=TdS+Vdp$, so if the pressure is constant ($dp=0$), then $dH=TdS=dQ$, which is why enthalpy is thought of as heat. When physicists refer to the free energy, they usually mean the Helmholtz free energy $F=E-TS$ because they can calculate it starting from the Hamiltonian. However, experimentally it is the Gibbs free energy $G=E-TS+pV$ that is relevant. Physicists often speak about minimizing the free energy. Consider water and ice. If we were to just consider minimizing the internal energy $E$, which is the sum of the kinetic and potential energies, then ice would be the lowest energy state. Water molecules have less kinetic energy in ice than in liquid water. Also, the water molecules are, on average, farther from each other in ice than in liquid water, so their potential energy of interaction is lower in ice. But ice only exists at low temperatures. Why is that? Because at high temperatures the free energy of liquid water is lower than that of ice. If we consider constant pressure, and hence the Gibbs free energy, then we want to minimize
\begin{displaymath}
G=E-TS+pV
\end{displaymath} (57)

At high temperatures the second term $-TS$ is important. Water molecules have a much higher entropy in their liquid state than in their solid state. More entropy means more microstates in phase space which improves the chances of the system being in a liquid microstate. (Just like buying more lottery tickets improves your chances of winning.) The higher entropy of the liquid offsets the fact that $E_{\rm water}>E_{\rm ice}$. At low temperatures $-TS$ is less important, so $E_{\rm water}>E_{\rm ice}$ matters and the free energy for ice is lower than that of water. The transition temperature $T_m$ between ice and water is given by
\begin{displaymath}
G_{\rm liquid}(T_m,p)=G_{\rm solid}(T_m,p)
\end{displaymath} (58)

We can actually use this relation to derive an interesting result called the Clausius-Clapeyron equation. Remember I told you that water is unusual because ice expands and because the slope of the phase boundary between ice and water is negative ($dp/dT<0$). These facts are related through the Clausius-Clapeyron equation. (See Reif 8.5 for more details.)

[Figure: phase diagram of H$_2$O (phasediagramH2O.eps)]
Let us consider the general case where a substance (like water) has 2 phases (like liquid and solid) with a first order transition between them. Along the phase-equilibrium line these two phases have equal Gibbs free energies:
\begin{displaymath}
g_1(T,p)=g_2(T,p)
\end{displaymath} (59)

Here $g_i(T,p)=G_i(T,p)/\nu$ is the Gibbs free energy per mole of phase $i$ at temperature $T$ and pressure $p$. If we move a little way along the phase boundary, then we have
\begin{displaymath}
g_1(T+dT,p+dp)=g_2(T+dT,p+dp)
\end{displaymath} (60)

Subtracting these two equations leads to the condition:
\begin{displaymath}
dg_1=dg_2
\end{displaymath} (61)

Now use (57)
\begin{displaymath}
dg=-sdT+vdp
\end{displaymath} (62)

where $s$ is the molar entropy and $v$ is the molar volume. So (61) becomes
\begin{displaymath}
-s_1dT+v_1dp=-s_2dT+v_2dp
\end{displaymath} (63)


\begin{displaymath}
(s_2-s_1)dT=(v_2-v_1)dp
\end{displaymath} (64)

or
\begin{displaymath}
\frac{dp}{dT}=\frac{\Delta s}{\Delta v}
\end{displaymath} (65)

where $\Delta s=s_2-s_1$ and $\Delta v=v_2-v_1$. This is called the Clausius-Clapeyron equation. It relates the slope of the phase boundary at a given point to the ratio of the entropy change $\Delta s$ to the volume change $\Delta v$.

Let's apply this to the water-ice transition. We know that the slope of the phase boundary $dp/dT<0$. Let phase 1 be water and let phase 2 be ice. Then $\Delta s=s_{\rm ice}-s_{\rm water}<0$ since ice has less entropy than water. Putting these 2 facts together in the Clausius-Clapeyron equation implies that we must have $\Delta v>0$. Indeed water expands on freezing and $\Delta v=v_{\rm ice}-v_{\rm water}>0$. So the unusual negative slope of the melting line means that water expands on freezing. As we mentioned earlier, it also means that you can cool down a water-ice mixture by pressurizing it and following the coexistence curve, i.e., the melting line or phase boundary.
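To put numbers on this (a rough sketch using approximate handbook values that are not given in the lecture: $L\approx 334$ kJ/kg, $\rho_{\rm water}\approx 1000$ kg/m$^3$, $\rho_{\rm ice}\approx 917$ kg/m$^3$), the Clausius-Clapeyron equation reproduces the familiar steep negative slope of the ice-water melting line:

```python
# Clausius-Clapeyron slope dp/dT = L / (T * delta_v) for melting ice.
# The material constants are approximate handbook values (assumptions).
L_fus = 334e3        # latent heat of fusion, J/kg (ice -> water)
T_m = 273.15         # melting temperature at 1 atm, K
rho_water = 1000.0   # kg/m^3
rho_ice = 917.0      # kg/m^3

# Volume change per kg on melting (water minus ice); negative for H2O
delta_v = 1.0 / rho_water - 1.0 / rho_ice

dp_dT = L_fus / (T_m * delta_v)   # Pa/K

print(f"dp/dT = {dp_dT / 1e6:.1f} MPa/K")   # about -13.5 MPa/K
```

So lowering the melting point by even one kelvin requires over a hundred atmospheres of extra pressure.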

Another example is $^3$He, which also has a melting line with a negative slope. The fact that $dp/dT<0$ means that if you increase the pressure on a mixture of liquid and solid $^3$He, the temperature will drop. This is the principle behind cooling with a Pomeranchuk cell. Unlike the case of water, where ice floats because it is less dense, solid $^3$He sinks because it is more dense than liquid $^3$He. The Clausius-Clapeyron equation then implies that $\Delta s=s_{\rm solid}-s_{\rm liquid} > 0$, i.e., solid $^3$He has more entropy than liquid $^3$He! How can this be? It turns out to be spin entropy. A $^3$He atom is a fermion with nuclear spin 1/2. The atoms in liquid $^3$He roam around and their wavefunctions overlap, so that liquid $^3$He has a Fermi sea just like electrons do. The Fermi energy is lower if the $^3$He atoms pair up with opposite spins so that two $^3$He atoms can occupy each translational energy state. However, in solid $^3$He, the atoms are centered on lattice sites and the wavefunctions do not overlap much. So the spins on different atoms in the solid are not correlated, and the molar spin entropy is $\sim R\ln 2$, which is much larger than in the liquid.

Now back to the general case. Since there is an entropy change associated with the phase transformation from phase 1 to phase 2, heat must be absorbed (or emitted). The ``latent heat of transformation'' $L_{12}$ is defined as the heat absorbed when a given amount of phase 1 is transformed to phase 2. For example, to melt a solid, you dump heat into it until it reaches the melting temperature. When it reaches the melting temperature, its temperature stays at $T_m$ even though you continue to dump in heat. It uses the absorbed heat to transform the solid into liquid. This heat is the latent heat $L_{12}$. Since the process takes place at the constant temperature $T_m$, the corresponding entropy change is simply

\begin{displaymath}
\Delta S=S_2-S_1=\frac{L_{12}}{T_m}
\end{displaymath} (66)

Thus the Clausius-Clapeyron equation (65) can be written
\begin{displaymath}
\frac{dp}{dT}=\frac{\Delta S}{\Delta V}=\frac{L_{12}}{T\Delta V}
\end{displaymath} (67)

If $V$ refers to the molar volume, then $L_{12}$ is the latent heat per mole; if $V$ refers to the volume per gram, then $L_{12}$ is the latent heat per gram. In most substances, the latent heat is used to melt the solid into a liquid. However, if you put heat into liquid $^3$He when it is on the melting line, it will form solid because solid $^3$He has more entropy than liquid $^3$He.

Examples of using the Maxwell relations
Let's give some examples where the Maxwell relations can be used. Suppose we want to calculate $(\partial E/\partial V)_T$. We start with our usual fundamental relation
\begin{displaymath}
dE=TdS-pdV
\end{displaymath} (68)

We want to replace $dS$ with $dV$. So we regard the entropy as a function of the independent variables $V$ and $T$:
\begin{displaymath}
S=S(T,V)
\end{displaymath} (69)

$dS$ is an exact differential:
\begin{displaymath}
dS=\left(\frac{\partial S}{\partial V}\right)_T dV+\left(\frac{\partial S}
{\partial T}\right)_V dT
\end{displaymath} (70)

Since we are interested in $(\partial E/\partial V)_T$ at constant temperature, $dT=0$ and
\begin{displaymath}
dS=\left(\frac{\partial S}{\partial V}\right)_T dV
\end{displaymath} (71)

Now use the Maxwell relation (40):
\begin{displaymath}
\left(\frac{\partial S}{\partial V}\right)_T=
\left(\frac{\partial p}{\partial T}\right)_V
\end{displaymath} (72)

So we have
\begin{displaymath}
dS=\left(\frac{\partial p}{\partial T}\right)_VdV
\end{displaymath} (73)

and
$\displaystyle dE$ $\textstyle =$ $\displaystyle TdS-pdV$  
  $\textstyle =$ $\displaystyle T\left(\frac{\partial p}{\partial T}\right)_VdV-pdV$ (74)

Hence
\begin{displaymath}
\left(\frac{\partial E}{\partial V}\right)_T=T\left(\frac{\partial p}
{\partial T}\right)_V-p
\end{displaymath} (75)

Let's take a moment to consider what this means physically. We know that a gas cools when it expands, and that its pressure rises when it is heated. There must be some connection between these two phenomena. Microscopically we can think of the kinetic energy of the gas molecules. Macroscopically, eq. (75) gives the relation. If we hold the volume fixed and increase the temperature, the pressure rises at a rate $(\partial p/\partial T)_V$. Related to that fact is this: if we increase the volume, the gas will cool unless we pour some heat in to maintain the temperature, and $(\partial E/\partial V)_TdV$ tells us the amount of heat needed to maintain the temperature. (Notice that for an ideal gas $(\partial E/\partial V)_T=0$.) Equation (75) expresses the fundamental relation between these two effects. Notice that we didn't need to know the microscopic interactions between the gas particles in order to deduce the relationship between the amount of heat needed to maintain a constant temperature when the gas expands, and the pressure change when the gas is heated. That's what thermodynamics does; it gives us relationships between macroscopic quantities without our having to know about the microscopics.
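As a concrete illustration (the van der Waals gas is my choice of example here, not the lecture's), applying (75) to the molar equation of state $p=RT/(v-b)-a/v^2$ isolates the attractive interaction term:

```python
import sympy as sp

T, v, R, a, b = sp.symbols('T v R a b', positive=True)

# van der Waals equation of state, molar form (illustrative model)
p = R * T / (v - b) - a / v**2

# (dE/dv)_T = T*(dp/dT)_v - p, eq. (75)
dE_dv = sp.simplify(T * sp.diff(p, T) - p)
print(dE_dv)   # a/v**2: only the attractive term survives
```

Setting $a=0$ recovers the ideal-gas result $(\partial E/\partial v)_T=0$ mentioned above.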

Aside: Notice that

\begin{displaymath}
p\neq -\left(\frac{\partial E}{\partial V}\right)_T
\end{displaymath} (76)

as one might expect. Rather
\begin{displaymath}
p=-\left(\frac{\partial E}{\partial V}\right)_T+T\left(\frac{\partial p}
{\partial T}\right)_V
\end{displaymath} (77)

However,
\begin{displaymath}
dE=TdS-pdV
\end{displaymath} (78)

implies that
\begin{displaymath}
p=-\left(\frac{\partial E}{\partial V}\right)_S
\end{displaymath} (79)

It matters what is kept constant! Recall from (25) that
\begin{displaymath}
\overline{p}=-\left.\frac{\partial F}{\partial V}\right\vert _T
\end{displaymath} (80)

Specific Heats
Consider a homogeneous substance whose volume $V$ is the only relevant external parameter. We want the relation between the molar specific heat $c_V$ at constant volume and the molar specific heat $c_p$ at constant pressure. We found this relation earlier for an ideal gas ($c_p=c_V+R$), but now we want the general relation. This is a useful relation because theoretical calculations are usually done at constant volume and experimental measurements are done at constant pressure. This is also a nice illustration of the usefulness of the Maxwell relations and other identities and definitions.

The heat capacity at constant volume is given by

\begin{displaymath}
C_V=\left(\frac{dQ}{dT}\right)_V=T\left(\frac{\partial S}{\partial T}\right)_V
\end{displaymath} (81)

and the heat capacity at constant pressure is
\begin{displaymath}
C_p=\left(\frac{dQ}{dT}\right)_p=T\left(\frac{\partial S}{\partial T}\right)_p
\end{displaymath} (82)

Let us take $T$ and $p$ as the independent variables. Then $S=S(T,p)$ and
\begin{displaymath}
dQ=TdS=T\left[\left(\frac{\partial S}{\partial T}\right)_pdT+
\left(\frac{\partial S}{\partial p}\right)_Tdp\right]
\end{displaymath} (83)

Using (82), we have
\begin{displaymath}
dQ=TdS=C_pdT+T\left(\frac{\partial S}{\partial p}\right)_Tdp
\end{displaymath} (84)

At constant pressure, $dp=0$ and we recover (82). But to calculate $C_V$, we see from (81) that $T$ and $V$ are the independent variables. So we plug
\begin{displaymath}
dp=\left(\frac{\partial p}{\partial T}\right)_VdT+\left(\frac{\partial p}
{\partial V}\right)_T dV
\end{displaymath} (85)

into (84) to obtain
\begin{displaymath}
dQ=TdS=C_pdT+T\left(\frac{\partial S}{\partial p}\right)_T\left[
\left(\frac{\partial p}{\partial T}\right)_VdT+\left(\frac{\partial p}
{\partial V}\right)_T dV\right]
\end{displaymath} (86)

Constant $V$ means that $dV=0$ and so
\begin{displaymath}
C_V=T\left(\frac{\partial S}{\partial T}\right)_V=C_p+T
\left(\frac{\partial S}{\partial p}\right)_T\left(\frac{\partial p}
{\partial T}\right)_V
\end{displaymath} (87)

This is a relation between $C_V$ and $C_p$ but it involves derivatives which are not easily measured. However we can use Maxwell's relations to write this relation in terms of quantities that are measurable. In particular (41) is
\begin{displaymath}
\left(\frac{\partial S}{\partial p}\right)_T=
-\left(\frac{\partial V}{\partial T}\right)_p
\end{displaymath} (88)

The change of volume with temperature at constant pressure is related to the ``volume coefficient of expansion'' $\alpha$ (sometimes called the coefficient of thermal expansion):
\begin{displaymath}
\alpha\equiv\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p
\end{displaymath} (89)

Thus
\begin{displaymath}
\left(\frac{\partial S}{\partial p}\right)_T=-V\alpha
\end{displaymath} (90)

The derivative $(\partial p/\partial T)_V$ is also not easy to measure since measurements at constant volume are difficult. It is easier to control $T$ and $p$. So let's write
\begin{displaymath}
dV=\left(\frac{\partial V}{\partial T}\right)_pdT +\left(\frac{\partial V}
{\partial p}\right)_T dp
\end{displaymath} (91)

For constant volume $dV=0$ and
\begin{displaymath}
0=\left(\frac{\partial V}{\partial T}\right)_pdT +\left(\frac{\partial V}
{\partial p}\right)_T dp
\end{displaymath} (92)

We can rearrange this to get an expression for $(\partial p/\partial T)_V$:
\begin{displaymath}
\left(\frac{\partial p}{\partial T}\right)_V=-\frac
{\left(\frac{\partial V}{\partial T}\right)_p}
{\left(\frac{\partial V}{\partial p}\right)_T}
\end{displaymath} (93)

Aside: This is an example of the general relation proved in Appendix 9. If we have 3 variables $x$, $y$, and $z$, two of which are independent, then we can write, for example,

\begin{displaymath}
z=z(x,y)
\end{displaymath} (94)

and
\begin{displaymath}
dz=\left(\frac{\partial z}{\partial x}\right)_ydx+\left(\frac{\partial z}
{\partial y}\right)_x dy
\end{displaymath} (95)

At constant $z$ we have $dz=0$ and
\begin{displaymath}
0=\left(\frac{\partial z}{\partial x}\right)_ydx+\left(\frac{\partial z}
{\partial y}\right)_x dy
\end{displaymath} (96)

Thus
\begin{displaymath}
\frac{dx}{dy}=-\frac{(\partial z/\partial y)_x}{(\partial z/\partial x)_y}
\end{displaymath} (97)

or, since $z$ was kept constant
\begin{displaymath}
\left(\frac{\partial x}{\partial y}\right)_z=
-\frac{(\partial z/\partial y)_x}{(\partial z/\partial x)_y}
\end{displaymath} (98)

This sort of relation between partial derivatives is used extensively in thermodynamics.
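As a quick concrete check of (98) (my own sketch, using the ideal gas as the example), take $(x,y,z)=(p,T,V)$:

```python
import sympy as sp

T, p, nu, R = sp.symbols('T p nu R', positive=True)

# Ideal gas with V regarded as a function of (T, p)
V = nu * R * T / p

# Right-hand side of (98) with (x, y, z) = (p, T, V):
# -(dV/dT)_p / (dV/dp)_T
rhs = sp.simplify(-sp.diff(V, T) / sp.diff(V, p))

# Direct left-hand side: (dp/dT)_V from p = nu*R*T/W, then W -> V(T, p)
W = sp.symbols('W', positive=True)   # stands for the fixed volume
lhs = sp.diff(nu * R * T / W, T).subs(W, V)

assert sp.simplify(lhs - rhs) == 0
print(rhs)   # p/T for the ideal gas
```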

Returning to (93), we note that the numerator is related to the expansion coefficient $\alpha$. The denominator measures the change in the volume of the substance with increasing pressure at constant temperature. This change will be negative, since the volume decreases with increasing pressure. So we can define the ``isothermal compressibility'' of the substance:

\begin{displaymath}
\kappa\equiv -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T
\end{displaymath} (99)

The compressibility is a measure of how squishy the substance is. Hence (93) becomes
\begin{displaymath}
\left(\frac{\partial p}{\partial T}\right)_V=-\frac
{\left(\frac{\partial V}{\partial T}\right)_p}
{\left(\frac{\partial V}{\partial p}\right)_T}
=-\frac{V\alpha}{-V\kappa}=\frac{\alpha}{\kappa}
\end{displaymath} (100)

Plugging (90) and (100) into (87) yields
\begin{displaymath}
C_V=C_p+T
\left(\frac{\partial S}{\partial p}\right)_T\left(\frac{\partial p}
{\partial T}\right)_V=C_p+T(-V\alpha)\left(\frac{\alpha}{\kappa}\right)
\end{displaymath} (101)

or
\begin{displaymath}
C_p-C_V=VT\frac{\alpha^2}{\kappa}
\end{displaymath} (102)

Let's test this formula on the simple case of an ideal gas. We start with the equation of state:

\begin{displaymath}
pV=\nu RT
\end{displaymath} (103)

We need to calculate the expansion coefficient. For constant $p$
\begin{displaymath}
\left(\frac{\partial V}{\partial T}\right)_p=\frac{\nu R}{p}
\end{displaymath} (104)

Hence
\begin{displaymath}
\alpha=\frac{1}{V}\left(\frac{\partial V}{\partial T}\right)_p
=\frac{1}{V}\left(\frac{\nu R}{p}\right)=\frac{\nu R}{\nu RT}=\frac{1}{T}
\end{displaymath} (105)

Next we calculate the compressibility $\kappa$. At constant temperature the equation of state yields
\begin{displaymath}
pdV+Vdp=0
\end{displaymath} (106)

Hence
\begin{displaymath}
\left(\frac{\partial V}{\partial p}\right)_T=-\frac{V}{p}
\end{displaymath} (107)

and
\begin{displaymath}
\kappa=-\frac{1}{V}\left(-\frac{V}{p}\right)=\frac{1}{p}
\end{displaymath} (108)

Thus (102) becomes
\begin{displaymath}
C_p-C_V=VT\frac{\alpha^2}{\kappa}
=VT\frac{(1/T)^2}{1/p}=\frac{Vp}{T}=\nu R
\end{displaymath} (109)

or, per mole,
\begin{displaymath}
c_p-c_V=R
\end{displaymath} (110)

which agrees with our previous result.
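The whole ideal-gas chain, from the equation of state through $\alpha$ and $\kappa$ to $c_p-c_V=R$, can be verified symbolically in a few lines (a sketch of the algebra above, nothing new):

```python
import sympy as sp

p, T, nu, R = sp.symbols('p T nu R', positive=True)

Vg = nu * R * T / p                          # V(T, p) from p*V = nu*R*T

alpha = sp.simplify(sp.diff(Vg, T) / Vg)     # (1/V)(dV/dT)_p
kappa = sp.simplify(-sp.diff(Vg, p) / Vg)    # -(1/V)(dV/dp)_T

assert sp.simplify(alpha - 1 / T) == 0       # eq. (105)
assert sp.simplify(kappa - 1 / p) == 0       # eq. (108)

# Per mole, v = R*T/p, so c_p - c_V = v*T*alpha**2/kappa
diff_c = sp.simplify((R * T / p) * T * alpha**2 / kappa)
print(diff_c)   # R, in agreement with (110)
```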

Limiting properties of the specific heat as $T\rightarrow 0$
The third law of thermodynamics asserts that as the temperature approaches absolute zero, the entropy $S$ of a system smoothly approaches some limiting constant $S_o$, independent of all parameters of the system. This is just a statement that the number of states at low temperatures is very small. In the case of a nondegenerate ground state, there is just one state and $S(T=0)=0$. So, in general,
\begin{displaymath}
{\rm as}\;\; T\rightarrow 0,\;\;\;\; S\rightarrow S_o
\end{displaymath} (111)

This implies that the derivative $(\partial S/\partial T)$ remains finite as $T\rightarrow 0$. In other words, it does not go to infinity. (Technically speaking, the derivatives appearing in (81) and (82) remain finite as $T\rightarrow 0$.) So one can conclude that the heat capacity goes to 0 as $T\rightarrow 0$:
\begin{displaymath}
C=T\frac{\partial S}{\partial T}\rightarrow 0\;\;\;{\rm as}\;\;\;
T\rightarrow 0
\end{displaymath} (112)

or more precisely,
\begin{displaymath}
{\rm as}\;\;T\rightarrow 0,\;\;\;\;C_V\rightarrow 0\;\;\;{\rm and}
\;\;\; C_p\rightarrow 0
\end{displaymath} (113)

The fact that the heat capacity goes to 0 at zero temperature merely reflects the fact that the system settles into its ground state as $T\rightarrow 0$. If we recall that
\begin{displaymath}
C_V=\left(\frac{\partial E}{\partial T}\right)_V
\end{displaymath} (114)

then reducing the temperature further will not change the energy since the energy has bottomed out. Notice that we need $C_V(T)\rightarrow 0$ as $T\rightarrow 0$ in order to guarantee proper convergence of the integral in
\begin{displaymath}
S(T)-S(0)=\int_0^{T}\frac{C_V(T^{\prime})}{T^{\prime}}dT^{\prime}
\end{displaymath} (115)

The entropy difference on the left must be finite, so the integral must also be finite.
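A minimal sketch of this convergence (assuming a Debye-like low-temperature form $C_V=AT^3$, which is a standard model but is not derived in this lecture):

```python
import sympy as sp

A, T, Tp = sp.symbols('A T Tp', positive=True)

# With C_V(T') = A*T'**3, the integrand C_V/T' = A*T'**2 is finite at
# T' = 0 and the entropy integral (115) converges:
S_diff = sp.integrate(A * Tp**3 / Tp, (Tp, 0, T))
print(S_diff)   # A*T**3/3, finite as the third law requires

# By contrast, a constant C_V would make the integral diverge
# logarithmically at the lower limit T' = 0.
```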




Clare Yu 2007-04-25