
LECTURE 5

Temperature Scales

The equation of state of any physical system provides a means for measuring the temperature $T$. We just need to know the relationship between the temperature $T$ and measurable quantities such as pressure and volume.

The standard method of implementing a practical definition of temperature uses an ideal gas and a ``constant volume gas thermometer.'' When the constant volume of an ideal gas thermometer is brought into thermal contact with system A, the pressure $p_A$ is recorded. The thermometer is then brought into contact with a second system B and a second pressure $p_B$ is measured and recorded after equilibrium has been reached. Then, from the equation of state, since $V$ is constant,

\begin{displaymath}
\frac{p_A}{p_B}=\frac{T_A}{T_B}
\end{displaymath} (1)

If we want to assign numbers to the temperature, we must choose one arbitrary reference point. The temperature scale used by physicists is the Kelvin scale where the triple point of water is assigned the value 273.16 Kelvin. The triple point is the temperature and pressure at which ice, water, and steam coexist in equilibrium. If we let $p_{\rm ref}$ be the pressure of our constant volume ideal gas thermometer when in contact with a system at the temperature of the triple point of water, then any other temperature is given by
\begin{displaymath}
T_A=273.16\left(\frac{p_A}{p_{\rm ref}}\right)
\end{displaymath} (2)
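
Equation (2) can be sketched in code; the pressure values below are made up purely for illustration.

```python
T_TRIPLE = 273.16  # K, temperature assigned to the triple point of water

def gas_thermometer_temperature(p, p_ref):
    """Temperature in kelvin from Eq. (2): T = 273.16 (p / p_ref), where
    p_ref is the pressure of the constant-volume ideal gas thermometer
    when in contact with a system at the triple point of water."""
    return T_TRIPLE * p / p_ref

# Illustrative (made-up) pressures: the measured pressure is 10%
# above the triple-point reference pressure.
print(gas_thermometer_temperature(1.10e5, 1.00e5))  # ~300.476 K
```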

Another temperature scale sometimes used is the Celsius scale which is related to the Kelvin scale by
\begin{displaymath}
T_C=T_K-273.15 \quad\quad{\rm degrees\;Celsius}
\end{displaymath} (3)

So $T_{K}=0$ K corresponds to $-273.15^o$ C. The sizes of the Celsius and Kelvin degrees are the same, but 1.8 Fahrenheit degrees equal one Celsius or Kelvin degree. The freezing point of water is 0$^o$ C, which is 32$^o$ F.
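
These conversions can be collected into a short sketch; the Fahrenheit relation uses the 1.8-degrees-per-Celsius-degree ratio and the 32$^o$ F freezing point quoted above.

```python
def kelvin_to_celsius(t_k):
    # Eq. (3): T_C = T_K - 273.15
    return t_k - 273.15

def celsius_to_fahrenheit(t_c):
    # 1.8 Fahrenheit degrees per Celsius (or Kelvin) degree;
    # water freezes at 0 C = 32 F
    return 1.8 * t_c + 32.0

print(kelvin_to_celsius(0.0))      # -273.15 (absolute zero in Celsius)
print(kelvin_to_celsius(273.16))   # ~0.01 (triple point of water)
print(celsius_to_fahrenheit(0.0))  # 32.0 (freezing point of water)
```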

Phase diagram of water:

\epsfxsize=3.0 true in \epsfbox{phasediagramH2O.eps}

In reality, H$_2$O has a more complicated phase diagram. Solid H$_2$O has perhaps as many as 8 or 9 known phases, called ice I, ice II, etc. Notice the negative slope of the phase boundary between liquid and solid (ice). This is very unusual; most substances have a positive slope. The negative slope implies that if we have an equilibrium mixture of water and ice (i.e., if we are on the phase boundary) and we pressurize it, then the mixture cools down! $^3$He also has a negatively sloped phase boundary between solid and liquid, and by using this technique to cool liquid $^3$He, superfluid $^3$He was discovered in 1972 by Osheroff, Richardson, and Lee.
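
The connection between the slope of the phase boundary and cooling under pressure can be made quantitative with the Clausius-Clapeyron relation (quoted here without derivation), which gives the slope of a coexistence curve in terms of the latent heat $L$ and the volume change $\Delta V$ across the transition:

```latex
\begin{displaymath}
\frac{dp}{dT}=\frac{L}{T\,\Delta V}
\end{displaymath}
```

For melting ice, $L>0$ while $\Delta V=V_{\rm liquid}-V_{\rm solid}<0$ (ice is less dense than liquid water), so $dp/dT<0$ along the melting curve. Raising the pressure therefore lowers the coexistence temperature, and a water-ice mixture held on the boundary cools as it is pressurized.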

Once the temperature scale has been fixed by the triple point of water, we can determine the gas constant $R$ and Boltzmann's constant:

\begin{displaymath}
R=(8.3143\pm 0.0012) \;\;{\rm joules}\;\;{\rm mole}^{-1}\;\;{\rm deg}^{-1}
\end{displaymath} (4)

and
\begin{displaymath}
k_B=(1.38054\pm 0.00018)\times\; 10^{-16}\;\;{\rm ergs}\;\;{\rm degree}^{-1}
\end{displaymath} (5)

Summary of Thermodynamic Relations
Thermodynamic Laws
  0. Zeroth law: if two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
  1. First law: an equilibrium macrostate can be characterized by an internal energy $\overline{E}$, and in an infinitesimal process $dQ=d\overline{E}+dW$.
  2. Second law: an equilibrium macrostate can be characterized by an entropy $S$; in a quasi-static infinitesimal process $dQ=T\;dS$, and the entropy of a thermally isolated system never decreases.
  3. Third law: as $T\rightarrow 0$, the entropy approaches a definite value $S_o$.

Notice that the above four laws are macroscopic in content. They refer to $\overline{E}$, $S$, and $T$, which describe the macroscopic state of the system. But nowhere do they make explicit reference to the microscopic nature of the system, e.g., the molecules and their interactions.

Statistical Relations
  1. If $\Omega$ is the number of accessible microstates with energy between $E$ and $E+\delta E$, then
    \begin{displaymath}
S\equiv k_B\ln\Omega
\end{displaymath} (11)

    where $k_B$ is Boltzmann's constant and
    \begin{displaymath}
\beta\equiv\frac{1}{k_B T}\equiv\frac{\partial\ln\Omega}{\partial E}
\end{displaymath} (12)

    or
    \begin{displaymath}
\frac{1}{T}=\frac{\partial S}{\partial E}
\end{displaymath} (13)

  2. The generalized force is defined by
    \begin{displaymath}
\overline{X}_{\alpha}=\frac{1}{\beta}\frac{\partial\ln\Omega}
{\partial x_{\alpha}}
=T\frac{\partial S}{\partial x_{\alpha}}
\end{displaymath} (14)

    or
    \begin{displaymath}
\frac{\partial S}{\partial x_{\alpha}}=\frac{\overline{X}_{\alpha}}{T}
\end{displaymath} (15)

    and if $x_{\alpha}=V$, then
    \begin{displaymath}
\overline{p}=T\frac{\partial S}{\partial V}
\end{displaymath} (16)

  3. Equilibrium criteria between two interacting systems
    \begin{displaymath}
\beta=\beta^{\prime}
\end{displaymath} (17)


    \begin{displaymath}
\overline{X}_{\alpha}=\overline{X}_{\alpha}^{\prime}
\end{displaymath} (18)

    For example
    \begin{displaymath}
\overline{p}=\overline{p}^{\prime}
\end{displaymath} (19)

Specific Heat
An important concept in both thermodynamics and statistical mechanics is heat capacity or specific heat. These are related but not the same. Consider a specific physical system. If we add heat $dQ$ to the system while maintaining the external parameter $y$ constant, then the temperature will increase by $dT$. We define the heat capacity at constant $y$, denoted $C_y$, by
\begin{displaymath}
C_y\equiv \left. \frac{dQ}{dT}\right\vert _y
\end{displaymath} (20)

Qualitatively one can think of the heat capacity as a measure of the ability of the system to hold heat. The more heat it can hold, the higher its heat capacity. In parts of the country where it's cold in the winter, there is often a radiator in each room which has hot water circulating through it to heat the room. The radiator is usually made of metal and it's big and heavy so that it can hold lots of heat. It has a large heat capacity. If the radiator is small and doesn't have much mass, then it has a small heat capacity; it doesn't hold much heat and the room is cold.

Historically, the calorie was defined as the amount of heat needed to raise the temperature of one gram of water by one degree Celsius (from 14.5 to 15.5$^o$ C) at 1 atmosphere of pressure. (In the old days people didn't realize that heat was a form of energy.)

\begin{displaymath}
1\;\;{\rm calorie}=4.1840\;\;{\rm joules}
\end{displaymath} (21)

The heat capacity reflects the number of microscopic degrees of freedom with energy $E$. If there are a lot of degrees of freedom at energy $E$, then the heat absorbed goes into exciting these degrees of freedom without changing the mean energy or the temperature much. In this case, the heat capacity is large. If there aren't many degrees of freedom per unit energy, then a given amount of heat will excite degrees of freedom over a broader range of energies, i.e., it will raise the mean energy more, and the heat capacity is smaller. In other words, in

\begin{displaymath}
C_y\sim\frac{\Delta Q}{\Delta T}
\end{displaymath} (22)

for a given $\Delta Q$, if $\Delta T$ is large, $C_y$ is small. But if $\Delta T$ is small, $C_y$ is large.

The heat capacity depends on the quantity of matter in the system. It's nice to have a quantity that just depends on the substance itself and not on how much of it is present. That's what the specific heat is. The specific heat is obtained by dividing the heat capacity by the particle number to obtain the specific heat per particle $c_y=C_y/N$; by dividing by the mole number $\nu$ to obtain the molar specific heat $c^{\prime}_y=C_y/\nu$; or by dividing by the mass to obtain the specific heat per kilogram.
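
A small numerical illustration of these normalizations; the heat capacity and mole number below are made-up values.

```python
N_A = 6.022e23  # Avogadro's number, particles per mole

C = 50.0   # J/K, heat capacity of the whole sample (extensive, made up)
nu = 2.0   # number of moles in the sample (made up)

c_molar = C / nu              # molar specific heat c'_y = C_y / nu
c_particle = C / (nu * N_A)   # specific heat per particle c_y = C_y / N

print(c_molar)     # 25.0 J mol^-1 K^-1
print(c_particle)  # ~4.15e-23 J/K per particle
```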

Usually the parameter maintained constant is the volume $V$. If this is true, then in simple systems no mechanical work is done on, or by, the system when heat is added. Theoretical calculations usually keep $V$ constant and refer to $C_V$. However, in the laboratory, it is easier to keep the pressure $p$ constant and measure $C_p$. If the volume is fixed, then all the heat absorbed by a system goes into increasing its internal energy and temperature. But if the pressure is kept constant, the volume can change and work can be done; as a result the heat goes into changing the internal energy and into work:

\begin{displaymath}
dQ=d\overline{E}+\overline{p}dV
\end{displaymath} (23)

So the temperature increases less when the pressure is kept constant and we expect
\begin{displaymath}
c_p>c_V
\end{displaymath} (24)
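
This inequality can be checked against the standard monatomic ideal gas results $c_V=\frac{3}{2}R$ and $c_p=c_V+R$, which are quoted here without derivation:

```python
R = 8.3143  # J mol^-1 K^-1, gas constant from Eq. (4)

# Molar specific heats of a monatomic ideal gas (standard results,
# not derived in this section)
c_V = 1.5 * R      # constant volume
c_p = c_V + R      # constant pressure

print(c_V)         # ~12.47 J mol^-1 K^-1
print(c_p)         # ~20.79 J mol^-1 K^-1
print(c_p > c_V)   # True, consistent with Eq. (24)
```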

In general,

\begin{displaymath}
dQ=T\;dS
\end{displaymath} (25)

so
\begin{displaymath}
C_y=T\left.\frac{\partial S}{\partial T}\right\vert _y
\end{displaymath} (26)

And if the external parameters are kept fixed so that no mechanical work is done, then
\begin{displaymath}
dE=dQ
\end{displaymath} (27)

and
\begin{displaymath}
C_V=\left.\frac{\partial Q}{\partial T}\right\vert _V=\left.\frac{\partial E}
{\partial T}\right\vert _V
\end{displaymath} (28)

We can use measurements of the heat capacity $C_y(T)$ to determine entropy differences, since
\begin{displaymath}
dS=\frac{dQ}{T}=\frac{C_y(T)\;dT}{T}
\end{displaymath} (29)

implies that the entropy difference between the initial and final states of the system is given by
\begin{displaymath}
S_f-S_i=\int^f_i dS=\int^f_i\frac{C_y(T)\;dT}{T}
=\int^{T_f}_{T_i}\frac{C_y(T)\;dT}{T}
\end{displaymath} (30)

If $C_V(T)=C_V$ is a constant independent of temperature, then
\begin{displaymath}
S_f-S_i=\int^{T_f}_{T_i}\frac{C_V(T)\;dT}{T}=C_V\int^{T_f}_{T_i}\frac{dT}{T}=
C_V\ln\left(\frac{T_f}{T_i}\right)
\end{displaymath} (31)
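
As a numerical sketch of Eq. (31), take the made-up example of one mole of monatomic ideal gas, with the standard (not derived here) constant heat capacity $C_V=\frac{3}{2}R$, heated at constant volume from 300 K to 600 K:

```python
import math

def entropy_change(C, T_i, T_f):
    """S_f - S_i for a temperature-independent heat capacity, Eq. (31)."""
    return C * math.log(T_f / T_i)

R = 8.3143        # J mol^-1 K^-1
C_V = 1.5 * R     # one mole of a monatomic ideal gas
dS = entropy_change(C_V, 300.0, 600.0)

print(dS)  # C_V ln 2, about 8.64 J/K
```

Note that doubling the temperature always adds the same entropy $C_V\ln 2$ regardless of the starting temperature, because only the ratio $T_f/T_i$ enters.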

Note that entropy is solely a function of the state, so that $dS$ is an exact differential and is path independent. So we can pick a convenient path for doing the integral. In particular, we envision a quasi-static process in going from the initial to the final state. That way the system is always arbitrarily close to equilibrium, and the temperature and heat capacity are well defined at all points along the way.

Recall that energy is defined up to an arbitrary additive constant, so that the zero of the energy can be put anywhere that is convenient. Unlike the energy, the absolute value of the entropy of a state can be defined because the third law of thermodynamics tells us that as $T\rightarrow 0$, the entropy $S$ approaches a definite value $S_o$ which is usually 0.

A simple example is a system of $N$ magnetic atoms, each with spin 1/2. If this system is known to be ferromagnetic at sufficiently low temperatures, all spins must be completely aligned as $T\rightarrow 0$, so that the number of accessible states $\Omega\rightarrow 1$ and $S=k_B\ln\Omega\rightarrow 0$. But at high temperatures the spins must be completely randomly oriented, so that $\Omega=2^N$ and $S=k_B\ln(2^N)=Nk_B\ln 2$. It follows that this system must have a heat capacity $C(T)$ which satisfies

\begin{displaymath}
S(T=\infty)-S(T=0)=\int^{\infty}_{0}
\frac{C(T^{\prime})\;dT^{\prime}}{T^{\prime}}=Nk_B\ln 2
\end{displaymath} (32)

This is valid irrespective of the details of the interactions which bring about the ferromagnetic behavior and irrespective of the temperature dependence of $C(T)$.
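
The sum rule (32) can be verified numerically for an exactly solvable toy model: $N$ independent two-level systems (the Schottky heat capacity). This is a different microscopic model from an interacting ferromagnet, but it has the same entropy limits ($\Omega=1$ at $T=0$ and $\Omega=2^N$ at $T=\infty$), so it must saturate the same integral.

```python
import math

def C_over_T(T, delta=1.0, N=1.0, kB=1.0):
    """C(T)/T for N independent two-level systems with level splitting
    delta (the Schottky heat capacity), in units where kB = delta = 1."""
    x = delta / (kB * T)
    e = math.exp(-x)  # writing it this way avoids overflow at low T
    return N * kB * x * x * e / (1.0 + e) ** 2 / T

# Trapezoidal integration of C(T')/T' from T ~ 0 up to a large cutoff.
h = 1e-3
Ts = [h * (i + 1) for i in range(200000)]  # T from 0.001 to 200
vals = [C_over_T(T) for T in Ts]
integral = sum((vals[i] + vals[i + 1]) * h / 2 for i in range(len(vals) - 1))

print(integral)     # close to N k_B ln 2
print(math.log(2))  # 0.693147...
```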

Extensive and Intensive Parameters
The macroscopic parameters specifying the macrostate of a homogeneous system can be classified into two types. An extensive parameter is proportional to the size of the system. The total mass and the total volume of a system are extensive parameters. An intensive parameter is unchanged if the size or the mass of the system doubles. Temperature is an intensive parameter. The internal energy $\overline{E}$ is an extensive quantity; if we put a system with energy $\overline{E}_1$ together with a system with energy $\overline{E}_2$, then the total internal energy is $\overline{E}=\overline{E}_1+\overline{E}_2$. Heat capacity is an extensive parameter ($C\sim\Delta \overline{E}/\Delta T$) but specific heat is an intensive parameter. The ratio of two extensive quantities is an intensive quantity since the size dependence cancels out. The entropy is also an extensive quantity ($\Delta S=\int dQ/T$).

When dealing with extensive quantities such as the entropy $S$, it is often convenient to talk in terms of the quantity per mole $S/\nu$ which is an intensive parameter independent of the size of the system. Sometimes the quantity per mole is denoted by a small letter, e.g., the entropy per mole $s=S/\nu$.




Clare Yu 2009-01-19