Physics 309 Lecture 8

1 Review
Collect HW, return HW, give quiz 3.
Review: We started with the fundamental assumption of statistical mechanics: in an isolated
system in thermal equilibrium, all accessible microstates are equally probable, and used it to derive
the second law of thermodynamics: any large isolated system in thermal equilibrium will be found
in the macrostate with greatest multiplicity. We defined entropy S = k ln Ω giving another form
of the 2nd law: any large isolated system in thermal equilibrium will be found in the macrostate
with greatest entropy.
We derived (sort-of) the entropy for a monatomic ideal gas, and got the Sackur-Tetrode equation:
$$S = Nk\left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m U}{3Nh^2}\right)^{3/2}\right) + \frac{5}{2}\right] \tag{1}$$
We puzzled about Gibbs’ paradox and concluded that it was down to the fact that molecules
of a gas really are indistinguishable from one another.
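Here is a quick Python sanity check of eq. (1); the choice of one mole of helium at 300 K and 101325 Pa is just an illustrative example. The result should come out near 126 J/K, close to helium's measured molar entropy.

```python
import numpy as np

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s

def sackur_tetrode(N, V, U, m):
    """Entropy of a monatomic ideal gas, eq. (1), in J/K."""
    return N * k * (np.log((V / N) * (4 * np.pi * m * U / (3 * N * h**2))**1.5) + 2.5)

# One mole of helium at T = 300 K, P = 101325 Pa (illustrative values)
N = 6.02214076e23                  # Avogadro's number
T, P = 300.0, 101325.0
V = N * k * T / P                  # volume from the ideal gas law
U = 1.5 * N * k * T                # energy from equipartition, U = (3/2)NkT
m = 4.0026 * 1.66054e-27           # mass of one helium atom, kg

print(sackur_tetrode(N, V, U, m))  # roughly 126 J/K
```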
2 Entropy and temperature
We’ve been progressively getting better at understanding of temperature: starting with “the thing
that a thermometer measures” to the equipartition definition: “something to do with the energy
in quadratic degrees of freedom.” Let’s see if we can go one better. We’ll work it through with
our old friend the weakly coupled Einstein solids A and B, but the result (and the argument) are
completely general.
Recall, there are $N_A$ oscillators with $q_A$ units of energy in A, and $N_B$ oscillators with $q_B$ in B, and the total is fixed: $q = q_A + q_B$. The energy of each is proportional to $q_{A,B}$. We can graph the entropy of each as a function of $q_A = q - q_B$, along with the total entropy.
The total entropy takes the maximum possible value at equilibrium, so the derivative is zero:
$$\frac{\partial S_{\text{total}}}{\partial q_A} = 0 \tag{2}$$
$$\frac{\partial S_{\text{total}}}{\partial U_A} = 0 \tag{3}$$
We wrote a partial derivative in anticipation of S depending on more than one variable, which it could if we let the total energy q vary, for example. But $S_{\text{total}} = S_A + S_B$, so
$$\frac{\partial S_A}{\partial U_A} + \frac{\partial S_B}{\partial U_A} = 0 \tag{4}$$
and $U_{\text{total}} = U_A + U_B$ with $U_{\text{total}}$ fixed, so $dU_A = -dU_B$ and
$$\frac{\partial S_A}{\partial U_A} = \frac{\partial S_B}{\partial U_B} \tag{5}$$
So the thing that's the same for the two systems when they reach thermal equilibrium is the slope of their entropy-versus-energy graphs. But the thing that's the same for two systems at thermal equilibrium is, by definition, temperature. Think about some points away from equilibrium and what happens to $S_{\text{total}}$ when we move a little away from them.
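A minimal numerical sketch of this argument (the solid sizes here are made up for illustration): build the entropy of each solid from its multiplicity, locate the maximum of $S_{\text{total}}$, and check that the two entropy-versus-energy slopes agree there, as eq. (5) predicts.

```python
import numpy as np
from scipy.special import gammaln   # log-gamma avoids overflowing factorials

def ln_omega(N, q):
    """ln of the Einstein-solid multiplicity, (q + N - 1 choose q)."""
    return gammaln(q + N) - gammaln(q + 1) - gammaln(N)

NA, NB, q_total = 300, 200, 100     # hypothetical sizes, chosen for illustration
qA = np.arange(q_total + 1)
S_total = ln_omega(NA, qA) + ln_omega(NB, q_total - qA)   # entropy in units of k

q_star = qA[np.argmax(S_total)]
print("equilibrium q_A =", q_star)  # near q_total * NA/(NA + NB) = 60

# Central-difference slopes dS/dq for each solid at the maximum:
slope_A = (ln_omega(NA, q_star + 1) - ln_omega(NA, q_star - 1)) / 2
slope_B = (ln_omega(NB, q_total - q_star + 1) - ln_omega(NB, q_total - q_star - 1)) / 2
print(slope_A, slope_B)             # nearly equal, as eq. (5) requires
```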
So how exactly is this derivative related to temperature? The units are 1/T so that would be
our first guess:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{N,V} \tag{6}$$
Maybe there is some factor of 2 or π or something there. It turns out that this form matches the
ideal gas temperature, so it’s what we use.
Do problem 3.3 as an exercise if time permits.
2.1 Examples
For the Einstein solid, the entropy in the high-temperature limit is $S = Nk[\ln(q/N) + 1]$, but $U = q\epsilon$, so $S = Nk \ln U - Nk \ln(\epsilon N) + Nk$ and
$$T = \left(\frac{\partial S}{\partial U}\right)^{-1} = \left(\frac{Nk}{U}\right)^{-1}$$
or $U = NkT$, i.e. equipartition for two degrees of freedom per oscillator (potential and kinetic energy).
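A quick numerical check of this example, in units where $\epsilon = k = 1$ so that $U = q$ (an assumption made only to keep the numbers simple): compute $S = \ln\Omega$ exactly deep in the high-temperature regime $q \gg N$, take $T = (\partial S/\partial U)^{-1}$ by finite differences, and compare with $U = NkT$.

```python
import numpy as np
from scipy.special import gammaln

def entropy(N, q):
    """S/k = ln(multiplicity) for an Einstein solid with N oscillators, q energy units."""
    return gammaln(q + N) - gammaln(q + 1) - gammaln(N)

N, q = 100, 10000                   # illustrative sizes, well into the q >> N regime
# In units where epsilon = k = 1, U = q and 1/T = dS/dU:
dS_dU = (entropy(N, q + 1) - entropy(N, q - 1)) / 2
T = 1 / dS_dU
print(T, q / N)                     # close to each other: U = NkT, equipartition
```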
For the monatomic ideal gas, we start with the Sackur-Tetrode equation:
$$S = Nk\left[\ln\!\left(\frac{V}{N}\left(\frac{4\pi m U}{3Nh^2}\right)^{3/2}\right) + \frac{5}{2}\right] \tag{7}$$
and rewrite it as
$$S = Nk \ln U^{3/2} + (\text{function of } V) + (\text{function of } N) \tag{8}$$
so that
$$T = \left(\frac{\tfrac{3}{2}Nk}{U}\right)^{-1} \tag{9}$$
or $U = \frac{3}{2}NkT$.
Now that we have U as a function of T, we can calculate heat capacities (at constant volume) and compare them with experiment. So we have gone from a microscopic model of our system, to multiplicities, to entropy, to temperature, and finally to heat capacity: something we can measure.
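As a sketch of the last steps in that chain (the helium numbers are illustrative choices): differentiate the Sackur-Tetrode entropy numerically to recover eq. (9), and note that $U = \frac{3}{2}NkT$ gives $C_V = \frac{3}{2}Nk$ for a mole of any monatomic ideal gas.

```python
import numpy as np

k = 1.380649e-23                    # Boltzmann constant, J/K
h = 6.62607015e-34                  # Planck constant, J s

def S(N, V, U, m):
    """Sackur-Tetrode entropy, eq. (7), in J/K."""
    return N * k * (np.log((V / N) * (4 * np.pi * m * U / (3 * N * h**2))**1.5) + 2.5)

# Illustrative values: one mole of helium atoms in 0.025 m^3 with U = 3740 J
N, V, m, U = 6.02214076e23, 0.025, 6.646e-27, 3740.0

dU = U * 1e-6                       # small step for a central difference
T = 2 * dU / (S(N, V, U + dU, m) - S(N, V, U - dU, m))   # T = (dS/dU)^(-1)
print(T, 2 * U / (3 * N * k))       # both about 300 K, as eq. (9) says
print(1.5 * N * k)                  # C_V = dU/dT = (3/2)Nk, about 12.5 J/K per mole
```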
2.2 Measuring entropies
Recall the definition of temperature:
$$\frac{1}{T} = \left(\frac{\partial S}{\partial U}\right)_{N,V} \tag{10}$$
so dS = dU/T if we keep the volume constant, i.e. do no work on the system. But if we do no work, then the 1st law gives dU = dQ, so dS = dQ/T. This happens to be true even when the volume is changing, as we'll see later. We can also write this in terms of the heat capacity:
$$dS = \frac{C_V\, dT}{T} \tag{11}$$
and integrate so that
$$S_2 - S_1 = \int_{T_1}^{T_2} \frac{C_V\, dT}{T} \tag{12}$$
If we put $T_1 = 0$ then we can get the absolute entropy, but for the integral to converge this requires $C_V \to 0$ as $T \to 0$: the third law of thermodynamics. In practice there can be some residual entropy from orientations of molecules in a solid, or from isotopes fixed in position in a crystal lattice, which will take longer than we can wait to relax into the lowest-energy state.
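A small numerical illustration of eq. (12), taking $C_V$ constant as for an ideal gas (values picked for convenience): the integral should reproduce $C_V \ln(T_2/T_1)$.

```python
import numpy as np

C_V = 12.47                           # J/K: (3/2)R for one mole of a monatomic gas
T = np.linspace(100.0, 300.0, 10001)  # integrate from T1 = 100 K to T2 = 300 K
f = C_V / T
dS = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T))   # trapezoid rule for eq. (12)
print(dS, C_V * np.log(300.0 / 100.0))             # both about 13.7 J/K
```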
2.3 The macroscopic view of entropy
dS = dQ/T was the original definition of entropy. Suppose two large bodies transfer heat Q between them (without changing temperature, because they're large). Body A is at temperature $T_A$ and B is at $T_B$, with $T_A > T_B$. Then $\Delta S_A = -Q/T_A$ and $\Delta S_B = Q/T_B$, so $\Delta S_B > -\Delta S_A$ and the total entropy in the universe increases.
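A quick numerical illustration (numbers invented for convenience): if Q = 1000 J flows from body A at $T_A = 500$ K to body B at $T_B = 300$ K, then $\Delta S_A = -1000/500 = -2.0$ J/K while $\Delta S_B = +1000/300 \approx +3.3$ J/K, so the universe gains about 1.3 J/K.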
3 Paramagnetism
Reminder of what a paramagnet is: N dipoles in a field B with possible energies $+\mu B$ and $-\mu B$. The total energy of the system is $U = \mu B(N_\downarrow - N_\uparrow) = \mu B(N - 2N_\uparrow)$. The magnetization is $M = \mu(N_\uparrow - N_\downarrow) = -U/B$. We can write down the multiplicity $\Omega = N!/(N_\uparrow!\, N_\downarrow!)$. We can take the log of the multiplicity to plot the entropy against the energy and get an inverted parabola that goes to zero at $U = \pm N\mu B$. Don't get this confused with the plot we saw at the beginning, where there were two connected Einstein solids.
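A short sketch of that plot in numbers, in units where $\mu B = 1$ (an assumption to keep it simple): build $S = \ln\Omega$ from the multiplicity above and confirm it peaks at $U = 0$ and vanishes at $U = \pm N\mu B$.

```python
import numpy as np
from scipy.special import gammaln

N = 100
N_up = np.arange(N + 1)             # number of up dipoles, 0..N
U = N - 2 * N_up                    # energy in units of mu*B: U = mu B (N - 2 N_up)
S = gammaln(N + 1) - gammaln(N_up + 1) - gammaln(N - N_up + 1)  # ln[N!/(N_up! N_down!)]

print(S[U == N], S[U == -N])        # both 0: a single microstate at either extreme
print(U[np.argmax(S)])              # 0: the peak of the inverted parabola
```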
The system always wants to move to the state of highest entropy, so if we start at the minimum energy, it will absorb energy from its surroundings and so move to higher entropy and higher energy. But if we go past the peak and let the paramagnet exchange energy with something "normal", like an Einstein solid, whose entropy-energy graph has positive slope, the paramagnet will want to lose energy (because losing energy increases the total entropy of the combined system).
If we look at the temperature of the paramagnet, the reciprocal of the slope of the entropy-energy graph, we find that it goes to ∞ as U → 0 and is negative when U > 0. Negative temperatures behave like infinite temperatures in that they always want to give up energy.
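A numerical version of that statement, again in assumed units $\mu B = k = 1$: take the reciprocal slope of the same entropy curve and watch the sign.

```python
import numpy as np
from scipy.special import gammaln

N = 100
N_up = np.arange(1, N)              # interior points, so the derivative is defined
U = N - 2.0 * N_up                  # energy in units of mu*B
S = gammaln(N + 1) - gammaln(N_up + 1) - gammaln(N - N_up + 1)

with np.errstate(divide="ignore"):
    T = 1.0 / np.gradient(S, U)     # T = (dS/dU)^(-1), in units of mu*B/k

print(T[U == -50], T[U == 50])      # positive below U = 0, negative above it
print(T[np.abs(U) == 2])            # |T| grows without bound as U -> 0
```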
Next time we’ll do the math.
4 For next time
Problems 3.8, 3.10, 3.14
Read Section 3.3
Tuesday: Test 1 on Section 1.1–3.2