Student handout: Thermal radiation and Planck distribution

Thermal and Statistical Physics 2020
These notes, from the fourth week of the course, cover blackbody radiation and the Planck distribution. They include a number of small group activities.

Reading: K&K 4, Schroeder 7.4

This week we will be tackling things that reduce to a bunch of simple harmonic oscillators. Any system that classically reduces to a set of normal modes each with its own frequency falls in this category. We will start with just an ordinary simple harmonic oscillator, and will move on to look at radiation (photons) and sound in solids (phonons).

Harmonic oscillator

You will recall that the energy eigenvalues of a single simple harmonic oscillator are given by \begin{align} E_n &= (n+\tfrac12)\hbar\omega \end{align} Note: The text uses \(s\) rather than \(n\) for the quantum number, but that is annoying, and on the blackboard my \(s\) looks too much like my \(S\), so I'll stick with \(n\). The text also omits the zero-point energy. Omitting it does make the math simpler, but I think it's worth seeing how and when the zero-point energy disappears from the results.

We will begin by solving for the properties of a single simple harmonic oscillator at a given temperature. You already did this once using multiplicities, but it's easier this way. \begin{align} Z &= \sum_{n=0}^{\infty} e^{-(n+\tfrac12)\beta\hbar\omega} \\ &= e^{-\tfrac12\beta\hbar\omega}\sum_{n=0}^{\infty} e^{-n\beta\hbar\omega} \end{align} Now the sum is a geometric series, which has a little trick to solve: \begin{align} Z &= e^{-\tfrac12\beta\hbar\omega}\sum_{n=0}^{\infty} \left(e^{-\beta\hbar\omega}\right)^n \\ \xi &\equiv e^{-\beta\hbar\omega} \end{align} The trick involves multiplying the series by \(\xi\) and subtracting: \begin{align} \Xi &= \sum_{n=0}^{\infty} \xi^n \\ &= 1 + \xi + \xi^2 + \cdots \\ \xi\Xi &= \xi + \xi^2 + \cdots \\ \Xi - \xi\Xi &= 1 \\ \Xi &= \frac1{1-\xi} \end{align} Thus we find that the partition function is simply \begin{align} Z &= \frac{e^{-\tfrac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \end{align} This gives us the free energy \begin{align} F &= -kT\ln Z \\ &= -kT\ln\left(\frac{e^{-\tfrac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right) \\ &= \frac12\hbar\omega + kT\ln\left(1 - e^{-\beta\hbar\omega}\right) \end{align} We can see now that the ground state energy just ends up as a constant that we add to the free energy, which is what you probably would have guessed. Kittel was able to omit this constant simply by redefining the zero of energy.
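If you'd like to convince yourself of the geometric-series result numerically, here is a quick check (not part of the text; the function names are mine) comparing a brute-force truncated sum with the closed form:

```python
import numpy as np

def Z_truncated(beta_hw, nmax=200):
    """Partition function by explicitly summing the first nmax levels."""
    n = np.arange(nmax)
    return np.sum(np.exp(-(n + 0.5) * beta_hw))

def Z_closed(beta_hw):
    """Closed form from the geometric series."""
    return np.exp(-0.5 * beta_hw) / (1 - np.exp(-beta_hw))

# The two should agree once nmax*beta_hw >> 1.
print(Z_truncated(0.3), Z_closed(0.3))
```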

Small groups
Solve for the high temperature limit of the free energy.
At high temperatures, \(\beta\hbar\omega \ll 1\), which means \begin{align} e^{-\beta\hbar\omega} &= 1 - \beta\hbar\omega + \frac12(\beta\hbar\omega)^2+\cdots \\ \ln(1-e^{-\beta\hbar\omega}) &= \ln\left(\beta\hbar\omega - \tfrac12(\beta\hbar\omega)^2+\cdots\right) \\ F &\approx \tfrac12\hbar\omega + kT\ln\left(\frac{\hbar\omega}{kT}\right) \end{align} So far this doesn't tell us much, but from it we can quickly find the high-temperature limit of the entropy: \begin{align} S &\approx -k\ln\left(\frac{\hbar\omega}{kT}\right) - kT\frac{kT}{\hbar\omega}\left(-\frac{\hbar\omega}{kT^2}\right) \\ &= k\left(\ln\left(\frac{kT}{\hbar\omega}\right)+1\right) \end{align} The entropy increases as we increase temperature, as it always must. The manner in which \(S\) increases logarithmically with temperature tells us that the number of accessible microstates must be proportional to \(\frac{kT}{\hbar\omega}\).
For reference, the exact entropy at any temperature is \begin{align} S &= -\left(\frac{\partial F}{\partial T}\right)_V \\ &= -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + kT\frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{kT^2} \\ &= -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{T} \end{align}
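As a numerical sanity check (mine, not the text's), we can compare the exact entropy against the high-temperature limit \(k\left(\ln\frac{kT}{\hbar\omega}+1\right)\); the agreement becomes excellent once \(\beta\hbar\omega\ll1\):

```python
import numpy as np

def S_exact(x):
    """Exact entropy in units of k, with x = beta*hbar*omega."""
    return -np.log(1 - np.exp(-x)) + x * np.exp(-x) / (1 - np.exp(-x))

def S_highT(x):
    """High-temperature limit k*(ln(kT/(hbar*omega)) + 1), in units of k."""
    return np.log(1 / x) + 1

for x in [0.5, 0.1, 0.01]:
    print(x, S_exact(x), S_highT(x))
```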

Planck distribution

Finally, we can find the internal energy and the average quantum number (or number of “phonons”). The latter is known as the Planck distribution. \begin{align} U &= F + TS \\ &= \frac12\hbar\omega + T \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{T} \\ &= \left(\frac12 + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right)\hbar\omega \\ U &= \left(\langle n\rangle + \tfrac12\right)\hbar\omega \\ \langle n\rangle &= \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \\ &= \frac{1}{e^{\beta\hbar\omega}-1} \end{align} So far, all we've done is a straightforward application of the canonical formalism from last week: we computed a partition function, took a log, and from that found the entropy and internal energy.
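Here is a quick check (not in the text) that the Planck distribution agrees with a direct Boltzmann-weighted average over states; note that the common zero-point factor cancels out of the average:

```python
import numpy as np

def n_avg_direct(x, nmax=400):
    """<n> from explicit Boltzmann weights, with x = beta*hbar*omega."""
    n = np.arange(nmax)
    w = np.exp(-n * x)  # the zero-point factor cancels in the ratio
    return np.sum(n * w) / np.sum(w)

def n_avg_planck(x):
    """The Planck distribution 1/(e^x - 1)."""
    return 1 / np.expm1(x)

print(n_avg_direct(0.5), n_avg_planck(0.5))
```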

Small groups
Solve for the high-temperature and low-temperature limits of the internal energy and/or the average number of quanta \(\langle n\rangle\).
High temperature answer
First we consider \(\frac{kT}{\hbar\omega}\gg 1\) or \(\beta\hbar\omega\ll 1\). In this case, the exponential is going to be very close to one, and we can use a power series approximation for it. \begin{align} \langle n\rangle &= \frac{1}{e^{\beta\hbar\omega}-1} \\ &\approx \frac{1}{\left(1 + \beta\hbar\omega + \tfrac12(\beta\hbar\omega)^2 +\cdots\right) -1} \\ &= \frac{kT}{\hbar\omega} \frac1{1 + \tfrac12\beta\hbar\omega + \cdots} \\ &= \frac{kT}{\hbar\omega} \left(1 - \tfrac12\beta\hbar\omega + \cdots\right) \\ &\approx \frac{kT}{\hbar\omega} - \frac12 \end{align} The first term is our equipartition term: \(\frac12kT\) each for the kinetic and potential energy. The second term is our next-order correction, which you need not necessarily include. There would be a next term proportional to \(1/T\), but we have omitted it.
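We can verify numerically (this check is mine, not the text's) that \(\frac{kT}{\hbar\omega}-\frac12\) approaches the exact \(\langle n\rangle\) as the temperature rises:

```python
import numpy as np

# x = beta*hbar*omega; the error of the two-term approximation shrinks like x/12.
for x in [0.2, 0.05, 0.01]:
    exact = 1 / np.expm1(x)
    approx = 1 / x - 0.5
    print(x, exact, approx, exact - approx)
```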
Low temperature answer
At low temperature \(\beta\hbar\omega\gg 1\), and we would rather look at the other representation: \begin{align} \langle n\rangle &= \frac{1}{e^{\beta\hbar\omega}-1} \\ &= \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \end{align} because now the exponentials are small (rather than large), which means we can expand the denominator as a power series. \begin{align} \langle n\rangle &= e^{-\beta\hbar\omega}\left(1 + e^{-\beta\hbar\omega} + \cdots\right) \\ &\approx e^{-\beta\hbar\omega} + e^{-2\beta\hbar\omega} \end{align} Once again, I kept one more term than is absolutely needed. Clearly at low temperature we have a very low number of quanta, which should be no shock. I hope you all expected that the system would be in the ground state at very low temperature.
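And a matching check for the low-temperature expansion (again mine, not the text's):

```python
import numpy as np

# x = beta*hbar*omega >> 1; the neglected terms are of order e^{-3x}.
for x in [3.0, 5.0, 10.0]:
    exact = 1 / np.expm1(x)
    approx = np.exp(-x) + np.exp(-2 * x)
    print(x, exact, approx)
```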

Summing over microstates

I realized that we haven't spent much time talking about how to sum over microstates. Once you “get it,” summing over microstates is very easy. Unfortunately, that very easiness makes it less obvious that it requires teaching, and I have a tendency to skim over this summation.

A nice example of this was the second homework, which involved the paramagnet again. You needed to find the partition function for \(N\) dipoles. After spending a week working with multiplicities, it would be very natural to take \begin{align} Z \equiv \sum_\mu e^{-\beta E_\mu} \end{align} and think of the \(\mu\) as having something to do with spin excess, and to think that this sum should involve multiplicities. You can write a solution using multiplicities and summing over all possible energies, but that is the hard way. The easy way only looks easy once you know how to do it. The easy way involves literally summing over every possible sequence of spins. \begin{align} Z &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1}e^{-\beta E(s_1,s_2,\cdots,s_N)} \end{align} This may look messy, but things simplify when we consider the actual energy (unless we try to simplify that by expressing it in terms of \(N_\uparrow\) or the spin excess). \begin{align} E(s_1,s_2,\cdots,s_N) &= -s_1mB - s_2mB - \cdots - s_NmB \end{align} Now this may look pretty nasty, but it's actually beautiful, because each \(s_i\) has a separate term that is added together, which means that the sum separates! I'll use fewer words for a bit... \begin{align} Z &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta (s_1mB + s_2mB + \cdots + s_NmB)} \\ &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta s_1mB}e^{\beta s_2mB}\cdots \\ &= \sum_{s_1=\pm1}e^{\beta s_1mB}\sum_{s_2=\pm1}e^{\beta s_2mB}\cdots \sum_{s_N=\pm1}e^{\beta s_NmB} \\ &= \left(\sum_{s_1=\pm1}e^{\beta s_1mB}\right)\cdots \left(\sum_{s_N=\pm1}e^{\beta s_NmB}\right) \\ &= \left(\sum_{s=\pm1}e^{\beta s mB}\right)\cdots \left(\sum_{s=\pm1}e^{\beta smB}\right) \\ &= \left(\sum_{s=\pm1}e^{\beta s mB}\right)^N \end{align} The important steps above were

  1. Writing the sum over states as a nested sum over every quantum number of the system.
  2. Breaking the exponential into a product, which we can do because the energy is a sum of terms each of which involve just one quantum number.
  3. Doing each sum separately, and finding the result as the product of all those sums.

Note that the logarithm of this final result gives a free energy that is just \(N\) times the free energy of a system consisting of a single spin. And thus we could alternatively do our computation for a system with a single spin, and then multiply everything that is extensive by \(N\). The latter is a valid shortcut, but you should know why it gives a correct answer, and when (e.g. when we have identical particles) you could run into trouble.
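For small \(N\), we can even check the factorization by brute force, literally summing over all \(2^N\) spin sequences (this demonstration is my own; variable names are mine):

```python
import itertools
import numpy as np

def Z_brute(N, beta_mB):
    """Sum exp(-beta*E) over every spin sequence, with E = -mB*(s1+...+sN)."""
    total = 0.0
    for spins in itertools.product([1, -1], repeat=N):
        total += np.exp(beta_mB * sum(spins))
    return total

def Z_factored(N, beta_mB):
    """The factored result: each spin contributes 2*cosh(beta*m*B)."""
    return (2 * np.cosh(beta_mB)) ** N

print(Z_brute(8, 0.3), Z_factored(8, 0.3))
```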

Black body radiation

Researchers in 1858-1860 realized that materials emit light in strict proportion to their ability to absorb it, and hence a perfectly black body would emit the most radiation when heated. Planck and others realized that we should be able to use thermal physics to predict the spectrum of black body radiation. A key idea was to recognize that the light itself should be in thermal equilibrium.

One example of a “black body” is a small hole in a large box. Any light that goes into the hole will bounce around so many times before coming back out that it will essentially all be absorbed. This leads to the idea of studying the radiation in a closed box in thermal equilibrium, which should match that of a black body.

Eigenstates of an empty box

So what are the properties of an empty box? Let's assume metal walls, and not worry too much about details of the boundary conditions, which shouldn't make much difference provided the box is pretty big. The reasoning is basically the same as for a particle in a 3D box: the waves must fit in the box. As for the particle in a box, we can choose either periodic boundary conditions or put nodes at the boundaries. I generally prefer periodic (which gives both positive and negative \(\vec k\)), rather than dealing with sine waves (which are superpositions of the above). A beautiful thing about periodic boundary conditions is that your set of \(\vec k\) vectors is independent of the Hamiltonian, so this looks very much like the single atom in a box we did last week. \begin{align} k_x &= n_x \frac{2\pi}{L} & n_x &= \text{any integer} \end{align} and similarly for \(k_y\) and \(k_z\), which gives us \begin{align} \omega(\vec k) &= c|\vec k| \\ \omega_{n_xn_yn_z} &= \frac{2\pi c}{L}\sqrt{n_x^2+n_y^2+n_z^2} \end{align} where now we need to be extra-careful to remember that in this expression \(n_x\) is not a number of photons, even though \(n\) is a number of photons. Fortunately, we will soon be done with our \(n_x\), once we finish summing. The possible energies of a single mode are those of a simple harmonic oscillator, so for each of the \(n_x\),\(n_y\),\(n_z\) triples there is a different quantum number \(n\), and an energy given by \begin{align} E_n &= n\hbar\omega \end{align} where technically there will also be a zero-point energy (like for the physical harmonic oscillator), but we won't want to include the zero point energy for a couple of reasons. Firstly, it can't be extracted from the vacuum, so including it would make it harder to reason about the quantity of radiation leaking from a hole. Secondly, the total zero point energy of the vacuum will be infinite, which makes it a nuisance.

In your homework, you will use a summation over all the normal modes to solve for the thermodynamic properties of the vacuum, and will show that \begin{align} F &= 8\pi \frac{V(kT)^4}{h^3c^3} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \\ &= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3} \\ &= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3} \end{align} provided the box is big enough that \(\frac{\hbar c}{LkT}\ll 1\). At first this looks freaky, because the free energy is always negative while you know that the energy is always positive. This just means that entropy is dominating the free energy.
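The dimensionless integral above can be checked numerically (my check, not in the text) by expanding \(\ln(1-e^{-\xi})=-\sum_{n\ge1}e^{-n\xi}/n\) and integrating term by term, which gives \(-2\zeta(4)=-\pi^4/45\):

```python
import numpy as np

# Each term integrates to -(1/n) * 2/n^3, so the total is -2*zeta(4) = -pi^4/45.
n = np.arange(1, 10001)
series = -2 * np.sum(1.0 / n**4)
print(series, -np.pi**4 / 45)
```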

The entropy is given by \begin{align} S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3 \\ &= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3 \end{align} which is a comfortingly positive quantity, and the energy is \begin{align} \frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3} \\ &= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3} \end{align} which is also nicely positive.

Note also that these quantities are also nicely extensive, as you would hope.

Knowing the thermodynamic properties of the vacuum is handy, but doesn't tell us yet about the properties of a black body. To do that we'll have to figure out how much of this radiation will escape through a little hole.

Stefan-Boltzmann law of radiation

To find the radiation power, we need to do a couple of things. One is to multiply the energy per volume by the speed of light, which would tell us the energy flux through a hole if all the energy were passing straight through that hole. However, since the radiation is travelling equally in all directions, we will also need an additional dimensionless geometrical factor to find the actual magnitude of the power.

If we define the velocity as \(c\hat k\) where \(c\) is the speed of light and \(\hat k\) is its direction, the power flux (or intensity) in the \(\hat z\) direction will be given by the energy density times the average value of the positive \(\hat z\) component of the velocity. When \(v_z<0\), the light doesn't come out the hole at all. This average can be written as \begin{align} I &= \frac{U}{V}\frac{\int_0^{2\pi}\int_0^{\frac{\pi}{2}}v_z\sin\theta d\theta d\phi}{4\pi} \\ &= \frac{U}{V}\frac{\int_0^{2\pi}\int_0^{\frac{\pi}{2}}c\cos\theta\sin\theta d\theta d\phi}{4\pi} \\ &= \frac{U}{V}\frac{c}2\int_0^{1}\xi d\xi \\ &= \frac{U}{V}\frac{c}{4} \\ &= -6\pi \frac{(kT)^4}{h^3c^2} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \end{align} This is the famous Stefan-Boltzmann law of radiation. Since the constants are mostly a nuisance, they are combined into the Stefan-Boltzmann constant: \begin{align} I &\equiv \text{power radiated per area of surface} \\ &= \sigma_B T^4 \\ \sigma_B &= -6\pi \frac{k_B^4}{h^3c^2} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \end{align}
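Plugging in SI values (a quick check of my own, using the fact that the integral equals \(-\pi^4/45\)) reproduces the familiar numerical value of the Stefan-Boltzmann constant:

```python
import numpy as np

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

# sigma_B = -6*pi*k^4/(h^3 c^2) * integral, with the integral equal to -pi^4/45
sigma = 6 * np.pi * k**4 / (h**3 * c**2) * np.pi**4 / 45
print(sigma)  # ~5.67e-8 W m^-2 K^-4
```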

Side note
Why is this \(T^4\) law important for incandescent light bulbs? The resistivity of a metal increases with temperature. In a light bulb, if you have a bit of wire that is a bit hotter than the rest, its resistivity will be higher, and that will cause it to have more Joule heating, and get hotter. If nothing else came into play, we'd have a feedback loop, and the hot bit would just keep getting hotter, and having higher resistivity, until it vaporized. Boom. Fortunately, the power of light radiated from that hot bit of wire will increase faster than its resistivity goes up (because \(T^4\) is serious!), preventing the death spiral, and saving the day!

Planck radiation law

Having found the total power radiated, a fair question is how much of that power is at each possible frequency. This defines the black body spectrum. Each mode has an occupancy \(\langle n\rangle\) that is the same as that of the harmonic oscillator from Monday. But the power radiated also depends on how many modes there are at a given frequency. This may be more clear if we solve for the internal energy in a different way: \begin{align} U &= \sum_j^{\text{modes}} \langle n_j\rangle \hbar\omega_j \\ &= \sum_j^{\text{modes}} \frac{\hbar\omega_j}{e^{\beta\hbar\omega_j}-1} \\ &\approx \iiint_{-\infty}^{\infty} \frac{\frac{hc}{L}\sqrt{n_x^2+n_y^2+n_z^2}}{e^{\beta\frac{\hbar 2\pi c}{L}\sqrt{n_x^2+n_y^2+n_z^2}}-1}dn_xdn_ydn_z \\ &= \int_0^{\infty}\frac{\frac{hc}{L}n}{e^{\beta\frac{hc}{L}n}-1}4\pi n^2dn \end{align} Now we can transform the integral from \(n\) to \(\omega\) via \(\omega_n = c 2\pi n/L\). \begin{align} U &= \left(\frac{L}{c 2\pi}\right)^3 \int_0^{\infty}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \\ &= \frac{V}{c^3 (2\pi)^3} \int_0^{\infty}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \end{align} At this point we can identify the internal energy per frequency as \begin{align} \frac{dU}{d\omega} &= \frac{V\omega^2}{c^3 2\pi^2} \frac{\hbar\omega}{e^{\beta\hbar\omega}-1} \end{align} which is proportional to the spectral density of power, which is proportional to \(\omega^3\) at low frequency, and dies off exponentially at high frequency.
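A quick numerical exercise (mine, not in the text): in terms of \(x=\beta\hbar\omega\) the spectral density is proportional to \(x^3/(e^x-1)\), and finding its maximum on a grid recovers the Wien displacement result \(\hbar\omega_{\text{peak}}\approx2.82\,kT\):

```python
import numpy as np

x = np.linspace(0.01, 20, 200000)  # x = beta*hbar*omega
spectral = x**3 / np.expm1(x)      # proportional to dU/domega
x_peak = x[np.argmax(spectral)]
print(x_peak)  # ~2.82, i.e. hbar*omega_peak ~ 2.82*kT
```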


Kirchhoff law and surface temperature

Low temperature heat capacity

Much of the same physics that we have considered here applies also to solids. One of the mysteries of classical physics was why the heat capacity of an insulator drops at low temperatures. Experimentally, it was found that \begin{align} C_p \propto T^3 \end{align} at low temperatures, which was pretty mysterious. Why would solids have their heat capacity drop off like this? Classically, based on the equipartition theorem, the heat capacity should be independent of temperature provided all the terms in the energy are quadratic.

Einstein model

Einstein proposed that we view a solid as a whole bunch of harmonic oscillators, all with the same frequency (for simplicity). In this case, the internal energy is given by \begin{align} U &= \frac{N\hbar\omega}{e^{\beta\hbar\omega}-1} + \frac12N\hbar\omega \end{align}

Small groups
What is the heat capacity at low temperatures according to this picture? Roughly sketch this by hand.
At low temperature, the exponential on the bottom is crazy big, so we can see that the internal energy will be \begin{align} U &\approx N\hbar\omega e^{-\beta\hbar\omega} \\ C_V &= \left(\frac{\partial U}{\partial T}\right)_V \\ &\approx Nk_B \left(\frac{\hbar\omega}{kT}\right)^2e^{-\beta\hbar\omega} \end{align} This dies off very quickly as the temperature drops into the regime \(kT\ll \hbar\omega\). We call it “exponential” scaling, but it's exponential in \(\beta\) not \(T\).
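To see both limits at once, here is the exact Einstein heat capacity per oscillator (a sketch of my own; the exact form follows by differentiating \(U\) without the low-temperature approximation):

```python
import numpy as np

def c_einstein(t):
    """Exact heat capacity per oscillator in units of k; t = kT/(hbar*omega)."""
    x = 1.0 / t  # x = beta*hbar*omega
    return x**2 * np.exp(x) / np.expm1(x)**2

# Approaches the equipartition value 1 at high t, dies exponentially at low t.
for t in [2.0, 1.0, 0.2, 0.05]:
    print(t, c_einstein(t))
```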

This happens (we see exponential low-T scaling in the heat capacity) whenever we have a finite energy gap between the ground state and the first excited state.

Debye theory

For smallish \(\vec k\), the frequency of a phonon is proportional to \(|\vec k|\), with the speed of sound as the proportionality constant. We won't have time for this in class, but the reasoning is very similar.

The key thing Debye realized was that unlike light, the phonon \(\vec k\) has a maximum value, which is determined by the number of atoms per unit volume, since one wavelength of sound must include at least a couple of atoms. This means that when we compute the internal energy of a crystal full of phonons, there is a maximum \(k\) value, and thus a maximum value for \(\sqrt{n_x^2+n_y^2+n_z^2}\) and a maximum value for the frequency, which we call \(\omega_D\). The internal energy is thus \begin{align} U &= \left(\frac{L}{v_s 2\pi}\right)^3 \int_0^{\omega_D}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \\ &= \frac{V}{v_s^3 (2\pi)^3} \int_0^{\omega_D}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \end{align}
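We can check the low-temperature behavior numerically (my own sketch, in reduced units with \(t=T/\Theta_D\), where \(k\Theta_D=\hbar\omega_D\)): at small \(t\) the integral saturates at \(\pi^4/15\), so \(U\propto t^4\) and hence \(C_V\propto t^3\):

```python
import numpy as np

def u_debye(t, npts=200000):
    """Reduced internal energy, proportional to t^4 * integral_0^{1/t} x^3/(e^x-1) dx."""
    x = np.linspace(1e-9, 1.0 / t, npts)
    f = x**3 / np.expm1(x)
    return t**4 * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))  # trapezoid rule

# At t = 0.02 the integral has already saturated near pi^4/15 = 6.4939...
print(u_debye(0.02) / 0.02**4, np.pi**4 / 15)
```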

At low temperature \(\beta\hbar\omega_D\gg1\), and we can extend the upper limit of the integral to infinity with only exponentially small error. The integral is then the same one we met for photons, so \(U\propto T^4\) and hence \(C_V\propto T^3\), which is precisely the low-temperature behavior found in experiments.
