Activity: Thermal radiation and Planck distribution

Thermal and Statistical Physics 2020
These notes from the fourth week of Thermal and Statistical Physics cover blackbody radiation and the Planck distribution. They include a number of small group activities.


Reading: K&K 4, Schroeder 7.4

This week we will be tackling things that reduce to a bunch of simple harmonic oscillators. Any system that classically reduces to a set of normal modes each with its own frequency falls in this category. We will start with just an ordinary simple harmonic oscillator, and will move on to look at radiation (photons) and sound in solids (phonons).

Harmonic oscillator

You will recall that the energy eigenvalues of a single simple harmonic oscillator are given by \begin{align} E_n &= (n+\tfrac12)\hbar\omega \end{align} Note: The text uses \(s\) rather than \(n\) for the quantum number, but that is annoying, and on the blackboard my \(s\) looks too much like my \(S\), so I'll stick with \(n\). The text also omits the zero-point energy. Omitting it does make the math simpler, but I think it's worth seeing how and when the zero-point energy disappears from the results.

We will begin by solving for the properties of a single simple harmonic oscillator at a given temperature. You already did this once using multiplicities, but it's easier this way. \begin{align} Z &= \sum_{n=0}^{\infty} e^{-(n+\tfrac12)\beta\hbar\omega} \\ &= e^{-\tfrac12\beta\hbar\omega}\sum_{n=0}^{\infty} e^{-n\beta\hbar\omega} \end{align} Now the sum is a geometric series, which we can evaluate with a little trick: \begin{align} Z &= e^{-\tfrac12\beta\hbar\omega}\sum_{n=0}^{\infty} \left(e^{-\beta\hbar\omega}\right)^n \\ \xi &\equiv e^{-\beta\hbar\omega} \end{align} The trick involves multiplying the series by \(\xi\) and subtracting: \begin{align} \Xi &= \sum_{n=0}^{\infty} \xi^n \\ &= 1 + \xi + \xi^2 + \cdots \\ \xi\Xi &= \xi + \xi^2 + \cdots \\ \Xi - \xi\Xi &= 1 \\ \Xi &= \frac1{1-\xi} \end{align} Thus we find that the partition function is simply \begin{align} Z &= \frac{e^{-\tfrac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \end{align} This gives us the free energy \begin{align} F &= -kT\ln Z \\ &= -kT\ln\left(\frac{e^{-\tfrac12\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right) \\ &= \frac12\hbar\omega + kT\ln\left(1 - e^{-\beta\hbar\omega}\right) \end{align} We can see now that the ground state energy just ends up as a constant that we add to the free energy, which is what you probably would have guessed. Kittel was able to omit this constant simply by redefining the zero of energy.
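
If you want to check the geometric-series trick numerically, here is a minimal sketch (the function names are my own) comparing a brute-force truncated sum over eigenstates with the closed form:

```python
import math

def Z_truncated(beta_hw, nmax=1000):
    """Sum e^{-(n+1/2) x} directly over the first nmax states, x = beta*hbar*omega."""
    return sum(math.exp(-(n + 0.5) * beta_hw) for n in range(nmax))

def Z_closed(beta_hw):
    """Closed form from the geometric series: e^{-x/2}/(1 - e^{-x})."""
    return math.exp(-0.5 * beta_hw) / (1 - math.exp(-beta_hw))

print(Z_truncated(0.3), Z_closed(0.3))  # the two agree to machine precision
```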

Small groups
Solve for the high temperature limit of the free energy.
Answer
At high temperatures, \(\beta\hbar\omega \ll 1\), which means \begin{align} e^{-\beta\hbar\omega} &= 1 - \beta\hbar\omega + \frac12(\beta\hbar\omega)^2+\cdots \\ \ln(1-e^{-\beta\hbar\omega}) &= \ln\left(\beta\hbar\omega - \tfrac12(\beta\hbar\omega)^2+\cdots\right) \\ F &\approx \tfrac12\hbar\omega + kT\ln\left(\frac{\hbar\omega}{kT}\right) \end{align} So far this doesn't tell us much, but from it we can quickly tell the high temperature limits of the entropy and internal energy: \begin{align} S &\approx -k\ln\left(\frac{\hbar\omega}{kT}\right) - kT\frac{kT}{\hbar\omega}\left(-\frac{\hbar\omega}{kT^2}\right) \\ &= k\left(\ln\left(\frac{kT}{\hbar\omega}\right)+1\right) \end{align} The entropy increases as we increase temperature, as it always must. The manner in which \(S\) increases logarithmically with temperature tells us that the number of accessible microstates must be proportional to \(\frac{kT}{\hbar\omega}\).
Without taking the high-temperature limit, the exact entropy at any temperature is \begin{align} S &= -\left(\frac{\partial F}{\partial T}\right)_V \\ &= -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + kT\frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{kT^2} \\ &= -k\ln\left(1 - e^{-\beta\hbar\omega}\right) + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{T} \end{align}
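
As a sanity check, here is a short numerical sketch (units and function names are my own choices, with \(k_B=\hbar\omega=1\)) verifying that this entropy really is \(-\partial F/\partial T\), and that it approaches \(k\left(\ln\frac{kT}{\hbar\omega}+1\right)\) at high temperature:

```python
import math

def F(T):
    """Free energy of one oscillator in units hbar*omega = k_B = 1."""
    return 0.5 + T * math.log(1 - math.exp(-1 / T))

def S_exact(T):
    """Entropy from the formula derived above."""
    x = math.exp(-1 / T)
    return -math.log(1 - x) + (x / (1 - x)) / T

# S = -dF/dT, checked by a centered finite difference:
T, h = 2.0, 1e-5
S_numeric = -(F(T + h) - F(T - h)) / (2 * h)
print(S_numeric, S_exact(T))

# High-temperature limit: S ~ ln(T) + 1 in these units.
print(S_exact(50.0), math.log(50.0) + 1)
```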

Planck distribution

Finally, we can find the internal energy and the average quantum number (or number of “phonons”). The latter is known as the Planck distribution. \begin{align} U &= F + TS \\ &= \frac12\hbar\omega + T \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\frac{\hbar\omega}{T} \\ &= \left(\frac12 + \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\right)\hbar\omega \\ U &= \left(\langle n\rangle + \tfrac12\right)\hbar\omega \\ \langle n\rangle &= \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \\ &= \frac{1}{e^{\beta\hbar\omega}-1} \end{align} So far, all we've done is a straightforward application of the canonical formalism from last week: we computed a partition function, took a log, and from that found the entropy and internal energy.
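
The algebra for \(\langle n\rangle\) can be double-checked by computing the Boltzmann-weighted average occupation directly (a minimal sketch; function names are mine):

```python
import math

def n_avg_direct(x, nmax=2000):
    """<n> = (sum_n n e^{-n x}) / (sum_n e^{-n x}), with x = beta*hbar*omega."""
    weights = [math.exp(-n * x) for n in range(nmax)]
    return sum(n * w for n, w in enumerate(weights)) / sum(weights)

def n_planck(x):
    """The Planck distribution 1/(e^x - 1) derived above."""
    return 1 / (math.exp(x) - 1)

print(n_avg_direct(0.5), n_planck(0.5))  # the two agree
```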

Small groups
Solve for the high-temperature and low-temperature limits of the internal energy and/or the average number of quanta \(\langle n\rangle\).
High temperature answer
First we consider \(\frac{kT}{\hbar\omega}\gg 1\) or \(\beta\hbar\omega\ll 1\). In this case, the exponential is going to be very close to one, and we can use a power series approximation for it. \begin{align} \langle n\rangle &= \frac{1}{e^{\beta\hbar\omega}-1} \\ &\approx \frac{1}{\left(1 + \beta\hbar\omega + \tfrac12(\beta\hbar\omega)^2 +\cdots\right) -1} \\ &= \frac{kT}{\hbar\omega} \frac1{1 + \tfrac12\beta\hbar\omega + \cdots} \\ &= \frac{kT}{\hbar\omega} \left(1 - \tfrac12\beta\hbar\omega + \cdots\right) \\ &\approx \frac{kT}{\hbar\omega} - \frac12 \end{align} The first term is our equipartition term: \(\frac12kT\) each for the kinetic and potential energy. The second term is our next-order correction, which you need not necessarily include. The next term after that would be proportional to \(1/T\), but we have omitted it.
Low temperature answer
At low temperature \(\beta\hbar\omega\gg 1\), and we would rather work with the other representation: \begin{align} \langle n\rangle &= \frac{1}{e^{\beta\hbar\omega}-1} \\ &= \frac{e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}} \end{align} because now the exponentials are small (rather than large), which means we can expand the denominator as a power series. \begin{align} \langle n\rangle &= e^{-\beta\hbar\omega}\left(1 + e^{-\beta\hbar\omega} + \cdots\right) \\ &\approx e^{-\beta\hbar\omega} + e^{-2\beta\hbar\omega} \end{align} Once again, I kept one more term than is absolutely needed. Clearly at low temperature we have a very low number of quanta, which should be no shock. I hope you all expected that the system would be in the ground state at very low temperature.
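
Both limits are easy to confirm numerically; the sketch below (tolerances are my own) checks the high-temperature expansion \(\frac{kT}{\hbar\omega}-\tfrac12\) and the low-temperature form \(e^{-\beta\hbar\omega}+e^{-2\beta\hbar\omega}\):

```python
import math

def n_planck(x):
    """<n> = 1/(e^x - 1), with x = beta*hbar*omega."""
    return 1 / (math.exp(x) - 1)

# High temperature, x << 1: <n> ~ 1/x - 1/2, with error of order x.
x_hot = 0.01
print(n_planck(x_hot) - (1 / x_hot - 0.5))

# Low temperature, x >> 1: <n> ~ e^{-x} + e^{-2x}, with error of order e^{-3x}.
x_cold = 10.0
print(n_planck(x_cold) - (math.exp(-x_cold) + math.exp(-2 * x_cold)))
```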

Summing over microstates

I realized that we haven't spent much time talking about how to sum over microstates. Once you “get it,” summing over microstates is very easy. Unfortunately, that makes it easy to forget that it requires teaching, and I have a tendency to skim over this summation.

A nice example of this was the second homework, which involved the paramagnet again. You needed to find the partition function for \(N\) dipoles. After spending a week working with multiplicities, it would be very natural to take the \begin{align} Z \equiv \sum_\mu e^{-\beta E_\mu} \end{align} and think of the \(\mu\) as having something to do with spin excess, and to think that this sum should involve multiplicities. You can write a solution using multiplicities and summing over all possible energies, but that is the hard way. The easy way only looks easy once you know how to do it. The easy way involves literally summing over every possible sequence of spins. \begin{align} Z &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1}e^{-\beta E(s_1,s_2,\cdots,s_N)} \end{align} This may look messy, but things simplify when we consider the actual energy (unless we try to simplify that by expressing it in terms of \(N_\uparrow\) or the spin excess). \begin{align} E(s_1,s_2,\cdots,s_N) &= -s_1mB - s_2mB - \cdots - s_NmB \end{align} Now this may look pretty nasty, but it's actually beautiful, because each \(s_i\) has a separate term that is added together, which means that the sum separates! I'll use fewer words for a bit... \begin{align} Z &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta (s_1mB + s_2mB + \cdots + s_NmB)} \\ &= \sum_{s_1=\pm1}\sum_{s_2=\pm1}\cdots\sum_{s_N=\pm1} e^{\beta s_1mB}e^{\beta s_2mB}\cdots \\ &= \sum_{s_1=\pm1}e^{\beta s_1mB}\sum_{s_2=\pm1}e^{\beta s_2mB}\cdots \sum_{s_N=\pm1}e^{\beta s_NmB} \\ &= \left(\sum_{s_1=\pm1}e^{\beta s_1mB}\right)\cdots \left(\sum_{s_N=\pm1}e^{\beta s_NmB}\right) \\ &= \left(\sum_{s=\pm1}e^{\beta s mB}\right)\cdots \left(\sum_{s=\pm1}e^{\beta smB}\right) \\ &= \left(\sum_{s=\pm1}e^{\beta s mB}\right)^N \end{align} The important steps above were

  1. Writing the sum over states as a nested sum over every quantum number of the system.
  2. Breaking the exponential into a product, which we can do because the energy is a sum of terms each of which involve just one quantum number.
  3. Doing each sum separately, and finding the result as the product of all those sums.

Note that the final result here is a free energy that is just \(N\) times the free energy for a system consisting of a single spin. And thus we could alternatively do our computation for a system with a single spin, and then multiply everything that is extensive by \(N\). The latter is a valid shortcut, but you should know why it gives a correct answer, and when (as when we have identical particles) you could run into trouble.
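
For small \(N\) the factorization can be verified by brute force; this sketch (function names are mine, energies in units of \(mB\)) literally sums over all \(2^N\) spin sequences and compares with \(\left(2\cosh\beta mB\right)^N\):

```python
import math
from itertools import product

def Z_brute(N, beta_mB):
    """Literal sum over all 2^N spin sequences (feasible only for small N)."""
    total = 0.0
    for spins in product((+1, -1), repeat=N):
        E = -sum(spins)          # E/(mB) = -(s_1 + s_2 + ... + s_N)
        total += math.exp(-beta_mB * E)
    return total

def Z_factored(N, beta_mB):
    """The factored result, (2 cosh(beta m B))^N."""
    return (2 * math.cosh(beta_mB)) ** N

print(Z_brute(8, 0.3), Z_factored(8, 0.3))  # identical up to roundoff
```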

Black body radiation

Researchers in 1858-1860 realized that materials emitted light in strict proportion to their ability to absorb it, and hence a perfectly black body would emit the most radiation when heated. Planck and others realized that we should be able to use thermal physics to predict the spectrum of black body radiation. A key idea was to recognize that the light itself should be in thermal equilibrium.

One example of a “black body” is a small hole in a large box. Any light that goes into the hole will bounce around so many times before finding its way back out that it will essentially all be absorbed. This leads to the idea of studying the radiation in a closed box in thermal equilibrium, which should match that of a black body.

Eigenstates of an empty box

So what are the properties of an empty box? Let's assume metal walls, and not worry too much about details of the boundary conditions, which shouldn't make much difference provided the box is pretty big. The reasoning is basically the same as for a particle in a 3D box: the waves must fit in the box. As for the particle in a box, we can choose either periodic boundary conditions or put nodes at the boundaries. I generally prefer periodic (which gives both positive and negative \(\vec k\)), rather than dealing with sine waves (which are superpositions of the above). A beautiful thing about periodic boundary conditions is that your set of \(\vec k\) vectors is independent of the Hamiltonian, so this looks very much like the single atom in a box we did last week. \begin{align} k_x &= n_x \frac{2\pi}{L} & n_x &= \text{any integer} \end{align} and similarly for \(k_y\) and \(k_z\), which gives us \begin{align} \omega(\vec k) &= c|\vec k| \\ \omega_{n_xn_yn_z} &= \frac{2\pi c}{L}\sqrt{n_x^2+n_y^2+n_z^2} \end{align} where now we need to be extra-careful to remember that in this expression \(n_x\) is not a number of photons, even though \(n\) is a number of photons. Fortunately, we will soon be done with our \(n_x\), once we finish summing. The possible energies of a single mode are those of a simple harmonic oscillator, so for each of the \(n_x\),\(n_y\),\(n_z\) triples there is a different quantum number \(n\), and an energy given by \begin{align} E_n &= n\hbar\omega \end{align} where technically there will also be a zero-point energy (like for the physical harmonic oscillator), but we won't want to include the zero point energy for a couple of reasons. Firstly, it can't be extracted from the vacuum, so including it would make it harder to reason about the quantity of radiation leaking from a hole. Secondly, the total zero point energy of the vacuum will be infinite, which makes it a nuisance.

In your homework, you will use a summation over all the normal modes to solve for the thermodynamic properties of the vacuum, and will show that \begin{align} F &= 8\pi \frac{V(kT)^4}{h^3c^3} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \\ &= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3} \\ &= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3} \end{align} provided the box is big enough that \(\frac{\hbar c}{LkT}\ll 1\). At first this looks freaky, because the free energy is always negative while you know that the energy is always positive. This just means that entropy is dominating the free energy.
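
The dimensionless integral here can be confirmed numerically to equal \(-\pi^4/45\approx-2.1646\); a minimal midpoint-rule sketch (the grid size and cutoff are my own choices):

```python
import math

def integral_log(npts=200000, cutoff=40.0):
    """Midpoint-rule estimate of the integral of ln(1 - e^{-xi}) xi^2
    from 0 to infinity, truncated at xi = cutoff."""
    h = cutoff / npts
    total = 0.0
    for i in range(npts):
        xi = (i + 0.5) * h
        total += math.log(1 - math.exp(-xi)) * xi**2 * h
    return total

print(integral_log(), -math.pi**4 / 45)  # both ≈ -2.1646
```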

The entropy is given by \begin{align} S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3 \\ &= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3 \end{align} which is a comfortingly positive quantity, and the energy is \begin{align} \frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3} \\ &= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3} \end{align} which is also nicely positive.

Note also that these quantities are also nicely extensive, as you would hope.

Knowing the thermodynamic properties of the vacuum is handy, but doesn't tell us yet about the properties of a black body. To do that we'll have to figure out how much of this radiation will escape through a little hole.

Stefan-Boltzmann law of radiation

To find the radiation power, we need to do a couple of things. One is to multiply the energy per volume by the speed of light, which would tell us the energy flux through a hole if all the energy were passing straight through that hole. However, the radiation is travelling equally in all directions, so we will also need an additional dimensionless geometrical factor to find the actual magnitude of the power.

If we define the velocity as \(c\hat k\) where \(c\) is the speed of light and \(\hat k\) is its direction, the power flux (or intensity) in the \(\hat z\) direction will be given by the energy density times the average value of the positive \(\hat z\) component of the velocity. When \(v_z<0\), the light doesn't come out the hole at all. This average can be written as \begin{align} I &= \frac{U}{V}\frac{\int_0^{2\pi}\int_0^{\frac{\pi}{2}}v_z\sin\theta d\theta d\phi}{4\pi} \\ &= \frac{U}{V}\frac{\int_0^{2\pi}\int_0^{\frac{\pi}{2}}c\cos\theta\sin\theta d\theta d\phi}{4\pi} \\ &= \frac{U}{V}\frac{c}2\int_0^{1}\xi d\xi \\ &= \frac{U}{V}\frac{c}{4} \\ &= -6\pi \frac{(kT)^4}{h^3c^2} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \end{align} This is the famous Stefan-Boltzmann law of radiation. Since the constants are mostly a nuisance, they are combined into the Stefan-Boltzmann constant: \begin{align} I &\equiv \text{power radiated per area of surface} \\ &= \sigma_B T^4 \\ \sigma_B &= -6\pi \frac{k_B^4}{h^3c^2} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \end{align}
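
Plugging in the dimensionless integral \(-\pi^4/45\) gives \(\sigma_B = \frac{2\pi^5}{15}\frac{k_B^4}{h^3c^2}\); a quick numerical check with the SI constants:

```python
import math

# CODATA (exactly defined) SI values:
k_B = 1.380649e-23   # J/K
h = 6.62607015e-34   # J s
c = 2.99792458e8     # m/s

# sigma_B = -6 pi (k_B^4/(h^3 c^2)) * (-pi^4/45) = 2 pi^5 k_B^4 / (15 h^3 c^2)
sigma_B = 2 * math.pi**5 * k_B**4 / (15 * h**3 * c**2)
print(sigma_B)  # ≈ 5.670e-8 W m^-2 K^-4
```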

Side note
Why is this \(T^4\) law important for incandescent light bulbs? The resistivity of a metal increases with temperature. In a light bulb, if you have a bit of wire that is a bit hotter than the rest, its resistivity will be higher, and that will cause it to have more Joule heating, and get hotter. If nothing else came into play, we'd have a feedback loop, and the hot bit would just keep getting hotter, and having higher resistivity, until it vaporized. Boom. Fortunately, the power of light radiated from that hot bit of wire will increase faster than its resistivity goes up (because \(T^4\) is serious!), preventing the death spiral, and saving the day!

Planck radiation law

Having found the total power radiated, a fair question is how much of that power is at each possible frequency. This defines the black body spectrum. Each mode has an occupancy \(\langle n\rangle\) that is the same as that of the harmonic oscillator from Monday. But the power radiated also depends on how many modes there are at a given frequency. This may be more clear if we solve for the internal energy in a different way, remembering that each \((n_x,n_y,n_z)\) triple corresponds to two modes, one for each polarization of the light: \begin{align} U &= \sum_j^{\text{modes}} \langle n_j\rangle \hbar\omega_j \\ &= \sum_j^{\text{modes}} \frac{\hbar\omega_j}{e^{\beta\hbar\omega_j}-1} \\ &\approx 2\iiint_{-\infty}^{\infty} \frac{\frac{hc}{L}\sqrt{n_x^2+n_y^2+n_z^2}}{e^{\beta\frac{\hbar 2\pi c}{L}\sqrt{n_x^2+n_y^2+n_z^2}}-1}dn_xdn_ydn_z \\ &= 2\int_0^{\infty}\frac{\frac{hc}{L}n}{e^{\beta\frac{hc}{L}n}-1}4\pi n^2dn \end{align} Now we can transform the integral from \(n\) to \(\omega\) via \(\omega_n = c 2\pi n/L\). \begin{align} U &= 2\left(\frac{L}{c 2\pi}\right)^3 \int_0^{\infty}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \\ &= \frac{V}{\pi^2 c^3} \int_0^{\infty}\frac{\hbar\omega^3}{e^{\beta\hbar\omega}-1}d\omega \end{align} At this point we can identify the internal energy per frequency as \begin{align} \frac{dU}{d\omega} &= \frac{V\omega^2}{\pi^2 c^3} \frac{\hbar\omega}{e^{\beta\hbar\omega}-1} \end{align} which is proportional to the spectral density of power. At low frequency the occupancy approaches \(\frac{kT}{\hbar\omega}\), so the spectrum is proportional to \(\omega^2\), while at high frequency it dies off exponentially.
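
The shape of the spectrum is the dimensionless function \(x^3/(e^x-1)\) with \(x=\beta\hbar\omega\); a crude numerical scan (not a serious optimizer; step and range chosen by hand) locates its peak at \(x\approx2.82\), which is Wien's displacement law:

```python
import math

def planck_shape(x):
    """Dimensionless spectral shape x^3/(e^x - 1), with x = beta*hbar*omega."""
    return x**3 / (math.exp(x) - 1)

# Scan a grid of x values for the maximum of the spectral shape.
x_peak = max((i * 1e-4 for i in range(1, 100000)), key=planck_shape)
print(x_peak)  # close to 2.8214, i.e. hbar*omega_peak ≈ 2.82 kT
```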

Skipped:

Kirchhoff law and Surface temperature

Low temperature heat capacity

Much of the same physics that we have considered here applies also to solids. One of the mysteries of classical physics was why the heat capacity of an insulator drops at low temperatures. Experimentally, it was found that \begin{align} C_p \propto T^3 \end{align} at low temperatures, which was pretty mysterious. Why would the heat capacity of a solid drop off like this? Classically, by the equipartition theorem, the heat capacity should be independent of temperature provided all the terms in the energy are quadratic.

Einstein model

Einstein proposed that we view a solid as a whole bunch of harmonic oscillators, all with the same frequency (for simplicity). In this case, the internal energy is given by \begin{align} U &= \frac{N\hbar\omega}{e^{\beta\hbar\omega}-1} + \frac12N\hbar\omega \end{align}

Small groups
What is the heat capacity at low temperatures according to this picture? Roughly sketch this by hand.
Answer
At low temperature, the exponential on the bottom is crazy big, so we can see that the internal energy will be \begin{align} U &\approx N\hbar\omega e^{-\beta\hbar\omega} \\ C_V &= \left(\frac{\partial U}{\partial T}\right)_V \\ &\approx Nk_B \left(\frac{\hbar\omega}{kT}\right)^2e^{-\beta\hbar\omega} \end{align} This dies off very quickly as the temperature becomes \(kT\ll \hbar\omega\). We call it “exponential” scaling, but it's exponential in \(\beta\) not \(T\).
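
Here is a small numerical sketch (my own units, \(\hbar\omega=k_B=N=1\)) comparing the exact heat capacity, obtained by numerically differentiating \(U\), with the low-temperature approximation:

```python
import math

def U_einstein(T):
    """Energy per oscillator in units hbar*omega = k_B = N = 1,
    dropping the constant zero-point term."""
    return 1 / (math.exp(1 / T) - 1)

def C_exact(T, dT=1e-6):
    """Heat capacity from a centered numerical derivative of U."""
    return (U_einstein(T + dT) - U_einstein(T - dT)) / (2 * dT)

def C_lowT(T):
    """The low-temperature approximation derived above."""
    return (1 / T) ** 2 * math.exp(-1 / T)

# At kT = 0.1 hbar*omega the approximation is already excellent, while at
# high temperature C approaches the equipartition value of 1 in these units.
print(C_exact(0.1), C_lowT(0.1), C_exact(20.0))
```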

This happens (we see exponential low-T scaling in the heat capacity) whenever we have a finite energy gap between the ground state and the first excited state.

Debye theory

For smallish \(\vec k\), the frequency of a phonon is proportional to \(|\vec k|\), with the speed of sound as the proportionality constant. We won't have time for this in class, but the reasoning is very similar.

The key thing Debye realized was that unlike light, the phonon \(\vec k\) has a maximum value, which is determined by the number of atoms per unit volume, since one wavelength of sound must include at least a couple of atoms. This means that when we compute the internal energy of a crystal full of phonons, there is a maximum \(k\) value, and thus a maximum value for \(\sqrt{n_x^2+n_y^2+n_z^2}\) and a maximum value for the frequency, which we call \(\omega_D\). The internal energy is thus \begin{align} U &= \left(\frac{L}{v_s 2\pi}\right)^3 \int_0^{\omega_D}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \\ &= \frac{V}{v_s^3 (2\pi)^3} \int_0^{\omega_D}\frac{\hbar\omega}{e^{\beta\hbar\omega}-1}4\pi \omega^2d\omega \end{align}

At low temperature, \(\beta\hbar\omega_D\gg1\), so we can extend the upper limit of the integral to infinity with negligible error. The integral then becomes a temperature-independent constant, so \(U\propto T^4\) and the heat capacity is proportional to \(T^3\), just as is observed experimentally.
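
The \(T^3\) scaling at low temperature can be checked numerically; this sketch (arbitrary overall prefactor, \(\theta_D=1\) in my own units) shows \(C/T^3\) approaching a constant well below the Debye temperature:

```python
import math

def U_debye(T, theta_D=1.0, npts=20000):
    """Internal energy up to an arbitrary prefactor:
    U ∝ T^4 times the integral of x^3/(e^x - 1) from 0 to theta_D/T,
    evaluated by the midpoint rule."""
    h = (theta_D / T) / npts
    total = 0.0
    for i in range(npts):
        x = (i + 0.5) * h
        total += x**3 / (math.exp(x) - 1) * h
    return T**4 * total

def C_debye(T, dT=1e-6):
    """Heat capacity from a centered numerical derivative of U."""
    return (U_debye(T + dT) - U_debye(T - dT)) / (2 * dT)

# Well below theta_D, C/T^3 is the same constant at different temperatures:
print(C_debye(0.02) / 0.02**3, C_debye(0.04) / 0.04**3)
```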

Homework: Radiation in an empty box

    As discussed in class, we can consider a black body as a large box with a small hole in it. If we treat the large box as a metal cube with side length \(L\) and metal walls, the frequency of each normal mode will be given by: \begin{align} \omega_{n_xn_yn_z} &= \frac{\pi c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2} \end{align} where each of \(n_x\), \(n_y\), and \(n_z\) will have positive integer values. This simply comes from the fact that a half wavelength must fit in the box. There is an additional quantum number for polarization, which has two possible values, but does not affect the frequency. Note that in this problem I'm using different boundary conditions from what I use in class. It is worth learning to work with either set of quantum numbers. Each normal mode is a harmonic oscillator, with energy eigenstates \(E_n = n\hbar\omega\) where we will not include the zero-point energy \(\frac12\hbar\omega\), since that energy cannot be extracted from the box. (See the Casimir effect for an example where the zero point energy of photon modes does have an effect.)

    Note
    This is a slight approximation, as the boundary conditions for light are a bit more complicated. However, for large \(n\) values this gives the correct result.

    1. Show that the free energy is given by \begin{align} F &= 8\pi \frac{V(kT)^4}{h^3c^3} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \\ &= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3} \\ &= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3} \end{align} provided the box is big enough that \(\frac{\hbar c}{LkT}\ll 1\). Note that you may end up with a slightly different dimensionless integral that numerically evaluates to the same result, which would be fine. I also do not expect you to solve this definite integral analytically; a numerical confirmation is fine. However, you must manipulate your integral until it is dimensionless and has all the dimensionful quantities removed from it!

    2. Show that the entropy of this box full of photons at temperature \(T\) is \begin{align} S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3 \\ &= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3 \end{align}

    3. Show that the internal energy of this box full of photons at temperature \(T\) is \begin{align} \frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3} \\ &= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3} \end{align}

Homework: Free energy of a harmonic oscillator

    A one-dimensional harmonic oscillator has an infinite series of equally spaced energy states, with \(\varepsilon_n = n\hbar\omega\), where \(n\) is an integer \(\ge 0\), and \(\omega\) is the classical frequency of the oscillator. We have chosen the zero of energy at the state \(n=0\), which we can get away with here, though it is not the true zero of energy! To find the true energy we would have to add \(\frac12\hbar\omega\) for each oscillator.

    1. Show that for a harmonic oscillator the free energy is \begin{equation} F = k_BT\log\left(1 - e^{-\frac{\hbar\omega}{k_BT}}\right) \end{equation} Note that at high temperatures such that \(k_BT\gg\hbar\omega\) we may expand the argument of the logarithm to obtain \(F\approx k_BT\log\left(\frac{\hbar\omega}{kT}\right)\).

    2. From the free energy above, show that the entropy is \begin{equation} \frac{S}{k_B} = \frac{\frac{\hbar\omega}{kT}}{e^{\frac{\hbar\omega}{kT}}-1} - \log\left(1-e^{-\frac{\hbar\omega}{kT}}\right) \end{equation}

      This entropy and the heat capacity are shown in the nearby figures, “Entropy of a simple harmonic oscillator” and “Heat capacity of a simple harmonic oscillator.”

Homework: Quantum harmonic oscillator

    1. Find the entropy of a set of \(N\) oscillators of frequency \(\omega\) as a function of the total quantum number \(n\). Use the multiplicity function: \begin{equation} g(N,n) = \frac{(N+n-1)!}{n!(N-1)!} \end{equation} and assume that \(N\gg 1\). This means you can make the Stirling approximation that \(\log N! \approx N\log N - N\). It also means that \(N-1 \approx N\).

    2. Let \(U\) denote the total energy \(n\hbar\omega\) of the oscillators. Express the entropy as \(S(U,N)\). Show that the total energy at temperature \(T\) is \begin{equation} U = \frac{N\hbar\omega}{e^{\frac{\hbar\omega}{kT}}-1} \end{equation} This is the Planck result found the hard way. We will get to the easy way soon, and you will never again need to work with a multiplicity function like this.

Homework: Pressure of thermal radiation

    (modified from K&K 4.6) We discussed in class that \begin{align} p &= -\left(\frac{\partial F}{\partial V}\right)_T \end{align} Use this relationship to show that

    1. \begin{align} p &= -\sum_j \langle n_j\rangle\hbar \left(\frac{d\omega_j}{dV}\right), \end{align} where \(\langle n_j\rangle\) is the number of photons in the mode \(j\);

    2. Solve for the relationship between pressure and internal energy.



