These lecture notes from week 7 of Thermal and Statistical Physics apply the grand canonical ensemble to the Fermi and Bose ideal gases. They include a few small group activities.
Week 7
Reading: K&K 7, Schroeder 7
This week in 2019, I have been less on top of things in terms of putting up notes. I'm adding here bits of what we did in class, and leaving below the more complete notes from the previous year, which I did not quite follow.
I began by having you (the students) solve for \(N(\mu,T)\) for the Fermi or Bose gas using the respective distribution function. This was a pain, and was meant to lay the groundwork for the density of states, which is the main topic this week. We ended with an equation like \begin{align} N &= (2s+1)\frac{\pi}{2}\int_{0}^\infty f\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2} n^2\right)n^2dn \end{align} Sadly, a change of variables didn't make this a beautiful dimensionless integral, because the presence of \(\mu\) in the distribution function messed things up. Nevertheless, we can simplify this integral with a change of variables in a way that motivates the density of states: \begin{align} \varepsilon &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} n^2 & d\varepsilon &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} 2n dn \end{align} This doesn't look quite as pretty as some substitutions we have made in the past, but it has the advantage that \(\varepsilon\) is something we can reason about physically, since it is just the orbital energy. \begin{align} N &= (2s+1)\frac{\pi}{2}\int_{0}^\infty f(\varepsilon)\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac12} \sqrt{\varepsilon}\frac12\frac{2mL^2}{\hbar^2\pi^2}d\varepsilon \\ &= \int_{0}^\infty f(\varepsilon){\color{red}{(2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32} \sqrt{\varepsilon}}}d\varepsilon \end{align} This transformation from quantum number to energy is one that we could do for any set of orbital energies: for a relativistic particle (see one of your homeworks), for phonons in matter, for light, for particles confined in two dimensions, for real crystals... anything. That makes it worthwhile to give this special “Jacobian” a name: the density of states.
Thus the density of states for an ideal gas is: \begin{align} \mathcal{D}(\varepsilon) &= (2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32} \sqrt{\varepsilon} \\ &= (2s+1)\frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32} \sqrt{\varepsilon} \end{align} You can see that the density of states is an extensive quantity, since it is proportional to \(V\). This is necessary if you are going to use the density of states to predict extensive properties. However, it is also not uncommon to refer to \(\frac{\mathcal{D}(\varepsilon)}{V}\) as the density of states, particularly when describing a material, since the intensive ratio is independent of the system size.
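As a sanity check on this formula (my addition, not in the original notes), we can count orbitals by brute force. Working in units where \(\frac{\hbar^2\pi^2}{2mL^2}=1\), the orbital energies are just \(n_x^2+n_y^2+n_z^2\), and the number of orbitals with energy below \(E\) should approach \(\int_0^E \mathcal{D}(\varepsilon)d\varepsilon\) as \(E\) grows. A minimal sketch, assuming spin \(\frac12\) so that \(2s+1=2\):

```python
import math

def count_orbitals(E):
    """Count spin-1/2 orbitals with energy nx^2 + ny^2 + nz^2 <= E,
    where nx, ny, nz range over positive integers (hard-wall box)."""
    nmax = math.isqrt(int(E))
    count = 0
    for nx in range(1, nmax + 1):
        for ny in range(1, nmax + 1):
            for nz in range(1, nmax + 1):
                if nx*nx + ny*ny + nz*nz <= E:
                    count += 2  # two spin states per (nx, ny, nz)
    return count

def integrated_dos(E):
    """Integral of D(eps) = 2 * (pi/4) * sqrt(eps) from 0 to E,
    in units where hbar^2 pi^2 / (2 m L^2) = 1."""
    return 2 * (math.pi / 4) * (2 / 3) * E**1.5

E = 2500.0  # quantum numbers up to n ~ 50
ratio = count_orbitals(E) / integrated_dos(E)
print(ratio)  # approaches 1 from below as E grows (surface corrections remain)
```

The ratio comes out a few percent below one at this modest cutoff, because the continuum approximation misses surface terms that shrink relative to the volume term as \(E\) increases.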
In homework you will solve for the density of states for a 2D gas. This would be suitable for most planar sheet materials. A 2D relativistic gas would give a density of states applicable to graphene.
To solve for properties of any system of non-interacting particles once we have the density of states, we just need to evaluate a single integral over energy, weighting the quantity of interest by the distribution function and the density of states.
This week we will look at Fermi and Bose gases. These consist of noninteracting fermions or bosons. There is no point studying these at high temperatures and/or low densities, since that is just where they are identical to the classical ideal gas, which we covered last week. So we'll be low-temperature all week. What happens at low temperatures, and where do we see these gases in real life?
The Fermi gas is most widely seen in metals and semiconductors. In both cases, the electrons (or possibly holes) can be sufficiently dense that “low temperature” corresponds to room temperature or even far above. Now, you might wonder in what insane world it makes sense to think of the electrons in a metal as “noninteracting.” If so, you could read my little note about “actual electrons” towards the end of the section on the Fermi-Dirac distribution. In any case, it is reasonable and useful to treat metals as a non-interacting Fermi gas. Room temperature is pretty low, as it turns out, from the perspective of the electrons in a metal, and it's not hard to get things colder than room temperature.
Bose gases at effectively low temperatures are less commonly found, and thus in some ways are more cool. Partly this is because there are fewer boson particles. You need to look at atoms with integer spin, such as \(^4\text{He}\). The “new” quantum thing that Bose gases do is to condense at low temperatures. This condensate is similar to a superfluid, but not the same thing. It is also analogous to superconductivity, but again, not the same thing. The first actual Bose-Einstein condensate wasn't formed until 1995, out of rubidium atoms at 170 nanokelvins. So “low temperature” in this case was actually pretty chilly.
We have found ourselves often writing summations over all the orbitals, such as \begin{align} N &= \sum_{n_x} \sum_{n_y} \sum_{n_z} f(\varepsilon_{n_xn_yn_z}) \\ &= \iiint f(\varepsilon_{n_xn_yn_z}) dn_xdn_ydn_z \\ &= \int_0^{\infty} f(\varepsilon(n)) \frac{\pi}{2} n^2 dn \end{align} where the factor of \(\frac{\pi}{2}=\frac{4\pi}{8}\) arises because each \(n_i\) runs over positive integers only, so we integrate over just one octant. Then we make the integral dimensionless, etc. This can be tedious to do over and over again. In the classical limit we can often use a derivative trick to write an answer as a derivative of a sum we have already solved, but that doesn't work at low temperatures (i.e. the quantum limit). There is another approach, which is to solve for the density of states and then use that. (Note that while it is called the “density of states,” it is more accurately described as a density of orbitals, since it refers to the solutions to the one-particle problem.)
The density of states is the number of orbitals per unit energy at a given energy. So basically it does two things for us. First, it turns the 3D integral into a 1D integral. Secondly, it converts from an “\(n\)” integral into an energy integral. This isn't as nice as a dimensionless integral, but we can still do that ourselves later.
We use a density of states by converting \begin{align} \sum_{n_x}\sum_{n_y}\sum_{n_z} F(\varepsilon_{n_xn_yn_z}) &= \int d\varepsilon F(\varepsilon) \mathcal{D}(\varepsilon) \end{align} where \(F\) is any function of the orbital energy \(\varepsilon\). You can see why it is convenient, particularly because the density of states is often itself a very simple function.
Kittel gives a method for finding the density of states which involves first integrating to find the number of states below a given energy \(\varepsilon\), and then taking a derivative of that. This is a perfectly fine approach, but I think a simpler method involves just using a Dirac \(\delta\)-function. \begin{align} \mathcal{D}(\varepsilon) &= \sum_i^{\text{all orbitals}} \delta(\varepsilon_i-\varepsilon) \end{align} where you do need to be careful to turn the summation correctly into an integral before making use of the \(\delta\)-function.
Solve for the density of states of an electron gas (or of any other spin-\(\frac12\) gas, which will have the same expression). You need to know that \begin{align} \varepsilon_{n_xn_yn_z} &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} (n_x^2+n_y^2+n_z^2) \end{align} where \(n_x\) and the others range from \(1\) to \(\infty\). This corresponds to hard-wall boundary conditions, since we're putting a half-wavelength in the box. You should also keep in mind that each combination of \(n_x\), \(n_y\), and \(n_z\) will correspond to two orbitals, one for each possible spin state.
I should also perhaps warn you that when integrating \(\delta\)-functions you will always want to perform a change of variables such that the integration variable is present inside the \(\delta\)-function and is not multiplied by anything.
Once we have the density of states, we can solve for various interesting properties of quantum (or classical) gases. The easiest thing to do is a Fermi gas at zero temperature, since the Fermi-Dirac function turns into a step function in that limit. We can start by solving for the Fermi energy of a Fermi gas, which is equal to the chemical potential when the temperature is zero. We do this by solving for the number assuming we know \(\varepsilon_F\), and then backwards solving for \(\varepsilon_F\). I will do a couple of extra steps here to remind you how this relates to what we did last week. \begin{align} N &= \sum_i^{\text{all orbitals}} f(\varepsilon_i) \\ &= \sum_i^{\text{all orbitals}} \frac1{e^{\beta(\varepsilon_i-\mu)}+1} \\ &= \int_0^\infty \mathcal{D}(\varepsilon) \frac1{e^{\beta(\varepsilon-\mu)}+1}d\varepsilon \\ &= \int_0^{\varepsilon_F} \mathcal{D}(\varepsilon)d\varepsilon \end{align} In the last step, I made the assumption that \(T=0\), so I could turn the Fermi-Dirac function into a step function, which simply changes the bounds of the integral. It is all right to start with this assumption, when doing computations at zero temperature. Now I'll put in the density of states. \begin{align} N &= \int_0^{\varepsilon_F} \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12}d\varepsilon \\ &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \int_0^{\varepsilon_F} \varepsilon^{\frac12}d\varepsilon \\ &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \frac23 \varepsilon_F^{\frac32} \end{align} Now we can just solve for the Fermi energy! \begin{align} \varepsilon_F &= \left(\frac{N}{V}3\pi^2\left(\frac{\hbar^2}{2m}\right)^{\frac32}\right)^{\frac23} \\ &= \frac{\hbar^2}{2m}\left(\frac{N}{V}3\pi^2\right)^{\frac23} \end{align} This is the energy of the highest occupied orbital in the gas, when the temperature is zero.
As you will see, many of the properties of a metal (which is the Fermi gas that you use on an everyday basis) depend fundamentally on the Fermi energy. For this reason, we also like to define other properties of electrons at the Fermi energy: momentum, velocity (technically speed, but it is called Fermi velocity), and even “temperature”. \begin{align} k_F &= \left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ p_F &= \hbar k_F \\ &= \hbar\left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ v_F &= \frac{p_F}{m} \\ &= \frac{\hbar}{m}\left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ T_F &= \frac{\varepsilon_F}{k_B} \end{align} The text contains a table of properties of metals at the Fermi energy for a number of simple metals. I don't expect you to remember them, but it's worth having them down somewhere so you can check the reasonableness of an answer from time to time. Basically, they all come down to \begin{align} \varepsilon_F &\sim 4\text{eV} \quad \text{(with }\sim\times2\text{ variation)} \\ k_F &\sim 10^{8}\text{cm}^{-1} \\ v_F &\sim 10^{8}\text{cm s}^{-1} \end{align}
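To make those order-of-magnitude values concrete, here is a quick numerical check (my addition, not from the text), taking copper as an example with an assumed conduction-electron density of roughly \(8.5\times10^{28}\,\text{m}^{-3}\):

```python
import math

# SI constants
hbar = 1.0546e-34  # J s
m_e = 9.109e-31    # kg
k_B = 1.381e-23    # J/K
eV = 1.602e-19     # J

n = 8.5e28  # conduction electrons per m^3 (roughly copper; an assumed value)

k_F = (3 * math.pi**2 * n)**(1/3)     # Fermi wavevector, 1/m
eps_F = hbar**2 * k_F**2 / (2 * m_e)  # Fermi energy, J
v_F = hbar * k_F / m_e                # Fermi velocity, m/s
T_F = eps_F / k_B                     # Fermi temperature, K

print(eps_F / eV)  # ~7 eV, within the "~4 eV with ~x2 variation" range
print(k_F / 100)   # ~1.4e8 cm^-1
print(v_F * 100)   # ~1.6e8 cm/s
print(T_F)         # ~8e4 K, so room temperature really is "low"
```

The Fermi temperature of tens of thousands of kelvin is the quantitative reason we can treat a room-temperature metal as a cold Fermi gas.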
Before we move on, it is worth showing you how we can simplify the density of states now that we know what the Fermi energy is: \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12} \\ &= \frac32 N \varepsilon_F^{-\frac32} \varepsilon^{\frac12} \end{align} There is nothing particularly deep here, but this is somewhat more compact, and often the factors of \(\varepsilon_F\) will end up canceling out.
The Fermi gas is more exciting (and more... thermal?) when the temperature is not precisely zero. Let's start with the heat capacity at low temperatures, which is one area where metals inherently differ from semiconductors and insulators.
We are looking at a metal with \(n\) electrons per unit volume, at temperature \(T\), where \(kT\ll \varepsilon_F\). We are looking to find out the heat capacity \(C_V\).
We can find \(U\) by integrating with the density of states and the Fermi-Dirac distribution. This is a new variant of our usual \begin{align} U &= \sum_i^{\text{all microstates}} P_i E_i \end{align} In this case, we will instead sum over all orbitals the energy contribution of each orbital, again in effect treating each orbital as a separate system. \begin{align} U &= \sum_i^{\text{all orbitals}} f(\varepsilon_i) \varepsilon_i \\ &= \int \varepsilon f(\varepsilon) \mathcal{D}(\varepsilon)d\varepsilon \end{align} Remember that for an electron gas \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12} \\ &= \frac32 \frac{N}{\varepsilon_F^\frac32}\sqrt{\varepsilon} \end{align} Sometimes one or the other of these forms may be more convenient.
We can begin with a hand-waving version of solving for the heat capacity. We look at the Fermi-Dirac function at both finite and zero temperature, and we can note that the red and blue shaded areas, representing the probability of orbitals below \(\varepsilon_F\) being unoccupied and the probability of excited orbitals above \(\varepsilon_F\) being occupied, are equal (this was your homework).
To find the energy, of course, we need the Fermi-Dirac function times the density of states. You might think that the red and blue areas will now be unequal, since we are multiplying the blue region by a larger density of states than the red region. However, provided the number of electrons is fixed (as is usual), the chemical potential must shift such that the two areas are equal.
So how do we find the heat capacity? We can work out a rough equation for the internal energy change (relative to zero), and then take a derivative. Now the width of the red and blue regions is \(\sim kT\). We know this from your first homework problem last week, where you showed that the slope at the chemical potential is \(\frac1{4kT}\). A shallower slope (higher temperature) means a proportionately wider region that is neither zero nor one. \begin{align} U(T)-U(0) &\sim \left(\text{\it \# electrons excited}\right)\left(\Delta\text{\it energy}\right) \\ &\sim\left(\mathcal{D}(\varepsilon_F)kT\right) \left(kT\right) \\ C_V &= \left(\frac{\partial U}{\partial T}\right)_{N,V} \\ &\sim \mathcal{D}(\varepsilon_F)k^2T \end{align} which tells us that the heat capacity vanishes at low temperatures, and is proportional to \(T\), which is a stark contrast to insulators, for which \(C_V\propto T^3\) as predicted by the Debye model.
To find the heat capacity more carefully, we could set up this integral, noting that the Fermi-Dirac function is the only place where temperature dependence arises: \begin{align} C_V &= \left(\frac{\partial U}{\partial T}\right)_{N,V} \\ &= \int_0^\infty \varepsilon \mathcal{D}(\varepsilon)\frac{\partial f}{\partial T} d\varepsilon \\ &= \int_0^\infty \varepsilon \mathcal{D}(\varepsilon) \frac{(\varepsilon-\varepsilon_F)e^{\beta(\varepsilon-\varepsilon_F)}}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right)^2} \frac1{kT^2} d\varepsilon \end{align} where in the last step I assumed that the chemical potential would not be changing significantly over our (small) temperature range. An interesting question is what the shape of \(\frac{\partial f}{\partial T}\) is. The squared exponential in the denominator causes it to drop exponentially at high energies where \(\varepsilon-\varepsilon_F\gg kT\), while the exponential in the numerator causes it to drop at low energies where \(\varepsilon_F-\varepsilon\gg kT\). This makes it sharply peaked, provided \(kT\ll \varepsilon_F\), which justifies evaluating the density of states at the Fermi energy. For the same reason we can also extend the bounds of the integral to all energies \begin{align} C_V &\approx \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon \frac{(\varepsilon-\varepsilon_F)e^{\beta(\varepsilon-\varepsilon_F)}}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right)^2} d\varepsilon \end{align} Now we have an integral that we would love to make dimensionless, but which has an annoying \(\varepsilon\) that does not have \(\varepsilon_F\) subtracted from it. Let's look at evenness and oddness. The ratio does not look either very even or very odd, but we can make it do so by multiplying the top and bottom by \(e^{-\beta(\varepsilon-\varepsilon_F)}\).
\begin{align} C_V &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon(\varepsilon-\varepsilon_F) \frac{1}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right) \left(e^{-\beta(\varepsilon-\varepsilon_F)}+1\right)} d\varepsilon \\ &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon(\varepsilon-\varepsilon_F) \frac{1}{ e^{\beta(\varepsilon-\varepsilon_F)} +e^{-\beta(\varepsilon-\varepsilon_F)}+2} d\varepsilon \end{align} Now we can do a change of variables \begin{align} \xi &= \beta(\varepsilon-\varepsilon_F) & d\xi &= \beta d\varepsilon \end{align} This makes our integral almost dimensionless: \begin{align} C_V &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \left(kT\xi + \cancelto{\text{odd}}{\varepsilon_F}\quad\quad\right) (kT\xi) \frac{1}{ e^\xi +e^{-\xi}+2} kT d\xi \\ &= \mathcal{D}(\varepsilon_F)k^2T \int_{-\infty}^\infty \frac{\xi^2}{ e^\xi +e^{-\xi}+2} d\xi \end{align} So here is our answer, expressed in terms of a dimensionless integral. Wolfram alpha tells me this integral is \(\frac{\pi^2}{3}\).
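We can also check that dimensionless integral numerically ourselves (this check is my addition). A composite Simpson's rule over a generous finite range suffices, since the integrand dies off exponentially:

```python
import math

def integrand(xi):
    # The dimensionless integrand xi^2 / (e^xi + e^-xi + 2)
    return xi**2 / (math.exp(xi) + math.exp(-xi) + 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integrand is even, so integrate 0..40 and double; the tail
# beyond xi = 40 is suppressed by ~e^-40 and is negligible.
I = 2 * simpson(integrand, 0.0, 40.0, 4000)
print(I)               # ~3.2899
print(math.pi**2 / 3)  # ~3.2899, confirming C_V = (pi^2/3) D(eps_F) k^2 T
```

With this coefficient, the low-temperature electronic heat capacity is \(C_V = \frac{\pi^2}{3}\mathcal{D}(\varepsilon_F)k^2T\).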
Now we turn to the Bose gas, whose orbitals are occupied according to the Bose-Einstein distribution \begin{align} f(\varepsilon) &= \frac1{e^{\beta(\varepsilon-\mu)}-1} \end{align}
The divergence of the Bose-Einstein distribution as \(\varepsilon\rightarrow\mu\) means that \(\mu\) must always be less than the minimum orbital energy, i.e. \(\mu<0\). As before, the total number is given by \begin{align} N &= \int_0^\infty f(\varepsilon)\mathcal{D}(\varepsilon)d\varepsilon \\ &= \left(V \cdots\right)\int_0^\infty f(\varepsilon)\varepsilon^{\frac12}d\varepsilon \end{align} where in the second step I assumed a three-dimensional gas, and omitted the various constants in the density of states for brevity.
What happens when \(kT\) gets low enough that quantum mechanics matters, i.e. such that \(n_Q\approx n\)? Alternatively, we can ask ourselves what happens as \(N\) becomes large, such that \(\frac{N}{V} \ge n_Q\)? If we fix \(T\), we can increase \(N\) only by shifting \(\mu\) to the right, which increases \(f(\varepsilon)\) at every energy. But there is a limit! If we try to make \(N\) too big, we will find that even if we set \(\mu=0\), we don't get enough particles. What does that mean? Surely there cannot be a maximum value to \(N\)?!
No, the \(\mathcal{D}(\varepsilon)\) integral we are using accounts for all of the states except for one! The ground state doesn't show up, because \(\vec k=0\), so \(\varepsilon=0\) (or if you like non-periodic boundary conditions, when \(n_x=n_y=n_z=1\), and \(\varepsilon=\frac{3\hbar^2}{2m}\frac{\pi^2}{L^2}\)). This state isn't counted in the integral, and all the extra bosons end up in it. If you like, \(\mu\) doesn't become zero, but rather approaches the ground state energy. And as it does, the number of particles occupying the ground state diverges. We can find the number of particles in the ground state by subtracting the number of particles in all the excited states from the total number of particles: \begin{align} N_{\text{ground}} &= N - \int_0^\infty f(\varepsilon, \mu=0)\mathcal{D}(\varepsilon) d\varepsilon \end{align} When \(N_{\text{ground}}>0\) according to this formula, we say that we have a Bose-Einstein condensate. The temperature at which this transition happens (for a fixed density) is the Bose-Einstein transition temperature. Your last homework this week is to solve for this transition temperature.
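As a numerical preview of that homework (the numbers here are mine, not from the notes), we can evaluate the dimensionless integral \(\int_0^\infty \frac{\sqrt{\xi}}{e^\xi-1}d\xi\) and plug it into the standard result \(k_BT_E = \frac{\hbar^2}{2M}\left(\frac{N}{V}\,\frac{4\pi^2}{\int_0^\infty\frac{\sqrt{\xi}}{e^\xi-1}d\xi}\right)^{\frac23}\), assuming the density of liquid \(^4\text{He}\) is roughly \(145\,\text{kg/m}^3\):

```python
import math

def excited_integrand(u):
    # Substituting xi = u^2 removes the 1/sqrt(xi) singularity at the origin:
    # integral of sqrt(xi)/(e^xi - 1) dxi = integral of 2 u^2/(e^{u^2} - 1) du
    if u == 0.0:
        return 2.0  # limit of 2 u^2 / (e^{u^2} - 1) as u -> 0
    return 2 * u * u / math.expm1(u * u)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) intervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

I = simpson(excited_integrand, 0.0, 8.0, 8000)  # tail beyond xi = 64 negligible
print(I)  # ~2.315

# Ideal-Bose-gas transition temperature at liquid-helium density (assumed)
hbar, k_B = 1.0546e-34, 1.381e-23
M = 6.646e-27          # mass of a 4He atom, kg
n = 145.0 / M          # number density, assuming 145 kg/m^3
T_E = (hbar**2 / (2 * M * k_B)) * (n * 4 * math.pi**2 / I)**(2/3)
print(T_E)  # ~3.1 K
```

That the ideal-gas estimate lands near the temperature where real helium becomes a superfluid (about 2.2 K) is suggestive, even though, as noted above, a condensate and a superfluid are not the same thing.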
Homework
For electrons with an energy \(\varepsilon\gg mc^2\), where \(m\) is the mass of the electron, the energy is given by \(\varepsilon\approx pc\) where \(p\) is the momentum. For electrons in a cube of volume \(V=L^3\) the momentum takes the same values as for a non-relativistic particle in a box.
Show that in this extreme relativistic limit the Fermi energy of a gas of \(N\) electrons is given by \begin{align} \varepsilon_F &= \hbar\pi c\left(\frac{3n}{\pi}\right)^{\frac13} \end{align} where \(n\equiv \frac{N}{V}\) is the number density.
Show that the total energy of the ground state of the gas is \begin{align} U_0 &= \frac34 N\varepsilon_F \end{align}
Homework
Show that a Fermi electron gas in the ground state exerts a pressure \begin{align} p = \frac{\left(3\pi^2\right)^{\frac23}}{5} \frac{\hbar^2}{m}\left(\frac{N}{V}\right)^{\frac53} \end{align} In a uniform decrease of the volume of a cube every orbital has its energy raised: The energy of each orbital is proportional to \(\frac1{L^2}\) or to \(\frac1{V^{\frac23}}\).
Find an expression for the entropy of a Fermi electron gas in the region \(kT\ll \varepsilon_F\). Notice that \(S\rightarrow 0\) as \(T\rightarrow 0\).
Homework
Einstein condensation temperature Starting from the density of free particle orbitals per unit energy range \begin{align} \mathcal{D}(\varepsilon) = \frac{V}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{\frac32}\varepsilon^{\frac12} \end{align} show that the lowest temperature at which the total number of atoms in excited states is equal to the total number of atoms is \begin{align} T_E &= \frac1{k_B} \frac{\hbar^2}{2M} \left( \frac{N}{V} \frac{4\pi^2}{\int_0^\infty\frac{\sqrt{\xi}}{e^\xi-1}d\xi} \right)^{\frac23} \end{align} If the Bose-Einstein factor is expanded as a geometric series, the integral becomes \(\Gamma\left(\frac32\right)\) times an infinite sum, which may be numerically evaluated to be 2.612. Note that the number of atoms in excited states is the number derived by integrating over the density of states, since the density of states includes all the states except the ground state.
Note: This problem is solved in the text itself. I intend to discuss Bose-Einstein condensation in class, but will not derive this result.
Homework
Let us imagine a new mechanics in which the allowed occupancies of an orbital are 0, 1, and 2. The values of the energy associated with these occupancies are assumed to be \(0\), \(\varepsilon\), and \(2\varepsilon\), respectively.
Derive an expression for the ensemble average occupancy \(\langle N\rangle\), when the system composed of this orbital is in thermal and diffusive contact with a reservoir at temperature \(T\) and chemical potential \(\mu\).
Return now to the usual quantum mechanics, and derive an expression for the ensemble average occupancy of an energy level which is doubly degenerate; that is, two orbitals have the identical energy \(\varepsilon\). If both orbitals are occupied the total energy is \(2\varepsilon\). How does this differ from part (a)?
Homework
Nuclei of a particular isotope species contained in a crystal have spin \(I=1\), and thus, \(m = \{+1,0,-1\}\). The interaction between the nuclear quadrupole moment and the gradient of the crystalline electric field produces a situation where the nucleus has the same energy, \(E=\varepsilon\), in the state \(m=+1\) and the state \(m=-1\), compared with an energy \(E=0\) in the state \(m=0\), i.e. each nucleus can be in one of 3 states, two of which have energy \(E=\varepsilon\) and one has energy \(E=0\).
Find the Helmholtz free energy \(F = U-TS\) for a crystal containing \(N\) nuclei which do not interact with each other.
Find an expression for the entropy as a function of temperature for this system. (Hint: use results of part a.)
Homework
Find the chemical potential of an ideal monatomic gas in two dimensions, with \(N\) atoms confined to a square of area \(A=L^2\). The spin is zero.
Find an expression for the energy \(U\) of the gas.
Find an expression for the entropy \(\sigma\). The temperature is \(\tau=kT\).