Lecture: Fermi and Bose gases

Thermal and Statistical Physics 2020
These lecture notes from week 7 of https://paradigms.oregonstate.edu/courses/ph441 apply the grand canonical ensemble to fermion and boson ideal gases. They include a few small group activities.
  • Media
    • 2582/fermi-gas.py
    • 2582/gas-density-of-states.svg
    • 2582/fermi-dirac-distribution.svg
    • 2582/fermi-gas-product.svg
    • 2582/bose-gas-product.svg
    • 2582/bose-einstein-distribution.svg
    • 2582/bose-gas.py

Week 7: Fermi and Bose gases

Reading: K&K 7, Schroeder 7

This week in 2019, I have been less on top of things in terms of putting up notes. I'm adding here bits of what we did in class, and leaving below the more complete notes from the previous year, which I did not quite follow.

I began by having you (the students) solve for \(N(\mu,T)\) for the Fermi or Bose gas using their respective distribution functions. This was a pain, and was meant to lay the groundwork for the density of states, which is the main topic this week. We ended with an equation like \begin{align} N &= (2s+1)\frac{\pi}{2}\int_{0}^\infty f\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2} n^2\right)n^2dn \end{align} Sadly, a change of variables didn't make this a beautiful dimensionless integral, because the presence of \(\mu\) in the distribution function messed things up. Nevertheless, we can simplify this integral with a change of variables in a way that motivates the density of states: \begin{align} \varepsilon &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} n^2 & d\varepsilon &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} 2n dn \end{align} This doesn't look quite as pretty as some substitutions we have made in the past, but has the advantage that \(\varepsilon\) is something that we can reason about physically, since it is just the orbital energy. \begin{align} N &= (2s+1)\frac{\pi}{2}\int_{0}^\infty f(\varepsilon)\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac12} \sqrt{\varepsilon}\frac12\frac{2mL^2}{\hbar^2\pi^2}d\varepsilon \\ &= \int_{0}^\infty f(\varepsilon){\color{red}{(2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32} \sqrt{\varepsilon}}}d\varepsilon \end{align} This transformation from quantum number to energy is one that we could do for any set of orbital energies: for a relativistic particle (see one of your homeworks), for phonons in matter, for light, for particles confined in two dimensions, for real crystals... anything. Which makes it worthwhile to give this special “Jacobian” a name: the density of states.
Thus the density of states for an ideal gas is: \begin{align} \mathcal{D}(\varepsilon) &= (2s+1)\frac{\pi}{4}\left(\frac{2mL^2}{\hbar^2\pi^2}\right)^{\frac32} \sqrt{\varepsilon} \\ &= (2s+1)\frac{V}{4\pi^2}\left(\frac{2m}{\hbar^2}\right)^{\frac32} \sqrt{\varepsilon} \end{align} You can see that the density of states is an extensive quantity, since it is proportional to \(V\). This is necessary if you are going to use the density of states to predict extensive properties. However, it is also not uncommon to refer to \(\frac{\mathcal{D}(\varepsilon)}{V}\) as the density of states, particularly when describing a material, since that intensive ratio is independent of the system size.
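Since we will use this density of states over and over, it is handy to have it as code. Here is a minimal Python sketch (the function name and the illustrative volume are my own choices, not from the course materials) that implements the expression above and confirms that \(\mathcal{D}\) is extensive:

```python
import numpy as np
from scipy.constants import hbar, m_e, eV

def density_of_states(energy, volume, mass=m_e, spin=0.5):
    """D(eps) = (2s+1) * V/(4 pi^2) * (2m/hbar^2)^(3/2) * sqrt(eps), in SI units."""
    g = 2 * spin + 1  # spin degeneracy factor (2s+1)
    return g * volume / (4 * np.pi**2) * (2 * mass / hbar**2) ** 1.5 * np.sqrt(energy)

V = 1e-6   # m^3 -- an illustrative volume, my own choice
eps = 1.0 * eV
# Extensive: doubling the volume doubles D at every energy.
print(density_of_states(eps, 2 * V) / density_of_states(eps, V))  # → 2.0
```

The \(\sqrt{\varepsilon}\) dependence also shows up directly: quadrupling the energy doubles \(\mathcal{D}\).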

Mathematically we can define...
Small groups
Sketch this density of states for an ordinary 3D ideal gas.
Density of states for a 3D ideal gas

In homework you will solve for the density of states for a 2D gas. This would be suitable for most planar sheet materials. A 2D relativistic gas would give a density of states applicable to graphene.

To solve for properties of any system of non-interacting particles once we have the density of states, we just need to convert our sums over orbitals into integrals over energy, weighted by the density of states.

Notes from last year

This week we will look at Fermi and Bose gases. These consist of noninteracting fermions or bosons. There is no point studying these at high temperatures and/or low densities, since that is just where they are identical to the classical ideal gas, which we covered last week. So we'll be low-temperature all week. What happens at low temperatures, and where do we see these gases in real life?

The Fermi gas is most widely seen in metals and semiconductors. In both cases, the electrons (or possibly holes) can be sufficiently dense that “low temperature” corresponds to room temperature or even far above. Now, you might wonder in what insane world it makes sense to think of the electrons in a metal as “noninteracting.” If so, you could read my little note about “actual electrons” towards the end of the section on the Fermi-Dirac distribution. In any case, it is reasonable and useful to treat metals as a non-interacting Fermi gas. Room temperature is pretty low, as it turns out, from the perspective of the electrons in a metal, and it's not hard to get things colder than room temperature.

Bose gases at effectively low temperatures are less commonly found, and thus in some ways are more cool. Partly this is because there are fewer boson particles. You need to look at atoms with integer spin, such as \(^4\text{He}\). The “new” quantum thing that Bose gases do is to condense at low temperatures. This condensate is similar to a superfluid, but not the same thing. It is also analogous to superconductivity, but again, not the same thing. The first actual Bose-Einstein condensate wasn't formed until 1995, out of rubidium atoms at 170 nanokelvins. So “low temperature” in this case was actually pretty chilly.

Density of (orbital) states

We have found ourselves often writing summations over all the orbitals, such as \begin{align} N &= \sum_{n_x} \sum_{n_y} \sum_{n_z} f(\varepsilon_{n_xn_yn_z}) \\ &= \iiint f(\varepsilon_{n_xn_yn_z}) dn_xdn_ydn_z \\ &= \int_0^{\infty} f(\varepsilon(n)) \frac{\pi}{2} n^2 dn \end{align} Then we make the integral dimensionless, etc. This can be tedious to do over and over again. In the classical limit we can often use a derivative trick to write an answer as a derivative of a sum we have already solved, but that doesn't work at low temperatures (i.e. the quantum limit). There is another approach, which is to solve for the density of states and then use that. (Note that while it is called the “density of states” it is more accurately described as a density of orbitals, since it refers to the solutions to the one-particle problem.)

The density of states is the number of orbitals per unit energy at a given energy. So basically it does two things for us. First, it turns the 3D integral into a 1D integral. Secondly, it converts from an “\(n\)” integral into an energy integral. This isn't as nice as a dimensionless integral, but we can still do that ourselves later.

We use a density of states by converting \begin{align} \sum_{n_x}\sum_{n_y}\sum_{n_z} F(\varepsilon_{n_xn_yn_z}) &= \int d\varepsilon F(\varepsilon) \mathcal{D}(\varepsilon) \end{align} where \(F\) is any function of the orbital energy \(\varepsilon\). You can see why it is convenient, particularly because the density of states is often itself a very simple function.

Finding the density of states

Kittel gives a method for finding the density of states which involves first integrating to find the number of states under a given energy \(\varepsilon\), and then taking a derivative of that. This is a perfectly fine approach, but I think a simpler method involves just using a Dirac \(\delta\)-function. \begin{align} \mathcal{D}(\varepsilon) &= \sum_i^{\text{all orbitals}} \delta(\varepsilon_i-\varepsilon) \end{align} where you do need to be certain to turn the summation correctly into an integral before making use of the \(\delta\)-function.

Small groups

Solve for the density of states of an electron gas (or of any other spin-\(\frac12\) gas, which will have the same expression). You need to know that \begin{align} \varepsilon_{n_xn_yn_z} &= \frac{\hbar^2}{2m}\frac{\pi^2}{L^2} (n_x^2+n_y^2+n_z^2) \end{align} where \(n_x\) and the others range from \(1\) to \(\infty\). This corresponds to hard-wall boundary conditions, since we're putting a half-wavelength in the box. You should also keep in mind that each combination of \(n_x\), \(n_y\), and \(n_z\) will correspond to two orbitals, one for each possible spin state.
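As a quick sanity check on these energies, you can enumerate the lowest few orbital energies by brute force. This sketch (my own, not part of the activity) works in units of \(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}\), so each orbital energy is just \(n_x^2+n_y^2+n_z^2\), and it counts states including the factor of two for spin:

```python
from collections import Counter
from itertools import product

# Energies in units of hbar^2 pi^2/(2 m L^2): each orbital is nx^2+ny^2+nz^2.
nmax = 6  # enough quantum numbers to get the lowest levels right
levels = Counter(nx**2 + ny**2 + nz**2
                 for nx, ny, nz in product(range(1, nmax + 1), repeat=3))

for e in sorted(levels)[:5]:
    # each spatial orbital holds two spin states for spin-1/2
    print(e, 2 * levels[e])  # → 3 2, 6 6, 9 6, 11 6, 12 2
```

The degeneracies come from permutations of \((n_x,n_y,n_z)\): the ground state \((1,1,1)\) holds 2 electrons, while \((2,1,1)\) and its permutations hold 6.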

I should also perhaps warn you that when integrating \(\delta\)-functions you will always want to perform a change of variables such that the integration variable is present inside the \(\delta\)-function and is not multiplied by anything.

\begin{align} \mathcal{D}(\varepsilon) &= 2 \sum_{n_x=1}^\infty \sum_{n_y} \sum_{n_z} \delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2} (n_x^2+n_y^2+n_z^2)-\varepsilon\right) \\ &= 2\iiint_0^\infty \delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2} (n_x^2+n_y^2+n_z^2)-\varepsilon\right)dn_xdn_ydn_z \\ &= \frac28\iiint_{-\infty}^\infty \delta\left(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2} (n_x^2+n_y^2+n_z^2)-\varepsilon\right)dn_xdn_ydn_z \end{align} At this point I have converted to an integral over all “space”. Now we'll switch into spherical coordinates before doing a change of variables. Actually, it is useful (although I don't expect students to come up with this) to switch into \(k\)-space before going into an energy integral, defining \(\vec k \equiv \frac{\pi}{L}\vec n\). \begin{align} \mathcal{D}(\varepsilon) &= 2\left(\frac{L}{2\pi}\right)^3\iiint_{-\infty}^\infty \delta\left(\frac{\hbar^2k^2}{2m}-\varepsilon\right)d^3k \end{align} I hope that \(\vec k\) as a vector feels more comfortable to you than a vector of quantum numbers \(\vec n\). In any case, we now want to do a change of variables into spherical coordinates \begin{align} \mathcal{D}(\varepsilon) &= 2\left(\frac{L}{2\pi}\right)^3\int_{0}^\infty \delta\left(\frac{\hbar^2k^2}{2m}-\varepsilon\right)4\pi k^2dk \end{align} At this point I'd like to pause and point out that an integral over momenta will always end up looking basically like this (except in some cases the \(\varepsilon(k)\) will be different), and this is an acceptable starting point for your solutions on homework or an exam. If we have fewer dimensions, we may have an area or line element rather than a volume element in \(k\) space, and we would have fewer factors of \(\frac{L}{2\pi}\).
Now we can do a change of variables into an energy variable: \begin{align} \epsilon &= \frac{\hbar^2k^2}{2m} & d\epsilon &= \frac{\hbar^2k}{m} dk \\ k^2 &= \frac{2m}{\hbar^2}\epsilon & kdk &= \frac{m}{\hbar^2} d\epsilon \end{align} And now putting all these things into our integral we find \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{\pi^2}\int_{0}^\infty \delta\left(\epsilon-\varepsilon\right) \sqrt{\frac{2m}{\hbar^2}\epsilon}\frac{m}{\hbar^2} d\epsilon \\ &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \int_{0}^\infty \delta\left(\epsilon-\varepsilon\right) \epsilon^{\frac12} d\epsilon \end{align} And now our integral is in a form where we can finally make use of the delta function! If we had not transformed it in this way (as I suspect is a common error), we would get something with incorrect dimensions! \(\ddot\frown\) \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12} \end{align} And that is the density of states for a non-relativistic particle in 3 dimensions. For your homework, you will get to solve for the properties of a highly relativistic particle, which has the same density of states as a photon (apart from a factor of two due to polarization, and any factor due to spin).
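One way to check this result is to count orbitals directly and compare against \(\int_0^E \mathcal{D}(\varepsilon)d\varepsilon\). In units where \(\frac{\hbar^2}{2m}\frac{\pi^2}{L^2}=1\), integrating the density of states we just derived up to an energy \(E\) (including spin) gives \(\frac{\pi}{3}E^{\frac32}\). A brute-force sketch of this comparison (my own check, with an arbitrarily chosen cutoff energy):

```python
import numpy as np
from itertools import product

# Units where hbar^2 pi^2/(2 m L^2) = 1, so each orbital energy is nx^2+ny^2+nz^2.
E = 400.0  # arbitrary cutoff energy for the check (my choice)
nmax = int(np.sqrt(E)) + 1
count = 2 * sum(1 for n in product(range(1, nmax + 1), repeat=3)
                if n[0]**2 + n[1]**2 + n[2]**2 <= E)  # factor of 2 for spin

# Integral of D(eps) from 0 to E in these units: 2*(pi/4)*(2/3)*E^(3/2) = (pi/3) E^(3/2)
analytic = np.pi / 3 * E**1.5
print(count, analytic, count / analytic)
```

The brute-force count comes out somewhat below the analytic value because of surface corrections (states near the boundary of the octant), which become relatively smaller as \(E\) grows.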
Common error
A very common error which I made myself when writing these notes is to forget the factor of two due to spin. Of course, if you have a spin \(\frac32\) fermion, it would be a factor of four. One advantage of using a density of states is that it already includes this factor, so you no longer need to remember it! \(\ddot\smile\)

Using the density of states

Once we have the density of states, we can solve for various interesting properties of quantum (or classical) gases. The easiest thing to do is a Fermi gas at zero temperature, since the Fermi-Dirac function turns into a step function in that limit. We can start by solving for the Fermi energy of a Fermi gas, which is equal to the chemical potential when the temperature is zero. We do this by solving for the number assuming we know \(\varepsilon_F\), and then solving backwards for \(\varepsilon_F\). I will do a couple of extra steps here to remind you how this relates to what we did last week. \begin{align} N &= \sum_i^{\text{all orbitals}} f(\varepsilon_i) \\ &= \sum_i^{\text{all orbitals}} \frac1{e^{\beta(\varepsilon_i-\mu)}+1} \\ &= \int_0^\infty \mathcal{D}(\varepsilon) \frac1{e^{\beta(\varepsilon-\mu)}+1}d\varepsilon \\ &= \int_0^{\varepsilon_F} \mathcal{D}(\varepsilon)d\varepsilon \end{align} In the last step, I made the assumption that \(T=0\), so I could turn the Fermi-Dirac function into a step function, which simply changes the bounds of the integral. It is all right to start with this assumption, when doing computations at zero temperature. Now I'll put in the density of states. \begin{align} N &= \int_0^{\varepsilon_F} \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12}d\varepsilon \\ &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \int_0^{\varepsilon_F} \varepsilon^{\frac12}d\varepsilon \\ &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \frac23 \varepsilon_F^{\frac32} \end{align} Now we can just solve for the Fermi energy! \begin{align} \varepsilon_F &= \left(\frac{N}{V}3\pi^2\left(\frac{\hbar^2}{2m}\right)^{\frac32}\right)^{\frac23} \\ &= \frac{\hbar^2}{2m}\left(\frac{N}{V}3\pi^2\right)^{\frac23} \end{align} This is the energy of the highest occupied orbital in the gas, when the temperature is zero.
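To get a feel for the numbers, here is a sketch that plugs a typical metallic electron density into this result. The density value is my own assumption (roughly copper's conduction-electron density), not a number from these notes:

```python
import numpy as np
from scipy.constants import hbar, m_e, eV, k as kB

n = 8.5e28  # conduction electrons per m^3, roughly copper (my assumed value)
eps_F = hbar**2 / (2 * m_e) * (3 * np.pi**2 * n) ** (2 / 3)  # Fermi energy
v_F = hbar * (3 * np.pi**2 * n) ** (1 / 3) / m_e             # Fermi velocity
T_F = eps_F / kB                                             # Fermi temperature
print(eps_F / eV, v_F, T_F)  # roughly 7 eV, 1.6e6 m/s, 8e4 K
```

The Fermi temperature of order \(10^5\,\text{K}\) is why room temperature counts as "low" for the electrons in a metal.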
As you will see, many of the properties of a metal (which is the Fermi gas that you use on an everyday basis) depend fundamentally on the Fermi energy. For this reason, we also like to define other properties of electrons at the Fermi energy: momentum, velocity (technically speed, but it is called Fermi velocity), and even “temperature”. \begin{align} k_F &= \left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ p_F &= \hbar k_F \\ &= \hbar\left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ v_F &= \frac{p_F}{m} \\ &= \frac{\hbar}{m}\left(\frac{N}{V}3\pi^2\right)^{\frac13} \\ T_F &= \frac{\varepsilon_F}{k_B} \end{align} The text contains a table of properties of metals at the Fermi energy for a number of simple metals. I don't expect you to remember them, but it's worth having them down somewhere so you can check the reasonableness of an answer from time to time. Basically, they all come down to \begin{align} \varepsilon_F &\sim 4\text{eV} \quad \text{(with }\sim\times2\text{ variation)} \\ k_F &\sim 10^{8}\text{cm}^{-1} \\ v_F &\sim 10^{8}\text{cm s}^{-1} \end{align}

Biographical sidenote
My PhD advisor insisted that I memorize these numbers (and a few more) prior to my oral exam. He said that experimentalists think that theorists don't know anything about the real world, and hence it is important to be able to estimate things. Sure enough, on my exam I had to estimate the frequency of radiation that passes unabsorbed through a typical superconductor (which is in the microwave band).
Units sidenote
My advisor also insisted that I memorize these results in \(cgs\) units rather than SI (i.e. \(mks\)) units, since that is what any faculty would be comfortable with. You may have noticed that Kittel preferentially uses \(cgs\) units, although not exclusively. Berkeley (where Kittel taught) was one of the last hotbeds of \(cgs\) units, and as a result all of my physics courses used Gaussian units (which is synonymous with \(cgs\)).

Before we move on, it is worth showing you how we can simplify the density of states now that we know what the Fermi energy is: \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12} \\ &= \frac32 N \varepsilon_F^{-\frac32} \varepsilon^{\frac12} \end{align} There is nothing particularly deep here, but this is somewhat more compact, and often the factors of \(\varepsilon_F\) will end up canceling out.

Small groups
Solve for the internal energy at zero temperature of a Fermi gas.
We just need to do a different integral. \begin{align} U &= \int_0^{\varepsilon_F} \mathcal{D}(\varepsilon)\varepsilon d\varepsilon \\ &= \frac32 N \varepsilon_F^{-\frac32} \int_0^{\varepsilon_F} \varepsilon^{\frac12}\varepsilon d\varepsilon \\ &= \frac32 N \varepsilon_F^{-\frac32} \frac25\varepsilon_F^{\frac52} \\ &= \frac{3}{5} N\varepsilon_F \end{align} We generally find when looking at Fermi gases that things with dimensions of energy end up proportional to the Fermi energy. The \(N\) we could also have predicted, in order to end up with an extensive internal energy.
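This result is easy to verify numerically using the compact form of the density of states. A small sketch, working in units where \(N=1\) and \(\varepsilon_F=1\):

```python
from scipy.integrate import quad

# Work in units where N = 1 and eps_F = 1, so D(eps) = (3/2) sqrt(eps).
D = lambda eps: 1.5 * eps**0.5

N_check, _ = quad(D, 0, 1)                   # should recover N = 1
U, _ = quad(lambda eps: eps * D(eps), 0, 1)  # should give (3/5) N eps_F
print(N_check, U)  # ≈ 1.0 and 0.6
```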

Fermi gas at finite temperature

The Fermi gas is more exciting (and more... thermal?) when the temperature is not precisely zero. Let's start with the heat capacity at low temperatures, which is one area where metals inherently differ from semiconductors and insulators.

We are looking at a metal with \(n\) electrons per unit volume, at temperature \(T\), where \(kT\ll \varepsilon_F\). We are looking to find out the heat capacity \(C_V\).

Small whiteboards
How would you approach this?
Remember that \begin{align} C_V &= \left(\frac{\partial U}{\partial T}\right)_{V,N} = T\left(\frac{\partial S}{\partial T}\right)_{V,N} \end{align} which means that we need either \(S\) or \(U\) in order to find the heat capacity at fixed volume. We could do either, but given what we know about the electron gas, \(U\) is easier to find.

We can find \(U\) by integrating with the density of states and the Fermi-Dirac distribution. This is a new variant of our usual \begin{align} U &= \sum_i^{\text{all }\mu\text{states}} P_i E_i \end{align} In this case, we will instead sum over all orbitals the energy contribution of each orbital, again in effect treating each orbital as a separate system. \begin{align} U &= \sum_i^{\text{all orbitals}} f(\varepsilon_i) \varepsilon_i \\ &= \int \varepsilon f(\varepsilon) \mathcal{D}(\varepsilon)d\varepsilon \end{align} Remember that for an electron gas \begin{align} \mathcal{D}(\varepsilon) &= \frac{V}{2\pi^2} \left(\frac{2m}{\hbar^2}\right)^{\frac32} \varepsilon^{\frac12} \\ &= \frac32 \frac{N}{\varepsilon_F^\frac32}\sqrt{\varepsilon} \end{align} Sometimes one or the other of these forms may be more convenient.

Fermi function for zero and finite temperature

Hand-waving version of heat capacity

We can begin with a hand-waving version of solving for the heat capacity. We look at the Fermi-Dirac function at both finite and zero temperature, and we can note that the red and blue shaded areas, representing the probability of orbitals below \(\varepsilon_F\) being unoccupied and the probability of excited orbitals above \(\varepsilon_F\) being occupied, are equal (this was your homework).

Fermi function times density of states

To find the energy, of course, we need the Fermi-Dirac function times the density of states. You might think that the red and blue areas will now be unequal, since we are multiplying the blue region by a larger density of states than the red region. However, provided the number of electrons is fixed (as is usual), the chemical potential must shift such that the two areas are equal.

So how do we find the heat capacity? We can work out a rough equation for the internal energy change (relative to zero temperature), and then take a derivative. The width of the red and blue regions is \(\sim kT\). We know this from your first homework problem last week, where you showed that the slope at the chemical potential is \(\frac1{4kT}\): a shallower slope (at higher temperature) means a proportionately wider region that is neither zero nor one. \begin{align} U(T)-U(0) &\sim \left(\text{\it \# electrons excited}\right)\left(\Delta\text{\it energy}\right) \\ &\sim\left(\mathcal{D}(\varepsilon_F)kT\right) \left(kT\right) \\ C_V &= \left(\frac{\partial U}{\partial T}\right)_{N,V} \\ &\sim \mathcal{D}(\varepsilon_F)k^2T \end{align} which tells us that the heat capacity vanishes at low temperatures, and is proportional to \(T\), which is a stark contrast to insulators, for which \(C_V\propto T^3\) as predicted by the Debye model.

Heat capacity without so much waving

To find the heat capacity more carefully, we could set up this integral, noting that the Fermi-Dirac function is the only place where temperature dependence arises: \begin{align} C_V &= \left(\frac{\partial U}{\partial T}\right)_{N,V} \\ &= \int_0^\infty \varepsilon \mathcal{D}(\varepsilon)\frac{\partial f}{\partial T} d\varepsilon \\ &= \int_0^\infty \varepsilon \mathcal{D}(\varepsilon) \frac{(\varepsilon-\varepsilon_F)e^{\beta(\varepsilon-\varepsilon_F)}}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right)^2} \frac1{kT^2} d\varepsilon \end{align} where in the last stage I assumed that the chemical potential would not be changing significantly over our (small) temperature range. An interesting question is what the shape of \(\frac{\partial f}{\partial T}\) is. The exponential on top causes it to drop exponentially where \(\varepsilon-\varepsilon_F\gg kT\), while the exponential on the bottom causes it to drop at low energies where \(\varepsilon_F-\varepsilon\gg kT\). This makes it sharply peaked, provided \(kT\ll \varepsilon_F\), which justifies evaluating the density of states at the Fermi energy. For the same reason we can also extend the bounds of the integral to all energies \begin{align} C_V &\approx \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon \frac{(\varepsilon-\varepsilon_F)e^{\beta(\varepsilon-\varepsilon_F)}}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right)^2} d\varepsilon \end{align} Now we have an integral that we would love to make dimensionless, but which has an annoying \(\varepsilon\) that does not have \(\varepsilon_F\) subtracted from it. Let's look at evenness and oddness. The ratio does not look either very even or very odd, but we can make it do so by multiplying top and bottom by \(e^{-\beta(\varepsilon-\varepsilon_F)}\).
\begin{align} C_V &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon(\varepsilon-\varepsilon_F) \frac{1}{ \left(e^{\beta(\varepsilon-\varepsilon_F)}+1\right) \left(e^{-\beta(\varepsilon-\varepsilon_F)}+1\right)} d\varepsilon \\ &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \varepsilon(\varepsilon-\varepsilon_F) \frac{1}{ e^{\beta(\varepsilon-\varepsilon_F)} +e^{-\beta(\varepsilon-\varepsilon_F)}+2} d\varepsilon \end{align} Now we can do a change of variables \begin{align} \xi &= \beta(\varepsilon-\varepsilon_F) & d\xi &= \beta d\varepsilon \end{align} This makes our integral almost dimensionless: \begin{align} C_V &= \frac{\mathcal{D}(\varepsilon_F)}{kT^2} \int_{-\infty}^\infty \left(kT\xi + \cancelto{\text{odd}}{\varepsilon_F}\quad\quad\right) (kT\xi) \frac{1}{ e^\xi +e^{-\xi}+2} kT d\xi \\ &= \mathcal{D}(\varepsilon_F)k^2T \int_{-\infty}^\infty \frac{\xi^2}{ e^\xi +e^{-\xi}+2} d\xi \end{align} So here is our answer, expressed in terms of a dimensionless integral. Wolfram alpha tells me this integral is \(\frac{\pi^2}{3}\).
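You don't need to take Wolfram Alpha's word for it. The sketch below checks the dimensionless integral numerically, using the fact that the integrand equals \(x^2 e^{-x}/(1+e^{-x})^2\), which is even, so we can integrate twice its value over \([0,\infty)\) in a form that avoids floating-point overflow:

```python
import numpy as np
from scipy.integrate import quad

# x^2/(e^x + e^{-x} + 2) = x^2 e^{-x}/(1 + e^{-x})^2; the integrand is even in x,
# so integrate twice its value over [0, inf) in an overflow-free form.
val, err = quad(lambda x: 2 * x**2 * np.exp(-x) / (1 + np.exp(-x))**2, 0, np.inf)
print(val, np.pi**2 / 3)  # both ≈ 3.2899
```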

Bose gas

\begin{align} f(\varepsilon) &= \frac1{e^{\beta(\varepsilon-\mu)}-1} \end{align}

The Bose-Einstein distribution.

The divergence of the Bose-Einstein distribution as \(\varepsilon\to\mu\) means that \(\mu\) must always be less than the minimum orbital energy, i.e. \(\mu<0\). As before, the total number is given by \begin{align} N &= \int_0^\infty f(\varepsilon)\mathcal{D}(\varepsilon)d\varepsilon \\ &= \left(V \cdots\right)\int_0^\infty f(\varepsilon)\varepsilon^{\frac12}d\varepsilon \end{align} where in the second step I assumed a three-dimensional gas, and omitted the various constants in the density of states for brevity.

Bose function times density of states

What happens when \(kT\) gets low enough that quantum mechanics matters, i.e. such that \(n_Q\approx n\)? Alternatively, we can ask ourselves what happens as \(N\) becomes large, such that \(\frac{N}{V} \ge n_Q\)? If we fix \(T\), we can increase \(N\) only by shifting \(\mu\) to the right, which increases \(f(\varepsilon)\) at every energy. But there is a limit! If we try to make \(N\) too big, we will find that even if we set \(\mu=0\), we don't get enough particles. What does that mean? Surely there cannot be a maximum value to \(N\)?!

No, the \(\mathcal{D}(\varepsilon)\) integral we are using accounts for all of the states except for one! The ground state doesn't show up, because \(\vec k=0\), so \(\varepsilon=0\) (or if you like non-periodic boundary conditions, when \(n_x=n_y=n_z=1\), and \(\varepsilon=\frac{\hbar^2}{2m}\left(\frac{\pi}{L}\right)^2\)). This state isn't counted in the integral, and all the extra bosons end up in it. If you like, \(\mu\) doesn't become zero, but rather approaches the ground state energy. And as it does, the number of particles occupying the ground state diverges. We can find the number of particles in the ground state by subtracting the total number of particles from the number of particles in all the excited states: \begin{align} N_{\text{ground}} &= N - \int_0^\infty f(\varepsilon, \mu=0)\mathcal{D}(\varepsilon) d\varepsilon \end{align} When \(N_{\text{ground}}>0\) according to this formula, we say that we have a Bose-Einstein condensate. The temperature at which this transition happens (for a fixed density) is the Bose-Einstein transition temperature. Your last homework this week is to solve for this transition temperature.
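The crucial fact here is that the excited-state integral stays finite even at \(\mu=0\). After pulling out constants, it reduces to \(\int_0^\infty \frac{\sqrt{x}}{e^x-1}dx = \Gamma(\tfrac32)\zeta(\tfrac32)\approx 2.315\) (a standard Bose integral), which we can confirm numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

# sqrt(x)/(e^x - 1), written as sqrt(x) e^{-x}/(1 - e^{-x}) to behave well at large x
val, err = quad(lambda x: np.sqrt(x) * np.exp(-x) / -np.expm1(-x), 0, np.inf)
print(val, gamma(1.5) * zeta(1.5))  # both ≈ 2.315
```

Because this integral converges, the number of particles the excited states can hold at a fixed temperature is bounded, and any additional particles must pile into the ground state.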
