Student handout: Boltzmann probabilities and Helmholtz

Thermal and Statistical Physics 2020
These notes, from the third week of https://paradigms.oregonstate.edu/courses/ph441, cover the canonical ensemble and the Helmholtz free energy. They include a number of small group activities.

(K&K 3, Schroeder 6)

This week we will be deriving the Boltzmann ratio and the Helmholtz free energy, continuing the microcanonical approach we started last week. Last week we saw that when two systems were considered together in a microcanonical picture, the energy of each system taken individually is not fixed. This provides our stepping stone to go from a microcanonical picture where all possible microstates have the same energy (and equal probability) to a canonical picture where all energies are possible and the probability of a given microstate being observed depends on the energy of that state.

Spring 2020: We skipped this section last week, and will skip it again this week

We ended last week by finding that the following quantity is equal in two systems in thermal equilibrium: \begin{align} \beta &= \left(\frac{\partial \ln g}{\partial E}\right)_{V} \end{align} where \(g(E)\) is the multiplicity in the microcanonical ensemble. To more definitively connect this with temperature, we will again consider two systems in thermal equilibrium using a microcanonical ensemble, but this time we will make one of those two systems huge. In fact, it will be so huge that we can treat it using classical thermodynamics, i.e. we can conclude that the above equation applies, and we can assume that the temperature of this huge system is unaffected by the small changes in energy that could result from exchanges with the small system.

Let us now examine the multiplicity of our combined system, making \(B\) be our large system: \begin{align} g_{AB}(E) &= \sum_{E_A}g_A(E_A) g_B(E_{AB}-E_A) \end{align} We can further find the probability of any particular energy being observed from \begin{align} P_{A}(E_A|E_{AB}) &= \frac{g_A(E_A)g_B(E_{AB}-E_A)}{ \sum_{E_A'}g_A(E_A') g_B(E_{AB}-E_A') } \end{align} where we are counting how many microstates of the combined system have this particular energy in system \(A\), and dividing by the total number of microstates of the combined system to create a probability. So far this is identical to what we had last week. The difference is that we are now claiming that system \(B\) is huge. This means that we can approximate \(g_B\). Doing so, however, requires some care.

Warning: the wrong way!

We might be tempted to simply Taylor expand \(g_B\) \begin{align} g_B(E_{AB}-E_A) &\approx g_B(E_{AB}) - \beta g_B(E_{AB}) E_A +\cdots \\ &\approx g_B(E_{AB})(1 - \beta E_A) \end{align} This, however, would be wrong unless \(\beta E_A\ll 1\). One way to see that this expansion must have limited range is that if \(\beta E_A\ge 1\) then we will end up with a negative multiplicity, which is meaningless. The trouble is that we only assumed that \(E_A\) was small enough not to change the temperature (or \(\beta\)), which does not mean that \(\beta E_A<1\). In practice \(\beta E_A\) is typically enormous, since \(\beta\sim 1/k_BT\) while a macroscopic \(E_A\) amounts to very many \(k_BT\), so this expansion is essentially guaranteed to fail.

When we run into this problem, we can consider that \(\ln g(E)\) is generally a smoother function than \(g(E)\). Based on the Central Limit Theorem, we expect \(g(E)\) to typically have a Gaussian shape, and a Gaussian is one of the analytic functions least well approximated by a low-order polynomial away from its peak. In contrast, \(\ln g\) will be parabolic (to the extent that \(g\) is Gaussian), which makes it a prime candidate for a Taylor expansion.

Right way
The right way to do this is to Taylor expand the \(\ln g\) (which will be entropy), since the derivative of \(\ln g\) is the thing that equilibrates, and thus we can assume that this derivative won't change much when we make a small change to a large system. \begin{align} \ln g_B(E_{AB}-E_A) &\approx \ln g_B(E_{AB}) - \beta E_A +\cdots \\ g_B(E_{AB}-E_A) &\approx g_B(E_{AB})e^{-\beta E_A} \end{align}

Now we can plug this into the probability equation above to find that \begin{align} P_{A}(E_A) &= \frac{g_A(E_A)\cancel{g_B(E_{AB})}e^{-\beta E_A} }{ \sum_{E_A'}g_A(E_A')\cancel{g_B(E_{AB})}e^{-\beta E_A'} } \\ &= \frac{g_A(E_A) e^{- \frac{E_A}{k_BT}}}{ \sum_{E_A'}g_A(E_A') e^{- \frac{E_A'}{k_BT}} } \end{align} Now this looks a bit different from the probabilities we saw previously (two weeks ago), because this is the probability that we see an energy \(E_A\), not the probability for a given microstate, and thus it has the factors of \(g_A\), and it sums over energies rather than microstates. To find the probability of a given microstate, we just need to divide the probability of its energy by the number of microstates at that energy, i.e. drop the factor of \(g\): \begin{align} P_i^A &= \frac{e^{- \beta E_i}}{Z} \\ Z &= \sum_{E}^{\text{all energies}} g(E)e^{-\beta E} \\ &= \sum_i^{\text{all $\mu$states}} e^{- \beta E_i} \end{align} This is all there is to showing the Boltzmann probability distribution from the microcanonical picture: big system with little system, treat the big system thermodynamically, count microstates.
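To make this concrete, here is a minimal numerical sketch (not part of the original notes; the spectrum and \(\beta\) are made-up values) of the Boltzmann probabilities for a four-microstate system:

```python
import numpy as np

# Hypothetical microstate energies (two degenerate states at E = 1)
E = np.array([0.0, 1.0, 1.0, 2.5])

beta = 1.0  # 1/(k_B T), in units where the energies above are dimensionless

# Boltzmann probabilities: P_i = exp(-beta E_i) / Z
weights = np.exp(-beta * E)
Z = weights.sum()                # partition function: sum over all microstates
P = weights / Z

print("Z =", Z)
print("P_i =", P)                # lower-energy microstates are more probable
print("sum of P_i =", P.sum())   # should be exactly 1
```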

Note
We still haven't shown (this time around) that \(\beta = \frac1{k_BT}\). Right now \(\beta\) is still just a particular derivative that equalizes when two systems are in thermal equilibrium.

Internal energy

Now that we have the set of probabilities expressed again in terms of \(\beta\), there are a few things we can solve for directly, namely any quantities that are directly defined from probabilities. Most specifically, the internal energy \begin{align} U &= \sum_i P_i E_i \\ &= \sum_i E_i \frac{e^{-\beta E_i}}{Z} \\ &= \frac1Z \sum_i E_i e^{-\beta E_i} \end{align} Now doing yet another summation will often feel tedious. There are a couple of ways to make this easier. The simplest is to examine the sum above and notice how very similar it is to the partition function itself. If you take a derivative of the partition function with respect to \(\beta\), you will find: \begin{align} \left(\frac{\partial Z}{\partial \beta}\right)_{V} &= \sum_i e^{-\beta E_i} (-E_i) \\ &= -UZ \\ U &= -\frac{1}{Z}\left(\frac{\partial Z}{\partial \beta}\right)_{V} \\ &= -\left(\frac{\partial \ln Z}{\partial \beta}\right)_{V} \end{align}
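The derivative trick is easy to check numerically. Here is a sketch (with an invented spectrum) comparing the weighted-average definition of \(U\) against a finite-difference derivative of \(\ln Z\):

```python
import numpy as np

E = np.array([0.0, 0.5, 1.3, 2.0])  # hypothetical microstate energies
beta = 0.7

def lnZ(b):
    return np.log(np.sum(np.exp(-b * E)))

# Internal energy from the definition: weighted average of the energy
P = np.exp(-beta * E - lnZ(beta))
U_definition = np.sum(P * E)

# Internal energy from the derivative trick, via a centered finite difference
db = 1e-6
U_trick = -(lnZ(beta + db) - lnZ(beta - db)) / (2 * db)

print(U_definition, U_trick)  # the two agree to roughly machine precision
```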

Big Warning
In this class, I do not want you beginning any solution (either homework or exam) with a formula for \(U\) in terms of \(Z\)! This step is not that hard, and you need to do it every time. What you need to remember is definitions, which in this case is how \(U\) comes from probability. The reasoning here is that I've all too often seen students who years after taking thermal physics can only remember that there is some expression for \(U\) in terms of \(Z\). It is easier and more correct to remember that \(U\) is a weighted average of the energy.

Pressure

How do we compute pressure? So far, everything we have done has kept the volume fixed. Pressure tells us how the energy changes when we change the volume, i.e. how much work is done. From Energy and Entropy, we know that \begin{align} dU &= TdS - pdV \\ p &= -\left(\frac{\partial U}{\partial V}\right)_S \end{align} So how do we find the pressure? We need to find the change in internal energy when we change the volume at fixed entropy.

Small white boards
How do we keep the entropy fixed when changing the volume?
Answer
Experimentally, we would avoid allowing any heating by insulating the system. Theoretically, this is less easy. When we consider the Gibbs entropy, if we could keep all the probabilities fixed while expanding, we would also fix the entropy! In quantum mechanics, we can show that such a process is possible using time-dependent perturbation theory. Under certain conditions, if you perturb a system sufficiently slowly, it will remain in the “same” eigenstate it was in originally. Although the eigenstate changes, and its energy changes, they do so continuously.

If we take a derivative of \(U\) with respect to volume while holding the probabilities fixed, we obtain the following result: \begin{align} p &= -\left(\frac{\partial U}{\partial V}\right)_S \\ &= -\left(\frac{\partial \sum_i E_i P_i}{\partial V}\right)_S \\ &= -\sum_i P_i \frac{d E_i}{d V} \\ &= -\sum_i \frac{e^{-\beta E_i}}{Z} \frac{d E_i}{d V} \end{align} So the pressure is just a weighted sum of derivatives of energy eigenvalues with respect to volume. We can apply the derivative trick to this also: \begin{align} p&= \frac1{\beta Z}\left(\frac{\partial Z}{\partial V}\right)_\beta \\ &= \frac1{\beta}\left(\frac{\partial \ln Z}{\partial V}\right)_\beta \end{align} Now we have an expression in terms of \(\ln Z\) and \(\beta\).
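We can check this numerically too. The sketch below assumes eigenvalues that scale as \(V^{-2/3}\) (as they do for the particle in a box treated later in these notes), so \(\frac{dE_i}{dV} = -\frac23\frac{E_i}{V}\); the spectrum itself is made up:

```python
import numpy as np

# Hypothetical eigenvalues that scale as V^(-2/3), as for a particle in a box
def energies(V):
    base = np.array([1.0, 2.0, 2.0, 3.0, 4.5])  # made-up spectrum at V = 1
    return base * V ** (-2.0 / 3.0)

beta, V = 1.0, 1.0

def lnZ(V):
    return np.log(np.sum(np.exp(-beta * energies(V))))

# Pressure from the weighted sum of dE_i/dV; here dE_i/dV = -(2/3) E_i / V
E = energies(V)
P = np.exp(-beta * E) / np.sum(np.exp(-beta * E))
p_sum = -np.sum(P * (-(2.0 / 3.0) * E / V))

# Pressure from the derivative trick: p = (1/beta) d(lnZ)/dV at fixed beta
dV = 1e-6
p_trick = (lnZ(V + dV) - lnZ(V - dV)) / (2 * dV) / beta

print(p_sum, p_trick)  # agree; for this scaling, p = (2/3) U / V
```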

Helmholtz free energy

We saw above that \(U\) relates to \(\ln Z\), hinting that \(\ln Z\) might be something special, and now \(\ln Z\) also turns out to relate to the pressure. Let's put this into thermodynamics language.

\begin{align} U &= -\left(\frac{\partial \ln Z}{\partial \beta}\right)_{V} \\ d\ln Z &= -Ud\beta + \left(\frac{\partial \ln Z}{\partial V}\right)_{\beta} dV \\ d\ln Z &= -Ud\beta +\beta p dV \end{align} We can already see work (i.e. \(-pdV\)) showing up here. So now we're going to try a switch to a \(dU\) rather than a \(d\beta\), since we know something about \(dU\). \begin{align} d(\beta U) &= Ud\beta + \beta dU \\ d\ln Z &= -\left(d(\beta U) - \beta dU\right) +\beta p dV \\ &= \beta dU -d(\beta U) +\beta p dV \\ \beta dU &= d\left(\ln Z + \beta U\right) -\beta p dV \\ dU &= \frac1\beta d\left(\ln Z + \beta U\right) - p dV \end{align} Comparing this result with the thermodynamic identity \(dU = TdS - pdV\) (and identifying \(\beta = 1/k_BT\)) tells us that \begin{align} S &= k_B\ln Z + U/T \\ F &\equiv U-TS \\ &= U - T\left(k_B\ln Z + U/T\right) \\ &= U - k_BT\ln Z - U \\ &= - k_BT\ln Z \end{align} That was a bit of a differentials slog, but it got us the same result for the Helmholtz free energy without assuming the Gibbs entropy \(S = -k\sum_i P_i\ln P_i\). It did, however, demonstrate a not-quite-contradiction, in that the expression we found for the entropy is not mathematically equal to the Boltzmann entropy. It approaches the same thing for large systems, although I won't prove that now.
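As an aside, for canonical probabilities the entropy we just found is in fact mathematically identical to the Gibbs entropy, which a quick numerical sketch (made-up energies and temperature) confirms:

```python
import numpy as np

E = np.array([0.0, 0.8, 1.7])   # hypothetical microstate energies
kB, T = 1.0, 1.4
beta = 1 / (kB * T)

P = np.exp(-beta * E)
Z = P.sum()
P /= Z
U = np.sum(P * E)

S_thermo = kB * np.log(Z) + U / T        # from the differentials argument
S_gibbs = -kB * np.sum(P * np.log(P))    # Gibbs entropy
print(S_thermo, S_gibbs)                 # identical for canonical P_i
```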
Memorizing
This expression for the Helmholtz free energy I will absolutely support you using as a starting point for homework or exam answers, and it is well worth memorizing. \begin{align} F &= -kT\ln Z \end{align}

In effect, this expression gives us a physical meaning for the partition function.

Small groups
Consider a system with \(g\) eigenstates, each with energy \(E_0\). What is the free energy?
Answer

We begin by writing down the partition function \begin{align} Z &= \sum_i e^{-\beta E_i} \\ &= g e^{-\beta E_0} \end{align} Now we just need a log and we're done. \begin{align} F &= -kT\ln Z \\ &= -kT \ln\left(g e^{-\beta E_0}\right) \\ &= -kT \left(\ln g + \ln e^{-\beta E_0}\right) \\ &= E_0 - Tk\ln g \end{align} This is just what we would have concluded about the free energy if we had used the Boltzmann expression for the entropy in this microcanonical ensemble.
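A quick numerical sanity check of this answer (with arbitrary values of \(g\), \(E_0\), and \(k_BT\)):

```python
import numpy as np

g, E0, kT = 5, 2.0, 1.3          # hypothetical degeneracy, energy, and k_B T
Z = g * np.exp(-E0 / kT)         # partition function of g states at E_0
F = -kT * np.log(Z)
print(F, E0 - kT * np.log(g))    # identical: F = E_0 - kT ln g
```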

Waving our hands, we can understand \(F=-kT\ln Z\) in two ways:

  1. If there are more accessible microstates, \(Z\) is bigger, which means \(S\) is bigger and \(F\) must be more negative.
  2. If we only consider the most probable energy \(E\), then \(Z\sim e^{-\beta E}\), so to extract that energy from \(Z\) we need a negative logarithm and a factor of \(kT\) to cancel out the \(\beta\).

Using the free energy

Why the big deal with the free energy? One way to put it is that it is relatively easy to compute. The other is that once you have an analytic expression for the free energy, you can solve for pretty much anything else you want.

Recall: \begin{align} F &\equiv U-TS \\ dF &= dU - SdT - TdS \\ &= -SdT - pdV \\ -S &= \left(\frac{\partial F}{\partial T}\right)_V \\ -p &= \left(\frac{\partial F}{\partial V}\right)_T \end{align} Thus by taking partial derivatives of \(F\) we can find \(S\) and \(p\) as well as \(U\) with a little arithmetic. You have all seen the Helmholtz free energy before so this shouldn't be much of a surprise. Practically, the Helmholtz free energy is why finding an analytic expression for the partition function is so valuable.

In addition to the “fundamental” physical parameters, we can also find response functions, such as the heat capacity or compressibility, which are derivatives of these quantities. Of particular interest is the heat capacity at fixed volume. The heat capacity is vaguely defined as: \begin{align} C_V &\equiv \sim \left(\frac{\bar d Q}{\partial T}\right)_V \end{align} by which I mean the amount of heat required to change the temperature by a small amount, divided by that small amount, while holding the volume fixed. The First Law tells us that the heat is equal to the change in internal energy, provided no work is done (i.e. holding volume fixed), so \begin{align} C_V &= \left(\frac{\partial U}{\partial T}\right)_V \end{align} which is a nice equation, but can be a nuisance because we often don't know \(U\) as a function of \(T\), which is not one of its natural variables. We can also go back to our Energy and Entropy relationship between heat and entropy, where \(\bar dQ = TdS\), and use that to find the ratio that defines the heat capacity: \begin{align} C_V &= T \left(\frac{\partial S}{\partial T}\right)_V. \end{align} Note that this could also have come from a manipulation of the previous derivative of the internal energy. However, the “heat” reasoning allows us to recognize that the heat capacity at constant pressure will have the same form when expressed as an entropy derivative. This expression is also convenient when we compute the entropy from the Helmholtz free energy, because we then already know the entropy as a function of \(T\).
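Here is a numerical sketch (invented spectrum, units where \(k_B=1\)) confirming that the two expressions for \(C_V\) agree:

```python
import numpy as np

kB = 1.0                             # work in units where k_B = 1
E = np.array([0.0, 1.0, 2.0, 3.5])   # hypothetical microstate energies

def probs(T):
    P = np.exp(-E / (kB * T))
    return P / P.sum()

def U(T):
    return np.sum(probs(T) * E)      # weighted average of the energy

def S(T):
    P = probs(T)
    return -kB * np.sum(P * np.log(P))   # Gibbs entropy

T, dT = 1.5, 1e-5
C_from_U = (U(T + dT) - U(T - dT)) / (2 * dT)
C_from_S = T * (S(T + dT) - S(T - dT)) / (2 * dT)
print(C_from_U, C_from_S)            # the two expressions for C_V agree
```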

Ideal gas with just one atom

Let us work on the free energy of a particle in a 3D box.

Small groups (5 minutes)
Work out (or write down) the energy eigenstates for a particle confined to a cubical volume with side length \(L\). You may either use periodic boundary conditions or an infinite square well. When you have done so, write down an expression for the partition function.
Answer
The energy is just the kinetic energy, given by \begin{align} T &= \frac{\hbar^2 |\vec k|^2}{2m} \end{align} The allowed values of \(k\) are determined by the boundary conditions. If we choose periodic boundary conditions, then \begin{align} k_x &= n_x \frac{2\pi}{L} & n_x &= \text{any integer} \end{align} and similarly for \(k_y\) and \(k_z\), which gives us \begin{align} E_{n_xn_yn_z} &= \frac{2\pi^2\hbar^2}{mL^2}\left(n_x^2+n_y^2+n_z^2\right) \end{align} where \(n_x\), \(n_y\), and \(n_z\) take any integer values. If we chose the infinite square well boundary conditions instead, our integers would be positive values only, and the prefactor would differ by a factor of four.

From this point, we just need to sum over all states to find \(Z\), and from that the free energy and everything else! So how do we sum all these things up? \begin{align} Z &= \sum_{n_x=-\infty}^{\infty} \sum_{n_y=-\infty}^{\infty} \sum_{n_z=-\infty}^{\infty} e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}\left(n_x^2+n_y^2+n_z^2\right) } \\ &= \sum_{n_x}\sum_{n_y} \sum_{n_z} e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_x^2 } e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_y^2 } e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_z^2 } \\ &= \left(\sum_{n_x} e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_x^2 }\right) \left(\sum_{n_y} e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_y^2 }\right) \left(\sum_{n_z} e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n_z^2 }\right) \\ &= \left(\sum_{n=-\infty}^\infty e^{-\beta \frac{2\pi^2\hbar^2}{mL^2}n^2 }\right)^3 \end{align} The last bit here basically looks a lot like separation of variables. Our energy separates into a sum of \(x\), \(y\) and \(z\) portions (which is why we can use separation of variables for the quantum problem), but that also causes things to separate (into a product) when we compute the partition function.
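This factorization is easy to verify numerically. The sketch below uses an arbitrary value for the dimensionless combination \(\beta\frac{2\pi^2\hbar^2}{mL^2}\) and truncates the sums where the terms become negligible:

```python
import numpy as np

alpha = 0.05                 # stand-in for beta * 2 pi^2 hbar^2 / (m L^2)
n = np.arange(-50, 51)       # truncated sum; terms die off fast at this alpha

Z1d = np.sum(np.exp(-alpha * n**2))

# Brute-force triple sum over (nx, ny, nz), using broadcasting
nx, ny, nz = np.meshgrid(n, n, n, sparse=True)   # sparse grids save memory
Z3d = np.sum(np.exp(-alpha * (nx**2 + ny**2 + nz**2)))

print(Z3d, Z1d**3)           # equal: the triple sum factors into a product
```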

This final sum here is now something we would like to approximate. If our box is reasonably big (and our temperature is not too low), we can assume that \(\frac{2\pi^2\hbar^2}{k_BTmL^2} \ll 1\), which is the classical limit. In this limit, the “thing in the exponential” hardly changes when we change \(n\) by 1, so we can reasonably replace this summation with an integral.

Note
You might have thought to use a power series expansion (which is a good instinct!) but in this case that won't work, because \(n\) gets arbitrarily large.
\begin{align} Z &\approx \left(\int_{-\infty}^\infty e^{-\frac{2\pi^2\hbar^2}{k_BTmL^2}n^2 } dn\right)^3 \end{align} We can now do a \(u\) substitution to simplify this integral. \begin{align} \xi &= \sqrt{\frac{2\pi^2\hbar^2}{k_BTmL^2}}n & d\xi &= \sqrt{\frac{2\pi^2\hbar^2}{k_BTmL^2}}dn \end{align} This gives us a very easy integral. \begin{align} Z &= \left(\sqrt{\frac{k_BTmL^2}{2\pi^2\hbar^2}}\int_{-\infty}^\infty e^{-\xi^2 } d\xi\right)^3 \\ &= \left(\frac{k_BTmL^2}{2\pi^2\hbar^2}\right)^\frac32 \left(\int_{-\infty}^\infty e^{-\xi^2 } d\xi\right)^3 \\ &= V\left(\frac{k_BTm}{2\pi^2\hbar^2}\right)^\frac32\pi^\frac32 \\ &= V\left(\frac{k_BTm}{2\pi\hbar^2}\right)^\frac32 \end{align} So there we have our partition function for a single atom in a big box. Let's go on to find exciting things! First off, let's give a name to the nasty fraction to the \(\frac32\) power. It has dimensions of inverse volume, or number per volume, and it has \(\hbar\) in it (which makes it quantum) so let's call it \(n_Q\), since I use \(n=N/V\) for number density. \begin{align} n_Q &= \left(\frac{k_BTm}{2\pi\hbar^2}\right)^\frac32 \\ F &= -kT\ln Z \\ &= -kT\ln \left(Vn_Q\right) \end{align} This looks like (and is) a very simple formula, but you need to keep in mind that \(n_Q\) depends on temperature, so it's not quite as simple as it looks. Now that we have the Helmholtz free energy, we can solve for the entropy, pressure, and internal energy pretty quickly.
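To get a feel for the numbers, here is a sketch with real constants but an arbitrarily chosen gas (helium) and box size (1 cm); it also demonstrates the sum-to-integral replacement at a larger value of the small parameter, where the truncated sum converges quickly:

```python
import numpy as np

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K
m = 6.64e-27        # kg, roughly a helium atom
L = 1e-2            # m, a 1 cm box (arbitrary choice for the example)
T = 300.0           # K

alpha = 2 * np.pi**2 * hbar**2 / (kB * T * m * L**2)
print("classical-limit parameter:", alpha)   # ~1e-16, utterly tiny

# 1D sum vs Gaussian integral, shown at a larger alpha where the sum converges
a = 0.01
n = np.arange(-2000, 2001)
print(np.sum(np.exp(-a * n**2)), np.sqrt(np.pi / a))   # essentially equal

# Quantum concentration and single-atom partition function
nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5
print("n_Q =", nQ, "m^-3")
print("Z_1 =", L**3 * nQ)
```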
Small groups
Solve for the entropy, pressure, and internal energy. \begin{align} S &= -\left(\frac{\partial F}{\partial T}\right)_V \\ &= k\ln \left(Vn_Q\right) +\frac{kT}{Vn_Q}V\frac{dn_Q}{dT} \\ &= k\ln \left(Vn_Q\right) +\frac{kT}{n_Q}\frac32 \frac{n_Q}{T} \\ &= k\ln \left(Vn_Q\right) +\frac32k_B \end{align} You could find \(U\) by going back to the weighted average definition and using the derivative trick from the partition function, but with the free energy and entropy it is just algebra. \begin{align} U &= F + TS \\ &= -kT\ln \left(Vn_Q\right) + kT\ln \left(Vn_Q\right) +\frac32k_BT \\ &= \frac32k_BT \end{align} The pressure derivative gives a particularly simple result. \begin{align} p &= -\left(\frac{\partial F}{\partial V}\right)_T \\ &= \frac{kT}{V} \end{align}
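These derivatives can also be checked by numerically differentiating \(F\). This sketch works in reduced units, lumping the constants inside \(n_Q\) into a single made-up constant \(c\):

```python
import numpy as np

kB = 1.0
c = 1.0   # lumps m/(2 pi hbar^2) into one constant; units chosen for convenience

def F(T, V):
    nQ = (c * kB * T) ** 1.5     # quantum concentration scales as T^(3/2)
    return -kB * T * np.log(V * nQ)

T, V, h = 2.0, 3.0, 1e-6
S = -(F(T + h, V) - F(T - h, V)) / (2 * h)   # S = -(dF/dT)_V
p = -(F(T, V + h) - F(T, V - h)) / (2 * h)   # p = -(dF/dV)_T
U = F(T, V) + T * S

print(S - (kB * np.log(V * (c * kB * T) ** 1.5) + 1.5 * kB))  # ~0
print(p - kB * T / V)                                         # ~0
print(U - 1.5 * kB * T)                                       # ~0
```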

Ideal gas with multiple atoms

Extending from a single atom to several requires just a bit more subtlety. Naively, you could just argue that because we understand extensive and intensive quantities, we should be able to go from a single atom to \(N\) atoms by simply scaling all extensive quantities. That is almost correct (provided it is done carefully).

The entropy has an extra term (the “entropy of mixing”), which also shows up in the free energy. Note that while we may think of this “extra term” as an abstract counting thing, it is physically observable, provided we do the right kind of experiment (which turns out to need to involve changing \(N\), so we won't discuss it in detail until we talk about changing \(N\) later).

There are a few different ways we could imagine putting \(N\) non-interacting atoms together. I will discuss a few here, starting from simplest, and moving up to the most complex.

Different atoms, same box

One option is to consider a single box with volume \(V\) that holds \(N\) different atoms, each of a different kind, but all with the same mass. In this case, each microstate of the system will consist of a microstate for each atom. Quantum mechanically, the wave function for the entire state with \(N\) atoms will separate and will be a product of \(N\) single-particle states (or orbitals) \begin{align} \Psi_{\text{microstate}}(\vec r_1, \vec r_2, \cdots, \vec r_N) &= \prod_{i=1}^{N}\phi_{n_{xi},n_{yi},n_{zi}}(\vec r_i) \end{align} and the energy will just be a sum of different energies. The result of this will be that the partition function of the entire system will just be the product of the partition functions of all the separate non-interacting systems (which happen to all be equal). This is mathematically equivalent to what already happened with the three \(x\), \(y\) and \(z\) portions of the partition function. \begin{align} Z_N &= Z_1^N \\ F_N &= N F_1 \end{align} This results in simply scaling all of our extensive quantities by \(N\) except the volume, which didn't increase when we added more atoms.

This result sounds great, in that it seems to be perfectly extensive, but when we look more closely, we can see that it is actually not extensive! \begin{align} F_N &= -NkT\ln(Vn_Q) \end{align} If we double the size of our system, so \(N\rightarrow 2N\) and \(V\rightarrow 2V\), you can see that the free energy does not simply double, because the \(V\) in the logarithm doubles while \(n_Q\) remains the same (because it is intensive). So there must be an error here, which turns out to be caused by having treated all the atoms as distinct. If each atom is a unique snowflake, then it doesn't quite make sense to expect the result to be extensive, since you aren't scaling up “interchangeable” things.

Identical atoms, but different boxes
We can also consider saying all atoms are truly identical, but each atom is confined into a different box, each with identical (presumably small) size. In this case, the same reasoning as we used above applies, but now we also scale the total volume up by \(N\). This is a more natural application of the idea of extensivity. \begin{align} Z_N &= Z_1^N \\ F_N &= N F_1 \\ V &= N V_1 \end{align} This is taking the idea of extensivity to an extreme: we keep saying that a system with half as much volume and half as many atoms is “half as much” until there is only one atom left. You would be right to be skeptical that putting one atom per box hasn't introduced an error.
Identical atoms, same box

This is the picture for a real ideal gas. All of our atoms are the same, or perhaps some fraction are a different isotope, but who cares about that? Since they are all in the same box, we will want to write the many-atom wavefunction as a product of single-atom wavefunctions (sometimes called orbitals). Thus the wave function looks like our first option of “different atoms, same box”, but we have fewer distinct microstates, since swapping the quantum numbers of two atoms doesn't change the microstate.

How do we remove this duplication? This is sort of a fundamental problem when our business is counting microstates. Firstly, we will consider it vanishingly unlikely for two atoms to be in the same orbital (when we study Bose condensation, we will see this assumption breaking down). Then we need to figure out exactly how many times we counted each microstate, so we can correct our count (and our partition function). Given negligible probability of two atoms occupying an identical orbital, that number is the number of permutations of \(N\) distinct labels, which is \(N!\). Thus we have a corrected partition function \begin{align} Z_N &= \frac1{N!}Z_1^N \\ F_N &= N F_1 + k_BT \ln N! \\ &\approx N F_1 + Nk_BT(\ln N - 1) \\ &= -NkT\ln(Vn_Q) + NkT(\ln N - 1) \\ &= NkT\left(\ln\left(\frac{N}{Vn_Q}\right) - 1\right) \\ &= NkT\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) \end{align} This answer is extensive, because now we have a ratio of \(V\) and \(N\) in the logarithm. So yay.
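We can see numerically that the \(N!\) is exactly what rescues extensivity. In this sketch (reduced units; \(n_Q\) set to 1, since it is intensive and drops out of the comparison), doubling \(N\) and \(V\) doubles the corrected free energy but not the uncorrected one:

```python
import math

kB = T = nQ = 1.0   # reduced units; nQ is intensive, so it cancels in the check

def F_distinct(N, V):
    return -N * kB * T * math.log(V * nQ)           # no N! correction

def F_identical(N, V):
    # ln(N!) via lgamma(N + 1), which avoids overflow for huge N
    return F_distinct(N, V) + kB * T * math.lgamma(N + 1)

N, V = 1e22, 3.0
print(F_distinct(2 * N, 2 * V) / F_distinct(N, V))    # ~3.26, not 2: not extensive
print(F_identical(2 * N, 2 * V) / F_identical(N, V))  # 2.000...: extensive
```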

We now have the true free energy for an ideal gas at high enough temperature.

Small groups (10-15 minutes)
Given this free energy, solve for \(S\), \(U\), and \(p\).
Answer

This is very similar to what we did with just one atom, but now it will give us the true answer for the monatomic ideal gas. \begin{align} S &= -\left(\frac{\partial F}{\partial T}\right)_V \\ &= -Nk(\ln\left(\frac{n}{n_Q}\right) - 1) - NkT\frac{\partial}{\partial T}\ln\left(\frac{n}{n_Q}\right) \\ &= -Nk\ln\left(\frac{n}{n_Q}\right) +Nk + NkT\frac{\partial}{\partial T}\ln n_Q \\ &= -Nk\ln\left(\frac{n}{n_Q}\right) +Nk + \frac32 Nk \\ &= -Nk\ln\left(\frac{n}{n_Q}\right) + \frac52 Nk \end{align} This is called the Sackur-Tetrode equation. The quantum mechanics shows up here (\(\hbar^2/m\)), even though we took the classical limit, because the entropy of a truly classical ideal gas has no minimum value. So quantum mechanics sets the zero of entropy. Note that the zero of entropy is a bit tricky to measure experimentally (albeit possible). The zero of entropy is in fact set by the Third Law of Thermodynamics, which you probably haven't heard of.
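Plugging real numbers into the Sackur-Tetrode equation is a good way to see that it is not abstract. This sketch evaluates the entropy per atom for helium at room temperature and atmospheric pressure (my arbitrary choice of gas and conditions); the result comes out around \(15\,k_B\) per atom:

```python
import numpy as np

hbar = 1.0546e-34   # J s
kB = 1.3807e-23     # J/K
m = 6.64e-27        # kg, roughly a helium atom
T = 300.0           # K
p = 101325.0        # Pa, atmospheric pressure

n = p / (kB * T)                                   # number density from pV = NkT
nQ = (m * kB * T / (2 * np.pi * hbar**2)) ** 1.5   # quantum concentration

S_per_atom = -kB * np.log(n / nQ) + 2.5 * kB       # Sackur-Tetrode, per atom
print(S_per_atom / kB)   # ~15: a real, measurable number of k_B per atom
```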

Now we can solve for the internal energy: \begin{align} U &= F + TS \\ &= NkT\left(\ln\left(\frac{n}{n_Q}\right) - 1\right) -NkT\ln\left(\frac{n}{n_Q}\right) + \frac52 NkT \\ &= \frac32 NkT \end{align} This is just the standard answer you're familiar with. You can notice that it doesn't have any quantum mechanics in it, because we took the classical limit.

The pressure is easier than the entropy, since the volume is only inside the log: \begin{align} p &= -\left(\frac{\partial F}{\partial V}\right)_T \\ &= \frac{NkT}{V} \end{align} This is the ideal gas law. Again, the quantum mechanics has vanished in the classical limit.


Keywords
ideal gas, entropy, canonical ensemble, Boltzmann probability, Helmholtz free energy, statistical mechanics