Lecture: Chemical potential and Gibbs distribution

Thermal and Statistical Physics 2020
These notes from the fifth week of https://paradigms.oregonstate.edu/courses/ph441 cover the grand canonical ensemble. They include several small group activities.
Here is the instructor's guide for week five.

Week 5: Chemical potential and Gibbs distribution

Reading: K&K 9, Schroeder 7.1

This week we will be looking at scenarios where the number of particles in a system changes. We could technically always manage to solve problems without treating such a system, but allowing $N$ to change is often a lot easier, just as letting the energy change made things easier. In both cases, we enable ourselves to consider a smaller system, which tends to be both conceptually and mathematically simpler.

Small white boards (3 minutes)
Talk with your neighbor for a moment about how you expect the density of the atmosphere to vary with altitude.

The atmosphere

Let's talk about the atmosphere for a moment. Each atom in the atmosphere has a potential energy. We can solve this problem using the canonical ensemble as we have learned. We will consider just one atom, but now with gravitational potential energy as well as kinetic energy. This time around we'll do this classically rather than quantum mechanically. We can work out the probability of this atom having any particular momentum and position. \begin{align} P_1(\vec p, \vec r) &= \frac{e^{-\beta \left(\frac{p^2}{2m} + mgz\right)}}{ Z_1 } \\ &= \frac{e^{-\beta \frac{p^2}{2m} -\beta mgz}}{ Z_1 } \end{align} This tells us that the probability of this atom being at any height drops exponentially with height. If we extend this to many atoms, clearly the density must drop exponentially with height. This week we'll be looking at easier approaches to explain this sort of phenomenon. You can see the obvious fact that potential energy will affect density, and hence pressure. We will be generalizing the idea of potential energy into what is called chemical potential.
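As a quick numerical sanity check on this exponential drop, here is a short Python sketch (using assumed round numbers for molecular nitrogen at 300 K) that computes the isothermal scale height $k_BT/mg$, the altitude over which the density falls by a factor of $1/e$:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
g = 9.8              # gravitational acceleration, m/s^2

def density_ratio(z, m, T):
    """n(z)/n(0) for an isothermal ideal-gas atmosphere."""
    return math.exp(-m * g * z / (k_B * T))

# Rough numbers for molecular nitrogen at room temperature (assumed values)
m_N2 = 28 * 1.66e-27   # kg
T = 300.0              # K
scale_height = k_B * T / (m_N2 * g)   # altitude where density falls by 1/e
```

The scale height comes out around 9 km, which is why the air is noticeably thinner on high mountains.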

Chemical potential

Imagine for a moment what happens if you allow just two systems to exchange particles as well as energy. Clearly they will exchange particles for a while, and then things will settle down. If we hold them at fixed temperature, their combined Helmholtz free energy will be minimized. This means that the derivative of the Helmholtz free energy with respect to $N$ must be equal on both sides. This defines the chemical potential. \begin{align} \mu &= \left(\frac{\partial F}{\partial N}\right)_{T,V} \end{align} This expands our total differential of the free energy \begin{align} dF &= -SdT -pdV + \mu dN \end{align} which also expands our understanding of the thermodynamic identity \begin{align} dU &= TdS - pdV + \mu dN \end{align} which tells us that the chemical potential is also \begin{align} \mu &= \left(\frac{\partial U}{\partial N}\right)_{S,V} \end{align} The chemical potential expands our set of thermodynamic variables, and allows all sorts of nice excitement. Specifically, we now have three extensive variables that the internal energy depends on, as well as their derivatives, the temperature, pressure, and chemical potential.

Note
In general, there is one chemical potential for each kind of particle, thus the word “chemical” in chemical potential. Thus the “three” I discuss is actually a bit flexible.

Internal and external

The chemical potential is in fact very much like potential energy. We can distinguish between external chemical potential, which is basically ordinary potential energy, and internal chemical potential, which is the chemical potential that we compute as a property of a material. We'll do a fair amount of computing of the internal chemical potential this week, but keep in mind that the total chemical potential is what becomes equal in systems that are in equilibrium. The total chemical potential at the top of the atmosphere, is equal to the chemical potential at the bottom. If it were not, then atoms would diffuse from one place to the other.

Ideal gas chemical potential

Recall the Helmholtz free energy of an ideal gas is given by \begin{align} F &= NF_1 + k_BT \ln N! \\ &= -Nk_BT \ln\left(V\left(\frac{mk_BT}{2\pi\hbar^2}\right)^{\frac32}\right) + k_BT N(\ln N-1) \\ &= -Nk_BT \ln\left(Vn_Q\right) + k_BT N(\ln N-1) \\ &= Nk_BT \ln\left(\frac{N}{V}\frac1{n_Q}\right) - Nk_BT \end{align}

Small groups
Find the chemical potential of the ideal gas.
To find the chemical potential, we just need to take a derivative. \begin{align} \mu &= \left(\frac{\partial F}{\partial N}\right)_{V,T} \\ &= k_BT\ln\left(\frac{N}{V}\frac1{n_Q}\right) \\ &= k_BT\ln\left(\frac{n}{n_Q}\right) \end{align} where the number density $n$ is given by $n\equiv N/V$.

This equation can be solved to find the density in terms of the chemical potential: \begin{align} n &= n_Q e^{\beta \mu} \end{align} This might remind you of the Boltzmann relation. In fact, it's very closely related to the Boltzmann relation. We do want to keep in mind that the $\mu$ above is the internal chemical potential.
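Here is a small Python sketch of these two relations, using helium at room temperature with an assumed atmospheric number density; it computes $n_Q$ and $\mu = k_BT\ln(n/n_Q)$, then inverts $n = n_Q e^{\beta\mu}$ as a consistency check:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s

def n_quantum(m, T):
    """Quantum concentration n_Q = (m k_B T / 2 pi hbar^2)^(3/2)."""
    return (m * k_B * T / (2 * math.pi * hbar**2)) ** 1.5

def mu_internal(n, m, T):
    """Internal chemical potential of an ideal gas: k_B T ln(n/n_Q)."""
    return k_B * T * math.log(n / n_quantum(m, T))

# Helium at room temperature and roughly atmospheric density (assumed numbers)
m_He = 4 * 1.66e-27      # kg
T = 300.0                # K
n = 2.5e25               # m^-3
mu = mu_internal(n, m_He, T)
n_back = n_quantum(m_He, T) * math.exp(mu / (k_B * T))  # n = n_Q e^{beta mu}
```

Since $n \ll n_Q$ for a room-temperature gas, the logarithm is negative, and so is the internal chemical potential.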

The total chemical potential is given by the sum of the internal chemical potential and the external chemical potential, and that total is what is equalized between systems that are in diffusive contact. \begin{align} \mu_{tot} &= \mu_{int} + mgz \\ &= k_BT\ln\left(\frac{n}{n_Q}\right) + mgz \end{align} We can solve for the density now, as a function of position. \begin{align} k_BT\ln\left(\frac{n}{n_Q}\right) &= \mu_{tot}-mgz \\ n &= n_Q e^{\beta (\mu_{tot}-mgz)} \end{align} This is just telling us the same result we already knew, which is that the density must drop exponentially with height.

Interpreting the chemical potential

The chemical potential can be challenging to understand intuitively, for myself as well as for you. The ideal gas expression \begin{align} n &= n_Q e^{\beta\mu} \end{align} can help with this. This tells us that the density increases as we increase the chemical potential. Particles spontaneously flow from high chemical potential to low chemical potential, just like heat flows from high temperature to low. This fits with the idea that at high $\mu$ the density is high, since I expect particles to naturally flow from a high density region to a low density region.

The distinction between internal and external chemical potential allows us to reason about systems like the atmosphere. Where the external chemical potential is high (at high altitude), the internal chemical potential must be lower, and there is lower density. This is because particles have already fled the high-$\mu$ region to happier locations closer to the Earth.

Gibbs factor and sum

Let's consider how we maximize entropy when we allow not just microstates with different energy, but also microstates with different number of particles. The problem is the same as the one we dealt with in the first week. We want to maximize the entropy, but need to fix the total probability, the average energy and now the average number. \begin{align} \langle N\rangle &= N =\sum_i P_i N_i \\ \langle E\rangle &= U = \sum_i P_i E_i \\ 1 &= \sum_i P_i \end{align} To solve for the probability $P_i$ we will want to maximize the entropy $S=-k\sum_i P_i\ln P_i$ subject to the above constraints. Like what I did the first week of class, we will need to use Lagrange multipliers.

The Lagrangian which we want to maximize will look like \begin{multline} \mathcal{L} = -k\sum_i P_i \ln P_i + k\alpha\left(1 - \sum_i P_i\right)\\ + k\beta\left(U - \sum_i P_i E_i\right)\\ + k\gamma\left(N - \sum_i P_i N_i\right) \end{multline}

Small groups
Solve for the probabilities $P_i$ that maximize this Lagrangian, subject to the above constraints. Eliminate $\alpha$ from the expression for probability, so you will end up with probabilities that depend on the other two Lagrange multipliers, one of which is our usual $\beta$, while the other one we will relate to chemical potential.
We maximize $\mathcal{L}$ by setting its derivatives equal to zero. \begin{align} 0 &= -\frac1{k}\frac{\partial \mathcal{L}}{\partial P_i} \\ &= \ln P_i + 1 + \alpha + \beta E_i + \gamma N_i \\ P_i &= e^{-1-\alpha - \beta E_i - \gamma N_i} \end{align} Now as before we'll want to apply the normalization constraint. \begin{align} 1 &= \sum_i P_i \\ &= \sum_i e^{-1-\alpha - \beta E_i - \gamma N_i} \\ &= e^{-1-\alpha} \sum_i e^{-\beta E_i - \gamma N_i} \\ e^{-1-\alpha} &= \frac1{\sum_i e^{-\beta E_i - \gamma N_i}} \end{align} Thus we find that the probability of a given microstate is \begin{align} P_i &= \frac{e^{-\beta E_i - \gamma N_i}}{\mathcal{Z}} \\ \mathcal{Z} &\equiv \sum_i e^{-\beta E_i - \gamma N_i} \end{align} where we will call the new quantity $\mathcal{Z}$ the grand partition function or Gibbs sum.
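We can make this concrete with a tiny numerical example. The sketch below (toy energies, units where $k_BT$ sets the scale) builds the Gibbs sum for a single adsorption site that is either empty ($E=0$, $N=0$) or occupied ($E=\epsilon$, $N=1$):

```python
import math

def gibbs_probabilities(states, beta, mu):
    """Grand-canonical probabilities for microstates given as (E, N) pairs."""
    weights = [math.exp(-beta * (E - mu * N)) for E, N in states]
    Z = sum(weights)          # the Gibbs sum (grand partition function)
    return [w / Z for w in weights], Z

# One adsorption site: empty (E=0, N=0) or occupied (E=0.5, N=1),
# with beta and mu chosen as assumed toy numbers.
states = [(0.0, 0), (0.5, 1)]
probs, Z = gibbs_probabilities(states, beta=1.0, mu=0.2)
```

Note that the sum runs over microstates with *different* particle numbers, which is exactly what distinguishes the Gibbs sum from the ordinary partition function.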

We have already identified $\beta$ as $\frac1{kT}$, but what is this $\gamma$? It is a dimensionless quantity. We expect that $\gamma$ will relate to a derivative of the entropy with respect to $N$ (since it is the Lagrange multiplier for the $N$ constraint). We can figure this out by examining the newly expanded total differential of entropy: \begin{align} dU &= TdS - pdV + \mu dN \\ dS &= \frac1T dU + \frac{p}T dV - \frac\mu{T} dN \end{align}

Small groups
I'd like you to repeat your first ever homework problem in this class, but now with the $N$-changing twist. Given the above set of probabilities, along with the Gibbs entropy $S = -k\sum P\ln P$, find the total differential of entropy in terms of $dU$ and $dN$, keeping in mind that $V$ is inherently held fixed by holding the energy eigenvalues fixed. Equate this total differential to the $dS$ above to identify $\beta$ and $\gamma$ with thermodynamic quantities.
\begin{align} S &= -k\sum_i P_i\ln P_i \\ &= -k \sum_i P_i \ln\left(\frac{e^{-\beta E_i-\gamma N_i}}{\mathcal{Z}}\right) \\ &= -k\sum_i P_i(-\beta E_i - \gamma N_i - \ln\mathcal{Z}) \\ &= k\beta U + k\gamma N + k\ln\mathcal{Z} \end{align} Now we can zap this with $d$ to find its derivatives: \begin{align} dS &= k\beta dU + kUd\beta + k\gamma dN + kNd\gamma + k\frac{d\mathcal{Z}}{\mathcal{Z}} \end{align} Now we just need to find $d\mathcal{Z}$... \begin{align} d\mathcal{Z} &= \frac{\partial\mathcal{Z}}{\partial\beta}d\beta + \frac{\partial\mathcal{Z}}{\partial\gamma}d\gamma \\ &= -\sum_i E_ie^{-\beta E_i-\gamma N_i}d\beta - \sum_i N_ie^{-\beta E_i-\gamma N_i}d\gamma \\ &= -U\mathcal{Z}d\beta -N\mathcal{Z}d\gamma \end{align} The $d\beta$ and $d\gamma$ terms cancel, so putting $dS$ together gives \begin{align} dS &= k\beta dU + k\gamma dN \\ &= \frac1TdU - \frac{\mu}{T}dN \end{align} Thus, we conclude that \begin{align} k\beta &= \frac1T & k\gamma &= -\frac{\mu}{T} \\ \beta &= \frac1{kT} & \gamma &= -\beta\mu \end{align}
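The intermediate relation $S = k\beta U + k\gamma N + k\ln\mathcal{Z}$ is easy to verify numerically. This sketch (toy microstates, units with $k_B=1$) compares the Gibbs entropy $-\sum_i P_i\ln P_i$ against that formula:

```python
import math

def entropy_check(states, beta, mu):
    """Compare -sum P ln P with beta*U + gamma*N + ln Z (units of k_B),
    for microstates given as a list of (E, N) pairs."""
    gamma = -beta * mu
    weights = [math.exp(-beta * E - gamma * N) for E, N in states]
    Z = sum(weights)
    P = [w / Z for w in weights]
    U = sum(p * E for p, (E, N) in zip(P, states))
    N_avg = sum(p * N for p, (E, N) in zip(P, states))
    S_gibbs = -sum(p * math.log(p) for p in P)
    S_formula = beta * U + gamma * N_avg + math.log(Z)
    return S_gibbs, S_formula

# Toy spectrum of (energy, particle number) microstates (assumed values)
states = [(0.0, 0), (0.3, 1), (0.7, 1), (1.1, 2)]
S_gibbs, S_formula = entropy_check(states, beta=1.5, mu=0.2)
```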

Actual Gibbs sum (or grand sum)

Putting this interpretation for $\gamma$ into our probabilities we find the Gibbs factor and Gibbs sum (or grand sum or grand partition function) to be: \begin{align} P_j &= \frac{e^{-\beta \left(E_j - \mu N_j\right)}}{\mathcal{Z}} \\ \mathcal{Z} &\equiv \sum_i e^{-\beta(E_i - \mu N_i)} \end{align} where you must keep in mind that the sums are over all microstates (including states with different $N$). We can go back to our expressions for internal energy and number \begin{align} U &= \sum_i P_i E_i \\ &= \frac1{\mathcal{Z}}\sum_i E_i e^{-\beta (E_i -\mu N_i)} \\ N &= \sum_i P_i N_i \\ &= \frac1{\mathcal{Z}}\sum_i N_i e^{-\beta (E_i - \mu N_i)} \end{align} We can now use the derivative trick to relate $U$ and $N$ to the Gibbs sum $\mathcal{Z}$, should we so desire.

Small groups
Work out the partial derivative tricks to compute $U$ and $N$ from the grand sum.
Let's start by exploring the derivative with respect to $\beta$, which worked so nicely with the partition function. \begin{align} \frac1{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \beta} &= -\frac1{\mathcal{Z}}\sum_i (E_i-\mu N_i) e^{-\beta(E_i - \mu N_i)} \\ &= -U + \mu N \end{align} Now let's examine a derivative with respect to $\mu$. \begin{align} \frac1{\mathcal{Z}} \frac{\partial \mathcal{Z}}{\partial \mu} &= \frac1{\mathcal{Z}}\sum_i (\beta N_i) e^{-\beta(E_i - \mu N_i)} \\ &= \beta N \end{align} Arranging these to find $N$ and $U$ is not hard.
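These derivative tricks can be checked by finite differences on a toy spectrum (assumed illustrative $(E_i, N_i)$ values, units with $k_B=1$):

```python
import math

# Toy spectrum of (E_i, N_i) microstates (assumed illustrative values)
states = [(0.0, 0), (0.3, 1), (0.8, 1), (1.0, 2)]

def lnZ(beta, mu):
    return math.log(sum(math.exp(-beta * (E - mu * N)) for E, N in states))

def averages(beta, mu):
    weights = [math.exp(-beta * (E - mu * N)) for E, N in states]
    Z = sum(weights)
    U = sum(w * E for w, (E, N) in zip(weights, states)) / Z
    N_avg = sum(w * N for w, (E, N) in zip(weights, states)) / Z
    return U, N_avg

beta, mu, h = 1.2, 0.4, 1e-6
U, N_avg = averages(beta, mu)
# central finite differences of ln Z, to compare with -(U - mu N) and beta N
dlnZ_dbeta = (lnZ(beta + h, mu) - lnZ(beta - h, mu)) / (2 * h)
dlnZ_dmu = (lnZ(beta, mu + h) - lnZ(beta, mu - h)) / (2 * h)
```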
Small groups
Show that \begin{align} \left(\frac{\partial N}{\partial\mu}\right)_{T,V} > 0 \end{align}
\begin{align} N &= \sum_i N_i P_i \\ &= kT \frac1{\mathcal{Z}} \left(\frac{\partial\mathcal{Z}}{\partial \mu}\right)_{\beta} \end{align} So the derivative we seek will be \begin{align} \left(\frac{\partial N}{\partial\mu}\right)_{T,V} &= kT \left(\frac{\partial^2\ln\mathcal{Z}}{\partial \mu^2}\right)_{\beta} \\ &= \sum_i N_i \left(\frac{\partial P_i}{\partial \mu}\right)_{\beta} \\ &= \sum_i N_i \left(\beta N_i P_i - \frac{P_i}{\mathcal{Z}}\left(\frac{\partial\mathcal{Z}}{\partial \mu}\right)_{\beta}\right) \\ &=\sum_i N_i \left(\beta N_i P_i - \beta \langle N\rangle P_i \right) \end{align} We can simplify the notation by expressing things in terms of averages, since we've got sums of $P_i$ times something. \begin{align} &= \beta\left<N(N - \langle N\rangle)\right> \\ &= \beta\left(\left<N^2\right> - \langle N\rangle^2\right) \\ &= \beta\left<\left( N - \langle N\rangle\right)^2\right> \end{align} This is positive, because it is an average of something squared. The last step is a common step when examining variances of distributions, and relies on the fact that $\left<N - \langle N\rangle\right>=0$.
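Numerically, the identity $\left(\partial N/\partial\mu\right)_{T,V} = \beta\left<(N-\langle N\rangle)^2\right>$ can be verified by finite differences on a toy spectrum (assumed values, units with $k_B=1$):

```python
import math

states = [(0.0, 0), (0.5, 1), (1.2, 2)]  # (E_i, N_i), assumed toy values

def moments(beta, mu):
    weights = [math.exp(-beta * (E - mu * N)) for E, N in states]
    Z = sum(weights)
    N_avg = sum(w * N for w, (E, N) in zip(weights, states)) / Z
    N2_avg = sum(w * N * N for w, (E, N) in zip(weights, states)) / Z
    return N_avg, N2_avg

beta, mu, h = 1.0, 0.3, 1e-6
N_avg, N2_avg = moments(beta, mu)
var_N = N2_avg - N_avg**2     # <(N - <N>)^2>, always positive
dN_dmu = (moments(beta, mu + h)[0] - moments(beta, mu - h)[0]) / (2 * h)
```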

Euler's homogeneous function theorem

There is a nice theorem we can use to better understand the chemical potential, and how it relates to the Gibbs free energy. This involves reasoning about how internal energy changes when all the extensive variables are changed simultaneously, and connects with Euler's homogeneous function theorem.

Suppose we have a glass that we will slowly pour water into. We will define our “system” to be all the water in the glass. The glass is open, so the pressure remains constant. Since the water is at room temperature (and let's just say the room humidity is 100%, to avoid thinking about evaporation), the temperature remains constant as well.

Small white boards
What is the initial internal energy (and entropy and volume and $N$) of the system? i.e. when there is not yet any water in the glass...
Since these are extensive quantities, they must all be zero when there is no water in the glass.
Small white boards
Suppose the water is added at a rate $\frac{dN}{dt}$. Suppose you know the values of $N$, $S$, $V$, and $U$ for a given amount of room temperature water (which we can call $N_0$, $S_0$, etc.). Find the rate of change of these quantities.
Because these are extensive quantities, they must all be increasing with equal proportion \begin{align} \frac{dV}{dt} &= \frac{V_0}{N_0}\frac{dN}{dt} \\ \frac{dS}{dt} &= \frac{S_0}{N_0}\frac{dN}{dt} \\ \frac{dU}{dt} &= \frac{U_0}{N_0}\frac{dN}{dt} \end{align}

This tells us that differential changes to each of these quantities must be related in the same way, for this process of pouring in more identical water. And we can drop the 0 subscript, since the ratio of quantities is the same regardless of how much water we have. \begin{align} dV &= \frac{V}{N}dN \\ dS &= \frac{S}{N}dN \\ dU &= \frac{U}{N}dN \end{align} Thus given the thermodynamic identity \begin{align} dU &= TdS - pdV + \mu dN \\ \frac{U}{N} dN &= T\frac{S}{N}dN -p\frac{V}{N}dN +\mu dN \\ U &= TS -pV + \mu N \end{align} This is both crazy and awesome. It feels very counter-intuitive, and you might be wondering why we didn't tell you this way back in Energy and Entropy to save you all this trouble with derivatives. The answer is that it is usually not directly all that helpful, since we now have a closed-form expression for $U$ in terms of six mutually dependent variables! So you can't use this form in order to evaluate derivatives (much).
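For a monatomic ideal gas we can check $U = TS - pV + \mu N$ numerically, using $U=\frac32 Nk_BT$, the ideal gas law, the chemical potential found earlier, and the Sackur-Tetrode entropy (the particle numbers below are assumed illustrative values):

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
hbar = 1.054571817e-34   # reduced Planck constant, J s

def euler_check(N, V, T, m):
    """Return (U, T S - p V + mu N) for a monatomic ideal gas."""
    n = N / V
    n_Q = (m * k_B * T / (2 * math.pi * hbar**2)) ** 1.5
    U = 1.5 * N * k_B * T
    p = n * k_B * T                            # ideal gas law
    mu = k_B * T * math.log(n / n_Q)           # internal chemical potential
    S = N * k_B * (math.log(n_Q / n) + 2.5)    # Sackur-Tetrode entropy
    return U, T * S - p * V + mu * N

# Assumed illustrative numbers: helium in a 1 L box at 300 K
U, euler = euler_check(N=1e22, V=1e-3, T=300.0, m=4 * 1.66e-27)
```

The logarithms in $TS$ and $\mu N$ cancel exactly, leaving $\frac32 Nk_BT$, as the algebra promises.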

This expression is however very helpful in terms of understanding the chemical potential. Consider the Gibbs free energy: \begin{align} G &\equiv U -TS +pV \\ &= \mu N \end{align} which tells us that the chemical potential is just the Gibbs free energy per particle. If we have several chemical species, this expression just becomes \begin{align} G &= \sum_i \mu_i N_i \end{align} so each chemical potential is a partial Gibbs free energy per molecule.

This explains why the chemical potential is seldom discussed in chemistry courses: they spend all their time talking about the Gibbs free energy, which turns out to be just the chemical potential multiplied by the number of particles.

Side note
There is another interesting thing we can do with the relationship that \begin{align} U &= TS-pV+\mu N \end{align} and that involves zapping it with $d$. This tells us that \begin{align} dU &= TdS + SdT - pdV - Vdp + \mu dN + Nd\mu \end{align} which looks downright weird, since it's got twice as many terms as we normally see. This tells us that the extra terms must add to zero: \begin{align} 0 &= SdT - Vdp + Nd\mu \end{align} This relationship (called the Gibbs-Duhem equation) tells us just how $T$, $p$ and $\mu$ must change in order to keep our extensive quantities extensive and our intensive quantities intensive.

Chemistry

Chemical equilibrium is somewhat different from the diffusive equilibrium that we have considered so far. In diffusive equilibrium, two systems can exchange particles, and the two systems at equilibrium must have equal chemical potentials. In chemistry, particles can be turned into other particles, so we have a more complicated scenario, but it still involves changing the number of particles in a system. In chemical equilibrium, when a given reaction is in equilibrium the sum of the chemical potentials of the reactants must be equal to the sum of the chemical potentials of the products.

An example may help. Consider for instance making water from scratch: \begin{align} 2\text{H}_2 + \text{O}_2 \rightarrow 2\text{H}_2\text{O} \end{align} In this case in chemical equilibrium \begin{align} 2\mu_{\text{H}_2} + \mu_{\text{O}_2} &= 2\mu_{\text{H}_2\text{O}} \end{align} We can take this simple equation, and turn it into an equation involving activities, which is productive if you think of an activity as being something like a concentration (and if you care about equilibrium concentrations): \begin{align} e^{\beta(2\mu_{\text{H}_2\text{O}}-2\mu_{\text{H}_2} - \mu_{\text{O}_2})} &= 1 \\ \frac{\lambda_{\text{H}_2\text{O}}^2}{\lambda_{\text{O}_2}\lambda_{\text{H}_2}^2} &= 1 \end{align} Now this looks sort of like the law of mass action, except that our equilibrium constant is 1. To get to the more familiar law of mass action, we need to introduce (a caricature of) the chemistry version of activity. The thing in square brackets is actually a relative activity, not a concentration as is often taught in introductory classes (and was considered correct prior to the late nineteenth century). It is only proportional to concentration to the extent that the substance obeys the ideal gas relationship between chemical potential and concentration. Fortunately, this is satisfied for just about anything at low concentration. For solvents (and dense materials like a solid reactant or product) the chemical potential doesn't (appreciably) change as the reaction proceeds, so it is normally omitted from the mass action equation. When I was taught this in a chemistry class back in the nineties, I was taught that the “concentration” of such a substance was dimensionless and had value 1.

Specifically, we define the thing in square brackets as \begin{align} [\text{H}_2\text{O}] &\equiv n^*_{\text{H}_2\text{O}} e^{\beta(\mu_{\text{H}_2\text{O}} - \mu^*_{\text{H}_2\text{O}})} \\ &= n^*_{\text{H}_2\text{O}} \frac{\lambda_{\text{H}_2\text{O}}}{\lambda_{\text{H}_2\text{O}}^*} \end{align} where $n^*$ is a reference concentration, and $\mu^*$ is the chemical potential of the fluid at that reference density. Using this notation, we can solve for the activity \begin{align} \lambda_{\text{H}_2\text{O}} &= \lambda_{\text{H}_2\text{O}}^* \frac{[\text{H}_2\text{O}]}{n^*_{\text{H}_2\text{O}}} \end{align} So now we can rewrite our weird mass action equation from above \begin{align} \frac{ \left(\lambda_{\text{H}_2\text{O}}^* \frac{[\text{H}_2\text{O}]}{n^*_{\text{H}_2\text{O}}}\right)^2 }{ \left(\lambda_{\text{O}_2}^* \frac{[\text{O}_2]}{n^*_{\text{O}_2}}\right) \left(\lambda_{\text{H}_2}^* \frac{[\text{H}_2]}{n^*_{\text{H}_2}}\right)^2 } &= 1 \end{align} and then we can solve for the equilibrium constant for the reaction \begin{align} \frac{ [\text{H}_2\text{O}]^2 }{ [\text{O}_2][\text{H}_2]^2 } &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } \frac{ \lambda_{\text{O}_2}^* (\lambda_{\text{H}_2}^*)^2 }{ (\lambda_{\text{H}_2\text{O}}^*)^2 } \\ &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } e^{\beta( \mu_{\text{O}_2}^* + 2\mu_{\text{H}_2}^* - 2\mu_{\text{H}_2\text{O}}^* )} \\ &= \frac{ (n^*_{\text{H}_2\text{O}})^2 }{ n^*_{\text{O}_2} (n^*_{\text{H}_2})^2 } e^{-\beta\Delta G^*} \end{align} where at the last step I defined $\Delta G^*$ as the difference in Gibbs free energy between products and reactants, and used the fact that the chemical potential is the Gibbs free energy per particle.
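Numerically, the exponential $e^{-\beta\Delta G^*}$ dominates the equilibrium constant. The sketch below (with an assumed toy reaction free energy, and the reference concentrations lumped into a single prefactor for simplicity) shows how strongly even a modest $\Delta G^*$ tips the equilibrium:

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
eV = 1.602176634e-19     # J

def equilibrium_constant(delta_G_star, T, prefactor=1.0):
    """K = prefactor * exp(-Delta G*/k_B T); the reference concentrations
    n* are lumped into the prefactor (an assumed simplification here)."""
    return prefactor * math.exp(-delta_G_star / (k_B * T))

# A reaction exergonic by 1 eV per reaction event (toy number) at 300 K
K_forward = equilibrium_constant(-1.0 * eV, 300.0)
```

A single eV of free energy difference shifts the equilibrium by more than sixteen orders of magnitude at room temperature, which is why "downhill in Gibbs free energy" reactions run essentially to completion.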

This expression for the chemical equilibrium constant is the origin of the intuition that a reaction will go forward if the Gibbs free energy of the products is lower than that of the reactants.

I hope you found this little side expedition into chemistry interesting. I find it fascinating where these fundamental chemistry relations come from, and also that the relationship between concentrations arises from an ideal gas approximation! Which is why it is only valid in the limit of low concentration, and why the solvent is typically omitted from the equilibrium constant, since its activity is essentially fixed.

Keywords
chemical potential, Gibbs distribution, grand canonical ensemble, statistical mechanics