Activity: Phase transformations

Thermal and Statistical Physics 2020
These lecture notes from the ninth week of Thermal and Statistical Physics cover phase transformations, the Clausius-Clapeyron relation, mean field theory and more. They include a number of small group activities.
  • Media
    • activity_media/Phase_diagram_of_water.svg
    • activity_media/vdw-gibbs.svg
    • activity_media/vdw-free-volume.svg
    • activity_media/vdw-pressure-volume.svg
    • activity_media/vdw-pressure.svg
    • activity_media/vdw.py
Week 9: Phase transformations

Reading: K&K 10, Schroeder 5.3

We will be ending this class by looking at phase transformations, such as the transformation from liquid to solid, or from liquid to gas. The existence of phase transformations---which are ubiquitous in nature---requires interactions between particles, which up to now we have neglected. Hence, we will be reverting to a thermodynamics approach, since incorporating interactions into statistical mechanics is not so easy.

One of the key aspects for most phase transformations is coexistence. It is possible to have both ice and liquid water in equilibrium with each other, coexisting happily. The existence of coexistence in fact breaks some assumptions that we have made. For instance, starting way back in Energy and Entropy, we have assured you that you could describe the state of a system using practically any pair of state variables (or triple, now that we include \(N\)). However, if ice and water are coexisting, then there must be an ambiguity, because at that temperature and pressure the system could be either ice or water, which are different!

Phase diagram of water, from Wikipedia

For your online edification (probably not much in class), I include here a phase diagram of water, which includes not only the liquid, vapor and solid phases, but also a dozen or so different crystal phases that you can reach at some combination of high pressure or low temperature.

A phase diagram of an ordinary pure material will have two interesting points, and three interesting lines. The two interesting points are the triple point (at which solid, liquid, and vapor can all coexist), and the critical point, at which the distinction between liquid and vapor vanishes. The three lines represent coexistence between solid and gas (or vapor), coexistence between liquid and gas, and coexistence between liquid and solid.

Coexistence

To understand a phase transformation, we first need to understand the state of coexistence.

Question
If we view the liquid and solid here as two separate systems that are in equilibrium with each other, what can you tell me about those two systems?
Answer
They must be at the same temperature (since they can exchange energy), they must be at the same pressure (since they can exchange volume), and, least obviously, they must be at the same chemical potential, since they can exchange molecules.

The first two properties define why we can draw the coexistence as a line on a pressure-temperature diagram, since when the two phases coexist they must have the same pressure and temperature. If we drew a volume-temperature diagram, the coexisting phases would not lie at the same point. The final property, that the chemical potentials must be identical, may seem obvious in retrospect. This also means that the Gibbs free energy per particle of the two phases must be equal (since this is equal to the chemical potential).

Clausius-Clapeyron

When you look at the phase diagram in its usual pressure versus temperature representation, you can now think of the lines as representing the points where two chemical potentials are equal (e.g. the chemical potential of water and of ice). A natural question is whether you could predict the slopes of these curves. Or alternatively, does knowing the slopes of these curves tell you anything about the materials in question?

We can begin by considering two very close points on the liquid-vapor curve, separated by \(dp\) and \(dT\). We know that \begin{align} \mu_g(T,p) &= \mu_\ell(T,p) \\ \mu_g(T+dT, p+dp) &= \mu_\ell(T+dT, p+dp) \end{align} We can now expand the small difference in terms of differentials \begin{multline} \cancel{\mu_g(T,p)} + \left(\frac{\partial \mu_g}{\partial T}\right)_{p,N}dT + \left(\frac{\partial \mu_g}{\partial p}\right)_{T,N}dp \\ =\cancel{\mu_\ell(T,p)} + \left(\frac{\partial \mu_\ell}{\partial T}\right)_{p,N}dT + \left(\frac{\partial \mu_\ell}{\partial p}\right)_{T,N}dp \end{multline} We can now collect the two differentials and find their ratio. \begin{multline} \left(\left(\frac{\partial \mu_g}{\partial T}\right)_{p,N} - \left(\frac{\partial \mu_\ell}{\partial T}\right)_{p,N}\right)dT \\ = \left(\left(\frac{\partial \mu_\ell}{\partial p}\right)_{T,N} - \left(\frac{\partial \mu_g}{\partial p}\right)_{T,N}\right)dp \end{multline} Thus the derivative of the coexistence curve is given by \begin{align} \frac{dp}{dT} &= \frac{ \left(\frac{\partial \mu_g}{\partial T}\right)_{p,N} - \left(\frac{\partial \mu_\ell}{\partial T}\right)_{p,N} }{ \left(\frac{\partial \mu_\ell}{\partial p}\right)_{T,N} - \left(\frac{\partial \mu_g}{\partial p}\right)_{T,N} } \\ &= -\frac{ \left(\frac{\partial \mu_g}{\partial T}\right)_{p,N} - \left(\frac{\partial \mu_\ell}{\partial T}\right)_{p,N} }{ \left(\frac{\partial \mu_g}{\partial p}\right)_{T,N} - \left(\frac{\partial \mu_\ell}{\partial p}\right)_{T,N} } \end{align}

Small groups
Find an expression for these derivatives to express this ratio in terms of thermal variables that are more comfortable. You will want to make use of the fact we derived a few weeks ago, which says that the chemical potential is the Gibbs free energy per particle, where \(G=U-TS+pV\).
Answer
\begin{align} G &= U-TS+pV \\ &= \mu N\\ dG &= dU -TdS -SdT +pdV + Vdp \\ &= -SdT +Vdp + \mu dN \\ Nd\mu + \mu dN &= -SdT +Vdp + \mu dN \\ d\mu &= -\frac{S}{N}dT +\frac{V}{N}dp \end{align} From this differential we can see that \begin{align} -\frac{S}{N} &= \left(\frac{\partial \mu}{\partial T}\right)_{p,N} \\ \frac{V}{N} &= \left(\frac{\partial \mu}{\partial p}\right)_{T,N} \end{align} Thus we can put these into the ratios above, and we will find that the \(N\)s will cancel, and the minus sign on the entropy will cancel the minus sign that was out front. \begin{align} \frac{dp}{dT} &= \frac{ \frac{S_g}{N_g} - \frac{S_\ell}{N_\ell} }{ \frac{V_g}{N_g} - \frac{V_\ell}{N_\ell} } \end{align} This looks like a bit of a nuisance having all these \(N\) values on the bottom. It looks cleaner if we just define \(s\equiv \frac{S}{N}\) as the specific entropy (or entropy per atom) and \(v\equiv \frac{V}{N}\) as the specific volume (or volume per atom). Thus we have \begin{align} \frac{dp}{dT} &= \frac{ s_g - s_\ell }{ v_g - v_\ell } \end{align} This is the famous Clausius-Clapeyron equation, and is true for any phase coexistence curve in the pressure-temperature phase diagram.

We can further expand this by interpreting the change in entropy as a latent heat. Since the entropy changes discontinuously, and the phase transformation happens entirely at a single temperature, we can use the relationship between heat and entropy to find that \begin{align} Q &= T\Delta S \end{align} We call the heat needed to change from one phase to another the latent heat \(L\), which gives us \begin{align} \frac{dp}{dT} &= \frac{L}{T\Delta V} \end{align} This equation can be a bit tricky to use correctly, since you could get the direction of \(\Delta V\) wrong. The form with entropy and volume is easier, since as long as both changes are taken in the same direction (vapor minus liquid, or vice versa) it is still correct.
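As a sanity check, we can evaluate \(\frac{dp}{dT} = \frac{L}{T\Delta V}\) numerically for boiling water at atmospheric pressure, using the textbook values \(L\approx 2.26\times10^{6}\text{ J/kg}\) and \(\Delta V\approx 1.67\text{ m}^3/\text{kg}\) (the vapor takes up vastly more volume than the liquid, so \(\Delta V\approx V_g\)):

```python
# Clausius-Clapeyron slope for the liquid-vapor curve of water at 1 atm.
L = 2.26e6    # latent heat of vaporization, J/kg
T = 373.15    # boiling temperature, K
dV = 1.67     # change in specific volume, m^3/kg (essentially all vapor)

dpdT = L / (T * dV)   # slope of the coexistence curve, Pa/K
print(dpdT)           # a few thousand Pa per kelvin
```

The slope comes out to a few kPa per kelvin, which is why pressure cookers work: raising the pressure by a fraction of an atmosphere raises the boiling point by several kelvin (the slope itself changes as you move along the curve, so this is only a local estimate).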

From Clausius-Clapeyron we can see that so long as the volume increases as the entropy also increases, the coexistence curve will have a positive slope.

Question
When would the slope ever be negative? It requires a high-entropy phase that also has lower volume!
Answer
Ice and water! Water has higher entropy, but also has lower volume than ice (i.e. is more dense). This is backwards from most other materials, and causes the melting curve to slope up and to the left for ice.

van der Waals

When we talk about phase transformations, we require some sort of system in which there are interactions between particles, since that is what leads to a phase transformation. We can either do this from the bottom up, by constructing a system in which there are interactions and then solving for the properties of that system, or we could use a more empirical approach, in which we use an approximate set of equations of state (or a free energy) that behaves much like a real system.

The van der Waals fluid is sort of in between these two approaches. I will describe how we would “derive” the van der Waals free energy in a slightly hand-waving manner, and then we will use it as an effectively empirical system that we can use to explore how a phase transition might happen. The van der Waals fluid in essence is a couple of corrections to the ideal gas law, which together add enough interactions to give a plausible description of a liquid-vapor phase transition.

Small white boards
What kind of interactions might exist in a real gas that are ignored when we treat it as an ideal gas?
Answer
Repulsion and attraction! \(\ddot\smile\) Atoms will have a very high energy if they sit on top of another atom, but atoms that are at an appropriate distance will feel an attractive interaction.

In fluids, attraction and repulsion tend to be treated very differently. Repulsion tends to primarily decrease entropy rather than increasing energy, because the atoms can simply avoid being on top of each other. In contrast, attraction often has little effect on entropy (except when there is a phase transformation), but can decrease the energy. It has little effect on entropy because the attraction is often very weak, so it doesn't (much) affect where the atoms are, but does affect the energy, provided the atoms are close enough.

Building up the free energy: repulsion

Let's start by looking at the ideal gas free energy: \begin{align} F_{\text{ideal}} &= -NkT\left(\ln\left(\frac{n_QV}{N}\right) + 1\right) \end{align} This free energy depends on both volume and temperature (also \(N\), but let's keep that fixed). The temperature dependence is out front and in \(n_Q\). The volume dependence is entirely in the logarithm. When we add the repulsive interaction, we can wave our hands a bit and argue that the effect of repulsion is to keep atoms from sitting too close to one another, which results in each atom having less volume in which it can be placed. The volume available for a given atom will be the total volume \(V\), minus the volume occupied by all the other atoms, which we can call \(Nb\), where \(N\) is the number of atoms and \(b\) is the excluded volume per atom. You might argue (correctly) that the excluded volume should be \((N-1)b\), but we will be working in the limit of \(N\gg1\) and can ignore that fine distinction. Making this substitution gives us \begin{align} F_{\text{with repulsion}} &= -NkT\left(\ln\left(\frac{n_Q(V-Nb)}{N}\right) + 1\right) \end{align} This free energy is going to be higher than the ideal gas free energy, because we are making the argument of the logarithm smaller, and there is a minus sign out front. That is good, because we would hardly expect including repulsion to lower the free energy.
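Here is a quick numerical check of the claim that the excluded-volume substitution raises the free energy. This is just a sketch with made-up numbers, working in units where \(k=1\) and folding \(n_Q\) into the units:

```python
import math

k = 1.0  # work in units where Boltzmann's constant is 1 (illustrative only)

def F_ideal(N, V, T, nQ=1.0):
    # ideal gas free energy: -NkT (ln(nQ V / N) + 1)
    return -N * k * T * (math.log(nQ * V / N) + 1)

def F_with_repulsion(N, V, T, b, nQ=1.0):
    # same form, but with the free volume V - Nb in place of V
    return -N * k * T * (math.log(nQ * (V - N * b) / N) + 1)

# Shrinking the argument of the logarithm raises F (minus sign out front):
print(F_ideal(100, 1000.0, 1.0))
print(F_with_repulsion(100, 1000.0, 1.0, b=2.0))
```

With these toy numbers the excluded volume is \(Nb=200\), a fifth of the total, and the repulsive free energy indeed comes out higher than the ideal one.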

In your homework you will (incidentally) show that this free energy gives an internal energy that is identical to the ideal gas free energy, which bears out what I said earlier about repulsion affecting the entropy rather than the energy.

Adding attraction: mean field theory

When we want to add in attraction to the free energy, the approach we will use is called mean field theory. I prefer to talk about it as first-order thermodynamic perturbation theory. (Actually, mean field theory is often more accurately described as a poor approximation to first-order perturbation theory, as it is common in mean-field theory to ignore any correlations in the reference fluid.) You know perturbation theory from quantum mechanics, but the fundamental ideas can be applied to any theory, including statistical mechanics.

The fundamental idea of perturbation theory is to break your Hamiltonian into two terms: one that you are able to solve, and a second term that is small. In this case, in order to derive (or motivate?) the van der Waals equation, our reference would be the system with repulsion only, and the perturbation would be the attraction between our atoms. We want to solve this purely classically, since we don't know how to solve the energy eigenvalue equation with interactions between particles included.

Classically, we would begin by writing down energy, and then we would work out the partition function by summing over all possible microstates in the canonical ensemble. A logarithm would then tell us the free energy. The energy will be \begin{align} E &= \sum_i^{\text{all atoms}} \frac{p_i^2}{2m} + \frac12\sum_{ij}^{\text{atom pairs}}U(|\vec r_i-\vec r_j|) \end{align} where \(U(r)\) is an attractive pair potential, which is to say, a potential energy of interaction between each pair of atoms. The first term is the kinetic energy (and is the same for the ideal gas), while the second term is a potential energy (and is zero for the ideal gas). The partition function is then \begin{align} Z &= \frac1{N!} \int\!\!d^3r_1\int\!\!d^3r_2\cdots\int\!\!d^3r_N \notag\\&\quad \int\!\!d^3p_1\int\!\!d^3p_2\cdots\int\!\!d^3p_N e^{-\beta\left(\sum \frac{p_i^2}{2m} + \frac12\sum U(|\vec r_i-\vec r_j|)\right)} \\ &= \frac1{N!} \int\!\!d^3p_1\int\!\!d^3p_2\cdots\int\!\!d^3p_N e^{-\beta\left(\sum \frac{p_i^2}{2m}\right)} \notag\\&\quad \int\!\!d^3r_1\int\!\!d^3r_2\cdots\int\!\!d^3r_N e^{-\beta\left(\frac12\sum U(|\vec r_i-\vec r_j|)\right)} \end{align} At this point I will go ahead and split this partition function into two factors, an ideal gas partition function plus a correction factor that depends on the potential energy of interaction. \begin{align} Z &= \frac{V^N}{N!} \int\!\!d^3p_1\int\!\!d^3p_2\cdots\int\!\!d^3p_N e^{-\beta\left(\sum \frac{p_i^2}{2m}\right)} \notag\\&\quad \frac1{V^N}\int\!\!d^3r_1\int\!\!d^3r_2\cdots\int\!\!d^3r_N e^{-\beta\left(\frac12\sum U(|\vec r_i-\vec r_j|)\right)} \\ &= Z_{\text{ideal}} \frac1{V^N}\int\!\!d^3r_1\cdots\int\!\!d^3r_N e^{-\beta\left(\frac12\sum U(|\vec r_i-\vec r_j|)\right)} \\ &= Z_{\text{ideal}}Z_{\text{configurational}} \end{align} Now we can express the free energy! 
\begin{align} F &= -kT\ln Z \\ &=-kT\ln(Z_{\text{ideal}}Z_{\text{configurational}}) \\ &= F_{\text{ideal}} -kT\ln Z_{\text{configurational}} \end{align} So we just need to approximate this excess free energy (beyond the ideal gas free energy). Let's get to the approximation bit. \begin{align} Z_{\text{config}} &= \int\!\!\frac{d^3r_1}{V}\cdots\int\!\!\frac{d^3r_N}{V} e^{-\beta\left(\frac12\sum U(|\vec r_i-\vec r_j|)\right)} \\ &\approx \int\!\!\frac{d^3r_1}{V}\cdots\int\!\!\frac{d^3r_N}{V} \left(1-\sum_{ij}\frac{\beta}2 U(r_{ij})\right) \end{align} At this point I have used a power series approximation on the exponential, under the assumption that our attraction is sufficiently small. Now we can write this sum in a simpler manner, taking account of the symmetry between the different particles. \begin{align} Z_{\text{config}} &= 1 - \frac{\beta}2\sum_{ij}\int\!\!\frac{d^3r_1}{V}\cdots\int\!\!\frac{d^3r_N}{V} U(r_{ij}) \\ &= 1 - \frac{\beta N^2}2\int\!\!\frac{d^3r_1}{V}\int\!\!\frac{d^3r_2}{V} U(r_{12}) \\ &= 1 - \frac{\beta N^2}2\int\!\!\frac{d^3r}{V}U(r) \end{align} At this stage, things have gotten much simpler. Note also that I did something wrong: I assumed that the potential was always small, but the repulsive part of the potential is not small. We'll ignore that for now; including it properly would be doing this right, but instead we'll use the approach that leads to the van der Waals equation of state. To continue... \begin{align} F_{\text{excess}} &= -kT\ln Z_{\text{conf}} \\ &= -kT\ln\left(1 - \frac{\beta N^2}2\int\!\!\frac{d^3r}{V}U(r)\right) \\ &\approx kT\frac{\beta N^2}2\int\!\!\frac{d^3r}{V}U(r) \\ &= \frac{N^2}2\int\!\!\frac{d^3r}{V}U(r) \\ &\equiv -\frac{N^2}{V} a \end{align} where I've defined \(a \equiv -\frac12\int\!\!d^3r\, U(r)\). The minus sign here is to make \(a\) a positive quantity, given that \(U(r)<0\).
Putting this together with the ideal gas free energy modified to include a simple repulsion term, we have the van der Waals free energy: \begin{align} F_{\text{vdW}} &= -NkT\left(\ln\left(\frac{n_Q(V-Nb)}{N}\right) + 1\right) - \frac{N^2}{V}a \end{align}

van der Waals equation of state

Small groups
Solve for the van der Waals pressure, as a function of \(N\), \(V\), and \(T\) (and of course, also \(a\) and \(b\)).
Answer
\begin{align} p &= -\left(\frac{\partial F}{\partial V}\right)_{T,N} \\ &= \frac{NkT}{V-Nb} - \frac{N^2}{V^2}a \end{align}

This equation is the van der Waals equation of state, which is often rewritten to look like: \begin{align} \left(p + \frac{N^2}{V^2}a\right)(V-Nb) = NkT \end{align} As you can see, it is only slightly modified from the ideal gas law, provided \(\frac{N^2}{V^2}a\ll p\) and \(Nb\ll V\).
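As a quick sketch of this dilute limit, we can code up the van der Waals pressure and compare it with the ideal gas law. The values of \(a\) and \(b\) below are made up for illustration, not data for any real gas:

```python
k = 1.380649e-23  # Boltzmann constant, J/K

def p_ideal(N, V, T):
    # ideal gas law
    return N * k * T / V

def p_vdw(N, V, T, a, b):
    # van der Waals equation of state, solved for p
    return N * k * T / (V - N * b) - a * N**2 / V**2

# A dilute gas: Nb << V and a N^2/V^2 << p, so the two should nearly agree.
N, V, T = 1e20, 1e-3, 300.0
print(p_ideal(N, V, T))
print(p_vdw(N, V, T, a=1e-48, b=5e-29))
```

With these numbers the excluded volume \(Nb\) is a millionth of \(V\), and the attractive correction is tiny compared to the pressure, so the two results agree to better than a tenth of a percent.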

van der Waals and liquid-vapor phase transition

Let's start by looking at the pressure as a function of volume according to the van der Waals equation: \begin{align} p &= \frac{NkT}{V-Nb} - \frac{N^2}{V^2}a \\ &= \frac{kT}{\frac{V}{N}-b} - \frac{N^2}{V^2}a \end{align} Clearly the pressure will diverge as the volume is decreased towards \(Nb\), which puts a lower bound on the volume. This reflects the fact that each atom takes \(b\) volume, so you can't compress the fluid smaller than that. At larger volumes, the pressure will definitely be positive and decreasing, since the attractive term dies off faster than the first term. However, if \(a\) is sufficiently large (or \(T\) is sufficiently small), we may find that the second term dominates when the volume is not too large.

We can also rewrite this pressure to express it in terms of the number density \(n\equiv \frac{N}{V}\), which I find a little more intuitive than imagining the volume changing: \begin{align} p &= \frac{kT}{\frac1n-b} - n^2 a \\ &= kT\frac{n}{1-nb} - n^2 a \end{align} So this tells us that as we increase the density from zero, the pressure will begin by increasing linearly. It will end by approaching infinity as the density approaches \(\frac1b\). In between, the attractive term may or may not cause the pressure to do something interesting.

The van der Waals pressure for a few temperatures.

This equation is kind of nice, but it's still pretty confusing, because it has three different constants (other than \(n\)) in it. We can reduce that further by rewriting it in terms of the packing fraction \(\eta\equiv nb\), which is the fraction of the volume that is filled with atoms. \begin{align} p &= \frac{kT}{b}\frac{\eta}{1-\eta} - \frac{a}{b^2}\eta^2 \end{align} We now see that there are just two “constants” to deal with, \(\frac{kT}{b}\) and \(\frac{a}{b^2}\), each of which has dimensions of pressure. The former, of course, depends on temperature, and the ratio between them (i.e. \(\frac{kTb}{a}\)) will fully determine the shape of our pressure curve (in terms of density).

The van der Waals pressure for a few temperatures.
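The shape change with temperature can be checked numerically. The sketch below works in units where pressure is measured in \(a/b^2\), with the dimensionless temperature \(t\equiv \frac{kTb}{a}\), and tests whether an isotherm is monotonic in \(\eta\); a non-monotonic isotherm is the signature of the transition, and the dividing value turns out to be \(t=\frac{8}{27}\), the critical point:

```python
def p_scaled(eta, t):
    # van der Waals pressure in units of a/b^2, with t = kT b / a
    return t * eta / (1 - eta) - eta**2

def isotherm_is_monotonic(t, n=1000):
    # sample the isotherm on a grid of packing fractions and check ordering
    etas = [0.99 * i / n for i in range(1, n + 1)]
    ps = [p_scaled(e, t) for e in etas]
    return all(later >= earlier for earlier, later in zip(ps, ps[1:]))

print(isotherm_is_monotonic(0.5))   # above t = 8/27: no loop
print(isotherm_is_monotonic(0.2))   # below t = 8/27: pressure dips, a loop appears
```

The dip at low \(t\) is the region where three different densities share one pressure, which is exactly where we go hunting for coexistence below.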

Clearly something interesting is happening at low temperatures. This is a phase transition. But how do we find out what the density (or equivalently, volume) of the liquid and vapor are? You already know that the pressure, temperature and chemical potential must all be equal when two phases are in coexistence. From this plot we can identify triples of densities where the temperature and pressure are both identical. Which corresponds to the actual phase transition?

Question
How might you determine from this van der Waals equation of state or free energy where the phase transformation happens?
Answer
As before, we need to have identical pressure, temperature and chemical potential. So we need to check which of these equal pressure states have the same chemical potential.

Common tangent

Most approaches require us to work with the Helmholtz free energy rather than the pressure equation. If we plot the Helmholtz free energy versus volume (with number fixed) the pressure is the negative slope. We also need to ensure that the chemical potential (or Gibbs free energy) is identical at the two points.

The van der Waals free energy.
\begin{align} G &= F + pV \\ &= F - \left(\frac{\partial F}{\partial V}\right)_{N,T}V \end{align} So let us set the Gibbs free energies and pressures equal for two points: \begin{align} p_1 &= p_2 \\ -\left(\frac{\partial F}{\partial V}\right)_{N,T} &= \text{same for each} \\ G_1 &= G_2 \\ F_1 + pV_1 &= F_2 + p V_2 \\ F_1 - F_2 &= p\left(V_2 - V_1\right) \end{align} So for two points to have the same Gibbs free energy, the straight line connecting them on a plot of Helmholtz free energy versus volume (at fixed temperature) must have slope equal to the negative of the pressure. If those two points also have that same slope as their tangent, then they have both equal pressure and equal chemical potential, and are our two coexisting states. This is the common tangent construction.

The common tangent construction is very commonly used when looking at multiple crystal structures, when you don't even know which ones are stable in the first place.
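To make this concrete, here is a numerical sketch that finds the coexisting volumes for the van der Waals fluid, in made-up units where \(a=b=k=1\) (so \(kT_c=\frac{8}{27}\approx0.30\)) and \(n_Q\) is dropped, since at fixed temperature it shifts \(\mu\) by the same constant in both phases. Rather than drawing the tangent line directly, it enforces the equivalent condition derived above, \(F_1-F_2=p(V_2-V_1)\), as an equal-area condition on the pressure curve (since \(F_1-F_2=\int_{V_1}^{V_2}p\,dV\)). The brackets below use the spinodal volumes at this particular temperature (\(v=2\) and \(v=3+\sqrt5\)):

```python
import math

T = 0.25  # below the critical temperature kTc = 8/27 in these units

def pressure(v, T=T):
    # van der Waals pressure per particle, with a = b = k = 1
    return T / (v - 1) - 1 / v**2

def helmholtz(v, T=T):
    # free energy per particle, dropping the n_Q piece (common to both phases)
    return -T * (math.log(v - 1) + 1) - 1 / v

def mu(v, T=T):
    # chemical potential = Gibbs free energy per particle = f + p v
    return helmholtz(v, T) + pressure(v, T) * v

def bisect(fun, lo, hi, steps=200):
    sign_lo = fun(lo) > 0
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if (fun(mid) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def loop_area(pstar):
    # integral of p(v) - pstar between the outer crossings; zero at coexistence,
    # which is exactly the condition F1 - F2 = pstar (V2 - V1)
    vl = bisect(lambda v: pressure(v) - pstar, 1.0001, 2.0)
    vg = bisect(lambda v: pressure(v) - pstar, 5.3, 1e4)
    n = 2000
    dv = (vg - vl) / n
    total = 0.5 * (pressure(vl) + pressure(vg)) - pstar
    for i in range(1, n):
        total += pressure(vl + i * dv) - pstar
    return total * dv

# bisect on the coexistence pressure, bracketed between the spinodal pressures
plo, phi = 1e-4, 0.022
for _ in range(60):
    pmid = 0.5 * (plo + phi)
    if loop_area(pmid) > 0:
        plo = pmid
    else:
        phi = pmid
p_coex = 0.5 * (plo + phi)
v_liq = bisect(lambda v: pressure(v) - p_coex, 1.0001, 2.0)
v_gas = bisect(lambda v: pressure(v) - p_coex, 5.3, 1e4)
print(p_coex, v_liq, v_gas)
```

The payoff is the consistency check: at the volumes this finds, the two phases end up with equal pressure and (to numerical accuracy) equal chemical potential, which is exactly the common-tangent condition.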

Note
The common tangent construction also works when we plot \(F\) versus \(n\) or \(N\).

Gibbs free energy

Another approach to solve for coexistence points is to plot the Gibbs free energy versus pressure, each of which can be computed easily from the Helmholtz free energy. When we plot the Gibbs free energy versus pressure, we find that there is a crossing and a little loop. This loop corresponds to metastable and unstable states, and the crossing point is where the two phases (liquid and vapor, in our case) coexist.

The van der Waals Gibbs free energy.

As we increase the temperature, we will find that this little loop becomes smaller, as the liquid and vapor densities get closer and closer. The critical point is where it disappears.

Why does \(G\) look like this?
We had a good question about what the “points” represent in the Gibbs free energy curve. We can understand this a bit better by thinking a bit about the differential of \(G\): \begin{align} dG &= -SdT + Vdp \end{align} This tells us that the slope of the \(G\) versus \(p\) curve (at fixed temperature) is just the volume of the system. Since the volume can vary continuously (at least in the Helmholtz free energy we constructed), this slope must continuously change as we follow the path. That explains why we have pointy points, since the slope must be the same on both sides of the curve. The points thus represent the states where the pressure has an extremum, as we change the volume. In between those two extrema is the range where increasing volume causes the pressure to increase. These states are mechanically unstable, and thus cannot be observed.

Examples of phase transitions

I'd like to spend just a bit of time talking about the wide variety of different phase transitions that can and do happen, before we discuss how these transitions can be understood in a reasonably unified way through Landau theory.

Liquid-vapor

The liquid-vapor transition is what we just discussed. The only fundamental difference between liquid and vapor is the density of the fluid. (abrupt)

Melting/freezing
Melting and freezing are similar to the liquid-vapor transition, with the difference that there cannot be a critical point, since we cannot go from solid to liquid without a phase transition. (abrupt)

Sublimation

Sublimation is very much like melting. Its major difference arises from the difference between a gas and a liquid: there is no temperature low enough that there will not be some vapor in equilibrium with a solid at sufficiently low pressure. (abrupt)

Solid/solid

Solid-solid phase transitions are interesting in that different solid phases have different crystal symmetries which make it both possible and reasonable to compute (and possibly even observe) properties for different phases at the same density and pressure. (abrupt)

Ferromagnetism

A ferromagnetic material (such as iron or nickel) will spontaneously magnetize itself, although the magnetized regions do break into domains. When the material is heated above a given temperature (called the Curie temperature) it is no longer ferromagnetic, but instead behaves as an ordinary paramagnetic material. This is therefore a phase transition. (continuous)

Ferroelectrics

A ferroelectric material is a material that has a spontaneous electric dipole polarization at low temperatures. It behaves very much like an electrical analogue of a ferromagnetic material. (continuous)

Antiferromagnetism

An antiferromagnetic material (such as nickel oxide) will have different atoms with oppositely polarized spin. This is less easy to observe by elementary school children than ferromagnetism, but is also a distinct phase, with a phase transition in which the spins become disordered. (continuous)

Superconductivity

A superconductor at low temperatures has zero electrical resistivity. At higher temperature it is (for ordinary superconductors) an ordinary metal. Lead is a classic example of a superconductor, and has a transition temperature of \(7.19\text{K}\). You see high-\(T_c\) superconductors in demos more frequently, which have transition temperatures up to \(134\text{K}\), but are more complicated in terms of their cause and phase diagram. (continuous)

Superfluidity
A superfluid (and helium 4 is the classic example) has zero viscosity at low temperatures. For helium this transition temperature is \(2.17\text{K}\). (continuous)

Bose-Einstein condensation

The transition to having a macroscopic occupation in the ground state in a gas of bosons is another phase transition. (continuous)

Mixing of binary systems

In binary systems (e.g. salt and water, or an alloy of nickel and iron) there are many of the same phase transitions (e.g. liquid/gas/solid), but now we have an additional parameter which is the fraction of each component in the phase. Kittel and Kroemer have a whole chapter on this kind of phase transition.

Landau theory

There are so many kinds of phase transitions, you might wonder whether they are all different, or if we can understand them in the same (or a similar) way. Landau came up with an approach that allows us to view the whole wide variety of phase transitions in a unified manner.

The key idea is to identify an order parameter \(\xi\), which allows us to distinguish the two phases. This order parameter ideally should also be something that has interactions we can control through some sort of an external field. Examples of order parameters:

Liquid-vapor
volume or density
Ferromagnetism
magnetization
Ferroelectrics
electric polarization density
Superconductivity or superfluidity
quantum mechanical amplitude (including phase)
Binary mixtures
fraction of components

The key idea of Landau is to express a Helmholtz free energy as a function of the order parameter: \begin{align} F_L(\xi,T) &= U(\xi,T) - TS(\xi,T) \end{align} Now at a given temperature there is an equilibrium value for the order parameter \(\xi_0\), which is determined by minimizing the free energy, and this equilibrium order parameter defines the actual Helmholtz free energy. \begin{align} F(T) = F_L(\xi_0,T) \le F_L(\xi,T) \end{align} So far this hasn't given us much. Landau theory becomes powerful when we expand the free energy as a power series in the order parameter (and later as a power series in temperature).

A continuous phase transition

To make things concrete, let us assume an order parameter with inversion symmetry, such as magnetization or electrical polarization. This means that \(F_L\) must be an even function of \(\xi\), so we can write that \begin{align} F_L(\xi,T) &= g_0(T) + \frac12g_2(T)\xi^2 + \frac14 g_4(T)\xi^4 + \cdots \end{align} The entire temperature dependence is now hidden in the coefficients of the power series. A simple example where we could have a phase transition, would be if the sign of \(g_2\) changed at some temperature \(T_0\). In this case, we could do a power series expansion of our coefficients around \(T_0\), and we would have something like: \begin{align} F_L(\xi,T) &= g_0(T) + \frac12\alpha(T-T_0)\xi^2 + \frac14g_4(T_0)\xi^4 \end{align} where I am ignoring the temperature dependence of \(g_4\), under the assumption that it doesn't do anything too fancy near \(T_0\). I'm leaving \(g_0(T)\) alone, because it causes no trouble, and will be useful later. I'm also going to assume that \(\alpha\) and \(g_4(T_0)\) are positive. Now we can solve for the order parameter that minimizes the free energy by setting its derivative to zero. \begin{align} \left(\frac{\partial F_L}{\partial \xi}\right)_T &= 0\\ &= \alpha(T-T_0)\xi + g_4(T_0)\xi^3 \end{align} This has two solutions: \begin{align} \xi &= 0 & \xi^2 &= (T_0-T)\frac{\alpha}{g_4(T_0)} \end{align} If \(T>T_0\) there is only one (real) solution, which is that the order parameter is zero. Thus when \(T>T_0\), we can see that \begin{align} F(T) &= g_0(T) \end{align} exactly, since \(\xi=0\) causes all the other terms in the Landau free energy to vanish.

In contrast, when \(T<T_0\), there are two solutions that are minima (\(\pm \sqrt{(T_0-T)\alpha/g_4(T_0)}\)), and one maximum at \(\xi=0\). In this case the order parameter continuously (but not smoothly) goes to zero. This tells us that the free energy at low temperatures will be given by \begin{align} F(T) &= g_0(T) - \frac{\alpha^2}{4g_4(T_0)}(T-T_0)^2 \end{align}
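Here is a small numerical sketch of this minimization, with hypothetical coefficients \(\alpha=g_4=1\) and \(T_0=1\), and dropping \(g_0(T)\) since it just shifts the whole curve. Brute-force minimizing \(F_L\) over a grid of \(\xi\) shows the order parameter turning on continuously below \(T_0\):

```python
def landau_F(xi, T, alpha=1.0, g4=1.0, T0=1.0):
    # F_L minus the smooth background g0(T)
    return 0.5 * alpha * (T - T0) * xi**2 + 0.25 * g4 * xi**4

def equilibrium_xi(T, n=4000):
    # brute-force minimization over a grid of order parameter values
    grid = [-2.0 + 4.0 * i / n for i in range(n + 1)]
    return min(grid, key=lambda xi: landau_F(xi, T))

print(abs(equilibrium_xi(1.5)))    # above T0: order parameter is 0
print(abs(equilibrium_xi(0.75)))   # below T0: sqrt(alpha (T0 - T) / g4) = 0.5
```

Sweeping \(T\) through \(T_0\), the equilibrium \(|\xi|\) grows from zero like \(\sqrt{T_0-T}\): continuous, but with a kink at the transition.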

Small groups
Solve for the entropy of this system when the temperature is near \(T_0\).
Answer
We can find the entropy from the free energy by considering its total differential \begin{align} dF &= -SdT -pdV \end{align} which tells us that \begin{align} -S &= \left(\frac{\partial F}{\partial T}\right)_{V} \end{align} Let's start by finding the entropy for \(T<T_0\): \begin{align} S_< &= -\frac{dg_0}{dT} -\frac{\alpha^2}{2g_4(T_0)}(T_0-T) \end{align} When the temperature is high, this is easier: \begin{align} S_> &= -\frac{dg_0}{dT} \end{align} This tells us that the low-temperature phase has a lower entropy than it would have had without the phase transition. However, the entropy is continuous at \(T_0\), which means that there is no latent heat associated with this phase transition, which is called a continuous phase transition. An older name for this kind of phase transition (used in the text) is a second order phase transition. Currently “continuous” is preferred for describing phase transitions with no latent heat, because such transitions are not always actually second order (as this example happens to be).

Examples of continuous phase transitions include ferromagnets and superconductors.

An abrupt phase transition

To get an abrupt phase transition with a nonzero latent heat (as for melting or boiling), we need to consider a scenario where \(g_4<0\) and \(g_6>0\). This gives us two competing local minima at different values for the order parameter. (Note that an abrupt phase transition is also known as a first order phase transition.) \begin{align} F_L &= g_0(T) + \frac12\alpha(T-T_0)\xi^2 -\frac14|g_4(T)|\xi^4 + \frac16 g_6\xi^6+\cdots \end{align}

Small groups if we have time
Find the solutions for the order parameter, and in particular find a criterion for the phase transition to happen.
Answer
We want to find minima of our free energy... \begin{align} \frac{\partial F_L}{\partial \xi} &= 0 \\ &= \alpha(T-T_0)\xi - |g_4(T)|\xi^3 + g_6\xi^5 \end{align} One solution is \(\xi=0\). Otherwise, \begin{align} 0 &= \alpha(T-T_0) - |g_4(T)|\xi^2 + g_6\xi^4 \end{align} which is just a quadratic. It has solutions when \begin{align} \xi^2 &= \frac{|g_4(T)| \pm \sqrt{g_4(T)^2 -4g_6\alpha(T-T_0)}}{2g_6} \end{align} Note that this has four solutions. Two have \(\xi<0\), and show up because our free energy is even. One of the other solutions is a local maximum, and the final solution is a local minimum. For this to have a real solution, we would need for the thing in the square root to be positive, which means \begin{align} g_4(T)^2 \ge 4g_6\alpha(T-T_0) \end{align} It would be tempting to take this as an equality when we are at the phase transition. However, that is just the point at which there is a local minimum, but we are looking for a global minimum (other than \(\xi=0\)). This global minimum will require that \begin{align} F_L(\xi>0) < F_L(\xi=0) \end{align} which leads us to conclude that \begin{align} \frac12\alpha(T-T_0)\xi^2 -\frac14|g_4(T)|\xi^4 + \frac16 g_6\xi^6 &< 0 \end{align} We can plug in the criterion for an extremum in the free energy at nonzero \(\xi\) to find: \begin{align} \tfrac12\left(|g_4(T)|\xi^2-g_6\xi^4\right)\xi^2 -\tfrac14|g_4(T)|\xi^4 + \tfrac16 g_6\xi^6 &< 0 \\ \tfrac14|g_4(T)|\xi^4 - \tfrac13 g_6\xi^6 &< 0 \\ \frac14|g_4(T)| - \frac13 g_6\xi^2 &< 0 \end{align} At this point we would want to make use of the solution for \(\xi^2\) above that used the quadratic equation. We would then have eliminated \(\xi\) from the equation, and could solve for a relationship between \(|g_4(T)|\), \(g_6\), and \(\alpha(T-T_0)\).
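We can watch this global-versus-local competition numerically. With the hypothetical choice \(\alpha=g_4=g_6=1\) and \(T_0=1\), the algebra above works out to a transition at \(\alpha(T-T_0)=\frac{3}{16}\frac{g_4^2}{g_6}\), i.e. at \(T=1.1875\), where the order parameter jumps abruptly from \(0\) to \(\sqrt{3}/2\approx0.87\):

```python
def landau_F_abrupt(xi, T, alpha=1.0, g4=1.0, g6=1.0, T0=1.0):
    # Landau free energy with a negative quartic term (background g0 dropped)
    return (0.5 * alpha * (T - T0) * xi**2
            - 0.25 * g4 * xi**4
            + g6 * xi**6 / 6.0)

def equilibrium_xi(T, n=6000):
    # brute-force search for the *global* minimum over a grid of xi
    grid = [-1.5 + 3.0 * i / n for i in range(n + 1)]
    return min(grid, key=lambda xi: landau_F_abrupt(xi, T))

# Just above T = 1.1875 the global minimum is still xi = 0, even though a
# nonzero local minimum already exists; just below, xi jumps to roughly 0.9.
print(abs(equilibrium_xi(1.20)))
print(abs(equilibrium_xi(1.15)))
```

This is exactly the point made in the answer above: the local minimum appears before the transition, but the transition itself happens where the nonzero minimum first dips below \(F_L(\xi=0)\), so the order parameter (and hence the entropy) changes discontinuously, giving a latent heat.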

  • assignment Vapor pressure equation

    assignment Homework

    Vapor pressure equation
    phase transformation Clausius-Clapeyron Thermal and Statistical Physics 2020 Consider a phase transformation between either solid or liquid and gas. Assume that the volume of the gas is much larger than that of the liquid or solid, so that \(\Delta V \approx V_g\). Furthermore, assume that the ideal gas law applies to the gas phase. Note: this problem is solved in the textbook, in the section on the Clausius-Clapeyron equation.
    1. Solve for \(\frac{dp}{dT}\) in terms of the pressure of the vapor and the latent heat \(L\) and the temperature.

    2. Assume further that the latent heat is roughly independent of temperature. Integrate to find the vapor pressure itself as a function of temperature (and of course, the latent heat).

  • face Energy and heat and entropy

    face Lecture

    30 min.

    Energy and heat and entropy
    Energy and Entropy 2021 (2 years)

    latent heat heat capacity internal energy entropy

    This short lecture introduces the ideas required for Ice Calorimetry Lab or Microwave oven Ice Calorimetry Lab.
  • face Fermi and Bose gases

    face Lecture

    120 min.

    Fermi and Bose gases
    Thermal and Statistical Physics 2020

    Fermi level fermion boson Bose gas Bose-Einstein condensate ideal gas statistical mechanics phase transition

    These lecture notes from week 7 of Thermal and Statistical Physics apply the grand canonical ensemble to fermion and bosons ideal gasses. They include a few small group activities.
  • assignment Einstein condensation temperature

    assignment Homework

    Einstein condensation temperature
    Einstein condensation Density Thermal and Statistical Physics 2020

    Einstein condensation temperature Starting from the density of free particle orbitals per unit energy range \begin{align} \mathcal{D}(\varepsilon) = \frac{V}{4\pi^2}\left(\frac{2M}{\hbar^2}\right)^{\frac32}\varepsilon^{\frac12} \end{align} show that the lowest temperature at which the total number of atoms in excited states is equal to the total number of atoms is \begin{align} T_E &= \frac1{k_B} \frac{\hbar^2}{2M} \left( \frac{N}{V} \frac{4\pi^2}{\int_0^\infty\frac{\sqrt{\xi}}{e^\xi-1}d\xi} \right)^{\frac23} \end{align} The infinite sum may be numerically evaluated to be 2.612. Note that the number of atoms in excited states is derived by integrating over the density of states, which includes all the states except the ground state.

    Note: This problem is solved in the text itself. I intend to discuss Bose-Einstein condensation in class, but will not derive this result.

  • assignment Radiation in an empty box

    assignment Homework

    Radiation in an empty box
    Thermal physics Radiation Free energy Thermal and Statistical Physics 2020

    As discussed in class, we can consider a black body as a large box with a small hole in it. If we treat the large box as a metal cube with side length \(L\) and metal walls, the frequency of each normal mode will be given by: \begin{align} \omega_{n_xn_yn_z} &= \frac{\pi c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2} \end{align} where each of \(n_x\), \(n_y\), and \(n_z\) will have positive integer values. This simply comes from the fact that a half wavelength must fit in the box. There is an additional quantum number for polarization, which has two possible values, but does not affect the frequency. Note that in this problem I'm using different boundary conditions from what I use in class. It is worth learning to work with either set of quantum numbers. Each normal mode is a harmonic oscillator, with energy eigenstates \(E_n = n\hbar\omega\), where we will not include the zero-point energy \(\frac12\hbar\omega\), since that energy cannot be extracted from the box. (See the Casimir effect for an example where the zero point energy of photon modes does have an effect.)

    Note
    This is a slight approximation, as the boundary conditions for light are a bit more complicated. However, for large \(n\) values this gives the correct result.

    1. Show that the free energy is given by \begin{align} F &= 8\pi \frac{V(kT)^4}{h^3c^3} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \\ &= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3} \\ &= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3} \end{align} provided the box is big enough that \(\frac{\hbar c}{LkT}\ll 1\). Note that you may end up with a slightly different dimensionless integral that numerically evaluates to the same result, which would be fine. I also do not expect you to solve this definite integral analytically, a numerical confirmation is fine. However, you must manipulate your integral until it is dimensionless and has all the dimensionful quantities removed from it!

    2. Show that the entropy of this box full of photons at temperature \(T\) is \begin{align} S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3 \\ &= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3 \end{align}

    3. Show that the internal energy of this box full of photons at temperature \(T\) is \begin{align} \frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3} \\ &= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3} \end{align}

  • assignment Potential energy of gas in gravitational field

    assignment Homework

    Potential energy of gas in gravitational field
    Potential energy Heat capacity Thermal and Statistical Physics 2020 Consider a column of atoms each of mass \(M\) at temperature \(T\) in a uniform gravitational field \(g\). Find the thermal average potential energy per atom. The thermal average kinetic energy is independent of height. Find the total heat capacity per atom. The total heat capacity is the sum of contributions from the kinetic energy and from the potential energy. Take the zero of the gravitational energy at the bottom \(h=0\) of the column. Integrate from \(h=0\) to \(h=\infty\). You may assume the gas is ideal.
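Two of the homework problems above quote values for dimensionless integrals: the Bose-Einstein condensation integral, whose sum evaluates to \(\zeta(3/2)\approx 2.612\), and the photon free-energy integral \(\int_0^\infty \xi^2\ln(1-e^{-\xi})\,d\xi = -\pi^4/45\). As a quick numerical sanity check (a sketch assuming scipy is available, not part of any assignment):

```python
import numpy as np
from scipy.integrate import quad

# Condensation integral: int_0^inf sqrt(x)/(e^x - 1) dx = Gamma(3/2) zeta(3/2),
# so dividing out Gamma(3/2) = sqrt(pi)/2 should recover zeta(3/2) ~ 2.612.
bose, _ = quad(lambda x: np.sqrt(x) / np.expm1(x), 0, np.inf)
zeta_32 = bose / (np.sqrt(np.pi) / 2)
print(f"zeta(3/2) ~ {zeta_32:.4f}")

# Photon free-energy integral: int_0^inf x^2 ln(1 - e^{-x}) dx = -pi^4/45.
photon, _ = quad(lambda x: x**2 * np.log(-np.expm1(-x)), 0, np.inf)
print(f"integral = {photon:.5f}, -pi^4/45 = {-np.pi**4 / 45:.5f}")
```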
