
Activities

Small Group Activity

30 min.

\(|\pm\rangle\) Forms an Orthonormal Basis
Students explore the properties of an orthonormal basis using the Cartesian and \(S_z\) bases as examples.

Small Group Activity

30 min.

Completeness Relations
Students use a completeness relation to write hydrogen atom states in the energy and position bases.
Students practice using inner products to find the components of the Cartesian basis vectors in the polar basis and vice versa. Then, students use a completeness relation to change between Cartesian/polar bases and between different spin bases.

Kinesthetic

10 min.

Curvilinear Basis Vectors
Students use their arms to depict (sequentially) the different cylindrical and spherical basis vectors at the location of their shoulder (seen in relation to a specified origin of coordinates: either a set of axes hung from the ceiling of the room or perhaps a piece of furniture or a particular corner of the room).
First complete the problem Diagonalization. In that notation:
  1. Find the matrix \(S\) whose columns are \(|\alpha\rangle\) and \(|\beta\rangle\). Show that \(S^{\dagger}=S^{-1}\) by calculating \(S^{\dagger}\) and multiplying it by \(S\). (Does the order of multiplication matter?)
  2. Calculate \(B=S^{-1} C S\). How is the matrix \(E\) related to \(B\) and \(C\)? The transformation that you have just done is an example of a “change of basis”, sometimes called a “similarity transformation.” When the result of a change of basis is a diagonal matrix, the process is called diagonalization.
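The change of basis above can be sketched numerically. The matrix below is a hypothetical Hermitian \(C\) (the actual matrix from the Diagonalization problem is not reproduced here); the structure of the calculation is the same:

```python
import numpy as np

# Hypothetical Hermitian matrix standing in for the C of the
# Diagonalization problem (not reproduced here).
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# The columns of S are the normalized eigenvectors |alpha> and |beta>.
eigenvalues, S = np.linalg.eigh(C)

# S is unitary: S^dagger = S^{-1}, and the identity comes out in
# either order of multiplication.
S_dag = S.conj().T
assert np.allclose(S_dag @ S, np.eye(2))
assert np.allclose(S @ S_dag, np.eye(2))

# The change of basis B = S^{-1} C S is diagonal, with the
# eigenvalues of C on the diagonal.
B = S_dag @ C @ S
assert np.allclose(B, np.diag(eigenvalues))
```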

Lecture about finding \(\left|{\pm}\right\rangle _x\) and then \(\left|{\pm}\right\rangle _y\). There are two conventional choices to make: relative phase for \(_x\left\langle {+}\middle|{-}\right\rangle _x\) and \(_y\left\langle {+}\middle|{+}\right\rangle _x\).

So far, we've talked about how to calculate measurement probabilities if you know the input and output quantum states using the probability postulate:

\[\mathcal{P} = | \left\langle {\psi_{out}}\middle|{\psi_{in}}\right\rangle |^2 \]

Now we're going to do this process in reverse.

I want to be able to relate the output states of Stern-Gerlach analyzers oriented in different directions to each other (like \(\left|{\pm}\right\rangle _x\) and \(\left|{\pm}\right\rangle _y\) to \(\left|{\pm}\right\rangle \)). Since \(\left|{\pm}\right\rangle \) forms a basis, I can write any state for a spin-1/2 system as a linear combination of those states, including these special states.

I'll start with \(\left|{+}\right\rangle _x\) written in the \(S_z\) basis with general coefficients:

\[\left|{+}\right\rangle _x = a \left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \]

Notice that:

(1) \(a\), \(b\), and \(\phi\) are all real numbers; (2) the relative phase is loaded onto the second coefficient only.

My job is to use measurement probabilities to determine \(a\), \(b\), and \(\phi\).

I'll prepare a state \(\left|{+}\right\rangle _x\) and then send it through \(x\), \(y\), and \(z\) analyzers. When I do that, I see the following probabilities:

Input = \(\left|{+}\right\rangle _x\) \(S_x\) \(S_y\) \(S_z\)
\(P(\hbar/2)\) 1 1/2 1/2
\(P(-\hbar/2)\) 0 1/2 1/2

First, looking at the probability for the \(S_z\) components:

\[\mathcal{P}(S_z = +\hbar/2) = | \left\langle {+}\middle|{+}\right\rangle _x |^2 = 1/2\]

Plugging in the \(\left|{+}\right\rangle _x\) written in the \(S_z\) basis:

\[1/2 = \Big| \left\langle {+}\right|\Big( a\left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \Big) \Big|^2\]

Distributing the \(\left\langle {+}\right|\) through the parentheses and using orthonormality: \begin{align*} 1/2 &= \Big| a\cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + be^{i\phi} \cancelto{0}{\left\langle {+}\middle|{-}\right\rangle } \Big|^2 \\ &= |a|^2\\[12pt] \rightarrow a &= \frac{1}{\sqrt{2}} \end{align*}

Similarly, looking at \(S_z = -\hbar/2\): \begin{align*} \mathcal{P}(S_z = -\hbar/2) &= | \left\langle {-}\middle|{+}\right\rangle _x |^2 = 1/2 \\ 1/2 &= \Big| \left\langle {-}\right|\Big( a\left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \Big) \Big|^2\\ &= \Big| a\cancelto{0}{\left\langle {-}\middle|{+}\right\rangle } + be^{i\phi} \cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big|^2 \\ &= |be^{i\phi}|^2\\ &= |b|^2 \cancelto{1}{(e^{i\phi})(e^{-i\phi})}\\[12pt] \rightarrow b &= \frac{1}{\sqrt{2}} \end{align*}

I can't yet solve for \(\phi\) but I can do similar calculations for \(\left|{-}\right\rangle _x\):

Input = \(\left|{-}\right\rangle _x\) \(S_x\) \(S_y\) \(S_z\)
\(P(\hbar/2)\) 0 1/2 1/2
\(P(-\hbar/2)\) 1 1/2 1/2
\begin{align*} \left|{-}\right\rangle _x &= c \left|{+}\right\rangle + de^{i\gamma} \left|{-}\right\rangle \\ \mathcal{P}(S_z = +\hbar/2) &= | \left\langle {+}\middle|{-}\right\rangle _x |^2 = 1/2\\ \rightarrow c &= \frac{1}{\sqrt{2}}\\ \mathcal{P}(S_z = -\hbar/2) &= | \left\langle {-}\middle|{-}\right\rangle _x |^2 = 1/2\\ \rightarrow d &= \frac{1}{\sqrt{2}}\\ \end{align*}

So now I have (writing the relative phase in \(\left|{+}\right\rangle _x\) as \(\beta\) rather than \(\phi\)): \begin{align*} \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\beta} \left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\gamma} \left|{-}\right\rangle \\ \end{align*}

I know \(\beta \neq \gamma\) because these are not the same state; they are orthogonal to each other (note that the phase is complex conjugated in the bra): \begin{align*} 0 &= \,_x\left\langle {+}\middle|{-}\right\rangle _x \\ &= \Big(\frac{1}{\sqrt{2}} \left\langle {+}\right| + \frac{1}{\sqrt{2}}e^{-i\beta} \left\langle {-}\right| \Big)\Big( \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\gamma} \left|{-}\right\rangle \Big)\\ \end{align*}

Now FOIL like mad and use orthonormality: \begin{align*} 0 &= \frac{1}{2}\Big(\cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + e^{i\gamma} \cancelto{0}{\left\langle {+}\middle|{-}\right\rangle } + e^{-i\beta} \cancelto{0}{\left\langle {-}\middle|{+}\right\rangle } + e^{i(\gamma - \beta)}\cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big)\\ &= \frac{1}{2}\Big(1 + e^{i(\gamma - \beta)} \Big) \\ \rightarrow & \quad e^{i(\gamma-\beta)} = -1 \end{align*}

This means that \(\gamma-\beta = \pi\). I don't have enough information to solve for \(\beta\) and \(\gamma\), but there is a one-time conventional choice made that \(\beta = 0\) and \(\gamma = \pi\), so that: \begin{align*} \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{1}{e^{i0}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{-1}{e^{i\pi}} \left|{-}\right\rangle \\[12pt] \rightarrow \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{+} \frac{1}{\sqrt{2}}\left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{-} \frac{1}{\sqrt{2}}\left|{-}\right\rangle \\[12pt] \end{align*}
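These results can be checked numerically. A minimal sketch with numpy, taking \(\left|{+}\right\rangle \) and \(\left|{-}\right\rangle \) as the standard basis column vectors:

```python
import numpy as np

# S_z basis: |+> = (1,0), |-> = (0,1)
plus  = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)

# The states derived above, with the conventional phase choice beta = 0.
plus_x  = (plus + minus) / np.sqrt(2)
minus_x = (plus - minus) / np.sqrt(2)

# P = |<out|in>|^2; np.vdot conjugates its first argument (the bra).
prob = lambda out, inp: abs(np.vdot(out, inp))**2

# |+>_x and |->_x are normalized and orthogonal
assert np.isclose(prob(plus_x, plus_x), 1.0)
assert np.isclose(prob(plus_x, minus_x), 0.0)

# S_z probabilities for input |+>_x match the table: 1/2 and 1/2
assert np.isclose(prob(plus, plus_x), 0.5)
assert np.isclose(prob(minus, plus_x), 0.5)
```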

When \(\left|{\pm}\right\rangle _y\) is the input state:

Input = \(\left|{+}\right\rangle _y\) \(S_x\) \(S_y\) \(S_z\)
\(P(\hbar/2)\) 1/2 1 1/2
\(P(-\hbar/2)\) 1/2 0 1/2
Input = \(\left|{-}\right\rangle _y\) \(S_x\) \(S_y\) \(S_z\)
\(P(\hbar/2)\) 1/2 0 1/2
\(P(-\hbar/2)\) 1/2 1 1/2

The calculations proceed in the same way. The \(S_z\) probabilities give me: \begin{align*} \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\alpha} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\theta} \left|{-}\right\rangle \\ \end{align*}

The orthogonality between \(\left|{+}\right\rangle _y\) and \(\left|{-}\right\rangle _y\) means that \(\theta - \alpha = \pi\).

But I also know the \(S_x\) probabilities and how to write \(\left|{\pm}\right\rangle _x\) in the \(S_z\) basis. For an input of \(\left|{+}\right\rangle _y\): \begin{align*} \mathcal{P}(S_x = +\hbar/2) &= | \,_x\left\langle {+}\middle|{+}\right\rangle _y |^2 = 1/2 \\ 1/2 &= \Big| \Big(\frac{1}{\sqrt{2}} \left\langle {+}\right| + \frac{1}{\sqrt{2}}\left\langle {-}\right|\Big) \Big( \frac{1}{\sqrt{2}}\left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\alpha} \left|{-}\right\rangle \Big) \Big|^2\\ 1/2 &= \Big| \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}}e^{i\alpha} \cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big|^2 \\ &= \frac{1}{4}|1+e^{i\alpha}|^2\\ &= \frac{1}{4} \Big( 1+e^{i\alpha}\Big) \Big( 1+e^{-i\alpha}\Big)\\ &= \frac{1}{4} \Big( 2+e^{i\alpha} + e^{-i\alpha}\Big)\\ &= \frac{1}{4} \Big( 2+2\cos\alpha\Big)\\ \frac{1}{2} &= \frac{1}{2} + \frac{1}{2}\cos\alpha \\ 0 &= \cos\alpha\\ \rightarrow \quad \alpha &= \pm \frac{\pi}{2} \end{align*}

Here, again, I can't solve exactly for \(\alpha\) (or \(\theta\)), but the convention is to choose \(\alpha = \frac{\pi}{2}\) and \(\theta = \frac{3\pi}{2}\), making \begin{align*} \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{i}{e^{i\pi/2}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{-i}{e^{i3\pi/2}} \left|{-}\right\rangle \\ \rightarrow \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{+} \frac{\color{red}{i}}{\sqrt{2}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{-} \frac{\color{red}{i}}{\sqrt{2}} \left|{-}\right\rangle \\ \end{align*}
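The conventional \(y\) states can be checked the same way. The sketch below verifies the orthogonality and the measured 1/2 probabilities against both the \(S_z\) and \(S_x\) bases:

```python
import numpy as np

# S_z basis column vectors and the conventional x and y states
plus  = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)
plus_x  = (plus + minus) / np.sqrt(2)
plus_y  = (plus + 1j*minus) / np.sqrt(2)
minus_y = (plus - 1j*minus) / np.sqrt(2)

# P = |<out|in>|^2; np.vdot conjugates its first argument (the bra).
prob = lambda out, inp: abs(np.vdot(out, inp))**2

# |+>_y and |->_y are orthogonal
assert np.isclose(prob(plus_y, minus_y), 0.0)

# For input |+>_y, both S_x and S_z outcomes are 50/50, as in the table
assert np.isclose(prob(plus_x, plus_y), 0.5)   # P(S_x = +hbar/2)
assert np.isclose(prob(plus, plus_y), 0.5)     # P(S_z = +hbar/2)
```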

If I use these two conventions for the relative phases, then I can write down \(\left|{\pm}\right\rangle _n\) in an arbitrary direction described by the spherical coordinates \(\theta\) and \(\phi\) as:

Discuss the generalized eigenstates: \begin{align*} \left|{+}\right\rangle _n &= \cos \frac{\theta}{2} \left|{+}\right\rangle + \sin \frac{\theta}{2} e^{i\phi} \left|{-}\right\rangle \\ \left|{-}\right\rangle _n &= \sin \frac{\theta}{2} \left|{+}\right\rangle - \cos \frac{\theta}{2} e^{i\phi} \left|{-}\right\rangle \end{align*}

And show how \(\left|{\pm}\right\rangle _x\) and \(\left|{\pm}\right\rangle _y\) are consistent with these general expressions.
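One way to see the consistency: plugging \(\theta = \pi/2\) with \(\phi = 0\) or \(\phi = \pi/2\) into \(\left|{+}\right\rangle _n\) should reproduce \(\left|{+}\right\rangle _x\) and \(\left|{+}\right\rangle _y\). A small numerical sketch:

```python
import numpy as np

def spin_up_n(theta, phi):
    """|+>_n in the S_z basis for the direction (theta, phi)."""
    return np.array([np.cos(theta/2),
                     np.sin(theta/2)*np.exp(1j*phi)])

def spin_down_n(theta, phi):
    """|->_n in the S_z basis."""
    return np.array([np.sin(theta/2),
                     -np.cos(theta/2)*np.exp(1j*phi)])

# theta = pi/2, phi = 0 reproduces |+>_x ...
assert np.allclose(spin_up_n(np.pi/2, 0), np.array([1, 1])/np.sqrt(2))
# ... and theta = pi/2, phi = pi/2 reproduces |+>_y
assert np.allclose(spin_up_n(np.pi/2, np.pi/2), np.array([1, 1j])/np.sqrt(2))

# |+>_n and |->_n are orthogonal for any direction
theta, phi = 1.234, 0.567
overlap = np.vdot(spin_up_n(theta, phi), spin_down_n(theta, phi))
assert np.isclose(abs(overlap), 0.0)
```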

Students use completeness relations to write a matrix element of a spin component in a different basis.

Small Group Activity

30 min.

Representations for Finding Components
In this small group activity, students draw components of a vector in Cartesian and polar bases. Students then write the components of the vector in these bases as both dot products with unit vectors and as bra/kets with basis bras.

Computational Activity

120 min.

Sinusoidal basis set
Students compute inner products to expand a wave function in a sinusoidal basis set. This activity introduces the inner product for wave functions, and the idea of approximating a wave function using a finite set of basis functions.

Problem

5 min.

Using Gibbs Free Energy
None

Lecture

30 min.

Energy, heat and entropy
Transitioning from the PDM back to thermodynamic systems

Heating

In the partial derivative machine, the change in internal energy corresponds to the work done on the left string and the right string: \begin{align} dU &= F_L dx_L + F_R dx_R \end{align} The “thing we changed” was \(dx_L\) or \(dx_R\). From that we could determine the change in internal energy.

When we transfer energy to something by heating, it's hard to measure the “thing we changed,” which was entropy. It is, however, possible in some cases to measure the amount of energy transferred by heating, and from that we can work backwards to find out how much the entropy changed.

The amount of energy transferred into a system by heating is generally written as \(Q\). (There is a historical misconception built deeply into our language that heat is a property of a material, i.e. a state property. This is caloric theory. You don't need to know any of this history, but you do have to be careful when using the word “heat”.)

An infinitesimal amount of energy transferred by heating is called \({\mathit{\unicode{273}}} Q\). The symbol \({\mathit{\unicode{273}}} \) indicates an inexact differential, which you can think of as a “small chunk” that is not the change of something. \({\mathit{\unicode{273}}} Q\) is not a small change in the amount of energy transferred by heating, but rather is a small amount of energy transferred by heating.

When playing with the partial derivative machine, we can say the work done on the left string, \(F_Ldx_L\), is analogous to heat entering a thermodynamic system.

Latent heat

A phase transition is when a material changes state of matter, as in melting or boiling. At most phase transitions (technically, abrupt phase transitions as you will learn in the Capstone), the temperature remains constant while the material is changing from one state to the other. So you know that as long as you have ice and water coexisting in equilibrium at one atmosphere of pressure, the temperature must be \(0^\circ\)C. Similarly, as long as water is boiling at one atmosphere of pressure, the temperature must be \(100^\circ\)C. In both of these cases, you can transfer energy to the system (as we will) by heating without changing the temperature! This relates to why I keep awkwardly saying “transfer energy to a system by heating” rather than just “heating a system,” which means the same thing. We have deeply ingrained the idea that “heating” is synonymous with “raising the temperature,” which does not align with the physics meaning.

So now let me define the latent heat. The latent heat is the amount of energy that must be transferred to a material by heating in order to change it from one phase to another. The latent heat of fusion is the amount of energy required to melt a solid, and the latent heat of vaporization is the amount of energy required to turn a liquid into a gas. We will be measuring both of these for water.

A question you may ask is whether the latent heat is extensive or intensive. Technically the latent heat is extensive, since if you have more material then more energy is required to melt/boil it. However, when you hear latent heat quoted, it is almost always the specific latent heat, which is the energy transfer by heating required per unit of mass. It can be confusing that people use the same words to refer to both quantities. Fortunately, dimensional checking can always give you a way to verify which is being referred to. If \(L\) is an energy per mass, then it must be the specific latent heat, while if it is an energy, then it must be the latent heat.
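As a tiny illustration of the distinction (the number below is an approximate value for water ice, used only as an example):

```python
# The *specific* latent heat is an energy per mass (intensive).
L_fusion = 334e3        # J/kg, approximate specific latent heat of fusion of ice

# The (extensive) latent heat for a particular sample scales with its mass.
mass = 0.5              # kg of ice
Q = mass * L_fusion     # energy required to melt this sample, in joules
assert Q == 167e3       # 1.67e5 J for half a kilogram
```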

Heat capacity and specific heat

The heat capacity is the amount of energy transfer by heating required to raise the temperature of a system, per unit of temperature change. If we hold the pressure fixed (as in our experiment) we can write this as: \begin{align} {\mathit{\unicode{273}}} Q &= C_p dT \end{align} where \(C_p\) is the heat capacity at fixed pressure. You might think to rewrite this expression as a derivative, but we can't do that since the energy transferred by heating is not a state function.

Note that the heat capacity, like the latent heat, is an extensive quantity. The specific heat is the heat capacity per unit mass, which is an intensive quantity that we can consider a property of a material independently of the quantity of that material.

I'll just mention as an aside that the term “heat capacity” is another one of those unfortunate phrases that reflect the inaccurate idea that heat is a property of a system.

Entropy

Finally, we can get to entropy. Entropy is the “thing that changes” when you transfer energy by heating. I'll just give this away: \begin{align} {\mathit{\unicode{273}}} Q &= TdS \end{align} where this equation is only true if you make the change quasistatically (see another lecture). This allows us to find the change in entropy if we know how much energy was transferred by heating, and the temperature during the process. \begin{align} \Delta S &= \int \frac1T {\mathit{\unicode{273}}} Q \end{align} where again, we need to know the temperature as we add heat.

  • The same model used in MacKay's book
  • Introduce key ideas from thermodynamics
  • A valuable model for figuring out how we're going to save the Earth
Let's start by visualizing the energy flow associated with driving a gasoline-powered car. We will use a box and arrow diagram, where boxes represent where energy can accumulate, and arrows show energy flow.

The energy clearly starts in the form of gasoline in the tank. Where does it go?

Actually ask this of students.
Visualize the energy as an indestructible, incompressible liquid. “Energy is conserved”

The heat can look like

  • Hot exhaust gas
  • The radiator (its job is to dissipate heat)
  • Friction heating in the drive train

The work contributes to

  • Rubber tires heated by deformation
  • Wind, which ultimately ends up as heating the atmosphere

The most important factors for a coarse-grained model of highway driving:

  1. The 75:25 split between “heat” and “work”
  2. The trail of wind behind a car
What might we have missed? Where else might energy have gone? We ignored the kinetic energy of the car, and the energy dissipated as heat in the brakes. On the interstate this is appropriate, but for city driving the dominant “work” may be in accelerating the car to 30 mph, and with that energy then converted into heat by the brakes.

Consider a system of fixed volume in thermal contact with a reservoir. Show that the mean square fluctuation in the energy of the system is \begin{equation} \left<\left(\varepsilon-\langle\varepsilon\rangle\right)^2\right> = k_BT^2\left(\frac{\partial U}{\partial T}\right)_{V} \end{equation} Here \(U\) is the conventional symbol for \(\langle\varepsilon\rangle\). Hint: Use the partition function \(Z\) to relate \(\left(\frac{\partial U}{\partial T}\right)_V\) to the mean square fluctuation. Also, multiply out the term \((\cdots)^2\).
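This identity can be checked numerically on a simple example. The sketch below uses a hypothetical two-state system with energies \(0\) and \(\varepsilon\), in units where \(k_B = 1\):

```python
import numpy as np

# Two-state system (energies 0 and eps) in thermal contact with a
# reservoir; units with k_B = 1.
eps, T = 1.0, 0.7

def U(T):
    """Thermal average energy from Z = 1 + e^{-eps/T}."""
    p = np.exp(-eps/T) / (1 + np.exp(-eps/T))   # occupation of upper state
    return eps * p

# Mean square fluctuation <E^2> - <E>^2
p = np.exp(-eps/T) / (1 + np.exp(-eps/T))
variance = eps**2 * p - (eps*p)**2

# k_B T^2 (dU/dT) via a centered finite difference
dT = 1e-6
dUdT = (U(T + dT) - U(T - dT)) / (2*dT)

assert np.isclose(variance, T**2 * dUdT, rtol=1e-5)
```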

A one-dimensional harmonic oscillator has an infinite series of equally spaced energy states, with \(\varepsilon_n = n\hbar\omega\), where \(n\) is an integer \(\ge 0\), and \(\omega\) is the classical frequency of the oscillator. We have chosen the zero of energy at the state \(n=0\) which we can get away with here, but is not actually the zero of energy! To find the true energy we would have to add a \(\frac12\hbar\omega\) for each oscillator.

  1. Show that for a harmonic oscillator the free energy is \begin{equation} F = k_BT\log\left(1 - e^{-\frac{\hbar\omega}{k_BT}}\right) \end{equation} Note that at high temperatures such that \(k_BT\gg\hbar\omega\) we may expand the argument of the logarithm to obtain \(F\approx k_BT\log\left(\frac{\hbar\omega}{kT}\right)\).

  2. From the free energy above, show that the entropy is \begin{equation} \frac{S}{k_B} = \frac{\frac{\hbar\omega}{kT}}{e^{\frac{\hbar\omega}{kT}}-1} - \log\left(1-e^{-\frac{\hbar\omega}{kT}}\right) \end{equation}

    This entropy is shown in the nearby figure (“Entropy of a simple harmonic oscillator”), along with the heat capacity (“Heat capacity of a simple harmonic oscillator”).
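The entropy in part 2 can be sanity-checked numerically against \(S = -\partial F/\partial T\). A minimal sketch in units where \(\hbar\omega = k_B = 1\):

```python
import numpy as np

# Units: hbar*omega = k_B = 1, so x = 1/T plays the role of hbar*omega/(k_B T).
def F(T):
    """Free energy of one harmonic oscillator."""
    return T * np.log(1 - np.exp(-1/T))

def S_formula(T):
    """The entropy expression from part 2."""
    x = 1/T
    return x/np.expm1(x) - np.log(1 - np.exp(-x))

# S = -dF/dT, via a centered finite difference
T, dT = 0.8, 1e-6
S_numeric = -(F(T + dT) - F(T - dT)) / (2*dT)

assert np.isclose(S_numeric, S_formula(T), rtol=1e-5)
```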

  1. Find an expression for the free energy as a function of \(T\) of a system with two states, one at energy 0 and one at energy \(\varepsilon\).

  2. From the free energy, find expressions for the internal energy \(U\) and entropy \(S\) of the system.

  3. Plot the entropy versus \(T\). Explain its asymptotic behavior as the temperature becomes high.

  4. Plot the \(S(T)\) versus \(U(T)\). Explain the maximum value of the energy \(U\).

The goal of this problem is to show that once we have maximized the entropy and found the microstate probabilities in terms of a Lagrange multiplier \(\beta\), we can prove that \(\beta=\frac1{kT}\) based on the statistical definitions of energy and entropy and the thermodynamic definition of temperature embodied in the thermodynamic identity.

The internal energy and entropy are each defined as a weighted average over microstates: \begin{align} U &= \sum_i E_i P_i & S &= -k_B\sum_i P_i \ln P_i \end{align} We saw in class that the probability of each microstate can be given in terms of a Lagrange multiplier \(\beta\) as \begin{align} P_i &= \frac{e^{-\beta E_i}}{Z} & Z &= \sum_i e^{-\beta E_i} \end{align} Put these probabilities into the above weighted averages in order to relate \(U\) and \(S\) to \(\beta\). Then make use of the thermodynamic identity \begin{align} dU = TdS - pdV \end{align} to show that \(\beta = \frac1{kT}\).
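A numerical sketch of the result (hypothetical energy levels, units where \(k_B = 1\)): computing \(U\) and \(S\) from the Boltzmann probabilities at two nearby values of \(\beta\) and forming \(T = dU/dS\) should recover \(1/\beta\).

```python
import numpy as np

# Hypothetical microstate energies; units with k_B = 1.
E = np.array([0.0, 0.3, 1.1, 2.0])

def U_and_S(beta):
    """Weighted-average energy and entropy from Boltzmann probabilities."""
    P = np.exp(-beta*E)
    P /= P.sum()                       # P_i = e^{-beta E_i} / Z
    return np.sum(E*P), -np.sum(P*np.log(P))

# At fixed volume dU = T dS, so T = dU/dS along a small change in beta.
beta, db = 1.7, 1e-6
U1, S1 = U_and_S(beta - db)
U2, S2 = U_and_S(beta + db)
T = (U2 - U1) / (S2 - S1)

assert np.isclose(T, 1/beta, rtol=1e-4)
```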

Kinesthetic

10 min.

Acting Out Effective Potentials
A student is invited to “act out” motion corresponding to a plot of effective potential vs. distance. The student plays the role of the “Earth” while the instructor plays the “Sun”.
Consider a column of atoms each of mass \(M\) at temperature \(T\) in a uniform gravitational field \(g\). Find the thermal average potential energy per atom. The thermal average kinetic energy is independent of height. Find the total heat capacity per atom. The total heat capacity is the sum of contributions from the kinetic energy and from the potential energy. Take the zero of the gravitational energy at the bottom \(h=0\) of the column. Integrate from \(h=0\) to \(h=\infty\). You may assume the gas is ideal.

Small Group Activity

30 min.

Changes in Internal Energy
Students consider the change in internal energy during three different processes involving a container of water vapor on a stove. Using the 1st Law of Thermodynamics, students reason about how the internal energy would change and then compare this prediction with data from NIST presented as a contour plot.

Small Group Activity

60 min.

Gravitational Potential Energy
Students examine a plastic “surface” graph of the gravitational potential energy of an Earth-satellite system to explore the properties of gravitational potential energy for a spherically symmetric system.
Students calculate probabilities for a particle on a ring using three different notations: Dirac bra-ket, matrix, and wave function. After calculating the angular momentum and energy measurement probabilities, students compare the calculation methods across the three notations.
  1. Find the entropy of a set of \(N\) oscillators of frequency \(\omega\) as a function of the total quantum number \(n\). Use the multiplicity function: \begin{equation} g(N,n) = \frac{(N+n-1)!}{n!(N-1)!} \end{equation} and assume that \(N\gg 1\). This means you can make the Stirling approximation that \(\log N! \approx N\log N - N\). It also means that \(N-1 \approx N\).

  2. Let \(U\) denote the total energy \(n\hbar\omega\) of the oscillators. Express the entropy as \(S(U,N)\). Show that the total energy at temperature \(T\) is \begin{equation} U = \frac{N\hbar\omega}{e^{\frac{\hbar\omega}{kT}}-1} \end{equation} This is the Planck result found the hard way. We will get to the easy way soon, and you will never again need to work with a multiplicity function like this.

Small Group Activity

30 min.

Heat capacity of N2
Students sketch the temperature-dependent heat capacity of molecular nitrogen. They apply the equipartition theorem and compute the temperatures at which degrees of freedom “freeze out.”

Small Group Activity

30 min.

Energy radiated from one oscillator
This lecture is one step in motivating the form of the Planck distribution.

Consider a system which has an internal energy \(U\) defined by: \begin{align} U &= \gamma V^\alpha S^\beta \end{align} where \(\alpha\), \(\beta\) and \(\gamma\) are constants. The internal energy is an extensive quantity. What constraint does this place on the values \(\alpha\) and \(\beta\) may have?


(Messy algebra) Convince yourself that the expressions for kinetic energy in original and center of mass coordinates are equivalent. The same for angular momentum.

Consider a system of two particles of mass \(m_1\) and \(m_2\).

  1. Show that the total kinetic energy of the system is the same as that of two “fictitious” particles: one of mass \(M=m_1+m_2\) moving with the velocity of the center of mass and one of mass \(\mu\) (the reduced mass) moving with the velocity of the relative position.
  2. Show that the total angular momentum of the system can similarly be decomposed into the angular momenta of these two fictitious particles.

Consider a three-state system with energies \((-\epsilon,0,\epsilon)\).
  1. At infinite temperature, what are the probabilities of the three states being occupied? What is the internal energy \(U\)? What is the entropy \(S\)?
  2. At very low temperature, what are the three probabilities?
  3. What are the three probabilities at zero temperature? What is the internal energy \(U\)? What is the entropy \(S\)?
  4. What happens to the probabilities if you allow the temperature to be negative?
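A numerical sketch of the limiting cases in parts 1 and 2 (units where \(\epsilon = k_B = 1\)):

```python
import numpy as np

# Three-state system with energies (-eps, 0, eps); units eps = k_B = 1.
E = np.array([-1.0, 0.0, 1.0])

def probs(T):
    w = np.exp(-E/T)
    return w / w.sum()

# Infinite temperature: all three states equally likely, so U = 0
# and S = k ln 3.
p = probs(1e9)
assert np.allclose(p, 1/3)
assert np.isclose(np.sum(E*p), 0.0, atol=1e-8)
assert np.isclose(-np.sum(p*np.log(p)), np.log(3))

# Very low temperature: only the lowest state (-eps) is occupied,
# so U -> -eps and S -> 0.
p = probs(1e-2)
assert np.isclose(p[0], 1.0)
assert np.isclose(np.sum(E*p), -1.0)
```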
The Gibbs free energy, \(G\), is given by \begin{align*} G = U + pV - TS. \end{align*}
  1. Find the total differential of \(G\). As always, show your work.
  2. Interpret the coefficients of the total differential \(dG\) in order to find a derivative expression for the entropy \(S\).
  3. From the total differential \(dG\), obtain a different thermodynamic derivative that is equal to \[ \left(\frac{\partial {S}}{\partial {p}}\right)_{T} \]


Suppose \(g(U) = CU^{3N/2}\), where \(C\) is a constant and \(N\) is the number of particles.

  1. Show that \(U=\frac32 N k_BT\).

  2. Show that \(\left(\frac{\partial^2S}{\partial U^2}\right)_N\) is negative. This form of \(g(U)\) actually applies to a monatomic ideal gas.
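A numerical sketch of part 1: with \(S = k_B\ln g(U)\) and \(\frac1T = \left(\frac{\partial S}{\partial U}\right)_N\), the relation \(U = \frac32 N k_B T\) follows. Hypothetical values of \(C\) and \(N\), units with \(k_B = 1\):

```python
import numpy as np

# g(U) = C U^{3N/2}  =>  S(U) = ln C + (3N/2) ln U  (units k_B = 1).
N, C = 10, 2.5          # hypothetical example values

def S(U):
    return np.log(C) + 1.5*N*np.log(U)

# 1/T = dS/dU, via a centered finite difference at an arbitrary U
U, dU = 7.0, 1e-6
inv_T = (S(U + dU) - S(U - dU)) / (2*dU)
T = 1.0 / inv_T

# Recover U = (3/2) N k_B T
assert np.isclose(U, 1.5*N*T, rtol=1e-6)
```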

Lecture

5 min.

Energy and Entropy review
This very quick lecture reviews the content taught in https://paradigms.oregonstate.edu/courses/ph423, and is the first content in https://paradigms.oregonstate.edu/courses/ph441.

As discussed in class, we can consider a black body as a large box with a small hole in it. If we treat the large box as a metal cube with side length \(L\) and metal walls, the frequency of each normal mode will be given by: \begin{align} \omega_{n_xn_yn_z} &= \frac{\pi c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2} \end{align} where each of \(n_x\), \(n_y\), and \(n_z\) will have positive integer values. This simply comes from the fact that a half wavelength must fit in the box. There is an additional quantum number for polarization, which has two possible values, but does not affect the frequency. Note that in this problem I'm using different boundary conditions from what I use in class. It is worth learning to work with either set of quantum numbers. Each normal mode is a harmonic oscillator, with energy eigenstates \(E_n = n\hbar\omega\) where we will not include the zero-point energy \(\frac12\hbar\omega\), since that energy cannot be extracted from the box. (See the Casimir effect for an example where the zero point energy of photon modes does have an effect.)

Note
This is a slight approximation, as the boundary conditions for light are a bit more complicated. However, for large \(n\) values this gives the correct result.

  1. Show that the free energy is given by \begin{align} F &= 8\pi \frac{V(kT)^4}{h^3c^3} \int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi \\ &= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3} \\ &= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3} \end{align} provided the box is big enough that \(\frac{\hbar c}{LkT}\ll 1\). Note that you may end up with a slightly different dimensionless integral that numerically evaluates to the same result, which would be fine. I also do not expect you to solve this definite integral analytically, a numerical confirmation is fine. However, you must manipulate your integral until it is dimensionless and has all the dimensionful quantities removed from it!

  2. Show that the entropy of this box full of photons at temperature \(T\) is \begin{align} S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3 \\ &= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3 \end{align}

  3. Show that the internal energy of this box full of photons at temperature \(T\) is \begin{align} \frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3} \\ &= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3} \end{align}
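The dimensionless integral in part 1 can be confirmed numerically; it should come out to \(-\pi^4/45 \approx -2.1646\):

```python
import numpy as np

# Numerically confirm  integral_0^infty ln(1 - e^{-xi}) xi^2 dxi = -pi^4/45.
# The integrand vanishes at both ends, so a simple trapezoid rule on a
# truncated grid suffices.
xi = np.linspace(1e-8, 60.0, 1_000_000)
f = np.log(1 - np.exp(-xi)) * xi**2
integral = np.sum(0.5*(f[1:] + f[:-1]) * np.diff(xi))   # trapezoid rule

assert np.isclose(integral, -np.pi**4 / 45, rtol=1e-4)
```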

Problem

Paramagnetism
Find the equilibrium value at temperature \(T\) of the fractional magnetization \begin{equation} \frac{\mu_{tot}}{Nm} \equiv \frac{2\langle s\rangle}{N} \end{equation} of a system of \(N\) spins each of magnetic moment \(m\) in a magnetic field \(B\). The spin excess is \(2s\). The energy of this system is given by \begin{align} U &= -\mu_{tot}B \end{align} where \(\mu_{tot}\) is the total magnetization. Take the entropy as the logarithm of the multiplicity \(g(N,s)\) as given in (1.35 in the text): \begin{equation} S(s) \approx k_B\log g(N,0) - k_B\frac{2s^2}{N} \end{equation} for \(|s|\ll N\), where \(s\) is the spin excess, which is related to the magnetization by \(\mu_{tot} = 2sm\). Hint: Show that in this approximation \begin{equation} S(U) = S_0 - k_B\frac{U^2}{2m^2B^2N}, \end{equation} with \(S_0=k_B\log g(N,0)\). Further, show that \(\frac1{kT} = -\frac{U}{m^2B^2N}\), where \(U\) denotes \(\langle U\rangle\), the thermal average energy.
  1. Consider a system that may be unoccupied with energy zero, or occupied by one particle in either of two states, one of energy zero and one of energy \(\varepsilon\). Find the Gibbs sum for this system in terms of the activity \(\lambda\equiv e^{\beta\mu}\). Note that the system can hold a maximum of one particle.

  2. Solve for the thermal average occupancy of the system in terms of \(\lambda\).

  3. Show that the thermal average occupancy of the state at energy \(\varepsilon\) is \begin{align} \langle N(\varepsilon)\rangle = \frac{\lambda e^{-\frac{\varepsilon}{kT}}}{\mathcal{Z}} \end{align}

  4. Find an expression for the thermal average energy of the system.

  5. Allow the possibility that the orbitals at \(0\) and at \(\varepsilon\) may each be occupied by one particle at the same time; Show that \begin{align} \mathcal{Z} &= 1 + \lambda + \lambda e^{-\frac{\varepsilon}{kT}} + \lambda^2 e^{-\frac{\varepsilon}{kT}} \\ &= (1+\lambda)\left(1+\lambda e^{-\frac{\varepsilon}{kT}}\right) \end{align} Because \(\mathcal{Z}\) can be factored as shown, we have in effect two independent systems.
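A quick numerical check that the four-term Gibbs sum factors into two independent single-orbital factors (hypothetical values, units with \(k_B = 1\)):

```python
import numpy as np

# Two orbitals (energies 0 and eps), each holding 0 or 1 particle.
eps, T, mu = 1.0, 0.5, -0.3        # hypothetical example values
lam = np.exp(mu/T)                 # activity lambda = e^{mu/kT}
b = np.exp(-eps/T)                 # Boltzmann factor e^{-eps/kT}

# Sum over the four occupation states (0,0), (1,0), (0,1), (1,1)
Z = 1 + lam + lam*b + lam**2 * b

# The sum factors into two independent single-orbital Gibbs sums
assert np.isclose(Z, (1 + lam) * (1 + lam*b))
```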

For electrons with an energy \(\varepsilon\gg mc^2\), where \(m\) is the mass of the electron, the energy is given by \(\varepsilon\approx pc\) where \(p\) is the momentum. For electrons in a cube of volume \(V=L^3\) the momentum takes the same values as for a non-relativistic particle in a box.

  1. Show that in this extreme relativistic limit the Fermi energy of a gas of \(N\) electrons is given by \begin{align} \varepsilon_F &= \hbar\pi c\left(\frac{3n}{\pi}\right)^{\frac13} \end{align} where \(n\equiv \frac{N}{V}\) is the number density.

  2. Show that the total energy of the ground state of the gas is \begin{align} U_0 &= \frac34 N\varepsilon_F \end{align}
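Both results can be sanity-checked by brute force: fill the lowest \(N\) single-particle states \(\varepsilon = (\hbar\pi c/L)\,|\vec n|\) in a box, two spin states per orbital. This sketch uses units \(\hbar = c = L = 1\); finite-size effects leave a few-percent discrepancy from the continuum formulas.

```python
import numpy as np

# Enumerate box states for the extreme relativistic gas and compare the
# numerical Fermi energy and ground-state energy with the continuum results.
nmax = 60
n = np.arange(1, nmax + 1)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
eps = np.pi*np.sqrt(nx**2 + ny**2 + nz**2)  # hbar*pi*c/L = pi in these units
eps = np.sort(np.repeat(eps.ravel(), 2))    # two spin states per (nx, ny, nz)

N = 100_000
density = N/1.0**3                          # n = N/V with V = L^3 = 1
eps_F = np.pi*(3*density/np.pi)**(1/3)      # analytic Fermi energy
U0 = eps[:N].sum()                          # ground-state energy, by summation

print(eps[N-1]/eps_F, U0/(0.75*N*eps_F))    # both close to 1
```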

Consider a white dwarf of mass \(M\) and radius \(R\). The dwarf consists of ionized hydrogen, thus a bunch of free electrons and protons, each of which is a fermion. Let the electrons be degenerate but nonrelativistic; the protons are nondegenerate.

  1. Show that the order of magnitude of the gravitational self-energy is \(-\frac{GM^2}{R}\), where \(G\) is the gravitational constant. (If the mass density is constant within the sphere of radius \(R\), the exact potential energy is \(-\frac35\frac{GM^2}{R}\).)

  2. Show that the order of magnitude of the kinetic energy of the electrons in the ground state is \begin{align} \frac{\hbar^2N^{\frac53}}{mR^2} \approx \frac{\hbar^2M^{\frac53}}{mM_H^{\frac53}R^2} \end{align} where \(m\) is the mass of an electron and \(M_H\) is the mass of a proton.

  3. Show that if the gravitational and kinetic energies are of the same order of magnitude (as required by the virial theorem of mechanics), \(M^{\frac13}R \approx 10^{20} \text{g}^{\frac13}\text{cm}\).

  4. If the mass is equal to that of the Sun (\(2\times 10^{33}\,\text{g}\)), what is the density of the white dwarf?

  5. It is believed that pulsars are stars composed of a cold degenerate gas of neutrons (i.e. neutron stars). Show that for a neutron star \(M^{\frac13}R \approx 10^{17}\text{g}^{\frac13}\text{cm}\). What is the value of the radius for a neutron star with a mass equal to that of the Sun? Express the result in \(\text{km}\).
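The order-of-magnitude estimates can be evaluated directly. This sketch (not part of the problem) uses rounded CGS constants, so only the leading powers of ten are meaningful.

```python
# Order-of-magnitude white-dwarf and neutron-star estimates in CGS units.
hbar  = 1.055e-27    # erg s
G     = 6.674e-8     # cm^3 g^-1 s^-2
m_e   = 9.109e-28    # electron mass, g
M_H   = 1.673e-24    # proton mass, g
M_sun = 2e33         # g
pi    = 3.141592653589793

# Parts 3-4: equate GM^2/R with hbar^2 M^(5/3) / (m_e M_H^(5/3) R^2)
wd = hbar**2/(G*m_e*M_H**(5/3))      # M^(1/3) R, about 1e20 g^(1/3) cm
R_wd = wd/M_sun**(1/3)               # white-dwarf radius for a solar mass
rho = M_sun/(4/3*pi*R_wd**3)         # density, about 1e6 g/cm^3

# Part 5: neutron star -- the degenerate fermions are neutrons (m_e -> M_H)
ns = hbar**2/(G*M_H**(8/3))          # M^(1/3) R, about 1e17 g^(1/3) cm
R_ns_km = ns/M_sun**(1/3)/1e5        # a few km for a solar mass
print(f"{wd:.1e} {rho:.1e} {ns:.1e} {R_ns_km:.1f}")
```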

In our week on radiation, we saw that the Helmholtz free energy of a box of radiation at temperature \(T\) is \begin{align} F &= -8\pi \frac{V(kT)^4}{h^3c^3}\frac{\pi^4}{45} \end{align} From this we also found the internal energy and entropy \begin{align} U &= 24\pi \frac{(kT)^4}{h^3c^3}\frac{\pi^4}{45} V \\ S &= 32\pi kV\left(\frac{kT}{hc}\right)^3 \frac{\pi^4}{45} \end{align} Given these results, let us consider a Carnot engine that uses an empty metallic piston (i.e. a photon gas) as its working substance.

  1. Given \(T_H\) and \(T_C\), as well as \(V_1\) and \(V_2\) (the two volumes at \(T_H\)), determine \(V_3\) and \(V_4\) (the two volumes at \(T_C\)).

  2. What is the heat \(Q_H\) taken up and the work done by the gas during the first isothermal expansion? Are they equal to each other, as for the ideal gas?

  3. Does the work done on the two isentropic stages cancel each other, as for the ideal gas?

  4. Calculate the total work done by the gas during one cycle. Compare it with the heat taken up at \(T_H\) and show that the energy conversion efficiency is the Carnot efficiency.
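The whole cycle can be traced numerically. The sketch below works in reduced units, setting the radiation constant \(8\pi^5 k^4/(45 h^3 c^3) = 1\) so that \(p = T^4\), \(S = 4VT^3\), and \(U = 3VT^4\); the temperatures and volumes are arbitrary sample values.

```python
# Photon-gas Carnot cycle in reduced units: p = T^4, S = 4*V*T^3, U = 3*V*T^4.
T_H, T_C = 2.0, 1.0
V1, V2 = 1.0, 3.0

# Isentropic legs satisfy V*T^3 = const
V3 = V2*(T_H/T_C)**3
V4 = V1*(T_H/T_C)**3

# Hot isotherm: heat in is T dS; work out is p dV -- only a quarter of Q_H
Q_H = 4*T_H**4*(V2 - V1)
W_hot = T_H**4*(V2 - V1)

# Isentropic legs: W = -dU at constant S; note they do NOT cancel (part 3)
W_23 = 3*(V2*T_H**4 - V3*T_C**4)
W_41 = 3*(V4*T_C**4 - V1*T_H**4)

# Cold isotherm (compression, so negative work done by the gas)
W_cold = T_C**4*(V4 - V3)

W_total = W_hot + W_23 + W_cold + W_41
eta = W_total/Q_H
print(eta, 1 - T_C/T_H)   # both 0.5: the Carnot efficiency
```

Unlike the ideal gas, the isothermal work is only \(Q_H/4\) and the isentropic works do not cancel, yet the efficiency still comes out to \(1 - T_C/T_H\).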

In this entire problem, keep results to first order in the van der Waals correction terms \(a\) and \(b\).

  1. Show that the entropy of the van der Waals gas is \begin{align} S &= Nk\left\{\ln\left(\frac{n_Q(V-Nb)}{N}\right)+\frac52\right\} \end{align}

  2. Show that the energy is \begin{align} U &= \frac32 NkT - \frac{N^2a}{V} \end{align}

  3. Show that the enthalpy \(H\equiv U+pV\) is \begin{align} H(T,V) &= \frac52NkT + \frac{N^2bkT}{V} - 2\frac{N^2a}{V} \\ H(T,p) &= \frac52NkT + Nbp - \frac{2Nap}{kT} \end{align}
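A quick numerical sanity check of the first-order enthalpy (a sketch, not part of the problem; \(k_B = 1\), all values arbitrary, with \(a\) and \(b\) chosen small so the neglected \(O(b^2)\) terms are negligible):

```python
# Compare the exact H = U + pV for a van der Waals gas with the
# first-order expression H(T,V) = (5/2)NkT + N^2 b k T / V - 2 N^2 a / V.
N, k, T, V = 2.0, 1.0, 1.5, 10.0
a, b = 1e-5, 1e-5                          # small vdW corrections

p = N*k*T/(V - N*b) - N**2*a/V**2          # van der Waals equation of state
U = 1.5*N*k*T - N**2*a/V                   # energy from part 2
H_exact = U + p*V

H_approx = 2.5*N*k*T + N**2*b*k*T/V - 2*N**2*a/V
print(abs(H_exact - H_approx))             # tiny: difference is O(b^2)
```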

Effects of High Altitude by Randall Munroe, at xkcd.

Problem

5 min.

Bottle in a Bottle
  • Found in: Static Fields, AIMS Maxwell, Problem-Solving course(s)

Problem

5 min.

Adiabatic Compression

Small Group Activity

30 min.

A glass of water
Students generate a list of properties a glass of water might have. The class then discusses and categorizes those properties.

Small Group Activity

30 min.

Mass is not Conserved

Groups are asked to analyze the following standard problem:

Two identical lumps of clay of (rest) mass \(m\) collide head-on, with each moving at \(\frac35\) the speed of light. What is the mass of the resulting lump of clay?

Whole Class Activity

10 min.

Air Hockey
Students observe the motion of a puck tethered to the center of the air table. Then they plot the potential energy for the puck on their small whiteboards. A class discussion follows based on what students have written on their whiteboards.

Small Group Activity

30 min.

Using \(pV\) and \(TS\) Plots
  • Work as area under curve in a \(pV\) plot
  • Heat transfer as area under a curve in a \(TS\) plot
  • Reminder that internal energy is a state function
  • Reminder of First Law

Small Group Activity

30 min.

Gravitational Force
Students examine a plastic "surface" graph of the gravitational potential energy of an Earth-satellite system to make connections between gravitational force and gravitational potential energy.

Computational Activity

120 min.

Kinetic energy
Students implement a finite-difference approximation for the kinetic energy operator as a matrix, and then use numpy to solve for eigenvalues and eigenstates, which they visualize.

These notes, from the third week of https://paradigms.oregonstate.edu/courses/ph441, cover the canonical ensemble and Helmholtz free energy. They include a number of small group activities.

Small Group Activity

30 min.

Hydrogen emission
In this activity students work out energy level transitions in hydrogen that lead to visible light.

Small Group Activity

30 min.

Outer Product of a Vector on Itself
Students compute the outer product of a vector on itself to produce a projection operator. Students discover that projection operators are idempotent (square to themselves) and that a complete set of outer products of an orthonormal basis is the identity (a completeness relation).