Students examine a plastic “surface” graph of the gravitational potential energy of an Earth-satellite system to explore the properties of gravitational potential energy for a spherically symmetric system.
Consider a column of atoms each of mass \(M\) at temperature \(T\) in
a uniform gravitational field \(g\). Find the thermal average
potential energy per atom. The thermal average kinetic energy is
independent of height. Find the total heat capacity per atom. The
total heat capacity is the sum of contributions from the kinetic
energy and from the potential energy. Take the zero of the
gravitational energy at the bottom \(h=0\) of the column. Integrate
from \(h=0\) to \(h=\infty\). You may assume the gas is ideal.
A student is invited to “act out” motion corresponding to a plot of effective potential vs. distance. The student plays the role of the “Earth” while the instructor plays the “Sun”.
Consider a system consisting of four point charges arranged at the corners of a square in 3D Cartesian space with coordinates \((x,y,z)\).
Write a python function that returns the potential at any point in space caused by four equal point charges forming a square. Make the sides of the square parallel to the \(x\) and \(y\) axes and on the \(z=0\) plane.
To do this you will need the expression for the potential due to a single point charge \(V= \frac{k_Cq}{r}\) where \(r\) is the distance from the point charge. You will also need to use the fact that the total potential is the sum of the potentials due to each individual point charge.
It is important that we ask students first to create a function for the potential, and only then try to visualize the potential. This allows students to reason about the computation for a single point in space (defined in their choice of coordinate systems).
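A minimal sketch of what such a function might look like (assuming charges of magnitude \(q\) at \((\pm a, \pm a, 0)\); the parameter names and default values here are illustrative choices, not part of the prompt):

```python
import numpy as np

def potential(x, y, z, q=1.0e-9, a=0.5):
    """Potential of four equal charges q at the corners (+/-a, +/-a, 0)
    of a square in the z=0 plane. Accepts scalars or numpy arrays."""
    k_C = 8.99e9  # Coulomb constant in SI units
    V = 0.0
    for x0, y0 in [(a, a), (a, -a), (-a, a), (-a, -a)]:
        r = np.sqrt((x - x0)**2 + (y - y0)**2 + z**2)
        V = V + k_C * q / r  # superposition: add each point charge's contribution
    return V
```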
Once you have written the above function, use it to plot the electrostatic potential versus position along the three Cartesian axes.
Since the students have already written a function for their potential, they can create a plot by creating an array for \(x\) (or \(y\), or \(z\)), and then passing that array to their function, along with scalars for the other two coordinates. Many students will discover this simply by modifying an example script they find on the web, replacing \(\sin(x)\) or similar with their function. It is well worth showing this easier approach to students who attempt to write a loop in order to compute the potential at each point in space.
We ask students to explicitly plot the potential along axes because students seldom spontaneously think to create a 1D plot such as this.
Label your axes.
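For example, with a function like the sketch above in hand, the plot along the \(x\) axis might be produced this way (the plotting range is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-5, 5, 500)          # positions along the x axis
plt.plot(x, potential(x, 0.0, 0.0))  # pass the array plus scalars for y and z
plt.xlabel("x (m)")
plt.ylabel("V (volts)")
plt.title("Potential along the x axis")
plt.show()
```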
Work out the first non-zero term in a power series approximation for the potential at large \(x\), small \(x\), etc. Plot these approximations along with your computed potential, and verify that they agree in the range that you expect. Useful 1\(^{st}\) order Taylor expansions are:
\begin{eqnarray}
\sqrt{1+\epsilon} &\sim& 1+\frac{\epsilon}{2} \\
\frac{1}{1+\epsilon} &\approx& 1-\epsilon
\end{eqnarray}
where \(\epsilon\) is a small quantity.
This may need to be omitted on the first Tuesday of class, since students probably will not yet have seen power series approximations. It may work in this case to at least talk about what is expected at large distance, since "it looks like a point charge" is reasoning students do make.
Students struggle with the \(x\) approximations (assuming the square is in the \(xy\) plane). Each pair will probably need a little lecture on grouping terms according to the power of \(x\), and keeping only those orders for which they have collected every contributing term.
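As a check on the large-\(x\) behavior, one can overlay the monopole approximation \(V\approx 4k_Cq/x\) (from far away the square looks like a single charge \(4q\)) on the computed curve; a sketch, again assuming the function and constants from earlier:

```python
import numpy as np
import matplotlib.pyplot as plt

k_C, q = 8.99e9, 1.0e-9
x = np.linspace(2, 20, 200)                       # stay well outside the square
plt.plot(x, potential(x, 0.0, 0.0), label="computed")
plt.plot(x, 4 * k_C * q / x, "--", label=r"monopole $4k_Cq/x$")
plt.xlabel("x (m)")
plt.ylabel("V (volts)")
plt.legend()
plt.show()
```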
Extra fun
Create one or more different visualizations of the electrostatic potential. For example a 2D representation in the \(z=0\) plane.
More extra fun
Create a plot of the potential along a straight line that is not one of the axes. Hint: start from a line on the \(z=0\) plane, then try a random straight line. You can use your browser for help.
Even more extra fun
Move the charges around (e.g., off the \(z=0\) plane) and see what happens to your graphs.
Dipole fun
Repeat the above (especially the limiting cases!) for four point charges in which half are positive and half negative, with the positive charges adjacent to one another.
Common visualizations for 2D slices of space include contour plots, color plots, and "3D plots". Another option (less easy) would be to visualize an equipotential surface in 3 dimensions. It is worth reminding students to consider other planes than those at \(x=0\), \(y=0\), and \(z=0\).
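For instance, a contour plot of the original four equal charges in the \(z=0\) plane might be sketched as follows (assuming the function from earlier; the levels are capped, since \(V\) diverges at the charge locations in that plane):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 201)
X, Y = np.meshgrid(x, x)
V = potential(X, Y, 0.0)
plt.contourf(X, Y, V, levels=np.linspace(0, 60, 31), extend="max")
plt.colorbar(label="V (volts)")
plt.xlabel("x (m)")
plt.ylabel("y (m)")
plt.gca().set_aspect("equal")
plt.show()
```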
Quadrupole fun
Repeat the above (especially the limiting cases!) for four point charges in which half are positive and half negative, with the positive charges diagonal from one another. It will help in this case to place the charges on the axes (rotating the square by 45 degrees), since otherwise the potential on each axis will be zero.
Students solve numerically for the potential due to a spherical shell of charge. Although this potential is straightforward to compute using Gauss's Law, it serves as a nice example for numerically integrating in spherical coordinates because the correct answer is easy to recognize.
Students work in small groups to use the superposition principle
\[V(\vec{r}) =\frac{1}{4\pi\epsilon_0}\int\frac{\rho(\vec{r}^{\,\prime})}{\vert \vec{r}-\vec{r}^{\,\prime}\vert} \, d\tau^{\prime}\]
to find an integral expression for the electrostatic potential, \(V(\vec{r})\), everywhere in space, due to a ring of charge.
In an optional extension, students find a series expansion for \(V(\vec{r})\) either on the axis or in the plane of the ring, for either small or large values of the relevant geometric variable. Add an extra half hour or more to the time estimate for the optional extension.
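If one wants to check such an integral expression numerically (in the spirit of the spherical-shell activity above), a minimal sketch might look like this, assuming a ring of radius \(R\) and total charge \(Q\) lying in the \(z=0\) plane:

```python
import numpy as np

def ring_potential(x, y, z, Q=1.0e-9, R=1.0, N=1000):
    """Numerically evaluate V = (1/4 pi eps0) * integral of lambda R dphi' / |r - r'|
    for a ring of radius R and total charge Q in the z=0 plane (scalar field point)."""
    k_C = 8.99e9
    lam = Q / (2 * np.pi * R)          # linear charge density
    dphi = 2 * np.pi / N
    phi = np.arange(N) * dphi          # sample points around the ring
    xp, yp = R * np.cos(phi), R * np.sin(phi)
    dist = np.sqrt((x - xp)**2 + (y - yp)**2 + z**2)
    return k_C * np.sum(lam * R * dphi / dist)

# On the axis this should agree with the closed form k_C Q / sqrt(z^2 + R^2).
print(ring_potential(0.0, 0.0, 2.0), 8.99e9 * 1.0e-9 / np.sqrt(2.0**2 + 1.0**2))
```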
to perform a magnetic vector potential calculation using the superposition principle;
to decide which form of the superposition principle to use, depending on the dimensions of the current density;
how to find current from total charge \(Q\), period \(T\), and the geometry of the problem, radius \(R\);
to write the distance formula \(\vec{r}-\vec{r'}\) in both the numerator and denominator of the superposition principle in an appropriate mix of cylindrical coordinates and rectangular basis vectors;
Students write python programs to compute the potential due to a square of surface charge, and then to visualize the result. This activity can be used to introduce students to the process of integrating numerically.
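A minimal sketch of such a computation (assuming a square of side \(2a\) with uniform surface charge density \(\sigma\) in the \(z=0\) plane, evaluated at a field point off the sheet; names and grid size are illustrative):

```python
import numpy as np

def square_sheet_potential(x, y, z, sigma=1.0e-9, a=0.5, N=200):
    """Potential of a uniformly charged square sheet (side 2a, z=0 plane),
    computed as a Riemann sum over an N x N grid of source patches."""
    k_C = 8.99e9
    xs = np.linspace(-a, a, N)
    XP, YP = np.meshgrid(xs, xs)
    dA = (2 * a / N)**2                # area of each source patch
    dist = np.sqrt((x - XP)**2 + (y - YP)**2 + z**2)
    return k_C * sigma * dA * np.sum(1.0 / dist)

print(square_sheet_potential(0.0, 0.0, 1.0))  # potential 1 m above the center
```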
Students examine a plastic "surface" graph of the gravitational potential energy of an Earth-satellite system to make connections between gravitational force and gravitational potential energy.
Students observe the motion of a puck tethered to the center of the airtable. Then they plot the potential energy for the puck on their small whiteboards. A class discussion follows based on what students have written on their whiteboards.
Transitioning from the PDM back to thermodynamic systems
Heating
In the partial derivative machine, the change in internal energy corresponds to the work done on the left string and the right string:
\begin{align}
dU &= F_L dx_L + F_R dx_R
\end{align}
The “thing we changed” was \(dx_L\) or \(dx_R\). From that we could determine the change in internal energy.
When we transfer energy to something by heating, it's hard to measure the “thing we changed,” which was entropy. It is, however, possible in some cases to measure the amount of energy transferred by heating, and from that we can work backwards to find out how much the entropy changed.
An infinitesimal amount of energy transferred by heating is called \({\mathit{\unicode{273}}} Q\). The symbol \({\mathit{\unicode{273}}} \) indicates an inexact differential, which you can think of as a “small chunk” that is not the change of something. \({\mathit{\unicode{273}}} Q\) is not a small change in the amount of energy transferred by heating, but rather is a small amount of energy transferred by heating.
When playing with the partial derivative machine, we can say the work done on the left string, \(F_Ldx_L\), is analogous to heat entering a thermodynamic system.
Latent heat
A phase transition is when a material changes state of matter, as in melting or
boiling. At most phase transitions (technically, abrupt phase transitions
as you will learn in the Capstone), the temperature remains constant while the
material is changing from one state to the other. So you know that as long as
you have ice and water coexisting in equilibrium at one atmosphere of pressure,
the temperature must be \(0^\circ\)C. Similarly, as long as water is boiling at
one atmosphere of pressure, the temperature must be \(100^\circ\)C. In both of
these cases, you can transfer energy to the system (as we will) by heating
without changing the temperature! This relates to why I keep awkwardly
saying
“transfer energy to a system by heating” rather than just “heating a system”
which means the same thing. We have deeply ingrained the idea that “heating”
is synonymous with “raising the temperature,” which does not align with the
physics meaning.
So now let me define the latent heat. The latent heat is the amount
of energy that must be transferred to a material by heating in order to change
it from one phase to another. The latent heat of fusion is the amount
of energy required to melt a solid, and the latent heat of vaporization
is the amount of energy required to turn a liquid into a gas. We will be
measuring both of these for water.
A question you may ask is whether the latent heat is extensive or intensive.
Technically the latent heat is extensive, since if you have more material
then more energy is required to melt/boil it. However, when you hear latent heat
quoted, it is almost always the specific latent heat,
which is the energy
transfer by heating required per unit of mass. It can be confusing that people
use the same words to refer to both quantities. Fortunately, dimensional checking
can always give you a way to verify which is being referred to. If \(L\) is an
energy per mass, then it must be the specific latent heat, while if it is an
energy, then it must be the latent heat.
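For instance, using the commonly quoted specific latent heat of fusion of water, roughly \(334\text{ J/g}\), melting \(50\text{ g}\) of ice requires
\begin{align}
Q = mL \approx (50\text{ g})(334\text{ J/g}) \approx 1.7\times 10^{4}\text{ J},
\end{align}
and the units make the distinction clear: the \(334\text{ J/g}\) is the specific latent heat, while the \(1.7\times 10^{4}\text{ J}\) is the (extensive) latent heat for this particular sample.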
Heat capacity and specific heat
The heat capacity is the amount of energy transfer by heating required, per unit of
temperature change, to raise the temperature of a system. If we hold the pressure fixed
(as in our experiment) we can write this as:
\begin{align}
{\mathit{\unicode{273}}} Q &= C_p dT
\end{align}
where \(C_p\) is the heat capacity at fixed pressure.
You might think to rewrite this expression as a derivative, but we can't
do that since the energy transferred by heating is not a state function.
Note that the heat capacity, like the latent heat, is an extensive quantity.
The specific heat is the heat capacity per unit mass, which is an
intensive quantity that we can consider a property of a material independently
of the quantity of that material.
I'll just mention as an aside that the term “heat capacity” is another one of
those unfortunate phrases that reflect the inaccurate idea that heat is a
property of a system.
Entropy
Finally, we can get to entropy. Entropy is the “thing that changes” when you
transfer energy by heating. I'll just give this away:
\begin{align}
{\mathit{\unicode{273}}} Q &= TdS
\end{align}
where this equation is only true if you make the change quasistatically
(see another lecture). This allows us to find the change in entropy if we know
how much energy was transferred by heating, and the temperature in the process.
\begin{align}
\Delta S &= \int \frac1T {\mathit{\unicode{273}}} Q
\end{align}
where again, we need to know the temperature as we add heat.
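As a concrete example (the numbers here are just illustrative), if \(C_p\) is roughly constant over the temperature range of interest, then
\begin{align}
\Delta S = \int_{T_i}^{T_f}\frac{C_p}{T}dT = C_p\ln\frac{T_f}{T_i},
\end{align}
so warming one gram of liquid water (\(C_p\approx 4.2\text{ J/K}\)) from \(273\text{ K}\) to \(373\text{ K}\) increases its entropy by about \(4.2\ln(373/273)\approx 1.3\text{ J/K}\).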
A valuable model for figuring out how we're going to save the Earth
Let's start by visualizing the energy flow associated with driving a gasoline-powered car. We will use a box and arrow diagram, where boxes represent where energy can accumulate, and arrows show energy flow.
The energy clearly starts in the form of gasoline in the tank. Where does it go?
Actually ask this of students.
Visualize the energy as an indestructible, incompressible liquid.
“Energy is conserved”
The heat can look like:
Hot exhaust gas
The radiator (its job is to dissipate heat)
Friction heating in the drive train
The work contributes to:
Rubber tires heated by deformation
Wind, which ultimately ends up as heating the atmosphere
The most important factors for a coarse-grained model of highway driving:
The 75:25 split between “heat” and “work”
The trail of wind behind a car
What might we have missed? Where else might energy have gone?
We ignored the kinetic energy of the car, and the energy dissipated as heat in the brakes. On the interstate this is appropriate, but for city driving the dominant “work” may be in accelerating the car to 30 mph, and with that energy then converted into heat by the brakes.
Consider a system of
fixed volume in thermal contact with a reservoir. Show that the mean
square fluctuation in the energy of the system is \begin{equation}
\left<\left(\varepsilon-\langle\varepsilon\rangle\right)^2\right>
= k_BT^2\left(\frac{\partial U}{\partial T}\right)_{V}
\end{equation} Here \(U\) is the conventional symbol for
\(\langle\varepsilon\rangle\). Hint: Use the partition function
\(Z\) to relate \(\left(\frac{\partial U}{\partial T}\right)_V\) to
the mean square fluctuation. Also, multiply out the term
\((\cdots)^2\).
A one-dimensional
harmonic oscillator has an infinite series of equally spaced energy
states, with \(\varepsilon_n = n\hbar\omega\), where \(n\) is an
integer \(\ge 0\), and \(\omega\) is the classical frequency of the
oscillator. We have chosen the zero of energy at the state \(n=0\)
which we can get away with here, but is not actually the zero of
energy! To find the true energy we would have to add a
\(\frac12\hbar\omega\) for each oscillator.
Show that for a harmonic oscillator the free energy is
\begin{equation}
F = k_BT\log\left(1 - e^{-\frac{\hbar\omega}{k_BT}}\right)
\end{equation} Note that at high temperatures such that
\(k_BT\gg\hbar\omega\) we may expand the argument of the logarithm
to obtain \(F\approx k_BT\log\left(\frac{\hbar\omega}{kT}\right)\).
From the free energy above, show that the entropy is
\begin{equation}
\frac{S}{k_B} =
\frac{\frac{\hbar\omega}{kT}}{e^{\frac{\hbar\omega}{kT}}-1}
- \log\left(1-e^{-\frac{\hbar\omega}{kT}}\right)
\end{equation}
This entropy is shown in the nearby figure, as well
as the heat capacity.
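A minimal sketch for reproducing such figures from the entropy expression above, plotting against \(k_BT/\hbar\omega\) and obtaining the heat capacity numerically from \(C = T\,\partial S/\partial T\):

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.01, 2, 400)          # t = k_B T / (hbar omega)
x = 1.0 / t                            # hbar omega / (k_B T)
S = x / np.expm1(x) - np.log(1 - np.exp(-x))   # S / k_B from the formula above
C = t * np.gradient(S, t)              # C / k_B = T dS/dT, taken numerically

plt.plot(t, S, label=r"$S/k_B$")
plt.plot(t, C, label=r"$C/k_B$")
plt.xlabel(r"$k_BT/\hbar\omega$")
plt.legend()
plt.show()
```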
The goal of this problem is
to show that once we have maximized the entropy and found the
microstate probabilities in terms of a Lagrange multiplier \(\beta\),
we can prove that \(\beta=\frac1{kT}\) based on the statistical
definitions of energy and entropy and the thermodynamic definition
of temperature embodied in the thermodynamic identity.
The internal energy and
entropy are each defined as a weighted average over microstates:
\begin{align}
U &= \sum_i E_i P_i & S &= -k_B\sum_i P_i \ln P_i
\end{align}
We saw in class that the probability of each microstate can be given
in terms of a Lagrange multiplier \(\beta\) as
\begin{align}
P_i &= \frac{e^{-\beta E_i}}{Z}
&
Z &= \sum_i e^{-\beta E_i}
\end{align}
Put these probabilities into the above weighted averages in
order to relate \(U\) and \(S\) to \(\beta\). Then make use of the
thermodynamic identity
\begin{align}
dU = TdS - pdV
\end{align}
to show that \(\beta = \frac1{kT}\).
Students calculate probabilities for a particle on a ring using three different notations: Dirac bra-ket, matrix, and wave function. After calculating the angular momentum and energy measurement probabilities, students compare the calculation methods across the three notations.
Students consider the change in internal energy during three different processes involving a container of water vapor on a stove. Using the 1st Law of Thermodynamics, students reason about how the internal energy would change and then compare this prediction with data from NIST presented as a contour plot.
(4pts)
Find the electric field around an infinite, uniformly charged,
straight wire, starting from the following expression for the electrostatic
potential:
\begin{equation*}
V(\vec r)=\frac{2\lambda}{4\pi\epsilon_0}\, \ln\left( \frac{ s_0}{s} \right)
\end{equation*}
The concentration of potassium
\(\text{K}^+\) ions in the internal sap of a plant cell (for example,
a fresh water alga) may exceed by a factor of \(10^4\) the
concentration of \(\text{K}^+\) ions in the pond water in which the
cell is growing. The chemical potential of the \(\text{K}^+\) ions is
higher in the sap because their concentration \(n\) is higher there.
Estimate the difference in chemical potential at \(300\text{K}\) and
show that it is equivalent to a voltage of \(0.24\text{V}\) across the
cell wall. Take \(\mu\) as for an ideal gas. Because the values of the
chemical potential are different, the ions in the cell and in the pond
are not in diffusive equilibrium. The plant cell membrane is highly
impermeable to the passive leakage of ions through it. Important
questions in cell physics include these: How is the high concentration
of ions built up within the cell? How is metabolic energy applied to
energize the active ion transport?
David adds
You might wonder why it is even remotely plausible to consider the
ions in solution as an ideal gas. The key idea here is that the ideal
gas entropy incorporates the entropy due to position dependence, and
thus due to concentration. Since concentration is what differs between
the cell and the pond, the ideal gas entropy describes this pretty
effectively. In contrast to the concentration dependence, the
temperature-dependence of the ideal gas chemical potential will not be
so great.
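As a quick check of the quoted value: the concentration dependence of the ideal gas chemical potential gives
\begin{align}
\Delta\mu = k_BT\ln\frac{n_{\text{sap}}}{n_{\text{pond}}} = k_BT\ln 10^4 \approx (0.026\text{ eV})(9.2) \approx 0.24\text{ eV},
\end{align}
which for a singly charged ion corresponds to a potential difference of about \(0.24\text{ V}\) across the cell wall.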
A circular cylinder of radius \(R\)
rotates about the long axis with angular velocity \(\omega\). The
cylinder contains an ideal gas of atoms of mass \(M\) at temperature
\(T\). Find an expression for the dependence of the concentration
\(n(r)\) on the radial distance \(r\) from the axis, in terms of
\(n(0)\) on the axis. Take \(\mu\) as for an ideal gas.
These notes from week 6 of https://paradigms.oregonstate.edu/courses/ph441 cover the ideal gas from a grand canonical standpoint starting with the solutions to a particle in a three-dimensional box. They include a number of small group activities.
Students use a pre-written Mathematica notebook or a Geogebra applet to explore how the shape of the effective potential function changes as the various parameters (angular momentum, force constant, reduced mass) are varied.
\(\boldsymbol{\vec{K}} = yz \,\boldsymbol{\hat{x}} + xz \,\boldsymbol{\hat{y}}\)
Main ideas
Finding potential functions.
Students love this activity. Some groups will finish in 10 minutes or less;
few will require as much as 30 minutes.
Prerequisites
Fundamental Theorem for line integrals
The Murder Mystery Method
Warmup
none
Props
whiteboards and pens
Wrapup
Revisit integrating conservative vector fields along various paths, including
reversing the orientation and integrating around closed paths.
Details
In the Classroom
We recommend having the students work in groups of 2 on this activity, and not
having them turn anything in.
Most students will treat the last example as 2-dimensional, giving the answer
\(xyz\). Ask these students to check their work by taking the gradient; most
will include a \(\boldsymbol{\hat{z}}\) term. Let them think this through. The correct answer
of course depends on whether one assumes that \(z\) is constant; we have
deliberately left this ambiguous.
It is good and proper that students want to add together multivariable terms. Keep returning to the gradient, something they know well. It is better for them to discover the guidelines themselves.
Subsidiary ideas
3-d vector fields do not necessarily have a \(\boldsymbol{\hat{z}}\)-component!
Homework
A challenging question to ponder is why a surface fails to exist for nonconservative fields. Using an example such as \(y\,\boldsymbol{\hat{x}}+\boldsymbol{\hat{y}}\), prompt students to plot the field and examine its magnitude at various locations. Suggest piecing together level sets. There is serious geometry lurking that entails smoothness. Wrestling with this is healthy.
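If a concrete prompt is useful, a quick plot of \(y\,\boldsymbol{\hat{x}}+\boldsymbol{\hat{y}}\) might look like this (a sketch; the grid is an arbitrary choice):

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
u, v = y, np.ones_like(x)       # the field y x-hat + y-hat
plt.quiver(x, y, u, v)
plt.gca().set_aspect("equal")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```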
Essay questions
Write 3-5 sentences describing the connection between derivatives and integrals in the single-variable case. In other words, what is the one-dimensional version of MMM? Emphasize that much of vector calculus is generalizing concepts from single-variable theory.
Enrichment
The derivative check for conservative vector fields can be described using the
same type of diagrams as used in the Murder Mystery Method; this is just
moving down the diagram (via differentiation) from the row containing the
components of the vector field, rather than moving up (via integration). We
believe this should not be mentioned until after this lab.
When done in 3-d, this makes a nice introduction to curl --- which
however is not needed until one is ready to do Stokes' Theorem. We would
therefore recommend delaying this entire discussion, including the 2-d case,
until then.
Work out the Murder Mystery Method using polar basis vectors, by reversing the
process of taking the gradient in this basis.
Revisit the example in the Ampère's Law lab, using the Fundamental Theorem
to explain the results. This can be done without reference to a basis, but
it is worth computing \(\boldsymbol{\vec\nabla}\phi\) in a polar basis.
Students examine a plastic "surface" graph of the electric potential due to two charged plates (near the center of the plates) and explore the properties of the electric potential.
Students use a plastic surface representing the potential due to a charged sphere to explore the electrostatic potential, equipotential lines, and the relationship between potential and electric field.
Write the equation for the electrostatic potential due to a point charge.
Instructor's Guide
Prerequisite Knowledge
Students will usually have seen the electrostatic potential due to a point charge in their introductory course, but may have trouble recalling it.
Whole-Class Conversations
As students try to remember the formula, many will conflate potential, potential energy, force, and electric field. Their answers may have some aspects of each of these. We use this question to get the iconic equation into the students' working memory in preparation for subsequent activities. This question can also be used to help students disambiguate these different physical quantities.
Correct answers you're likely to see
\[V=\frac{kq}{r}\]
\[V=\frac{1}{4\pi\epsilon_0}\frac{q}{r}\]
You may want to discuss which constants to use in which contexts, e.g. \(k\) is short and easy to write, but may be conflated with other uses of \(k\) in a given problem, whereas \(\frac{1}{4\pi\epsilon_0}\) assumes you are working in a particular system of units.
Incorrect answers you're likely to see
Two charges instead of one
\[\cancel{V=\frac{kq_{1}q_{2}}{r}}\]
Distance squared in the denominator
\[\cancel{V=\frac{kq}{r^2}}\]
Possible follow-up questions to help with the disambiguation:
Relationship between potential and potential energy \(U = qV\)
Which function is the derivative of the other: \(1/r\) or \(1/r^2\)?
Which physical quantity (potential or electric field, potential energy or force) is the derivative of the other?
What is the electrostatic potential conceptually?
Which function falls off faster: \(1/r\) or \(1/r^2\)?
What are the dimensions of potential? Units?
Where is the zero of potential?
Wrap-up
This could be a good time to refer to the (correct) expression for the potential as an iconic equation, which will need to be further interpreted (“unpacked”) in particular physical situations. This is where the course is going next.
This SWBQ can also serve to help students learn about recall as a cognitive activity. While parts of the equations that students write may be incorrect, many other parts will be correct. Let the way in which you manage the class discussion model for the students how a professional goes about quickly disambiguating several different choices. And TELL the students that this is what you are doing. Deliberately invoke their metacognition.
Many students may not know that the electrostatic potential that we are talking about in this activity is the same quantity as what a voltmeter reads, in principle, but not in practice. You may need to talk about how a voltmeter actually works, rather than idealizing it. It helps to have a voltmeter with leads as a prop. Students often want to know about the “ground” lead. We often tie a long string to it (to symbolize making a really long wire) and send the TA out of the room with the string, “headed off to infinity” while discussing the importance of setting the zero of potential. The extra minute or two of humorous byplay gives the importance of the zero of potential a chance to sink in.
We use this small whiteboard question as a transition between The Distance Formula (Star Trek) activity, where students are learning about how to describe (algebraically) the geometric distance between two points, and the Electrostatic Potential Due to a Pair of Charges (with Series) activity, where students are using these results and the superposition principle to find the electrostatic potential due to two point charges.
This activity is the initial activity in the sequence Visualizing Scalar Fields addressing the representations of scalar fields in the context of electrostatics.
The Gibbs free energy,
\(G\), is given by
\begin{align*}
G = U + pV - TS.
\end{align*}
Find the total differential of \(G\). As always, show your work.
Interpret the coefficients of the total differential \(dG\) in
order to find a derivative expression for the entropy \(S\).
From the total differential \(dG\), obtain a different
thermodynamic derivative that is equal to
\[ \left(\frac{\partial {S}}{\partial {p}}\right)_{T} \]
(Messy algebra) Convince yourself that the expressions for kinetic energy in original and center of mass coordinates are equivalent. The same for angular momentum.
Consider a system of two particles of mass \(m_1\) and \(m_2\).
Show that the total kinetic energy of the system is the same as that of two
“fictitious” particles: one of mass \(M=m_1+m_2\) moving with the velocity of the
center of mass and one of mass \(\mu\) (the reduced mass) moving with the
velocity of the relative position.
Show that the total angular momentum of the system can similarly be decomposed
into the angular momenta of these two fictitious particles.
Consider a system which has an internal energy \(U\) defined by:
\begin{align}
U &= \gamma V^\alpha S^\beta
\end{align}
where \(\alpha\), \(\beta\) and \(\gamma\) are constants. The internal
energy is an extensive quantity. What constraint does this place on
the values \(\alpha\) and \(\beta\) may have?
Students sketch the temperature-dependent heat capacity of molecular nitrogen. They apply the equipartition theorem and compute the temperatures at which degrees of freedom “freeze out.”
Find the entropy of a set of \(N\) oscillators of frequency
\(\omega\) as a function of the total quantum number \(n\). Use the
multiplicity function: \begin{equation}
g(N,n) = \frac{(N+n-1)!}{n!(N-1)!}
\end{equation} and assume that \(N\gg 1\). This means you can
make the Stirling approximation that
\(\log N! \approx N\log N - N\). It also means that
\(N-1 \approx N\).
Let \(U\) denote the total energy \(n\hbar\omega\) of the
oscillators. Express the entropy as \(S(U,N)\). Show that the total
energy at temperature \(T\) is \begin{equation}
U = \frac{N\hbar\omega}{e^{\frac{\hbar\omega}{kT}}-1}
\end{equation} This is the Planck result found the hard
way. We will get to the easy way soon, and you will never again need
to work with a multiplicity function like this.
As discussed in
class, we can consider a black body as a large box with a small hole
in it. If we treat the large box as a metal cube with side length \(L\)
and metal walls, the frequency of each normal mode will be given by:
\begin{align}
\omega_{n_xn_yn_z} &= \frac{\pi c}{L}\sqrt{n_x^2 + n_y^2 + n_z^2}
\end{align} where each of \(n_x\), \(n_y\), and \(n_z\) will have
positive integer values. This simply comes from the fact that a half
wavelength must fit in the box. There is an additional quantum number
for polarization, which has two possible values, but does not affect
the frequency. Note that in this problem I'm using different
boundary conditions from what I use in class. It is worth learning to
work with either set of quantum numbers. Each normal mode is a
harmonic oscillator, with energy eigenstates \(E_n = n\hbar\omega\)
where we will not include the zero-point energy
\(\frac12\hbar\omega\), since that energy cannot be extracted from the
box. (See the
Casimir effect
for an example where the zero point energy of photon modes does have
an effect.)
Note
This is a slight approximation, as the boundary conditions for light
are a bit more complicated. However, for large \(n\) values this gives
the correct result.
Show that the free energy is given by \begin{align}
F &= 8\pi \frac{V(kT)^4}{h^3c^3}
\int_0^\infty \ln\left(1-e^{-\xi}\right)\xi^2d\xi
\\
&= -\frac{8\pi^5}{45} \frac{V(kT)^4}{h^3c^3}
\\
&= -\frac{\pi^2}{45} \frac{V(kT)^4}{\hbar^3c^3}
\end{align} provided the box is big enough that
\(\frac{\hbar c}{LkT}\ll 1\). Note that you may end up with a
slightly different dimensionless integral that numerically evaluates
to the same result, which would be fine. I also do not expect you to
solve this definite integral analytically; a numerical confirmation
is fine. However, you must manipulate your integral until it
is dimensionless and has all the dimensionful quantities removed
from it!
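A numerical confirmation of the dimensionless integral might look like this (a sketch using scipy; the exact value is \(-\pi^4/45\)):

```python
import numpy as np
from scipy.integrate import quad

# Check that the dimensionless integral above equals -pi^4/45.
value, err = quad(lambda xi: xi**2 * np.log(1 - np.exp(-xi)), 0, np.inf)
print(value, -np.pi**4 / 45)   # both approximately -2.1646
```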
Show that the entropy of this box full of photons at temperature
\(T\) is \begin{align}
S &= \frac{32\pi^5}{45} k V \left(\frac{kT}{hc}\right)^3
\\
&= \frac{4\pi^2}{45} k V \left(\frac{kT}{\hbar c}\right)^3
\end{align}
Show that the internal energy of this box full of photons at
temperature \(T\) is \begin{align}
\frac{U}{V} &= \frac{8\pi^5}{15}\frac{(kT)^4}{h^3c^3}
\\
&= \frac{\pi^2}{15}\frac{(kT)^4}{\hbar^3c^3}
\end{align}
Find the equilibrium value at temperature \(T\)
of the fractional magnetization \begin{equation}
\frac{\mu_{tot}}{Nm} \equiv \frac{2\langle s\rangle}{N}
\end{equation} of a system of \(N\) spins each of magnetic moment
\(m\) in a magnetic field \(B\). The spin excess is \(2s\). The energy
of this system is given by \begin{align}
U &= -\mu_{tot}B
\end{align} where \(\mu_{tot}\) is the total magnetization. Take the
entropy as the logarithm of the multiplicity \(g(N,s)\) as given in
(1.35 in the text): \begin{equation}
S(s) \approx k_B\log g(N,0) - k_B\frac{2s^2}{N}
\end{equation} for \(|s|\ll N\), where \(s\) is the spin excess, which
is related to the magnetization by \(\mu_{tot} = 2sm\). Hint:
Show that in this approximation \begin{equation}
S(U) = S_0 - k_B\frac{U^2}{2m^2B^2N},
\end{equation} with \(S_0=k_B\log g(N,0)\). Further, show that
\(\frac1{kT} = -\frac{U}{m^2B^2N}\), where \(U\) denotes
\(\langle U\rangle\), the thermal average energy.
Consider a system that may be unoccupied with energy zero, or
occupied by one particle in either of two states, one of energy zero
and one of energy \(\varepsilon\). Find the Gibbs sum for this
system in terms of the activity \(\lambda\equiv e^{\beta\mu}\).
Note that the system can hold a maximum of one particle.
Solve for the thermal average occupancy of the system in terms of
\(\lambda\).
Show that the thermal average occupancy of the state at energy
\(\varepsilon\) is \begin{align}
\langle N(\varepsilon)\rangle =
\frac{\lambda e^{-\frac{\varepsilon}{kT}}}{\mathcal{Z}}
\end{align}
Find an expression for the thermal average energy of the system.
Allow the possibility that the orbitals at \(0\) and at
\(\varepsilon\) may each be occupied by one particle at the
same time; show that \begin{align}
\mathcal{Z} &= 1 + \lambda + \lambda e^{-\frac{\varepsilon}{kT}} +
\lambda^2 e^{-\frac{\varepsilon}{kT}}
\\
&= (1+\lambda)\left(1+\lambda e^{-\frac{\varepsilon}{kT}}\right)
\end{align} Because \(\mathcal{Z}\) can be factored as shown, we
have in effect two independent systems.