Activities
In this small group activity, students solve for the time dependence of two quantum spin 1/2 particles under the influence of a Hamiltonian. Students determine, given a Hamiltonian, which states are stationary and under what circumstances measurement probabilities do change with time.
Students, working in pairs, use their left arms to demonstrate time evolution in spin 1/2 quantum systems.
Lecture about finding \(\left|{\pm}\right\rangle _x\) and then \(\left|{\pm}\right\rangle _y\). There are two conventional choices to make: relative phase for \(_x\left\langle {+}\middle|{-}\right\rangle _x\) and \(_y\left\langle {+}\middle|{+}\right\rangle _x\).
So far, we've talked about how to calculate measurement probabilities if you know the input and output quantum states using the probability postulate:
\[\mathcal{P} = | \left\langle {\psi_{out}}\middle|{\psi_{in}}\right\rangle |^2 \]
Now we're going to do this process in reverse.
I want to be able to relate the output states of Stern-Gerlach analyzers oriented in different directions to each other (like \(\left|{\pm}\right\rangle _x\) and \(\left|{\pm}\right\rangle _y\) to \(\left|{\pm}\right\rangle \)). Since \(\left|{\pm}\right\rangle \) forms a basis, I can write any state for a spin-1/2 system as a linear combination of those states, including these special states.
I'll start with \(\left|{+}\right\rangle _x\) written in the \(S_z\) basis with general coefficients:
\[\left|{+}\right\rangle _x = a \left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \]
Notice that:
(1) \(a\), \(b\), and \(\phi\) are all real numbers; (2) the relative phase is loaded onto the second coefficient only.
My job is to use measurement probabilities to determine \(a\), \(b\), and \(\phi\).
I'll prepare a state \(\left|{+}\right\rangle _x\) and then send it through \(x\), \(y\), and \(z\) analyzers. When I do that, I see the following probabilities:
Input = \(\left|{+}\right\rangle _x\):
\[\begin{array}{c|ccc} & S_x & S_y & S_z \\ \hline P(\hbar/2) & 1 & 1/2 & 1/2 \\ P(-\hbar/2) & 0 & 1/2 & 1/2 \end{array}\]
First, looking at the probability for the \(S_z\) components:
\[\mathcal{P}(S_z = +\hbar/2) = | \left\langle {+}\middle|{+}\right\rangle _x |^2 = 1/2\]
Plugging in the \(\left|{+}\right\rangle _x\) written in the \(S_z\) basis:
\[1/2 = \Big| \left\langle {+}\right|\Big( a\left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \Big) \Big|^2\]
Distributing the \(\left\langle {+}\right|\) through the parentheses and using orthonormality: \begin{align*} 1/2 &= \Big| a\cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + be^{i\phi} \cancelto{0}{\left\langle {+}\middle|{-}\right\rangle } \Big|^2 \\ &= |a|^2\\[12pt] \rightarrow a &= \frac{1}{\sqrt{2}} \end{align*}
Similarly, looking at \(S_z = -\hbar/2\): \begin{align*} \mathcal{P}(S_z = -\hbar/2) &= | \left\langle {-}\middle|{+}\right\rangle _x |^2 = 1/2 \\ 1/2 &= \Big| \left\langle {-}\right|\Big( a\left|{+}\right\rangle + be^{i\phi} \left|{-}\right\rangle \Big) \Big|^2\\ 1/2 &= \Big| a\cancelto{0}{\left\langle {-}\middle|{+}\right\rangle } + be^{i\phi} \cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big|^2 \\ &= |be^{i\phi}|^2\\ &= |b|^2 \cancelto{1}{(e^{i\phi})(e^{-i\phi})}\\[12pt] \rightarrow b &= \frac{1}{\sqrt{2}} \end{align*}
I can't yet solve for \(\phi\) but I can do similar calculations for \(\left|{-}\right\rangle _x\):
\begin{align*} \left|{-}\right\rangle _x &= c \left|{+}\right\rangle + de^{i\gamma} \left|{-}\right\rangle \\ \mathcal{P}(S_z = +\hbar/2) &= | \left\langle {+}\middle|{-}\right\rangle _x |^2 = 1/2\\ \rightarrow c &= \frac{1}{\sqrt{2}}\\ \mathcal{P}(S_z = -\hbar/2) &= | \left\langle {-}\middle|{-}\right\rangle _x |^2 = 1/2\\ \rightarrow d &= \frac{1}{\sqrt{2}} \end{align*}
Input = \(\left|{-}\right\rangle _x\):
\[\begin{array}{c|ccc} & S_x & S_y & S_z \\ \hline P(\hbar/2) & 0 & 1/2 & 1/2 \\ P(-\hbar/2) & 1 & 1/2 & 1/2 \end{array}\]
So now I have (writing the relative phase of \(\left|{+}\right\rangle _x\) as \(\beta\)): \begin{align*} \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\beta} \left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\gamma} \left|{-}\right\rangle \end{align*}
I know \(\beta \neq \gamma\) because these are not the same state; they are orthogonal to each other (note that taking the bra conjugates the phase): \begin{align*} 0 &= \,_x\left\langle {+}\middle|{-}\right\rangle _x \\ &= \Big(\frac{1}{\sqrt{2}} \left\langle {+}\right| + \frac{1}{\sqrt{2}}e^{-i\beta} \left\langle {-}\right| \Big)\Big( \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\gamma} \left|{-}\right\rangle \Big) \end{align*}
Now FOIL like mad and use orthonormality: \begin{align*} 0 &= \frac{1}{2}\Big(\cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + e^{i\gamma} \cancelto{0}{\left\langle {+}\middle|{-}\right\rangle } + e^{-i\beta} \cancelto{0}{\left\langle {-}\middle|{+}\right\rangle } + e^{i(\gamma - \beta)}\cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big)\\ &= \frac{1}{2}\Big(1 + e^{i(\gamma - \beta)} \Big) \\ \rightarrow & \quad e^{i(\gamma-\beta)} = -1 \end{align*}
This means that \(\gamma-\beta = \pi\). I don't have enough information to solve for \(\beta\) and \(\gamma\), but there is a one-time conventional choice made that \(\beta = 0\) and \(\gamma = \pi\), so that: \begin{align*} \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{1}{e^{i0}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{-1}{e^{i\pi}} \left|{-}\right\rangle \\[12pt] \rightarrow \left|{+}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{+} \frac{1}{\sqrt{2}}\left|{-}\right\rangle \\ \left|{-}\right\rangle _x &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{-} \frac{1}{\sqrt{2}}\left|{-}\right\rangle \\[12pt] \end{align*}
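These results are easy to check numerically. Here is a minimal sketch in Python (not part of the original lecture), assuming the conventional basis ordering \(\left|{+}\right\rangle \doteq (1,0)\), \(\left|{-}\right\rangle \doteq (0,1)\):

```python
import numpy as np

# S_z eigenbasis: |+> -> (1, 0), |-> -> (0, 1)  (assumed ordering)
plus = np.array([1, 0], dtype=complex)
minus = np.array([0, 1], dtype=complex)

# The states just derived
plus_x = (plus + minus) / np.sqrt(2)
minus_x = (plus - minus) / np.sqrt(2)

# Orthogonality: x<+|->_x should vanish (np.vdot conjugates its first argument)
print(abs(np.vdot(plus_x, minus_x)))   # ~0

# P(S_z = +hbar/2) for input |+>_x: |<+|+>_x|^2 = 1/2
print(abs(np.vdot(plus, plus_x))**2)   # ~0.5 (up to rounding)
```

Note that `np.vdot` conjugates its first argument, which is exactly the bra-conjugation rule used in the derivation above.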
When \(\left|{\pm}\right\rangle _y\) is the input state:
Input = \(\left|{+}\right\rangle _y\):
\[\begin{array}{c|ccc} & S_x & S_y & S_z \\ \hline P(\hbar/2) & 1/2 & 1 & 1/2 \\ P(-\hbar/2) & 1/2 & 0 & 1/2 \end{array}\]
Input = \(\left|{-}\right\rangle _y\):
\[\begin{array}{c|ccc} & S_x & S_y & S_z \\ \hline P(\hbar/2) & 1/2 & 0 & 1/2 \\ P(-\hbar/2) & 1/2 & 1 & 1/2 \end{array}\]
The calculations proceed in the same way. The \(S_z\) probabilities give me: \begin{align*} \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\alpha} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\theta} \left|{-}\right\rangle \end{align*}
The orthogonality between \(\left|{+}\right\rangle _y\) and \(\left|{-}\right\rangle _y\) means that \(\theta - \alpha = \pi\).
But I also know the \(S_x\) probabilities and how to write \(\left|{\pm}\right\rangle _x\) in the \(S_z\) basis. For an input of \(\left|{+}\right\rangle _y\): \begin{align*} \mathcal{P}(S_x = +\hbar/2) &= | \,_x\left\langle {+}\middle|{+}\right\rangle _y |^2 = 1/2 \\ 1/2 &= \Big| \Big(\frac{1}{\sqrt{2}} \left\langle {+}\right| + \frac{1}{\sqrt{2}}\left\langle {-}\right|\Big) \Big( \frac{1}{\sqrt{2}}\left|{+}\right\rangle + \frac{1}{\sqrt{2}}e^{i\alpha} \left|{-}\right\rangle \Big) \Big|^2\\ 1/2 &= \Big| \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}} \cancelto{1}{\left\langle {+}\middle|{+}\right\rangle } + \frac{1}{\sqrt{2}} \frac{1}{\sqrt{2}}e^{i\alpha} \cancelto{1}{\left\langle {-}\middle|{-}\right\rangle } \Big|^2 \\ &= \frac{1}{4}|1+e^{i\alpha}|^2\\ &= \frac{1}{4} \Big( 1+e^{i\alpha}\Big) \Big( 1+e^{-i\alpha}\Big)\\ &= \frac{1}{4} \Big( 2+e^{i\alpha} + e^{-i\alpha}\Big)\\ &= \frac{1}{4} \Big( 2+2\cos\alpha\Big)\\ \frac{1}{2} &= \frac{1}{2} + \frac{1}{2}\cos\alpha \\ 0 &= \cos\alpha\\ \rightarrow \alpha = \pm \frac{\pi}{2} \end{align*}
Here, again, I can't solve exactly for \(\alpha\) (or \(\theta\)), but the convention is to choose \(\alpha = \frac{\pi}{2}\) and \(\theta = \frac{3\pi}{2}\), making \begin{align*} \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{i}{e^{i\pi/2}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle + \frac{1}{\sqrt{2}}\cancelto{-i}{e^{i3\pi/2}} \left|{-}\right\rangle \\ \rightarrow \left|{+}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{+} \frac{\color{red}{i}}{\sqrt{2}} \left|{-}\right\rangle \\ \left|{-}\right\rangle _y &= \frac{1}{\sqrt{2}} \left|{+}\right\rangle \color{red}{-} \frac{\color{red}{i}}{\sqrt{2}} \left|{-}\right\rangle \\ \end{align*}
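A quick numerical sanity check (my addition, with the same assumed basis ordering \(\left|{+}\right\rangle \doteq (1,0)\)) confirms that these \(y\)-states reproduce the \(S_x\) probabilities from the table and are orthogonal to each other:

```python
import numpy as np

plus = np.array([1, 0], dtype=complex)    # |+> in the S_z basis (assumed ordering)
minus = np.array([0, 1], dtype=complex)   # |->
plus_x = (plus + minus) / np.sqrt(2)
plus_y = (plus + 1j * minus) / np.sqrt(2)
minus_y = (plus - 1j * minus) / np.sqrt(2)

# P(S_x = +hbar/2) for input |+>_y: |x<+|+>_y|^2 = 1/2
print(abs(np.vdot(plus_x, plus_y))**2)    # ~0.5 (up to rounding)

# |+>_y and |->_y are orthogonal
print(abs(np.vdot(plus_y, minus_y)))      # ~0
```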
If I use these two conventions for the relative phases, then I can write down \(\left|{\pm}\right\rangle _n\) in an arbitrary direction described by the spherical coordinates \(\theta\) and \(\phi\) as:
Discuss the generalized eigenstates: \begin{align*} \left|{+}\right\rangle _n &= \cos \frac{\theta}{2} \left|{+}\right\rangle + \sin \frac{\theta}{2} e^{i\phi} \left|{-}\right\rangle \\ \left|{-}\right\rangle _n &= \sin \frac{\theta}{2} \left|{+}\right\rangle - \cos \frac{\theta}{2} e^{i\phi} \left|{-}\right\rangle \end{align*}
And show how \(\left|{\pm}\right\rangle _x\) and \(\left|{\pm}\right\rangle _y\) are consistent with this general form.
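The consistency check can be sketched numerically: since two state vectors represent the same physical state when \(|\langle a|b\rangle|^2 = 1\), the general \(\left|{+}\right\rangle _n\) at \(\theta = \pi/2\), \(\phi = 0\) should match \(\left|{+}\right\rangle _x\), and at \(\theta = \pi/2\), \(\phi = \pi/2\) it should match \(\left|{+}\right\rangle _y\). A minimal Python sketch (my addition):

```python
import numpy as np

def plus_n(theta, phi):
    """|+>_n in the S_z basis for the direction (theta, phi)."""
    return np.array([np.cos(theta / 2),
                     np.sin(theta / 2) * np.exp(1j * phi)])

plus_x = np.array([1, 1]) / np.sqrt(2)
plus_y = np.array([1, 1j]) / np.sqrt(2)

# |<a|b>|^2 = 1 means the states agree up to an overall phase
print(abs(np.vdot(plus_x, plus_n(np.pi / 2, 0)))**2)          # ~1.0
print(abs(np.vdot(plus_y, plus_n(np.pi / 2, np.pi / 2)))**2)  # ~1.0
```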
Students use Mathematica to visualize the probability density distribution for the hydrogen atom orbitals with the option to vary the values of \(n\), \(\ell\), and \(m\).
Students are asked to find eigenvalues, probabilities, and expectation values for \(H\), \(L^2\), and \(L_z\) for a superposition of \(\vert n \ell m \rangle\) states. This can be done on small whiteboards or with the students working in groups on large whiteboards.
Students then work together in small groups to find the matrices that correspond to \(H\), \(L^2\), and \(L_z\) and to redo \(\langle E\rangle\) in matrix notation.
Eigenvalues and Eigenvectors
Each group will be assigned one of the following matrices.
\[ A_1\doteq \begin{pmatrix} 0&-1\\ 1&0\\ \end{pmatrix} \hspace{2em} A_2\doteq \begin{pmatrix} 0&1\\ 1&0\\ \end{pmatrix} \hspace{2em} A_3\doteq \begin{pmatrix} -1&0\\ 0&-1\\ \end{pmatrix} \]
\[ A_4\doteq \begin{pmatrix} a&0\\ 0&d\\ \end{pmatrix} \hspace{2em} A_5\doteq \begin{pmatrix} 3&-i\\ i&3\\ \end{pmatrix} \hspace{2em} A_6\doteq \begin{pmatrix} 0&0\\ 0&1\\ \end{pmatrix} \hspace{2em} A_7\doteq \begin{pmatrix} 1&2\\ 1&2\\ \end{pmatrix} \]
\[ A_8\doteq \begin{pmatrix} -1&0&0\\ 0&-1&0\\ 0&0&-1\\ \end{pmatrix} \hspace{2em} A_9\doteq \begin{pmatrix} -1&0&0\\ 0&-1&0\\ 0&0&1\\ \end{pmatrix} \]
\[ S_x\doteq \frac{\hbar}{2}\begin{pmatrix} 0&1\\ 1&0\\ \end{pmatrix} \hspace{2em} S_y\doteq \frac{\hbar}{2}\begin{pmatrix} 0&-i\\ i&0\\ \end{pmatrix} \hspace{2em} S_z\doteq \frac{\hbar}{2}\begin{pmatrix} 1&0\\ 0&-1\\ \end{pmatrix} \]
For your matrix:
- Find the eigenvalues.
- Find the (unnormalized) eigenvectors.
- Describe what this transformation does.
- Normalize your eigenstates.
If you finish early, try another matrix with a different structure, i.e. real vs. complex entries, diagonal vs. non-diagonal, \(2\times 2\) vs. \(3\times 3\), with vs. without explicit dimensions.
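Students can also check their hand calculations by computer. As a hedged sketch (using numpy, which this course uses elsewhere, and working in units where \(\hbar = 1\)), here is the check for the matrix \(S_y\) above:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1 (an assumption for this sketch)
Sy = (hbar / 2) * np.array([[0, -1j], [1j, 0]])

# np.linalg.eigh is for Hermitian matrices; eigenvalues return in ascending order
vals, vecs = np.linalg.eigh(Sy)
print(vals)   # [-0.5  0.5], i.e. -hbar/2 and +hbar/2

# each column of vecs is a normalized eigenvector
for k in range(2):
    print(np.allclose(Sy @ vecs[:, k], vals[k] * vecs[:, k]))  # True
```

The eigenvectors returned are normalized, which students can compare against their hand-normalized answers (up to an overall phase).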
Instructor's Guide
Main Ideas
This is a small group activity for groups of 3-4. Each group is given one of 10 matrices and instructed to find the eigenvectors and eigenvalues for that matrix, recording their calculations on their medium-sized whiteboards. In the class discussion that follows, students report their findings and compare and contrast the properties of the eigenvalues and eigenvectors they find. Topics that should specifically be discussed include: repeated eigenvalues (degeneracy); complex eigenvectors, e.g., in the case of some pure rotations; special properties of the eigenvectors and eigenvalues of Hermitian matrices; and common eigenvectors of commuting operators.
Students' Task
Introduction
Give a mini-lecture on how to calculate eigenvalues and eigenvectors. It is often easiest to do this with an example. We like to use the matrix \[A_7\doteq\begin{pmatrix}1&2\cr 9&4\cr\end{pmatrix}\] from the Finding Eigenvectors and Eigenvalues activity (https://paradigms.oregonstate.edu/activities/2179), since the students have already seen this matrix and know what its eigenvectors are. Then every group is given a handout, assigned a matrix, and asked to:
- Find the eigenvalues
- Find the (unnormalized) eigenvectors
- Normalize the eigenvectors
- Describe what this transformation does
Student Conversations
- Typically, students can find the eigenvalues without too much problem. Eigenvectors are a different story. To find the eigenvectors, they will have two equations with two unknowns. They expect to be able to find a unique solution. But, since any scalar multiple of an eigenvector is also an eigenvector, their two equations will be redundant. Typically, they must choose any convenient value for one of the components (e.g. \(x=1\)) and solve for the other one. Later, they can use this scale freedom to normalize their vector.
- The examples in this activity were chosen to include many of the special cases that can trip students up. A common example is when the two equations for the eigenvector amount to something like \(x=x\) and \(y=-y\). For the first equation, they may need help to realize that \(x=\) “anything” is the solution. And for the second equation, sadly, many students need to be helped to the realization that the only solution is \(y=0\).
Wrap-up
The majority of this activity is in the wrap-up conversation.
The [[whitepapers:narratives:eigenvectorslong|Eigenvalues and Eigenvectors Narrative]] provides a detailed narrative interpretation of this activity, focusing on the wrap-up conversation.
- Complex eigenvectors: connect to discussion of rotations in the Linear Transformations activity where there did not appear to be any vectors that stayed the same.
- Degeneracy: Define degeneracy as the case when there are repeated eigenvalues. Make sure the students see that, in the case of degeneracy, every vector in an entire subspace is an eigenvector.
- Diagonal Matrices: Discuss that diagonal matrices are trivial. Their eigenvalues are just their diagonal elements and their eigenvectors are just the standard basis.
- Matrices with dimensions: Students should see from these examples that when you multiply a transformation by a real scalar, its eigenvalues are multiplied by that scalar and its eigenvectors are unchanged. If the scalar has dimensions (e.g. \(\hbar/2\)), then the eigenvalues have the same dimensions.
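The scaling claim in the last bullet can be demonstrated numerically; a minimal sketch (my addition, using a dimensionless stand-in scalar since code has no units):

```python
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # the dimensionless S_x-like matrix A_2
c = 0.5                                  # stand-in for the scalar hbar/2
vals, vecs = np.linalg.eigh(A)

# The eigenvectors of A are still eigenvectors of c*A,
# with every eigenvalue multiplied by c:
print(np.allclose((c * A) @ vecs, vecs * (c * vals)))   # True
```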
Students compute probabilities and averages given a probability density in one dimension. This activity serves as a soft introduction to the particle in a box, introducing all the concepts that are needed.
Students compute inner products to expand a wave function in a sinusoidal basis set. This activity introduces the inner product for wave functions, and the idea of approximating a wave function using a finite set of basis functions.
Students find matrix elements of the position operator \(\hat x\) in a sinusoidal basis. This allows them to express this operator as a matrix, which they can then numerically diagonalize and visualize the eigenfunctions.
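A minimal sketch of the kind of computation this activity asks for (my addition; the box length `L`, basis size `N`, and quadrature resolution `M` are assumed values, not taken from the activity):

```python
import numpy as np

L, N, M = 1.0, 6, 2000          # box length, basis size, quadrature points (assumed)
dx = L / M
x = (np.arange(M) + 0.5) * dx   # midpoint grid for the integrals

def basis(n):
    """Normalized box eigenfunction sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# X[m, n] = <m| x |n>, computed by midpoint-rule quadrature
X = np.array([[np.sum(basis(m + 1) * x * basis(n + 1)) * dx
               for n in range(N)] for m in range(N)])

print(np.allclose(X, X.T))      # True: the position operator is Hermitian
print(round(X[0, 0], 4))        # 0.5, i.e. <x> = L/2 in the lowest basis state

vals, vecs = np.linalg.eigh(X)  # diagonalize; columns of vecs give eigenfunctions
```

Each column of `vecs` holds expansion coefficients in the sinusoidal basis, which students can then plot as approximate position eigenfunctions.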
Students implement a finite-difference approximation for the kinetic energy operator as a matrix, and then use numpy to solve for eigenvalues and eigenstates, which they visualize.
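A hedged sketch of the finite-difference construction (my addition; grid size, box length, and units \(\hbar = m = 1\) are assumptions, not details from the activity):

```python
import numpy as np

# particle in a box of length L, units hbar = m = 1 (assumed for this sketch)
N, L = 200, 1.0
dx = L / (N + 1)                # interior grid points; wavefunction vanishes at walls

# three-point finite-difference second derivative, so T = -(1/2) d^2/dx^2
D2 = (np.diag(-2.0 * np.ones(N))
      + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
T = -0.5 * D2

vals, vecs = np.linalg.eigh(T)  # columns of vecs are the discretized eigenstates
# ground-state energy should approach the exact box value pi^2/2 ~ 4.935
print(vals[0])
```

The columns of `vecs` can be plotted directly against the grid to visualize the eigenstates.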
Students consider the relation (1) between the angular momentum and magnetic moment for a current loop and (2) the force on a magnetic moment in an inhomogeneous magnetic field. Students make a (classical) prediction of the outcome of a Stern-Gerlach experiment.
In this lecture, the instructor guides a discussion about translating between bra-ket notation and wavefunction notation for quantum systems.
In this small group activity, students work out the steady state temperature of an object absorbing and emitting blackbody radiation.
Students work out heat and work for rectangular paths on \(pV\) and \(TS\) plots. This gives practice with computing heat and work, applying the First Law, and recognizing that internal energy is a state function, which cannot change after a cyclic process.
Students take the inner product of vectors that lie on the spacetime axes to show that they are orthogonal. To do the inner product, students must use the Minkowski metric.
In this small group activity, students multiply a general 3x3 matrix with standard basis row/column vectors to pick out individual matrix elements. Students generate the expressions for the matrix elements in bra/ket notation.
A short improvisational role-playing skit based on the Star Trek series in which students explore the definition and notation for position vectors, the importance of choosing an origin, and the geometric nature of the distance formula. \[\vert\vec{r}-\vec{r}^\prime\vert=\sqrt{(x-x^\prime)^2+(y-y^\prime)^2+(z-z^\prime)^2}\]
In this activity, students apply the Stefan-Boltzmann equation and the principle of energy balance in steady state to find the steady state temperature of a black object in near-Earth orbit.
Students explore what linear transformation matrices do to vectors. The whole class discussion compares & contrasts several different types of transformations (rotation, flip, projections, “scrinch”, scale) and how the properties of the matrices (the determinant, symmetries, which vectors are unchanged) are related to these transformations.