Students find matrix elements of the position operator \(\hat x\) in a sinusoidal basis. This allows them to express this operator as a matrix, which they can then numerically diagonalize and visualize the eigenfunctions.
(In case https://paradigms.oregonstate.edu/courses/ph425 hasn't covered this yet.) An operator in quantum mechanics corresponds to a linear transformation of a state (or ket). In a matrix representation, an operator would be a matrix, and would transform a column vector to another column vector by matrix multiplication. We represent operators with hats, such as \(\hat{S_z}\).
Any quantity that we could observe, like the spin or position of a particle, has a corresponding Hermitian operator. The eigenvalues of the operator corresponding to an observable are the set of values that can result when that observable is measured. For instance, the \(z\) component of the spin \(\hat{S_z}\) for a spin-\(\frac12\) particle has eigenvalues of \(\pm\frac12\hbar\), which is why only those two spin values are ever measured.
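For example, in the basis of its own eigenstates, \(\hat{S_z}\) is represented by the matrix \begin{align} \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \end{align} whose eigenvalues are precisely \(\pm\frac12\hbar\).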
Any operator can be written as a matrix using any basis set (of the corresponding system). The elements of that matrix, which represents the operator, are called matrix elements, and are given by \(O_{ij} \equiv \left\langle {i}\right|\hat O\left|{j}\right\rangle \), where \(\left|{i}\right\rangle \) and \(\left|{j}\right\rangle \) are two basis states, \(\hat O\) is some operator, and \(O_{ij}\) is an element of the matrix corresponding to that operator.
A wave function represents the state of a particle in space, just as a ket or an array of two elements represents the state of a spin-\(\frac12\) particle. Just as there are operators for spins that relate to physical observables, there are also operators for particles in space, which act on wave functions.
We will be considering just one operator this week: the position operator. The position operator in the wave function representation is given by \begin{align} \hat x\, &\dot=\, x \end{align} where the \(\dot=\) symbol means "is represented by" in a particular basis: acting with \(\hat x\) on a state multiplies its wave function by \(x\). You might have some trouble understanding what this means, given that the hat and the dot are both new notations. I'll try to explain element by element.
Recall that we started by finding the average position, which was \begin{align} \langle x\rangle &= \int \mathcal{P}(x) x dx \\ &= \int |\psi(x)|^2 x dx \\ &= \langle \psi | \hat x | \psi \rangle \end{align} You then found that you could write \(\psi(x)\) as a sum of basis functions \begin{align} |\psi\rangle &= \sum_{n=1}^\infty C_n|n\rangle \\ &= \sum_{n=1}^\infty \langle n|\psi\rangle |n\rangle \end{align} and thus \begin{align} \psi(x) &= \sum_{n=1}^\infty C_n \phi_n(x) \end{align} We can now put these two results together by substituting the expression for \(\psi(x)\) into the expression for \(\langle x\rangle\): \begin{align} \langle x\rangle &= \int \psi(x)^* x \psi(x) dx \\ &= \int\left(\sum_{n=1}^\infty C_n\phi_n(x) \right)^* x \left(\sum_{n=1}^\infty C_n\phi_n(x) \right) dx \end{align} At this point we run into a possible confusion. I've written down two summations with the same summation index. This is a natural outcome of plugging in the equation for \(\psi(x)\), but we've now got two different index variables with the same name. Whenever this happens to you, it's a good idea to rewrite the equation to give them different names. Since we're summing over them, these index variables are "dummy indices", just as our integration variable \(x\) is a "dummy variable", and they can be renamed at will. We could change one of them to \(n'\) or we could change one of them to \(m\). I'll pick the latter. \begin{align} \langle x\rangle &= \int\left(\sum_{n=1}^\infty C_n\phi_n(x) \right)^* x \left(\sum_{m=1}^\infty C_m\phi_m(x) \right) dx \end{align} Now that we have different dummy variables for the two summations, we can reorder the sums and pull them out of the integral: \begin{align} \langle x\rangle &= \sum_{n=1}^\infty\sum_{m=1}^\infty C_n^*C_m\int \phi_n(x)^* x \phi_m(x) dx \\ &= \sum_{n=1}^\infty\sum_{m=1}^\infty C^*_nC_m\langle n|\hat x|m\rangle \\ &= \begin{pmatrix} C^*_1 & C^*_2 & C^*_3 & \cdots \end{pmatrix} \begin{pmatrix} \langle1|\hat x|1\rangle &\langle1|\hat x|2\rangle &\langle1|\hat x|3\rangle & \cdots \\ \langle2|\hat x|1\rangle &\langle2|\hat x|2\rangle &\langle2|\hat x|3\rangle & \cdots \\ \langle3|\hat x|1\rangle &\langle3|\hat x|2\rangle &\langle3|\hat x|3\rangle & \cdots \\ \vdots &\vdots &\vdots & \ddots \\ \end{pmatrix} \begin{pmatrix} C_1 \\ C_2\\C_3\\ \vdots \end{pmatrix} \\ &= \langle \psi|\hat x|\psi\rangle \end{align} Thus we can see that the \(\hat x\) operator is indeed represented in our sinusoidal basis as a matrix of infinite dimension, with elements given by \(x_{nm} = \langle n|\hat x|m\rangle\). Thus we can also write that \begin{align} \hat x \,\dot= \begin{pmatrix} \langle1|\hat x|1\rangle &\langle1|\hat x|2\rangle &\langle1|\hat x|3\rangle & \cdots \\ \langle2|\hat x|1\rangle &\langle2|\hat x|2\rangle &\langle2|\hat x|3\rangle & \cdots \\ \langle3|\hat x|1\rangle &\langle3|\hat x|2\rangle &\langle3|\hat x|3\rangle & \cdots \\ \vdots &\vdots &\vdots & \ddots \\ \end{pmatrix} \end{align} meaning that in the sinusoidal basis the \(x\) position operator is represented by this matrix.
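These matrix elements can be computed numerically. Here is a minimal sketch, assuming the particle-in-a-box basis \(\phi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L)\) on \(0 \le x \le L\); the width \(L\) and the cutoff \(N\) are arbitrary choices for illustration:

```python
import numpy as np
from scipy.integrate import quad

L = 1.0   # width of the well (arbitrary choice)
N = 25    # number of basis functions kept; the infinite matrix must be truncated

def phi(n, x):
    """Normalized sinusoidal basis function, with n = 1, 2, 3, ..."""
    return np.sqrt(2/L)*np.sin(n*np.pi*x/L)

# Matrix elements x_nm = <n|x|m> = integral of phi_n(x) * x * phi_m(x) from 0 to L.
X = np.zeros((N, N))
for i in range(N):          # i = n - 1, since Python counts from zero
    for j in range(N):
        X[i, j] = quad(lambda x: phi(i+1, x)*x*phi(j+1, x), 0, L, limit=100)[0]
```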
This visualization can be a bit tricky. Students are likely to use `pcolor` or `pcolormesh`. Both will give a correct visualization, but the “diagonal” of the matrix (which is prominently visible) will run from lower-left to upper-right, which is not how we write matrices. I tend to address this by asking students where the 1-1 element of the matrix is. In these cases, it will be in the lower left, and once students realize that, they are likely to follow the rest. There is another function called `matshow` which is designed for displaying matrices, and it wouldn't hurt to direct students towards it. The key is to use the word “matrix” in the search, as in searching for `matplotlib plot matrix`. Another feature I would ask about is where the matrix has big and small elements. If students omit a `colorbar`, they may not be aware of which elements are almost zero.
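Continuing the sketch above, `matshow` with a color bar addresses both issues at once:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
im = ax.matshow(X)   # element [0, 0] (the 1-1 element) lands in the upper left,
                     # matching how we write matrices on paper
fig.colorbar(im)     # without this, students may not notice which elements are near zero
plt.show()
```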
Students will then need to find the eigenvalues and eigenvectors of this (truncated) matrix; `numpy` has a function to do this.
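Continuing the sketch, the diagonalization itself is one call, and plotting the eigenvalues also sets up the observation below about their spacing:

```python
# X is real and symmetric (x-hat is Hermitian), so eigh is the appropriate routine.
# It returns eigenvalues in ascending order; column eigvecs[:, k] goes with eigvals[k].
eigvals, eigvecs = np.linalg.eigh(X)

plt.plot(eigvals, 'o')   # roughly equally spaced values between 0 and L
plt.xlabel('$k$')
plt.ylabel('eigenvalue')
plt.show()
```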
There are two common errors. The first is to get the indices backwards when indexing into the 2D array of eigenvectors (equivalently, to access rows rather than columns), which gives essentially random eigenfunctions.
The other is an off-by-one error in computing the eigenfunctions: forgetting that the Python index `n` counts from zero rather than one, and thus omitting an \(n+1\). This gives a peak that looks a lot like a derivative of the correct eigenfunction.
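A sketch of the correct reconstruction, continuing the code above, with comments flagging both pitfalls:

```python
x = np.linspace(0, L, 500)
k = N // 2   # pick one eigenstate, near the middle of the well

# Columns, not rows: the k-th eigenvector is eigvecs[:, k], not eigvecs[k, :].
# Basis labels run n = 1, 2, 3, ... while Python's i starts at 0, hence phi(i+1, x).
psi_k = sum(eigvecs[i, k]*phi(i+1, x) for i in range(N))

plt.plot(x, psi_k)
plt.xlabel('$x$')
plt.show()
```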
We would like students to observe that there are in fact an infinite number of eigenvalues of the position operator (and eigenfunctions), equally spaced between \(0\) and \(L\).
Need to wrap up by discussing how these eigenstates converge (slowly), and what they converge to.
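One way to prompt that discussion, reusing `phi` and `quad` from the sketches above (the cutoffs here are arbitrary): as the truncation grows, each eigenfunction becomes a taller, narrower peak centered on its eigenvalue.

```python
x = np.linspace(0, L, 500)
for N_cut in (10, 30, 90):
    X = np.array([[quad(lambda xp: phi(i+1, xp)*xp*phi(j+1, xp), 0, L, limit=200)[0]
                   for j in range(N_cut)] for i in range(N_cut)])
    vals, vecs = np.linalg.eigh(X)
    k = N_cut // 2   # an eigenstate near the middle of the well
    psi = sum(vecs[i, k]*phi(i+1, x) for i in range(N_cut))
    plt.plot(x, psi, label=f'N = {N_cut}')   # overall sign of each eigenvector is arbitrary
plt.legend()
plt.show()
```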
Solve analytically for the eigenstates of the position operator in a wave function representation. Compare them with your approximate numerical eigenstates above.
To do this, you'll want to try picking a function, any function, and then sketch that function and \(x\) times that function. If they look the same (up to an overall constant factor), you've found an eigenfunction. Otherwise, try again.
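For reference, the analytic argument is short. An eigenfunction \(f_{x_0}(x)\) with eigenvalue \(x_0\) must satisfy \begin{align} x\, f_{x_0}(x) &= x_0\, f_{x_0}(x) \\ (x - x_0)\, f_{x_0}(x) &= 0 \end{align} so \(f_{x_0}(x)\) must vanish everywhere except at \(x = x_0\). The solution is the (non-normalizable) Dirac delta function \(f_{x_0}(x) = \delta(x - x_0)\), which the increasingly sharp numerical peaks above approximate.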