Quantum mechanics from classical

On this page, we will use the system of a mass on a spring as a running example to analyze how classical mechanics — the physics of the macroscopic world — can be systematically extended to describe the world in a manner accurate even at a microscopic scale. Microscopically accurate physics is known as quantum mechanics.

Classical Mechanics

Consider how classical mechanics describes a basic physical problem, for instance the simple harmonic oscillator (mass on a spring). At any given time, the mass exists at a particular point of space, and has a particular momentum. Anything you want to know about the object, for example its kinetic or potential energy, can be found in terms of position and momentum. Thus, it is said that the position and momentum determine the system, and that any physical observable, i.e. anything you can measure concerning its motion, is some function of position and momentum.

Our goal, then, is to find the position and momentum of the mass at any given moment in time. We want to know its entire history and future, or put more mathematically, we want to find the following two functions:

(1)
\begin{align} x(t) &= \mathrm{position \ as \ a \ function \ of \ time} \\ p(t) &= \mathrm{momentum \ as \ a \ function \ of \ time} \end{align}

Note that the space of all possible positions $x$ for the object whose motion we are describing (e.g. the mass on a spring) is called the configuration space. The mass on the spring can only move up and down in one dimension, so its configuration space is $\mathbb{R}$, the space of real numbers. However, if our object is a cannonball shot from a cannon, it will have both a vertical and horizontal position, so its configuration space is $\mathbb{R}^2$. The space of all possible pairs of position and momentum $(x,p)$ is called phase space. We can think of the classical trajectory of the mass on the spring or of the cannonball as a path $x(t)$ through configuration space, or (including a bit more detail) as a path $(x(t), p(t))$ through phase space.

Classical mechanics gives us two methods of obtaining the classical trajectory of our object of interest: the Lagrangian approach and the Hamiltonian approach.

The Lagrangian Approach

The key to the Lagrangian approach is a quantity called the (Lagrangian) action of our object as it pursues a path through configuration space. Philosophically, one can consider the action as measuring the net degree to which the object realizes its potential energy in the form of kinetic energy, over its entire trajectory. Denoting kinetic energy as $K(x, \dot{x})$ and potential energy as $U(x)$, the action for a trajectory $x(t)$ can be expressed as

(2)
\begin{align} S[x(t)] = \int_{t_0}^{t_1} K(x(t),\dot x(t)) - U(x(t)) \ dt \end{align}

The integrand $K(x(t),\dot x(t)) - U(x(t))$ is the Lagrangian $L(x, \dot x)$. Notice that the action will be small when the kinetic energy is small compared with the potential - i.e. when the object has not realized much of its potential energy in the form of kinetic energy. Thus this definition of the action indeed measures the degree to which potential energy gets converted to kinetic.

It turns out that we can use the action to determine which trajectory $x(t)$ our object will actually pursue as it moves from an initial position $x_0$ at time $t_0$ to a final position $x_1$ at time $t_1$. The path chosen by Nature turns out to be the one which minimizes the action - this criterion is known as Hamilton's principle of least action. One could conclude that the Universe is lazy, trying to move the object as little as possible in order to progress from its initial to final position. A more positive interpretation might portray the Universe simply as efficient, reserving its potential energy and squandering as little as possible in the form of motion.

To gain further intuition for the significance of the action, consider the case of a so-called free particle, in which the potential energy is zero everywhere (no forces are acting on the particle). With the usual form of the kinetic energy $K(\dot x) = \frac{m}{2} \dot x^2$, the action now becomes

(3)
\begin{align} S[x(t)] = \int_{t_0}^{t_1} \frac{m}{2} \dot x^2 (t) \ dt. \end{align}

Thought Experiment 1 Convince yourself that for a free particle, the action-minimizing path from $x_0$ at time $t_0$ to $x_1$ at time $t_1$ is a straight line, the shortest path from $x_0$ to $x_1$. [Hint: think about arc length].
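
If you would like a computational companion to this thought experiment, the following minimal numerical sketch (with illustrative choices $m = 1$, $t_0 = 0$, $t_1 = 1$, $x_0 = 0$, $x_1 = 2$, none of which come from the text) compares the free-particle action of the straight-line path with that of a perturbed path sharing the same endpoints:

```python
import numpy as np

# Free-particle action S = integral of (m/2) xdot^2 dt, discretized on a grid.
m = 1.0
t = np.linspace(0.0, 1.0, 2001)      # t0 = 0, t1 = 1
dt = t[1] - t[0]
x0, x1 = 0.0, 2.0                    # prescribed endpoints

def action(x):
    xdot = np.gradient(x, t)         # numerical velocity along the path
    return np.sum(0.5 * m * xdot**2) * dt

straight = x0 + (x1 - x0) * t                   # the straight-line path
wiggled = straight + 0.3 * np.sin(np.pi * t)    # deviation vanishing at endpoints

print(action(straight))   # 2.0
print(action(wiggled))    # about 2.22, strictly larger
```

Any smooth deviation vanishing at the endpoints may be substituted for the sine term; the straight line always yields the smallest action.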

Thought Experiment 2 For a particle moving in a potential $U(x)$, with action

(4)
\begin{align} S[x(t)] = \int_{t_0}^{t_1} \frac{m}{2} \dot x^2 (t) - U(x(t)) \ dt, \end{align}

convince yourself that the particle would still like to move along the shortest path from $x_0$ at time $t_0$ to $x_1$ at time $t_1$, and that if it deviates from this straight line, it does so to avoid areas of low potential energy.

While these thought experiments allow us to draw general conclusions about the classical trajectory, we need a means of actually solving for the path an object will follow through configuration space when under the influence of forces due to a certain potential $U(x)$. To keep the treatment as general as possible, we write the Lagrangian simply as $L(x, \dot x)$ so that the action takes the form

(5)
\begin{align} S[x(t)] = \int_{t_0}^{t_1} L(x(t), \dot x (t)) \ dt. \end{align}

Now, in order to find the trajectory which minimizes the action, we leverage an old concept from Calculus I optimization problems: to find the minima of a function, we take its derivative and set it equal to 0. More mathematically,

(6)
\begin{align} \frac{df}{dx} = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \equiv 0 \end{align}

This idea cannot be directly applied to the problem of finding a trajectory $x(t)$ which minimizes the action $S[x(t)]$, because $x(t)$ is a function, not just a number as the $x$ is in the Calculus I optimization problem. We only know how to take derivatives with respect to scalar parameters, not functions.

To circumvent this problem, we need to frame our variation of the path $x(t)$ in terms of the variation of some scalar parameter that we know how to differentiate with respect to. Notice that because the initial and final positions of our object are prescribed to be $x_0$ and $x_1$ at the initial and final times $t_0$ and $t_1$, we do not want to vary the endpoints of the path. Our first step will be to recognize that if we have a "reference trajectory" $x(t)$ satisfying $x(t_0) = x_0$ and $x(t_1) = x_1$, we can write any other trajectory through configuration space with those same endpoints in the form

(7)
\begin{equation} x(t) + h(t) \end{equation}

where $h(t_0) = 0 = h(t_1)$.

Thought Experiment 3 Convince yourself that every trajectory through configuration space with these endpoints can be written thus in terms of the reference trajectory.

Next, notice that we can modulate the amount of the variation $h(t)$ which is added to the trajectory $x(t)$ by introducing a parameter $\epsilon$:

(8)
\begin{align} x(t) + \epsilon h(t) \end{align}

When $\epsilon =1$, we have the same variation discussed above; when $\epsilon =0$, we recover the reference trajectory $x(t)$. The parameter $\epsilon$ smoothly interpolates between the two.

We now have a single parameter $\epsilon$ which controls the variation from the reference trajectory, and we can find the action-minimizing path by differentiating the action with respect to $\epsilon$ and setting the derivative equal to 0. Before plunging into this calculation, however, we must pause to make an important observation: we want the path yielding the minimum value of the action out of all possible smooth trajectories with the requisite endpoints. This means we must consider not just trajectories resulting from a particular choice of variation $h(t)$ from the reference trajectory, but any smooth variation $h(t)$ such that $h(t_0) = 0 = h(t_1)$.

To require that our reference trajectory is indeed a critical point for the action, we thus impose the condition

(9)
\begin{align} \frac{d}{d \epsilon} \left[ S[ x(t) + \epsilon h(t) ] \right] \bigg|_{\epsilon = 0} = 0 \end{align}

for all smooth variations $h(t)$ such that $h(t_0) = 0 = h(t_1)$.

Thought Experiment 4 Explain the need to evaluate at $\epsilon = 0$ in (9).

Thought Experiment 5 Explain what we will solve for in (9).

Now we work out the consequences and ultimately the solution of (9). Expanding the left hand side, we have

(10)
\begin{align} \frac{d}{d \epsilon} \left[ S[ x(t) + \epsilon h(t) ] \right] \bigg|_{\epsilon = 0} &= \frac{d}{d \epsilon} \int_{t_0}^{t_1} L(x(t) + \epsilon h(t), \dot x (t) + \epsilon \dot h (t)) \ dt \bigg|_{\epsilon = 0} \\ &= \int_{t_0}^{t_1} \frac{\partial L}{\partial x} \cdot h(t) + \frac{\partial L}{\partial \dot x} \cdot \dot h(t) \ dt \\ &= \int_{t_0}^{t_1} \frac{\partial L}{\partial x} \cdot h(t) - \frac{d}{dt} \frac{\partial L}{\partial \dot x} \cdot h(t) \ dt \\ &= \int_{t_0}^{t_1} \left[ \frac{\partial L}{\partial x} - \frac{d}{dt} \frac{\partial L}{\partial \dot x} \right] h(t) \ dt = 0 \end{align}

for all smooth $h(t)$ such that $h(t_0) = 0 = h(t_1)$. (The third line follows from integrating the second term by parts; the boundary term $\frac{\partial L}{\partial \dot x} h(t) \big|_{t_0}^{t_1}$ vanishes because $h$ vanishes at the endpoints.) However, in order for the final equality to hold for any smooth $h(t)$ satisfying the boundary conditions, we must have

(11)
\begin{align} \frac{\partial L}{\partial x} - \frac{d}{dt} \frac{\partial L}{\partial \dot x} = 0, \end{align}

a differential equation which we can solve to obtain the classical trajectory $x(t)$. This differential equation is called the Euler-Lagrange equation.

Thought Experiment 6 Be sure you can justify to yourself each line of (10), referring if necessary to more detailed derivations such as in Marion and Thornton or Goldstein.

Calculation 1 If you have not already done so, set up the Lagrangian for the simple harmonic oscillator problem (i.e. mass on a spring: kinetic energy $\frac{m}{2} \dot x^2$, potential energy $\frac{k}{2} x^2$). Obtain the Euler-Lagrange equation for the simple harmonic oscillator. Verify that the Euler-Lagrange equation has a solution of the form $A \cos(C t) + B \sin (C t)$, where $A$, $B$, and $C$ are constants. Determine the value of $C$ and explain how the values of $A$ and $B$ would be determined on a particular occasion when the mass on the spring is set in motion. If you have already done this calculation, glance back over it to refresh your memory.
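
If you would like to check your work symbolically, here is a minimal sketch using sympy's euler_equations helper (an illustration, not part of the original text):

```python
import sympy as sp

# Euler-Lagrange equation for the simple harmonic oscillator,
# L = (m/2) xdot^2 - (k/2) x^2.
t, m, k = sp.symbols('t m k', positive=True)
x = sp.Function('x')

L = sp.Rational(1, 2) * m * x(t).diff(t)**2 - sp.Rational(1, 2) * k * x(t)**2
eom = sp.euler_equations(L, x(t), t)[0]
print(eom)   # Eq(-k*x(t) - m*Derivative(x(t), (t, 2)), 0)

# Verify the trial solution A cos(C t) + B sin(C t) with C = sqrt(k/m).
A, B = sp.symbols('A B')
C = sp.sqrt(k / m)
trial = A * sp.cos(C * t) + B * sp.sin(C * t)
print(sp.simplify(eom.lhs.subs(x(t), trial).doit()))   # 0
```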

The Hamiltonian Approach

Complementary to the Lagrangian approach, another method of determining the classical trajectory uses not the action of the system but the total energy or Hamiltonian. Notice that the Lagrangian is a function of $x$ and $\dot x$ and the Euler-Lagrange equation is a second-order differential equation for $x(t)$. However, we might wish instead to deal only with first-order differential equations, and we can in fact do this by setting up a system of differential equations which describe not just the trajectory $x(t)$ through configuration space, but the trajectory $(x(t), p(t))$ through phase space.

To do this, we need to make a change of variables so as to obtain a quantity that encodes the same information as the Lagrangian action, but does not involve $\dot x$. The change of variables we will use is called a Legendre transformation, and enjoys applicability beyond classical mechanics. Consider a function $F(x)$ which is always either concave up or concave down (i.e. its concavity does not change as $x$ varies).

Thought Experiment 7 Construct an example of such a function.

Thought Experiment 8 Prove that for any function $F(x)$ which is always either concave up or concave down, the derivative $F'(x)$ is either always increasing or always decreasing.

Thought Experiment 9 Prove that for any function $F(x)$ which is always either concave up or concave down, the derivative $F'(x)$ is a one-to-one function; i.e., every two distinct $x$-values $x_1 \neq x_2$ will correspond to distinct values $F'(x_1) \neq F'(x_2)$.

Since the above series of thought experiments proves that every $x$ corresponds to a unique value of $F'(x)$, we can use the variable $s \equiv F'(x)$ to stand in for $x$ and encode all the same information with no loss. Because of the one-to-one correspondence, we can invert the function $s(x) \equiv F' (x)$ to obtain $x$ in terms of $s$ as a function $x(s)$. Thus we can define a new function $G (s)$ which encodes the same information as $F(x)$, but involving only $s$ as an independent variable rather than $x$:

(12)
\begin{align} G(s) \equiv s x(s) - F(x(s)) \end{align}

The function $G(s)$ is the Legendre transformation of $F(x)$.
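
As a concrete example (a standard illustration, chosen here to anticipate what follows): take $F(\dot x) = \frac{m}{2} \dot x^2$. Then $s = F'(\dot x) = m \dot x$, so $\dot x (s) = s/m$ and

\begin{align} G(s) = s \cdot \frac{s}{m} - \frac{m}{2} \left( \frac{s}{m} \right)^2 = \frac{s^2}{2m}, \end{align}

which is exactly the form the kinetic energy will take as a function of momentum once we construct the Hamiltonian below.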

Thought Experiment 10 Prove that just as $s = F'(x)$, it is also true that $x = G'(s)$, illustrating a symmetry between $F(x)$ and $G(s)$. Further, show that if we perform a Legendre transform on $G(s)$, we will recover $F(x)$.

Thought Experiment 11 Open this paper and look at Figure 3. Use it to convince yourself that $F + G = xs$, a final manifestation of the symmetry between $F$ and $G$.

Thought Experiment 12 Prove that for the usual form of the Lagrangian $L(x, \dot x) = \frac{m}{2} \dot x^2 - U(x)$, $L$ is a concave up function of $\dot x$.

Since $L$ is concave up as a function of $\dot x$, we can define the Legendre transform of $L$ with respect to $\dot x$ and thereby get ourselves a new function that no longer depends on $\dot x$ but encodes the same information. Instead of $F(x)$ we have $L(x, \dot x)$, so instead of $s \equiv F'(x)$ we will have $p \equiv \frac{\partial L}{\partial \dot x}$. Using the definition of Legendre transform, instead of $G(s) = s x(s) - F(x(s))$ we will have

(13)
\begin{align} H(x,p) \equiv p \, \dot x(p) - L(x, \dot x (p)). \end{align}

This quantity is called the Hamiltonian.

Thought Experiment 13 Prove that for the usual form of Lagrangian $L(x, \dot x ) = \frac{m}{2} \dot x^2 - U(x)$, the Hamiltonian is equal to the total energy $K + U = \frac{p^2}{2m} + U(x)$.

From the Hamiltonian, we obtain two first-order coupled differential equations which determine the trajectory $(x(t), p(t))$ of our object through phase space and which are equivalent to the Euler-Lagrange equation. First, notice that $H$ depends on two variables, $x$ and $p$, and that we would like to write down the partial derivative of $H$ with respect to each of these, since any change in $H$ must be due to change in $x$ or $p$. Since by definition $H = p \, \dot x(p) - L(x, \dot x(p))$, and since the chain-rule terms involving $\frac{\partial \dot x}{\partial p}$ cancel (because $p = \frac{\partial L}{\partial \dot x}$), we have

(14)
\begin{align} \frac{\partial H}{\partial p} = \dot x. \end{align}

Next, note that

(15)
\begin{align} \frac{\partial H}{\partial x} = - \frac{\partial L}{\partial x}, \end{align}

and by the Euler-Lagrange equation,

(16)
\begin{align} \frac{\partial L}{\partial x} = \frac{d}{dt} \frac{\partial L}{\partial \dot x} = \dot p, \end{align}

so

(17)
\begin{align} \frac{\partial H}{\partial x} = - \dot p. \end{align}

Thus we have a system of two coupled first-order differential equations which can be solved to yield the trajectory of our object through phase space:

(18)
\begin{align} \frac{\partial H}{\partial p} &= \dot x \\ \frac{\partial H}{\partial x} &= - \dot p. \end{align}

These are Hamilton's equations.

Thought Experiment 14 Convince yourself of each step in (16).

Calculation 2 If you have not done so already, derive the Hamiltonian and Hamilton's equations for the simple harmonic oscillator (mass on a spring).
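
If you would like to check your answer numerically, the following sketch (with illustrative values $m = k = 1$ and initial condition $(x, p) = (1, 0)$, none of which come from the text) integrates Hamilton's equations for the oscillator and confirms that the Hamiltonian is conserved along the phase-space trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hamilton's equations for H = p^2/(2m) + (k/2) x^2:
#   xdot =  dH/dp =  p/m
#   pdot = -dH/dx = -k x
m, k = 1.0, 1.0

def hamilton(t, y):
    x, p = y
    return [p / m, -k * x]

sol = solve_ivp(hamilton, (0.0, 10.0), [1.0, 0.0], max_step=0.01)

x, p = sol.y
H = p**2 / (2 * m) + 0.5 * k * x**2
print(H.max() - H.min())   # tiny numerical drift only: energy is conserved
```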

By defining a new operation on pairs of observables, we can rephrase Hamilton's equations in a compact and elegant form which readily lends itself to the development of quantum theory. Consider two observables $f(x,p)$ and $g(x,p)$. Define their Poisson bracket as follows:

(19)
\begin{align} \left\{ f,g \right\} \equiv \frac{\partial f}{ \partial x} \frac{\partial g}{ \partial p} - \frac{\partial f}{ \partial p} \frac{\partial g}{ \partial x}. \end{align}

Thought Experiment 15 Prove the following Poisson bracket relations:

(20)
\begin{align} \left\{ x,p \right\} &= 1 \\ \left\{ x,H \right\} &= \dot x \\ \left\{ p,H \right\} &= \dot p \end{align}

For any observable $f(x,p)$,

(21)
\begin{align} \left\{f,H \right\} &= \dot f. \end{align}

Thus it is said that with respect to the Poisson bracket, the Hamiltonian generates time evolution of observables.
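
The bracket relations of Thought Experiment 15 can be confirmed symbolically; here is a minimal sketch (using the oscillator Hamiltonian as an illustrative choice):

```python
import sympy as sp

# Poisson bracket {f, g} = df/dx dg/dp - df/dp dg/dx.
x, p, m, k = sp.symbols('x p m k', positive=True)

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

H = p**2 / (2 * m) + k * x**2 / 2   # oscillator Hamiltonian
print(poisson(x, p))   # 1
print(poisson(x, H))   # p/m, which is xdot
print(poisson(p, H))   # -k*x, which is pdot (the spring force)
```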

Quantum Mechanics

Despite the beautiful structure of classical mechanics, it is in fact only an approximation of the physical world. More detailed experiments at more microscopic scales brought about two realizations:

1. Actually, we cannot theoretically predict an object's exact location, as classical mechanics would lead us to believe. The best we can do is to solve for a probability distribution allowing us to predict the likelihood of an object's occupying any given volume of space. If we prepare an experiment in an identical fashion hundreds and thousands of times and record a measurement for an object's position on each run of the experiment, the distribution of measurements will follow the theoretically predicted probabilities. Classical mechanics is virtually correct for objects of a macroscopic size, since on such scales the spread of the probability distribution is very small. However, for subatomic particles, the spread is appreciable, meaning that we can no longer ignore the indeterminacy in a particle's location.

2. The more precisely we attempt to measure an object's position, the less precisely we can measure its momentum, and vice versa. This is known as the uncertainty principle.

These realizations can be encoded in a description of the physical world which generalizes classical mechanics. As a consequence of the new framework, physical observables will turn out not to assume a continuum of values as in classical mechanics but rather a discrete set of possible values. For this reason, the new form of mechanics is known as quantum mechanics, after the Latin quantus or 'how much.'

While in classical mechanics we solved for the trajectory $x(t)$ of an object over time, in quantum mechanics we will instead solve for the evolution over time of the probability distribution for the object's location. More precisely, we will solve for a complex-valued function $\Psi(x)$ called the probability amplitude. The probability distribution for the object's location will be given by $\lvert \Psi(x) \rvert^2 = \Psi^*(x) \Psi(x)$, sometimes abbreviated as $\Psi^* \Psi (x)$. The probability amplitude $\Psi(x)$ is known as the quantum state of our object. As we will see, the quantum state will also allow us to obtain expectation values for physical observables in a systematic fashion. Consistent with our new probabilistic description of nature under quantum mechanics, we cannot predict exact values for physical observables; instead, expectation values of observables become the physically relevant quantities.

Mathematically, then, we will be solving for a trajectory $\Psi(x,t)$ through state space, the space of functions $\Psi(x)$ such that

(22)
\begin{align} \int_{-\infty}^{\infty} \lvert \Psi(x) \rvert^2 \ dx \equiv \int_{-\infty}^{\infty} \Psi^* \Psi (x) \ dx = 1 \end{align}

(called normalized square-integrable functions). The reason for requiring the integral to be 1 is that otherwise, $\lvert \Psi \rvert^2$ would not have a sensible interpretation as a probability distribution for the location $x$ of our object. Notice that as in the discussion of classical mechanics, we are assuming that our object moves only in one dimension, which is the case for a mass on a spring. If the system had two degrees of freedom (as in the case of the cannonball shot from the cannon), $\Psi$ would be a function of two variables $x$ and $y$ and the normalization integral would instead be a double integral

(23)
\begin{align} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \lvert \Psi(x,y) \rvert^2 \ dx dy \equiv \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \Psi^* \Psi (x,y) \ dx dy = 1 \end{align}

The key in solving for the quantum state $\Psi(x,t)$ as a function of time is to find the quantum analogs of classical observables and time evolution. Quantum observables are described in terms of operators on the state $\Psi(x)$: an operator $\hat A$ takes the complex-valued function $\Psi(x)$, acts on it, and returns a new function $\hat A \Psi (x)$. The operator with the most obvious definition is the position operator, the quantum version of the classical position observable $x$. Denoted $\hat x$, the position operator acts as follows:

(24)
\begin{align} \hat x : \Psi(x) \to x \cdot \Psi(x). \end{align}

In words, the position operator takes the function $\Psi(x)$ and multiplies it by $x$. For example, if $\Psi(x) = \pi^{-1/4} e^{-x^2/2}$ (a normalized Gaussian), then $\hat x \Psi(x) = \pi^{-1/4} x \, e^{-x^2/2}$.

Thought Experiment 16 Notice that the expected value of $x$ is

(25)
\begin{align} \langle \hat x \rangle \equiv \int_{-\infty}^{\infty} \Psi^* (x) \, \hat x \Psi(x) \ dx \end{align}

Using the mathematical machinery discussed here, there is a sleeker and more compact way to write the expectation value for position $\langle \hat x \rangle$. Functions can be considered like vectors (you can do everything to functions that you can to vectors: add and subtract them, multiply them by scalars), and indeed we can define a dot product on complex-valued functions as follows:

(26)
\begin{align} \langle f,g \rangle \equiv \int_{-\infty}^{\infty} f^*(x) g(x) \ dx \end{align}

(for further detail on this definition, work through the problems here). Thus,

(27)
\begin{align} \langle \hat x \rangle = \langle \Psi , \hat x \Psi \rangle. \end{align}

Thought Experiment 17 Convince yourself of this assertion.
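
To make the assertion concrete, here is a small numerical sketch (the grid and the normalized Gaussian state are illustrative choices) computing $\langle \hat x \rangle = \langle \Psi, \hat x \Psi \rangle$ directly from the definition of the dot product:

```python
import numpy as np

# Discretize the dot product <f, g> = integral of f*(x) g(x) dx on a grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)   # normalized Gaussian state

def inner(f, g):
    return np.sum(np.conj(f) * g) * dx

print(inner(psi, psi))       # ~1.0 : the state is normalized
print(inner(psi, x * psi))   # ~0.0 : <x> vanishes by symmetry
```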

Having established an intuitively reasonable definition of the position operator, we must now define the momentum operator. To motivate the definition, assume for simplicity that $\Psi(x)$ happens to be real-valued (i.e. has no imaginary part). Then it makes sense to assume that the particle will move toward regions where the probability amplitude $\Psi(x)$ is higher (i.e. toward regions the particle is more likely to occupy). Since $\frac{d \Psi}{dx}$ is positive when $\Psi(x)$ is increasing and negative when decreasing, taking the momentum proportional to $\frac{d \Psi}{dx}$ would indeed give a momentum in the direction of increasing probability amplitude.

We will indeed define momentum proportional to $\frac{d \Psi}{dx}$, but since the probability amplitude is not always real-valued but may be complex, it turns out that the exact proportionality should be

(28)
\begin{align} \hat p = -i \hbar \frac{d}{dx}, \end{align}

so that

(29)
\begin{align} \hat p \Psi(x) = -i \hbar \frac{d \Psi}{dx}. \end{align}

Although at first sight it appears paradoxical, the reason for the $-i$ in the proportionality is to ensure that the expectation values of all physical observables will be real rather than complex numbers (reasonable, since laboratory measurements are always real-valued). We will discuss the details further later, but in essence, the $i$ occurs because $\Psi(x)$ is allowed to be complex and the dot product $\langle f, g \rangle \equiv \int_{-\infty}^{\infty} f^*(x) g(x) \ dx$ includes complex conjugation. The reason for the $\hbar$ (Planck's constant divided by $2 \pi$, which has a value of approximately $10^{-34} \mathrm{m^2 \ kg/s}$) is to ensure that quantum indeterminacy will only become appreciable at subatomic scales.
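
As a sanity check on this definition, the following sketch (reusing the illustrative normalized Gaussian from above) applies $\hat p$ and confirms that the expectation value comes out real:

```python
import sympy as sp

# Apply p_hat = -i hbar d/dx to the Gaussian and compute <p> = <Psi, p_hat Psi>.
x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2)

p_psi = -sp.I * hbar * sp.diff(psi, x)    # equals i*hbar*x*psi(x) here
expect_p = sp.integrate(sp.conjugate(psi) * p_psi, (x, -sp.oo, sp.oo))
print(expect_p)   # 0: a real number, as promised
```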

Now that we have defined the quantum analogs of position and momentum, we can define the quantum Hamiltonian to be the operator $\hat H$ on quantum states in which all appearances of position and momentum within the classical Hamiltonian are promoted to operators. For the usual form of the classical Hamiltonian $H = \frac{p^2}{2m} + U(x),$

(30)
\begin{align} \hat H \Psi(x) = - \frac{\hbar^2}{2m} \frac{d^2 \Psi}{dx^2} + U(x) \Psi(x). \end{align}

Thought Experiment 18 Convince yourself of this assertion.

Calculation 3 Write down the quantum Hamiltonian operator for the simple harmonic oscillator.

Since the classical Hamiltonian generates time evolution, one can see the motivation behind the Schroedinger equation

(31)
\begin{align} \hat H \Psi (x,t) = i \hbar \frac{\partial}{\partial t} \Psi(x,t), \end{align}

which says that we can see how the quantum state of a system evolves in time by applying the quantum Hamiltonian operator to that state. We take the point of view that states evolve over time but observables do not (this is known as the Schroedinger picture). Expectation values of observables will of course evolve over time, since an expectation value is always computed for a particular state.

Calculation 4 Write down the Schroedinger equation for the simple harmonic oscillator.
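
In parallel with the trial solution of Calculation 1, here is a hedged symbolic sketch (choosing units with $m = k = \hbar = 1$ for simplicity, an assumption not made in the text) verifying that a Gaussian with an oscillating phase solves this Schroedinger equation:

```python
import sympy as sp

# Check that Psi(x,t) = pi^(-1/4) exp(-x^2/2) exp(-i t/2) satisfies
# H Psi = i dPsi/dt for H = -(1/2) d^2/dx^2 + x^2/2 (units m = k = hbar = 1).
x, t = sp.symbols('x t', real=True)
Psi = sp.pi**sp.Rational(-1, 4) * sp.exp(-x**2 / 2) * sp.exp(-sp.I * t / 2)

H_Psi = -sp.Rational(1, 2) * sp.diff(Psi, x, 2) + x**2 / 2 * Psi
rhs = sp.I * sp.diff(Psi, t)
print(sp.simplify(H_Psi - rhs))   # 0
```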

Notice that the entire structure of classical mechanics is embedded in the Poisson brackets of Thought Experiment 15, and thus we wish to generalize these relations. However, since quantum mechanical observables are operators, we need to know what will take the place of the classical Poisson bracket, which was defined on functions. The key in answering this question is to notice that when two operators are applied to a function, the order matters.

Calculation 5 Taking $\Psi(x) = \pi^{-1/4} e^{-x^2/2}$, compute $\hat p \Psi(x)$. Now compute $\hat x \hat p \Psi(x)$ and $\hat p \hat x \Psi(x)$, and notice that they are not equal.

To exploit the importance of operator ordering, we define the commutator of two operators $\hat A$ and $\hat B$ as $\left[ \hat A , \hat B \right] \equiv \hat A \hat B - \hat B \hat A$.

Thought Experiment 19 Prove that $[ \hat x, \hat p ] = i \hbar$. [Hint: this is the same as proving that for any quantum state $\Psi(x)$, $(\hat x \hat p - \hat p \hat x) \Psi(x) = i \hbar \Psi(x)$.]
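
A symbolic sketch of this proof, applied to an arbitrary state $\Psi(x)$ (nothing about $\Psi$ is assumed beyond differentiability):

```python
import sympy as sp

# Verify (x_hat p_hat - p_hat x_hat) Psi = i hbar Psi for arbitrary Psi(x).
x, hbar = sp.symbols('x hbar')
Psi = sp.Function('Psi')

def x_hat(f):
    return x * f

def p_hat(f):
    return -sp.I * hbar * sp.diff(f, x)

commutator = x_hat(p_hat(Psi(x))) - p_hat(x_hat(Psi(x)))
print(sp.simplify(commutator))   # I*hbar*Psi(x)
```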

From the fact that $[ \hat x , \hat p ] = i \hbar$, we will be able to derive a realization of the uncertainty principle.

The commutator will serve as the quantum analog of the Poisson bracket. As a rule of thumb, the translation from classical mechanics to quantum mechanics runs as follows:

(32)
\begin{align} [ \hat f , \hat g ] = i \hbar \left\{ f , g \right\}. \end{align}

However, since we take the point of view that observables do not evolve in time while states (and thus expectation values of observables) do, we need a subtler quantum analog for the Poisson bracket $\left\{ f, H \right\} = \dot f$.

Thought Experiment 20 Prove that for an observable $\hat A$, the time derivative of the expectation value is given by

(33)
\begin{align} \frac{d}{dt} \langle \hat A \rangle = \frac{d}{dt} \langle \Psi , \hat A \Psi \rangle = \frac{i}{\hbar} \langle [\hat H , \hat A] \rangle \end{align}

by using the Schroedinger equation. Note that along the way, you will need the following two facts, which you should also prove. In both of these facts, assume the quantum Hamiltonian is of the form $\hat H = - \frac{\hbar^2}{2m} \frac{d^2}{dx^2} + U(x)$.

  • The complex conjugate $\Psi^*$ of a wavefunction $\Psi$ satisfies the complex conjugate Schroedinger equation
(34)
\begin{align} \hat H \Psi^* = -i \hbar \frac{\partial}{\partial t} \Psi^* \end{align}
  • With respect to the inner product $\langle \Psi, \Phi \rangle \equiv \int_{-\infty}^{\infty} \Psi^* (x) \Phi (x) dx$, the Hamiltonian is what's known as a self-adjoint operator, which means that it satisfies
(35)
\begin{align} \langle \hat H \Psi , \Phi \rangle = \langle \Psi , \hat H \Phi \rangle \end{align}