Some Basic Math for Quantum Mechanics

Fri Jul 21 02:16:00 2017, 0x00019913

I’ve been reading a book on quantum computing lately (Nielsen and Chuang) and I thought I’d write a hyper-condensed review of the super-fundamental bits required for pure QM, but restricted to quantum computing. I’m not sure how human-readable this is in general, but the math will be easy for anyone who’s taken an undergrad course. Of course, the disclaimer is that I mostly come from an undergrad education and lots of reading in my free time, so don’t take this as gospel.

NB: because this is in the context of quantum computing, I will make no mention of infinite-dimensional Hilbert spaces. If I were to include those, I’d take my cues from the first chapter of Shankar’s QM book instead, which is a rather weighty read. I also won’t talk about metrics because that’s general relativity land, and there be dragons.

Lastly, this won’t be nearly the full material; it’s more of a reminder for myself of the results and some background stuff for convenience.

Notation

$A^T$ is the transpose of a column vector or matrix $A$.

$A^*$ is the complex conjugate of the elements of a vector or matrix $A$.

$A^\dagger = (A^*)^T$ (AKA “dagger”) is the conjugate transpose of a vector or matrix $A$. Lots of literature uses the asterisk to represent this, which cost me some homework points back when I was learning this and didn’t know any better. The asterisk will now and forever mean complex conjugate.
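
For concreteness, here’s what these three operations look like in numpy (a minimal sketch; the matrix is made up for illustration):

```python
import numpy as np

# A made-up 2x2 complex matrix.
A = np.array([[1 + 2j, 3j],
              [4, 5 - 1j]])

A_T = A.T                # transpose
A_conj = A.conj()        # elementwise complex conjugate
A_dagger = A.conj().T    # conjugate transpose, i.e., the dagger
```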

Vector Spaces

We denote a vector as $|v\rangle$, or $v$ for short if it’s obvious what we’re talking about. Say $V$ is a set of vectors. Then $V$ is a vector space if for any $u, v, w \in V$ and $a, b \in \mathbb{C}$:

  1. Addition is associative: $(u + v) + w = u + (v + w)$.
  2. Addition is commutative: $u + v = v + u$.
  3. There exists $0 \in V$ s.t. $v + 0 = v$. (Note that the $0$ here is shorthand for the zero vector, not a scalar $0$.)
  4. Additive inverse: there exists $-v \in V$ s.t. $v + (-v) = 0$.
  5. Closed under linear combinations: $au + bv \in V$.
  6. …and a couple of others.

We normally think of a vector in an $n$-dimensional vector space as an $n$-tuple of numbers, i.e., the column vector $(v_1, v_2, \dots, v_n)^T$, where $|v\rangle = \sum_i v_i |e_i\rangle$ for a particular orthonormal basis $\{|e_i\rangle\}$. Change basis and the tuple representing the vector in that basis will be different.

A vector can also be a function or even a matrix, as long as it satisfies the standard properties, but, for our purposes, we’ll use the tuple representation.
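
A quick numpy sketch of that basis dependence (the vector and the second basis are made up for illustration, and I’m borrowing the inner product defined below to extract coefficients):

```python
import numpy as np

# |v> as a tuple of coefficients in the standard basis {|0>, |1>}.
v = np.array([1.0, 0.0])

# Another orthonormal basis: |+> = (|0>+|1>)/sqrt(2), |-> = (|0>-|1>)/sqrt(2).
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

# Coefficients of the same vector in the new basis.
v_new = np.array([plus.conj() @ v, minus.conj() @ v])
print(v_new)  # [0.70710678 0.70710678]: a different tuple, same vector
```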

Bases

A set of vectors $\{|v_1\rangle, \dots, |v_n\rangle\}$ is linearly independent if no vector in this set can be written as a linear combination of the others, i.e., for any coefficients $a_i$ that aren’t all zero,

$$\sum_i a_i |v_i\rangle \neq 0.$$

If a set of linearly independent vectors spans the given vector space $V$ (i.e., every vector in $V$ can be written as a linear combination of these vectors), then it’s a basis. The number of vectors in the basis is the dimension of the space.
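
As a sanity check in numpy (a sketch with made-up vectors): stack the candidate basis vectors as columns, and linear independence is equivalent to the matrix having full column rank:

```python
import numpy as np

# Candidate basis vectors as the columns of a matrix.
vectors = np.column_stack([
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 1],
])

# Linearly independent iff the rank equals the number of columns.
independent = np.linalg.matrix_rank(vectors) == vectors.shape[1]
print(independent)  # True: these three span the space, so they form a basis
```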

Linear Operators

An operator is a map $A : V \to W$, i.e., it takes a vector in $V$ and maps it to a vector in $W$. All our operators will be linear:

$$A\left( \sum_i a_i |v_i\rangle \right) = \sum_i a_i A |v_i\rangle.$$

So operators distribute into and out of sums and everything is peachy. The identity map $I$ is customarily written without reference to any particular basis or dimensionality and is understood to take any vector into itself: $I|v\rangle = |v\rangle$.

Importantly, if we have a basis $\{|v_j\rangle\}$ of $V$ and a basis $\{|w_i\rangle\}$ of $W$, we recognize that $A$ operating on some basis vector produces some linear combination of $W$’s basis vectors, $A|v_j\rangle = \sum_i A_{ij} |w_i\rangle$, where the coefficients $A_{ij}$ are $A$’s matrix elements. This indicates that

  1. the operator has a representation as a matrix, and
  2. this representation is basis-dependent.
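
A sketch of point 2 in numpy, using the Pauli $X$ operator as a concrete example: the matrix elements $\langle e_i | A | e_j \rangle$ of the same operator, computed in a different orthonormal basis (this formula is derived in the outer-products section below), give a different matrix.

```python
import numpy as np

# An operator on C^2, written in the standard basis (Pauli X, as an example).
A = np.array([[0, 1],
              [1, 0]])

# A different orthonormal basis, {|+>, |->}.
plus  = np.array([1.0,  1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
basis = [plus, minus]

# Matrix elements A_ij = <e_i| A |e_j> in the new basis.
A_new = np.array([[ei.conj() @ A @ ej for ej in basis] for ei in basis])
print(np.round(A_new, 10))  # diag(1, -1): same operator, different matrix
```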

Inner Products and Dual Vectors

The inner product of two vectors $|u\rangle$ and $|v\rangle$ is written as $\langle u | v \rangle$. This is Dirac’s bra-ket notation and, mysteriously, is never used outside physics. Specifically, the $|v\rangle$ is a “ket” and the $\langle u |$ is a “bra”.

The inner product of two vectors $|u\rangle$ and $|v\rangle$ is defined (assuming we represent our vectors as $n$-tuples) as

$$\langle u | v \rangle = \sum_i u_i^* v_i,$$

where the two vectors are represented in the same orthonormal basis and $u_i$ and $v_i$ are their coefficients.

The magnitude of a vector $|v\rangle$ is written as

$$\| v \| = \sqrt{\langle v | v \rangle}.$$

This thing is obviously real and non-negative if we believe our definition of the inner product.

($\langle v |$, strictly speaking, is a dual vector to $|v\rangle$. It’s a map that takes a vector and turns it into a number. If you have an orthonormal basis of vectors $\{|e_i\rangle\}$ for vector space $V$, we can define a dual basis to it as the set $\{\langle e_i |\}$ such that $\langle e_i | e_j \rangle = \delta_{ij}$. Again, not delving into metrics here. That discussion can be found in innumerable sources, e.g., Wald’s general relativity book.)
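
In numpy terms (a sketch with made-up vectors), the bra is the conjugate transpose acting as a row vector, and np.vdot conjugates its first argument to match the definition above:

```python
import numpy as np

u = np.array([1 + 1j, 2])
v = np.array([3, 1j])

# <u|v> = sum_i conj(u_i) v_i; np.vdot conjugates its first argument.
inner = np.vdot(u, v)

# The bra <u| as a row vector; applying it to a ket gives the same number.
bra_u = u.conj()
print(inner, bra_u @ v)             # (3-1j) (3-1j)

# The magnitude sqrt(<v|v>) is real and non-negative.
print(np.sqrt(np.vdot(v, v).real))  # same as np.linalg.norm(v)
```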

Gram-Schmidt Orthonormalization

Leaving this here for future reference: given a potentially non-orthonormal basis $\{|v_1\rangle, \dots, |v_n\rangle\}$, we can turn it into an orthonormal one $\{|e_1\rangle, \dots, |e_n\rangle\}$. First, take $|e_1\rangle = |v_1\rangle / \| v_1 \|$. Then, for $1 \le k \le n-1$, take the original $|v_{k+1}\rangle$, subtract its projection onto all the $|e_i\rangle$s we’ve calculated, then normalize. This iterative procedure gives us $|e_{k+1}\rangle$.

So for $|e_{k+1}\rangle$:

$$|e_{k+1}\rangle = \frac{|v_{k+1}\rangle - \sum_{i=1}^{k} \langle e_i | v_{k+1} \rangle |e_i\rangle}{\left\| |v_{k+1}\rangle - \sum_{i=1}^{k} \langle e_i | v_{k+1} \rangle |e_i\rangle \right\|}$$

where the denominator is just the magnitude of the numerator, for the sake of clean-ish notation.
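
Here’s a minimal numpy sketch of the procedure, following the formula above (the input basis is made up for illustration):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        # Subtract v's projection onto every |e_i> built so far...
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, v) * e
        # ...then normalize.
        basis.append(w / np.linalg.norm(w))
    return basis

e1, e2 = gram_schmidt([np.array([1, 1]), np.array([1, 0])])
print(np.round(np.vdot(e1, e2), 10))  # 0j: orthogonal
print(np.round(np.vdot(e1, e1), 10))  # (1+0j): normalized
```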

Outer Products, Projection Operators, Completeness Relation

An outer product of two vectors $|w\rangle \in W$ and $|v\rangle \in V$ is written as $|w\rangle\langle v|$. This is a linear operator; it’s also a map $V \to W$, acting as $(|w\rangle\langle v|)\,|u\rangle = \langle v | u \rangle \, |w\rangle$.

Suppose we have vector $|v\rangle = \sum_i v_i |e_i\rangle$. Let’s operate on it with the quantity $\sum_i |e_i\rangle\langle e_i|$:

$$\left( \sum_i |e_i\rangle\langle e_i| \right) |v\rangle = \sum_i |e_i\rangle \langle e_i | v \rangle = \sum_i v_i |e_i\rangle = |v\rangle.$$

It must be, then, that $\sum_i |e_i\rangle\langle e_i| = I$. This is the completeness relation, and we can dump the left-hand side in the middle of any string of vector-operator operations without changing the result.

For instance, let’s take operator $A : V \to W$ and find its representation with respect to a particular basis $\{|v_j\rangle\}$ of $V$ and $\{|w_i\rangle\}$ of $W$:

$$A = I_W \, A \, I_V = \sum_{ij} |w_i\rangle\langle w_i| \, A \, |v_j\rangle\langle v_j| = \sum_{ij} \langle w_i | A | v_j \rangle \, |w_i\rangle\langle v_j|.$$

Here $\langle w_i | A | v_j \rangle$ is the $A_{ij}$ matrix element of $A$. This is a scalar.

A single term $P_i = |e_i\rangle\langle e_i|$ is a projection operator onto the $i$th basis element; $P = \sum_{i \in S} |e_i\rangle\langle e_i|$ for some set of indices $S$ is a projection onto that subspace. One can verify that $P^2 = P$.
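
Both the completeness relation and $P^2 = P$ are easy to check in numpy (a sketch using the standard basis of $\mathbb{C}^2$; note that $|w\rangle\langle v|$ corresponds to np.outer of $w$ with the conjugate of $v$):

```python
import numpy as np

# An orthonormal basis of C^2.
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Completeness: sum_i |e_i><e_i| equals the identity.
I = sum(np.outer(ei, ei.conj()) for ei in e)
print(np.allclose(I, np.eye(2)))  # True

# A projector onto the first basis element satisfies P^2 = P.
P = np.outer(e[0], e[0].conj())
print(np.allclose(P @ P, P))      # True
```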

(More later.)