Appendix A

Review of Vectors and Matrices

A.1. VECTORS

A.1.1 Definition

For the purposes of this text, a vector is an object which has magnitude and direction.  Examples include forces, electric fields, and the normal to a surface.  A vector is often represented pictorially as an arrow and symbolically by an underlined letter $\underline{a}$ or using bold type $\mathbf{a}$.  Its magnitude is denoted $|\underline{a}|$ or $|a|$.  There are two special cases of vectors: the unit vector $n$ has $|n|=1$; and the null vector $0$ has $|0|=0$.

A.1.2 Vector Operations

Let $a$ and $b$ be vectors.  Then $c=a+b$ is also a vector.  The vector $c$ may be shown diagrammatically by placing arrows representing $a$ and $b$ head to tail, as shown in the figure.

Multiplication

1.      Multiplication by a scalar. Let $a$ be a vector, and $\alpha$ a scalar.  Then $b=\alpha a$ is a vector.  The direction of $b$ is parallel to $a$ (opposite to $a$ if $\alpha <0$) and its magnitude is given by $|b|=|\alpha |\,|a|$.

Note that you can form a unit vector n which is parallel to a by setting $n=\frac{a}{|a|}$.

2.      Dot Product (also called the scalar product). Let a and b be two vectors.  The dot product of a and b is a scalar denoted by $\alpha =a\cdot b$, and is defined by

$a\cdot b=|a||b|\mathrm{cos}\theta \left(a,b\right)$,

where $\theta \left(a,b\right)$ is the angle subtended by a and b. Note that $a\cdot b=b\cdot a$, and $a\cdot a={|a|}^{2}$.  If $|a|\ne 0$ and $|b|\ne 0$ then $a\cdot b=0$ if and only if $\mathrm{cos}\theta \left(a,b\right)=0$; i.e. a and b are perpendicular.

3.      Cross Product (also called the vector product).  Let a and b be two vectors.  The cross product of a and b is a vector denoted by $c=a\times b$.  The direction of c is perpendicular to a and b, and is chosen so that (a,b,c) form a right-handed triad, Fig. 3.  The magnitude of c is given by

$|c|=|a×b|=|a||b|\mathrm{sin}\theta \left(a,b\right)$

Note that $a×b=-b×a$ and $a\cdot \left(a×b\right)=b\cdot \left(a×b\right)=0$.

Some useful vector identities

$\begin{array}{l}a\cdot \left(b×c\right)=b\cdot \left(c×a\right)=c\cdot \left(a×b\right)\\ a×\left(b×c\right)=\left(a\cdot c\right)b-\left(a\cdot b\right)c\\ \left(a×b\right)\cdot \left(c×d\right)=\left(a\cdot c\right)\left(b\cdot d\right)-\left(b\cdot c\right)\left(a\cdot d\right)\end{array}$
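These identities are easy to spot-check numerically.  The following sketch (assuming NumPy is available; the test vectors are arbitrary, not from the text) verifies all three:

```python
# Numerical spot-check of the three vector identities above.
import numpy as np

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.0, 3.0])
c = np.array([2.0, 0.0, 1.0])
d = np.array([-1.0, 1.0, 0.5])

# Scalar triple product: a.(b x c) = b.(c x a) = c.(a x b)
t1 = np.dot(a, np.cross(b, c))
t2 = np.dot(b, np.cross(c, a))
t3 = np.dot(c, np.cross(a, b))
assert np.isclose(t1, t2) and np.isclose(t2, t3)

# "BAC-CAB" rule: a x (b x c) = (a.c) b - (a.b) c
lhs = np.cross(a, np.cross(b, c))
rhs = np.dot(a, c) * b - np.dot(a, b) * c
assert np.allclose(lhs, rhs)

# Lagrange identity: (a x b).(c x d) = (a.c)(b.d) - (b.c)(a.d)
lhs2 = np.dot(np.cross(a, b), np.cross(c, d))
rhs2 = np.dot(a, c) * np.dot(b, d) - np.dot(b, c) * np.dot(a, d)
assert np.isclose(lhs2, rhs2)
```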

A.1.3 Cartesian components of vectors

Let $\left({e}_{1},{e}_{2},{e}_{3}\right)$ be three mutually perpendicular unit vectors which form a right-handed triad, Fig. 4.  Then $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ are said to form an orthonormal basis. The vectors satisfy

$|{e}_{1}|=|{e}_{2}|=|{e}_{3}|=1,\qquad {e}_{1}\times {e}_{2}={e}_{3},\qquad {e}_{1}\times {e}_{3}=-{e}_{2},\qquad {e}_{2}\times {e}_{3}={e}_{1}$

We may express any vector a as a suitable combination of the unit vectors ${e}_{1}$, ${e}_{2}$ and ${e}_{3}$.  For example, we may write

$a={a}_{1}{e}_{1}+{a}_{2}{e}_{2}+{a}_{3}{e}_{3}=\sum _{i=1}^{3}{a}_{i}{e}_{i}$

where $\left({a}_{1},{a}_{2},{a}_{3}\right)$ are scalars, called the components of a in the basis $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$.   The components of a have a simple physical interpretation.  For example, if we evaluate the dot product $a\cdot {e}_{1}$ we find that

$a\cdot {e}_{1}=\left({a}_{1}{e}_{1}+{a}_{2}{e}_{2}+{a}_{3}{e}_{3}\right)\cdot {e}_{1}={a}_{1}$

in view of the properties of the three vectors ${e}_{1}$, ${e}_{2}$ and ${e}_{3}$.  Recall that

$a\cdot {e}_{1}=|a||{e}_{1}|\mathrm{cos}\theta \left(a,{e}_{1}\right)$

Then, noting that $|{e}_{1}|=1$, we have

${a}_{1}=a\cdot {e}_{1}=|a|\mathrm{cos}\theta \left(a,{e}_{1}\right)$

Thus, ${a}_{1}$ represents the projected length of the vector a  in the direction of ${e}_{1}$, as illustrated in the figure.  Similarly, ${a}_{2}$ and ${a}_{3}$ may be shown to represent the projection of $a$ in the directions ${e}_{2}$ and ${e}_{3}$, respectively.

The advantage of representing vectors in a Cartesian basis is that vector addition and multiplication can be expressed as simple operations on the components of the vectors.  For example, let a, b and c be vectors, with components $\left({a}_{1},{a}_{2},{a}_{3}\right)$, $\left({b}_{1},{b}_{2},{b}_{3}\right)$ and $\left({c}_{1},{c}_{2},{c}_{3}\right)$, respectively.  Then, it is straightforward to show that

$\begin{array}{c}c=a+b\quad ⇔\quad {c}_{1}={a}_{1}+{b}_{1},\quad {c}_{2}={a}_{2}+{b}_{2},\quad {c}_{3}={a}_{3}+{b}_{3}\\ a\cdot b=\sum _{i=1}^{3}{a}_{i}{b}_{i}\\ c=a\times b\quad ⇔\quad {c}_{1}={a}_{2}{b}_{3}-{a}_{3}{b}_{2},\quad {c}_{2}={a}_{3}{b}_{1}-{a}_{1}{b}_{3},\quad {c}_{3}={a}_{1}{b}_{2}-{a}_{2}{b}_{1}\end{array}$
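These component formulas can be checked against a numerical library.  A minimal sketch, assuming NumPy is available and using arbitrary test vectors:

```python
# The component formulas for addition, dot and cross products,
# written out explicitly and compared with NumPy's built-ins.
import numpy as np

a = np.array([1.0, -2.0, 4.0])
b = np.array([3.0, 0.5, -1.0])

# Addition, componentwise
c = a + b

# Dot product: sum_i a_i b_i
dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
assert np.isclose(dot, np.dot(a, b))

# Cross product components, exactly as in the formula above
cx = np.array([a[1]*b[2] - a[2]*b[1],
               a[2]*b[0] - a[0]*b[2],
               a[0]*b[1] - a[1]*b[0]])
assert np.allclose(cx, np.cross(a, b))
```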

A.1.4 Change of basis

Let a be a vector, and let $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis.  Suppose that the components of a in the basis $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ are known to be $\left({a}_{1},{a}_{2},{a}_{3}\right)$.  Now, suppose that we wish to compute the components of a in a second Cartesian basis, $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$.  This means we wish to find components $\left({\alpha }_{1},{\alpha }_{2},{\alpha }_{3}\right)$, such that

$a={\alpha }_{1}{m}_{1}+{\alpha }_{2}{m}_{2}+{\alpha }_{3}{m}_{3}$

To do so, note that

$\begin{array}{l}{\alpha }_{1}=a\cdot {m}_{1}={a}_{1}{e}_{1}\cdot {m}_{1}+{a}_{2}{e}_{2}\cdot {m}_{1}+{a}_{3}{e}_{3}\cdot {m}_{1}\\ {\alpha }_{2}=a\cdot {m}_{2}={a}_{1}{e}_{1}\cdot {m}_{2}+{a}_{2}{e}_{2}\cdot {m}_{2}+{a}_{3}{e}_{3}\cdot {m}_{2}\\ {\alpha }_{3}=a\cdot {m}_{3}={a}_{1}{e}_{1}\cdot {m}_{3}+{a}_{2}{e}_{2}\cdot {m}_{3}+{a}_{3}{e}_{3}\cdot {m}_{3}\end{array}$

This transformation is conveniently written as a matrix operation

$\left[\alpha \right]=\left[Q\right]\left[a\right]$,

where $\left[\alpha \right]$ is a matrix consisting of the components of a in the basis $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$, $\left[a\right]$ is a matrix consisting of the components of a in the basis $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$, and $\left[Q\right]$ is a 'rotation matrix', as follows

$\left[\alpha \right]=\left[\begin{array}{c}{\alpha }_{1}\\ {\alpha }_{2}\\ {\alpha }_{3}\end{array}\right]\qquad \left[a\right]=\left[\begin{array}{c}{a}_{1}\\ {a}_{2}\\ {a}_{3}\end{array}\right]\qquad \left[Q\right]=\left[\begin{array}{ccc}{m}_{1}\cdot {e}_{1}&{m}_{1}\cdot {e}_{2}&{m}_{1}\cdot {e}_{3}\\ {m}_{2}\cdot {e}_{1}&{m}_{2}\cdot {e}_{2}&{m}_{2}\cdot {e}_{3}\\ {m}_{3}\cdot {e}_{1}&{m}_{3}\cdot {e}_{2}&{m}_{3}\cdot {e}_{3}\end{array}\right]$

Note that the elements of $\left[Q\right]$ have a simple physical interpretation.  For example, ${m}_{1}\cdot {e}_{1}=\mathrm{cos}\theta \left({m}_{1},{e}_{1}\right)$, where $\theta \left({m}_{1},{e}_{1}\right)$ is the angle between the ${m}_{1}$ and ${e}_{1}$ axes.  Similarly ${m}_{1}\cdot {e}_{2}=\mathrm{cos}\theta \left({m}_{1},{e}_{2}\right)$ where $\theta \left({m}_{1},{e}_{2}\right)$ is the angle between the ${m}_{1}$ and ${e}_{2}$ axes.  In practice, we usually know the angles between the axes that make up the two bases, so it is simplest to assemble the elements of $\left[Q\right]$ by putting the cosines of the known angles in the appropriate places.

Index notation provides another convenient way to write this transformation:

${\alpha }_{i}={Q}_{ij}{a}_{j},\qquad {Q}_{ij}={m}_{i}\cdot {e}_{j}$

You don’t need to know index notation in detail to understand this: all you need to know is that

${Q}_{ij}{a}_{j}\text{\hspace{0.17em}}\equiv \sum _{j=1}^{3}{Q}_{ij}{a}_{j}$
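The implicit sum over the repeated index maps directly onto `np.einsum`.  A small sketch (NumPy assumed; the rotation about the $e_3$ axis is an arbitrary illustrative choice):

```python
# The index-notation expression Q_ij a_j is an implicit sum over j;
# np.einsum makes the summation convention explicit.
import numpy as np

th = 0.3  # arbitrary rotation angle about the e3 axis
Q = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,        1.0]])
a = np.array([1.0, 2.0, 3.0])

alpha = np.einsum('ij,j->i', Q, a)   # alpha_i = Q_ij a_j
assert np.allclose(alpha, Q @ a)     # same as the matrix product
```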

The same approach may be used to find an expression for ${a}_{i}$ in terms of ${\alpha }_{i}$.  If you work through the details, you will find that

$\left[\begin{array}{l}{a}_{1}\\ {a}_{2}\\ {a}_{3}\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}=\left[\begin{array}{ccc}{m}_{1}\cdot {e}_{1}& {m}_{2}\cdot {e}_{1}& {m}_{3}\cdot {e}_{1}\\ {m}_{1}\cdot {e}_{2}& {m}_{2}\cdot {e}_{2}& {m}_{3}\cdot {e}_{2}\\ {m}_{1}\cdot {e}_{3}& {m}_{2}\cdot {e}_{3}& {m}_{3}\cdot {e}_{3}\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\left[\begin{array}{l}{\alpha }_{1}\\ {\alpha }_{2}\\ {\alpha }_{3}\end{array}\right]$

Comparing this result with the formula for ${\alpha }_{i}$ in terms of ${a}_{i}$, we see that

$\left[a\right]={\left[Q\right]}^{T}\left[\alpha \right]$

where the superscript T denotes the transpose (rows and columns interchanged). The transformation matrix $\left[Q\right]$ is therefore orthogonal, and satisfies

${\left[Q\right]}^{-1}={\left[Q\right]}^{T},\qquad \left[Q\right]{\left[Q\right]}^{T}={\left[Q\right]}^{T}\left[Q\right]=\left[I\right]$

where [I] is the identity matrix.
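The round trip $\left[\alpha \right]=\left[Q\right]\left[a\right]$, $\left[a\right]={\left[Q\right]}^{T}\left[\alpha \right]$ and the orthogonality of $\left[Q\right]$ can be verified numerically.  A sketch assuming NumPy, with $\left\{{m}_{i}\right\}$ an arbitrary rotation of $\left\{{e}_{i}\right\}$:

```python
# Change-of-basis round trip: build Q from two bases, check that it
# is orthogonal, and map components back and forth.
import numpy as np

e = np.eye(3)                       # rows are e1, e2, e3 (standard basis)
th = 0.7                            # arbitrary rotation angle
m = np.array([[ np.cos(th), np.sin(th), 0.0],
              [-np.sin(th), np.cos(th), 0.0],
              [ 0.0,        0.0,       1.0]])  # rows are m1, m2, m3

Q = m @ e.T                         # Q_ij = m_i . e_j

# Orthogonality: Q Q^T = Q^T Q = I
assert np.allclose(Q @ Q.T, np.eye(3))
assert np.allclose(Q.T @ Q, np.eye(3))

# Components transform forward and back
a = np.array([1.0, -1.0, 2.0])      # components in {e_i}
alpha = Q @ a                       # components in {m_i}
assert np.allclose(Q.T @ alpha, a)  # [a] = [Q]^T [alpha]
```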

A.1.5 Useful vector operations

Calculating areas

The area of a triangle bounded by vectors a, b, and $b-a$ is

$A=\text{\hspace{0.17em}}\frac{1}{2}|a×b|$

The area of the parallelogram shown in the picture is 2A.

Calculating angles

The angle between two vectors a and b is

$\theta =\text{\hspace{0.17em}}{\mathrm{cos}}^{-1}\left(a\cdot b/|a||b|\right)$

Calculating the normal to a surface.

If two vectors a and b can be found which are known to lie in the surface, then the unit normal to the surface is

$n=\text{\hspace{0.17em}}±\frac{a×b}{|a×b|}$

If the surface is specified by a parametric equation of the form $r=r\left(s,t\right)$, where s and t are two parameters and r is the position vector of a point on the surface, then two vectors tangent to the surface may be computed from

$a=\frac{\partial r}{\partial s},\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}b=\frac{\partial r}{\partial t}$

Calculating Volumes

The volume of the parallelepiped defined by three vectors a, b, c is

$V=|\text{\hspace{0.17em}}c\cdot \left(a×b\right)|$

The volume of the tetrahedron shown outlined in red is V/6.
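The four operations in this section can be collected into one numerical sketch (NumPy assumed; all vectors are arbitrary test data, not from the figures):

```python
# Areas, angles, surface normals and volumes via dot and cross products.
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 3.0, 0.0])
c = np.array([0.0, 0.0, 4.0])

# Area of the triangle bounded by a, b and b - a
A = 0.5 * np.linalg.norm(np.cross(a, b))
assert np.isclose(A, 3.0)            # here |a x b| = 6

# Angle between a and b
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Unit normal to the plane containing a and b (the sign is a choice)
n = np.cross(a, b) / np.linalg.norm(np.cross(a, b))
assert np.isclose(np.dot(n, a), 0.0) # perpendicular to both vectors
assert np.isclose(np.dot(n, b), 0.0)

# Volume of the parallelepiped on a, b, c; the tetrahedron is V/6
V = abs(np.dot(c, np.cross(a, b)))
assert np.isclose(V, 24.0)
```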

A.2. VECTOR FIELDS AND VECTOR CALCULUS

A.2.1. Scalar field.

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r=\text{\hspace{0.17em}}{x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  A scalar field is a scalar valued function of position in space.  A scalar field is a function of the components of the position vector, and so may be expressed as $\varphi \left({x}_{1},{x}_{2},{x}_{3}\right)$. The value of $\varphi$ at a particular point in space must be independent of the choice of basis vectors.  A scalar field may be a function of time (and possibly other parameters) as well as position in space.

A.2.2. Vector field

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r=\text{\hspace{0.17em}}{x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  A vector field is a vector valued function of position in space.  A vector field is a function of the components of the position vector, and so may be expressed as $v\left({x}_{1},{x}_{2},{x}_{3}\right)$.  The vector may also be expressed as components in the basis $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}\text{\hspace{0.17em}}$

$v\left({x}_{1},{x}_{2},{x}_{3}\right)={v}_{1}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{1}+{v}_{2}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{2}+{v}_{3}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{3}$

The magnitude and direction of $v$ at a particular point in space are independent of the choice of basis vectors.   A vector field may be a function of time (and possibly other parameters) as well as position in space.

A.2.3. Change of basis for scalar fields.

Let $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space. Express the position vector of a point relative to O in $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ as

$r=\text{\hspace{0.17em}}{x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

and let $\varphi \left({x}_{1},{x}_{2},{x}_{3}\right)\text{\hspace{0.17em}}$ be a scalar field.

Let $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}\text{\hspace{0.17em}}$ be a second Cartesian basis, with origin P.  Let $c\equiv \stackrel{\to }{OP}\text{\hspace{0.17em}}$ denote the position vector of  P relative to O. Express the position vector of a point relative to P in $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}\text{\hspace{0.17em}}$ as

$p={\xi }_{1}{m}_{1}+{\xi }_{2}{m}_{2}+{\xi }_{3}{m}_{3}$

To find $\varphi \left({\xi }_{1},{\xi }_{2},{\xi }_{3}\right)\text{\hspace{0.17em}}$, use the following procedure.  First, express  p as components in the basis $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$, using the procedure outlined in Section 1.4:

$p={p}_{1}{e}_{1}+{p}_{2}{e}_{2}+{p}_{3}{e}_{3}$

where

$\begin{array}{l}{p}_{1}={\xi }_{1}{m}_{1}\cdot {e}_{1}+{\xi }_{2}{m}_{2}\cdot {e}_{1}+{\xi }_{3}{m}_{3}\cdot {e}_{1}\\ {p}_{2}={\xi }_{1}{m}_{1}\cdot {e}_{2}+{\xi }_{2}{m}_{2}\cdot {e}_{2}+{\xi }_{3}{m}_{3}\cdot {e}_{2}\\ {p}_{3}={\xi }_{1}{m}_{1}\cdot {e}_{3}+{\xi }_{2}{m}_{2}\cdot {e}_{3}+{\xi }_{3}{m}_{3}\cdot {e}_{3}\end{array}$

or, using index notation

${p}_{i}={Q}_{ji}{\xi }_{j}$

where the transformation matrix ${Q}_{ij}$ is defined in Section 1.4.

Now, express c as components in $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$, and note that

$\begin{array}{l}r=p+c\\ ⇒\ {x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}={p}_{1}{e}_{1}+{p}_{2}{e}_{2}+{p}_{3}{e}_{3}+{c}_{1}{e}_{1}+{c}_{2}{e}_{2}+{c}_{3}{e}_{3}\\ ⇒\ {x}_{1}={p}_{1}+{c}_{1},\quad {x}_{2}={p}_{2}+{c}_{2},\quad {x}_{3}={p}_{3}+{c}_{3}\\ ⇒\ {x}_{i}={Q}_{ji}{\xi }_{j}+{c}_{i}\end{array}$

so that

$\varphi \left({x}_{1},{x}_{2},{x}_{3}\right)=\varphi \left({p}_{1}+{c}_{1},{p}_{2}+{c}_{2},{p}_{3}+{c}_{3}\right)=\varphi \left({Q}_{j1}{\xi }_{j}+{c}_{1},{Q}_{j2}{\xi }_{j}+{c}_{2},{Q}_{j3}{\xi }_{j}+{c}_{3}\right)$

A.2.4. Change of basis for vector fields.

Let $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space. Express the position vector of a point relative to O in $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ as

$r={x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

and let $v\left({x}_{1},{x}_{2},{x}_{3}\right)$ be a vector  field, with components

$v\left({x}_{1},{x}_{2},{x}_{3}\right)={v}_{1}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{1}+{v}_{2}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{2}+{v}_{3}\left({x}_{1},{x}_{2},{x}_{3}\right){e}_{3}$

Let $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$ be a second Cartesian basis, with origin P.  Let $c\equiv \stackrel{\to }{OP}$ denote the position vector of  P relative to O. Express the position vector of a point relative to P in $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$ as

$p={\xi }_{1}{m}_{1}+{\xi }_{2}{m}_{2}+{\xi }_{3}{m}_{3}$

To express the vector field as components in $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$ and as a function of the components of p, use the following procedure.  First, express $\left({v}_{1},{v}_{2},{v}_{3}\right)$ in terms of $\left({\xi }_{1},{\xi }_{2},{\xi }_{3}\right)$ using the procedure outlined for scalar fields in the preceding section

${v}_{k}\left({x}_{1},{x}_{2},{x}_{3}\right)={v}_{k}\left({p}_{1}+{c}_{1},{p}_{2}+{c}_{2},{p}_{3}+{c}_{3}\right)={v}_{k}\left({Q}_{j1}{\xi }_{j}+{c}_{1},{Q}_{j2}{\xi }_{j}+{c}_{2},{Q}_{j3}{\xi }_{j}+{c}_{3}\right)$

for k=1,2,3.  Now, find the components  of v in $\left\{{m}_{1},{m}_{2},{m}_{3}\right\}$ using the procedure outlined in Section 1.4.  Using index notation, the result is

$\begin{array}{l}v={Q}_{1i}{v}_{i}\left({Q}_{j1}{\xi }_{j}+{c}_{1},{Q}_{j2}{\xi }_{j}+{c}_{2},{Q}_{j3}{\xi }_{j}+{c}_{3}\right){m}_{1}\\ \quad +{Q}_{2i}{v}_{i}\left({Q}_{j1}{\xi }_{j}+{c}_{1},{Q}_{j2}{\xi }_{j}+{c}_{2},{Q}_{j3}{\xi }_{j}+{c}_{3}\right){m}_{2}\\ \quad +{Q}_{3i}{v}_{i}\left({Q}_{j1}{\xi }_{j}+{c}_{1},{Q}_{j2}{\xi }_{j}+{c}_{2},{Q}_{j3}{\xi }_{j}+{c}_{3}\right){m}_{3}\end{array}$

A.2.5. Time derivatives of vectors

Let a(t) be a vector whose magnitude and direction vary with time, t.  Suppose that $\left\{i,j,k\right\}$ is a fixed basis, i.e. independent of time.  We may express a(t) in terms of components $\left({a}_{x},{a}_{y},{a}_{z}\right)$ in the basis $\left\{i,j,k\right\}$ as

$a\left(t\right)={a}_{x}i+{a}_{y}j+{a}_{z}k$.

The time derivative of a is defined using the usual rules of calculus

$\dot{a}\left(t\right)=\frac{d}{dt}a\left(t\right)=\underset{\epsilon \to 0}{\mathrm{lim}}\frac{a\left(t+\epsilon \right)-a\left(t\right)}{\epsilon }$,

or in component form as

$\stackrel{˙}{a}\left(t\right)={\stackrel{˙}{a}}_{x}i+{\stackrel{˙}{a}}_{y}j+{\stackrel{˙}{a}}_{z}k$

The definition of the time derivative of a vector may be used to show the following rules

$\begin{array}{l}\frac{d}{dt}\left[\alpha \left(t\right)a\left(t\right)\right]=\stackrel{˙}{\alpha }\left(t\right)a\left(t\right)+\alpha \left(t\right)\stackrel{˙}{a}\left(t\right)\\ \frac{d}{dt}\left[a\left(t\right)\cdot b\left(t\right)\right]=\stackrel{˙}{a}\left(t\right)\cdot b\left(t\right)+a\left(t\right)\cdot \stackrel{˙}{b}\left(t\right)\\ \frac{d}{dt}\left[a\left(t\right)×b\left(t\right)\right]=\stackrel{˙}{a}\left(t\right)×b\left(t\right)+a\left(t\right)×\stackrel{˙}{b}\left(t\right)\end{array}$

A.2.6. Using a rotating basis

It is often convenient to express position vectors as components in a basis which rotates with time.  To write equations of motion one must evaluate time derivatives of rotating vectors.

Let $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a basis which rotates with instantaneous angular velocity $\Omega$.  Then,

$\frac{d{e}_{1}}{dt}=\Omega \times {e}_{1},\qquad \frac{d{e}_{2}}{dt}=\Omega \times {e}_{2},\qquad \frac{d{e}_{3}}{dt}=\Omega \times {e}_{3}$
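The formula $d{e}_{i}/dt=\Omega \times {e}_{i}$ can be checked with a central finite difference.  A sketch for a basis spinning at an assumed constant rate about the 3-axis (NumPy assumed; the rate, time and step are arbitrary choices):

```python
# Finite-difference check of de1/dt = Omega x e1 for a rotating basis.
import numpy as np

w = 2.0                               # assumed angular speed (rad/s)
Omega = np.array([0.0, 0.0, w])       # angular velocity about the 3-axis

def e1(t):
    # basis vector e1 rotating in the 1-2 plane
    return np.array([np.cos(w*t), np.sin(w*t), 0.0])

t, h = 0.4, 1e-6
de1_dt = (e1(t + h) - e1(t - h)) / (2*h)        # central difference
assert np.allclose(de1_dt, np.cross(Omega, e1(t)), atol=1e-5)
```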

A.2.7. Gradient of a scalar field.

Let $\varphi$ be a scalar field in three dimensional space.  The gradient of $\varphi$ is a vector field denoted by $\text{grad}\left(\varphi \right)$ or $\varphi \nabla$, and is defined so that

$\left(\varphi \nabla \right)\cdot a=\underset{\epsilon \to 0}{\mathrm{lim}}\frac{\varphi \left(r+\epsilon \,a\right)-\varphi \left(r\right)}{\epsilon }$

for every position r in space and for every vector a.

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r={x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  Express $\varphi$ as a function of the components of r $\varphi =\varphi \left({x}_{1},{x}_{2},{x}_{3}\right)$.  The gradient of  $\varphi$ in this basis is then given by

$\varphi \nabla =\frac{\partial \varphi }{\partial {x}_{1}}{e}_{1}+\frac{\partial \varphi }{\partial {x}_{2}}{e}_{2}+\frac{\partial \varphi }{\partial {x}_{3}}{e}_{3}$
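The component formula for the gradient can be spot-checked by central differences.  A sketch for the arbitrary sample field $\varphi ={x}_{1}^{2}{x}_{2}+{x}_{3}$ (NumPy assumed):

```python
# Compare the analytic gradient of a sample scalar field against
# central finite differences.
import numpy as np

def phi(x):
    return x[0]**2 * x[1] + x[2]

def grad_phi(x):
    # analytic gradient, from the component formula above
    return np.array([2*x[0]*x[1], x[0]**2, 1.0])

x = np.array([1.0, 2.0, -1.0])
h = 1e-6
num = np.array([(phi(x + h*np.eye(3)[i]) - phi(x - h*np.eye(3)[i])) / (2*h)
                for i in range(3)])
assert np.allclose(num, grad_phi(x), atol=1e-5)
```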

A.2.8. Gradient of a vector field

Let v be a vector field in three dimensional space.  The gradient of v is a tensor field denoted by $\text{grad}\left(v\right)$ or $v\otimes \nabla$, and is defined so that

$\left(v\otimes \nabla \right)\cdot a=\underset{\epsilon \to 0}{\mathrm{lim}}\frac{v\left(r+\epsilon \,a\right)-v\left(r\right)}{\epsilon }$

for every position r in space and for every vector a.

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r={x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  Express v as a function of the components of r, so that $v=v\left({x}_{1},{x}_{2},{x}_{3}\right)$.  The gradient of  v in this basis is then given by

$v\otimes \nabla =\left[\begin{array}{ccc}\frac{\partial {v}_{1}}{\partial {x}_{1}}& \frac{\partial {v}_{1}}{\partial {x}_{2}}& \frac{\partial {v}_{1}}{\partial {x}_{3}}\\ \frac{\partial {v}_{2}}{\partial {x}_{1}}& \frac{\partial {v}_{2}}{\partial {x}_{2}}& \frac{\partial {v}_{2}}{\partial {x}_{3}}\\ \frac{\partial {v}_{3}}{\partial {x}_{1}}& \frac{\partial {v}_{3}}{\partial {x}_{2}}& \frac{\partial {v}_{3}}{\partial {x}_{3}}\end{array}\right]$

Alternatively, in index notation

${\left[v\otimes \nabla \right]}_{ij}\equiv \frac{\partial {v}_{i}}{\partial {x}_{j}}$

A.2.9. Divergence of a vector field

Let v be a vector field in three dimensional space.  The divergence of v is a scalar field denoted by $\text{div}\left(v\right)$ or $\nabla \cdot v$.  Formally, it is defined as $\text{trace(grad(}v\text{))}$ (the trace of a tensor is the sum of its diagonal terms).

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r={x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  Express v as a function of the components of r: $v=v\left({x}_{1},{x}_{2},{x}_{3}\right)$. The divergence of v is then

$\text{div(}v\text{)=}\frac{\partial {v}_{1}}{\partial {x}_{1}}+\frac{\partial {v}_{2}}{\partial {x}_{2}}+\frac{\partial {v}_{3}}{\partial {x}_{3}}$

A.2.10. Curl of a vector field.

Let v be a vector field in three dimensional space.  The curl of  v  is a vector field denoted by $\text{curl}\left(v\right)$ or $\nabla ×v$.  It is best defined in terms of its components in a given basis, although its magnitude and direction are not dependent on the choice of basis.

Let  $\left\{{e}_{1},{e}_{2},{e}_{3}\right\}$ be a Cartesian basis with origin O in three dimensional space.  Let

$r={x}_{1}{e}_{1}+{x}_{2}{e}_{2}+{x}_{3}{e}_{3}$

denote the position vector of a point in space.  Express v as a function of the components of r  $v=v\left({x}_{1},{x}_{2},{x}_{3}\right)$. The curl of  v in this basis is then given by

$\nabla ×v=|\begin{array}{ccc}{e}_{1}& {e}_{2}& {e}_{3}\\ \frac{\partial }{\partial {x}_{1}}& \frac{\partial }{\partial {x}_{2}}& \frac{\partial }{\partial {x}_{3}}\\ {v}_{1}& {v}_{2}& {v}_{3}\end{array}|=\left(\frac{\partial {v}_{3}}{\partial {x}_{2}}-\frac{\partial {v}_{2}}{\partial {x}_{3}}\right){e}_{1}+\left(\frac{\partial {v}_{1}}{\partial {x}_{3}}-\frac{\partial {v}_{3}}{\partial {x}_{1}}\right){e}_{2}+\left(\frac{\partial {v}_{2}}{\partial {x}_{1}}-\frac{\partial {v}_{1}}{\partial {x}_{2}}\right){e}_{3}$

Using index notation, this may be expressed as

${\left[\nabla \times v\right]}_{i}={\epsilon }_{ijk}\frac{\partial {v}_{k}}{\partial {x}_{j}}$
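The permutation-symbol form ${\left[\nabla \times v\right]}_{i}={\epsilon }_{ijk}\,\partial {v}_{k}/\partial {x}_{j}$ can be checked against the determinant expansion.  A sketch with the arbitrary sample field $v=\left({x}_{2}{x}_{3},{x}_{1},{x}_{1}{x}_{2}\right)$, using central differences for the partial derivatives (NumPy assumed):

```python
# Curl via the permutation symbol eps_ijk vs the explicit formula.
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

def v(x):
    return np.array([x[1]*x[2], x[0], x[0]*x[1]])

x0 = np.array([1.0, 2.0, 3.0])
h = 1e-6

# dv[a, b] = dv_a/dx_b by central differences
dv = np.zeros((3, 3))
for b in range(3):
    dx = np.zeros(3); dx[b] = h
    dv[:, b] = (v(x0 + dx) - v(x0 - dx)) / (2*h)

curl_index = np.einsum('ijk,kj->i', eps, dv)   # eps_ijk dv_k/dx_j

# Explicit components from the determinant expansion
curl_direct = np.array([dv[2, 1] - dv[1, 2],
                        dv[0, 2] - dv[2, 0],
                        dv[1, 0] - dv[0, 1]])
assert np.allclose(curl_index, curl_direct, atol=1e-5)
```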

A.2.11 The Divergence Theorem.

Let V be a closed region in three dimensional space, bounded by an orientable surface S. Let n denote the unit vector normal to S, taken so that n points out of V. Let u be a vector field which is continuous and has continuous first partial derivatives in some domain containing V.  Then

$\underset{V}{\int }\text{div(}u\text{)}\text{\hspace{0.17em}}dV=\underset{S}{\int }u\cdot n\text{\hspace{0.17em}}dA$

alternatively, expressed in index notation

$\underset{V}{\int }\frac{\partial {u}_{i}}{\partial {x}_{i}}dV=\underset{S}{\int }{u}_{i}{n}_{i}dA$

For a proof of this extremely useful theorem consult e.g. Kreyszig, Advanced Engineering Mathematics, Wiley, New York, (1998).
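The theorem can also be spot-checked with a midpoint rule on the unit cube $[0,1]^3$, for the sample field $u=\left({x}_{1}^{2}{x}_{2},{x}_{2}{x}_{3},0\right)$ (an arbitrary choice, not from the text; NumPy assumed):

```python
# Midpoint-rule check of the divergence theorem on the unit cube.
import numpy as np

n = 60
t = (np.arange(n) + 0.5) / n                 # midpoint grid on [0, 1]
X1, X2, X3 = np.meshgrid(t, t, t, indexing='ij')

# Volume integral of div(u) = 2*x1*x2 + x3 (each cell has volume 1/n^3)
vol = np.mean(2.0*X1*X2 + X3)

# Surface integral of u.n: the face x1 = 1 contributes u.n = x2, the
# face x2 = 1 contributes u.n = x3, and the other four faces give zero.
F2, F3 = np.meshgrid(t, t, indexing='ij')
flux = np.mean(F2) + np.mean(F3)

assert np.isclose(vol, flux, atol=1e-6)
```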

A.3. MATRICES

A.3.1 Definition

An $\left(m\times n\right)$ matrix $\left[A\right]$ is a set of numbers, arranged in m rows and n columns

$\left[A\right]=\left[\begin{array}{ccccc}{a}_{11}&{a}_{12}&{a}_{13}&\cdots &{a}_{1n}\\ {a}_{21}&{a}_{22}&{a}_{23}&\cdots &{a}_{2n}\\ ⋮&⋮&⋮&\ddots &⋮\\ {a}_{m1}&{a}_{m2}&{a}_{m3}&\cdots &{a}_{mn}\end{array}\right]$

A square matrix has equal numbers of rows and columns

A diagonal matrix is a square matrix with elements such that ${a}_{ij}=0$ for $i\ne j$

The identity matrix $\left[I\right]$ is a diagonal matrix for which all diagonal elements ${a}_{ii}=1$

A symmetric matrix is a square matrix with elements such that ${a}_{ij}={a}_{ji}$

A skew symmetric matrix is a square matrix with elements such that ${a}_{ij}=-{a}_{ji}$

A.3.2 Matrix operations

Addition  Let  $\left[A\right]$ and $\left[B\right]$ be two matrices of order $\left(m×n\right)$ with elements ${a}_{ij}$ and ${b}_{ij}$.  Then

$\left[C\right]=\left[A\right]+\left[B\right]⇔{c}_{ij}={a}_{ij}+{b}_{ij}$

Multiplication by a scalar.  Let $\left[A\right]$ be a matrix with elements ${a}_{ij}$, and let k be a scalar.  Then

$\left[B\right]=k\left[A\right]⇔{b}_{ij}=k{a}_{ij}$

Multiplication by a matrix. Let $\left[A\right]$ be a matrix of order $\left(m×n\right)$ with elements ${a}_{ij}$, and let $\left[B\right]$ be a matrix of order $\left(p×q\right)$ with elements ${b}_{ij}$.  The product $\left[C\right]=\left[A\right]\left[B\right]$ is defined only if n=p, and is an $\left(m×q\right)$ matrix such that

$\left[C\right]=\left[A\right]\left[B\right]⇔{c}_{ij}=\sum _{k=1}^{n}{a}_{ik}{b}_{kj}$

Note that multiplication is distributive and associative, but not commutative, i.e.

$\left[A\right]\left(\left[B\right]+\left[C\right]\right)=\left[A\right]\left[B\right]+\left[A\right]\left[C\right],\qquad \left[A\right]\left(\left[B\right]\left[C\right]\right)=\left(\left[A\right]\left[B\right]\right)\left[C\right],\qquad \left[A\right]\left[B\right]\ne \left[B\right]\left[A\right]$

The multiplication of a vector by a matrix is a particularly important operation.  Let b be a vector with n components and c a vector with m components, which we think of as $\left(n\times 1\right)$ and $\left(m\times 1\right)$ matrices (column vectors).  Let $\left[A\right]$ be an $\left(m\times n\right)$ matrix.  Thus

$b=\left[\begin{array}{c}{b}_{1}\\ {b}_{2}\\ ⋮\\ {b}_{n}\end{array}\right]\qquad c=\left[\begin{array}{c}{c}_{1}\\ {c}_{2}\\ ⋮\\ {c}_{m}\end{array}\right]\qquad \left[A\right]=\left[\begin{array}{ccccc}{a}_{11}&{a}_{12}&{a}_{13}&\cdots &{a}_{1n}\\ {a}_{21}&{a}_{22}&{a}_{23}&\cdots &{a}_{2n}\\ ⋮&⋮&⋮&\ddots &⋮\\ {a}_{m1}&{a}_{m2}&{a}_{m3}&\cdots &{a}_{mn}\end{array}\right]$

Now,

$c=\left[A\right]b\;\;⇔\;\;{c}_{i}=\sum _{j=1}^{n}{a}_{ij}{b}_{j}$

i.e.

$\begin{array}{l}{c}_{1}={a}_{11}{b}_{1}+{a}_{12}{b}_{2}+{a}_{13}{b}_{3}+\cdots +{a}_{1n}{b}_{n}\\ {c}_{2}={a}_{21}{b}_{1}+{a}_{22}{b}_{2}+{a}_{23}{b}_{3}+\cdots +{a}_{2n}{b}_{n}\\ \;\;⋮\\ {c}_{m}={a}_{m1}{b}_{1}+{a}_{m2}{b}_{2}+{a}_{m3}{b}_{3}+\cdots +{a}_{mn}{b}_{n}\end{array}$
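For concreteness, the component formula ${c}_{i}=\sum _{j}{a}_{ij}{b}_{j}$ translates directly into a short loop.  The sketch below (plain Python, with the matrix stored as a list of rows; the helper name `matvec` is illustrative, not part of the text):

```python
def matvec(A, b):
    """Compute c = [A] b, i.e. c_i = sum_j a_ij * b_j.

    A is an (m x n) matrix stored as a list of m rows;
    b is a vector with n components; the result has m components.
    """
    return [sum(aij * bj for aij, bj in zip(row, b)) for row in A]

# A (2 x 3) example: the result has m = 2 components.
A = [[1, 2, 3],
     [4, 5, 6]]
b = [1, 0, 2]
print(matvec(A, b))  # [7, 16]
```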

Transpose. Let $\left[A\right]$ be a matrix of order $\left(m×n\right)$ with elements ${a}_{ij}$.  The transpose of $\left[A\right]$ is denoted ${\left[A\right]}^{T}$.  If $\left[B\right]$ is an $\left(n×m\right)$ matrix such that $\left[B\right]={\left[A\right]}^{T}$, then ${b}_{ij}={a}_{ji}$, i.e.

${\left[A\right]}^{T}={\left[\begin{array}{cccc}{a}_{11}&{a}_{12}&\cdots &{a}_{1n}\\ {a}_{21}&{a}_{22}&\cdots &{a}_{2n}\\ ⋮&⋮&\ddots &⋮\\ {a}_{m1}&{a}_{m2}&\cdots &{a}_{mn}\end{array}\right]}^{T}=\left[\begin{array}{cccc}{a}_{11}&{a}_{21}&\cdots &{a}_{m1}\\ {a}_{12}&{a}_{22}&\cdots &{a}_{m2}\\ ⋮&⋮&\ddots &⋮\\ {a}_{1n}&{a}_{2n}&\cdots &{a}_{mn}\end{array}\right]$

Note that

${\left(\left[A\right]\left[B\right]\right)}^{T}={\left[B\right]}^{T}{\left[A\right]}^{T}$
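This identity is easy to check numerically.  A minimal plain-Python sketch (the helper names `transpose` and `matmul` are illustrative):

```python
def transpose(A):
    """Swap rows and columns: entry (i, j) of the result is A[j][i]."""
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    """Multiply an (m x n) matrix by an (n x p) matrix (lists of rows)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # (3 x 2)
B = [[1, 0, 1], [2, 1, 0]]     # (2 x 3)
# Verify ([A][B])^T = [B]^T [A]^T
print(transpose(matmul(A, B)) == matmul(transpose(B), transpose(A)))  # True
```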

Determinant.  The determinant is defined only for a square matrix.  Let $\left[A\right]$ be a $\left(2×2\right)$ matrix with components ${a}_{ij}$.  The determinant of $\left[A\right]$ is denoted by $\mathrm{det}\left[A\right]$ or $|A|$ and is given by

$|A|=\left|\begin{array}{cc}{a}_{11}&{a}_{12}\\ {a}_{21}&{a}_{22}\end{array}\right|={a}_{11}{a}_{22}-{a}_{12}{a}_{21}$

Now, let $\left[A\right]$ be an $\left(n×n\right)$ matrix.  Define the minors ${M}_{ij}$ of $\left[A\right]$ as the determinants of the $\left(n-1\right)×\left(n-1\right)$ matrices formed by omitting the ith row and jth column of $\left[A\right]$.  For example, the minors ${M}_{11}$ and ${M}_{12}$ for a $\left(3×3\right)$ matrix are computed as follows.   Let

$\left[A\right]=\left[\begin{array}{ccc}{a}_{11}&{a}_{12}&{a}_{13}\\ {a}_{21}&{a}_{22}&{a}_{23}\\ {a}_{31}&{a}_{32}&{a}_{33}\end{array}\right]$

Then

${M}_{11}=\left|\begin{array}{cc}{a}_{22}&{a}_{23}\\ {a}_{32}&{a}_{33}\end{array}\right|={a}_{22}{a}_{33}-{a}_{32}{a}_{23}\qquad {M}_{12}=\left|\begin{array}{cc}{a}_{21}&{a}_{23}\\ {a}_{31}&{a}_{33}\end{array}\right|={a}_{21}{a}_{33}-{a}_{31}{a}_{23}$

Define the cofactors ${C}_{ij}$ of $\left[A\right]$ as

${C}_{ij}={\left(-1\right)}^{i+j}{M}_{ij}$

Then, the determinant of the $\left(n×n\right)$ matrix $\left[A\right]$ is computed as follows

$|A|=\sum _{j=1}^{n}{a}_{ij}{C}_{ij}$

The result is the same whichever row i is chosen for the expansion.  For the particular case of a $\left(3×3\right)$ matrix

$\mathrm{det}\left[A\right]=\mathrm{det}\left[\begin{array}{ccc}{a}_{11}&{a}_{12}&{a}_{13}\\ {a}_{21}&{a}_{22}&{a}_{23}\\ {a}_{31}&{a}_{32}&{a}_{33}\end{array}\right]={a}_{11}\left({a}_{22}{a}_{33}-{a}_{23}{a}_{32}\right)+{a}_{12}\left({a}_{23}{a}_{31}-{a}_{21}{a}_{33}\right)+{a}_{13}\left({a}_{21}{a}_{32}-{a}_{31}{a}_{22}\right)$

The determinant may also be evaluated by summing over rows (i.e. expanding down a column):

$|A|=\sum _{i=1}^{n}{a}_{ij}{C}_{ij}$

and as before the result is the same for each choice of column j.  Finally, note that

$\mathrm{det}{\left[A\right]}^{T}=\mathrm{det}\left[A\right]\qquad \mathrm{det}\left(\left[A\right]\left[B\right]\right)=\mathrm{det}\left[A\right]\,\mathrm{det}\left[B\right]$
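The cofactor expansion lends itself to a short recursive routine.  The sketch below (plain Python; the helper name `det` is illustrative) expands along the first row.  It runs in exponential time, so it is only suitable for small matrices; Gaussian elimination is used in practice:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_{1j}: determinant of A with row 1 and column j removed.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Cofactor sign (-1)^{1+j}; with 0-based j this is (-1)**j.
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```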

Inversion.  Let $\left[A\right]$ be an $\left(n×n\right)$ matrix.  The inverse of $\left[A\right]$ is denoted by ${\left[A\right]}^{-1}$ and is defined such that

${\left[A\right]}^{-1}\left[A\right]=\left[I\right]$

The inverse of $\left[A\right]$ exists if and only if $\mathrm{det}\left[A\right]\ne 0$.  A matrix which has no inverse is said to be singular.  The inverse of a matrix may be computed explicitly by forming the cofactor matrix $\left[C\right]$ with components ${C}_{ij}$ as defined in the preceding section.  Then

${\left[A\right]}^{-1}=\frac{1}{\mathrm{det}\left[A\right]}{\left[C\right]}^{T}$

In practice, it is faster to compute the inverse of a matrix using methods such as Gaussian elimination.

Note that

${\left(\left[A\right]\left[B\right]\right)}^{-1}={\left[B\right]}^{-1}{\left[A\right]}^{-1}$

For a diagonal matrix, the inverse is obtained by inverting each diagonal entry:

$\left[A\right]=\left[\begin{array}{cccc}{a}_{11}&0&\cdots &0\\ 0&{a}_{22}&\cdots &0\\ ⋮&⋮&\ddots &⋮\\ 0&0&\cdots &{a}_{nn}\end{array}\right]\;\;⇒\;\;{\left[A\right]}^{-1}=\left[\begin{array}{cccc}1/{a}_{11}&0&\cdots &0\\ 0&1/{a}_{22}&\cdots &0\\ ⋮&⋮&\ddots &⋮\\ 0&0&\cdots &1/{a}_{nn}\end{array}\right]$

For a $\left(2×2\right)$ matrix, the inverse is

${\left[\begin{array}{cc}{a}_{11}&{a}_{12}\\ {a}_{21}&{a}_{22}\end{array}\right]}^{-1}=\frac{1}{{a}_{11}{a}_{22}-{a}_{12}{a}_{21}}\left[\begin{array}{cc}{a}_{22}&-{a}_{12}\\ -{a}_{21}&{a}_{11}\end{array}\right]$
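The closed-form $\left(2×2\right)$ inverse is simple to implement directly (a plain-Python sketch; the function name `inv2` is illustrative):

```python
def inv2(A):
    """Invert a (2 x 2) matrix via the closed-form adjugate formula."""
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a12 * a21   # determinant
    if d == 0:
        raise ValueError("matrix is singular")
    return [[a22 / d, -a12 / d],
            [-a21 / d, a11 / d]]

print(inv2([[4.0, 7.0], [2.0, 6.0]]))  # [[0.6, -0.7], [-0.2, 0.4]]
```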

Eigenvalues and eigenvectors. Let $\left[A\right]$ be an $\left(n×n\right)$ matrix, with coefficients ${a}_{ij}$.  Consider the vector equation

$\left[A\right]x=\lambda x$                                                (1)

where x is a vector with n components, and $\lambda$ is a scalar (which may be complex).  The nonzero vectors x and corresponding scalars $\lambda$ which satisfy this equation are the eigenvectors and eigenvalues of $\left[A\right]$.

Formally, eigenvalues and eigenvectors may be computed as follows.  Rearrange the preceding equation to

$\left(\left[A\right]-\lambda \left[I\right]\right)x=0$                                     (2)

This has nontrivial solutions for x only if the determinant of the matrix $\left(\left[A\right]-\lambda \left[I\right]\right)$ vanishes.  The equation

$\mathrm{det}\left(\left[A\right]-\lambda \left[I\right]\right)=0$

is an nth order polynomial equation in $\lambda$.  In general the polynomial will have n roots, which may be complex.  The eigenvectors may then be computed using equation (2).  For example, a $\left(2×2\right)$ matrix generally has two eigenvalues, which satisfy

$|A-\lambda I|=\left|\begin{array}{cc}{a}_{11}-\lambda &{a}_{12}\\ {a}_{21}&{a}_{22}-\lambda \end{array}\right|=\left({a}_{11}-\lambda \right)\left({a}_{22}-\lambda \right)-{a}_{12}{a}_{21}=0$

Solve the quadratic equation to see that

$\begin{array}{l}{\lambda }_{1}=\frac{1}{2}\left({a}_{11}+{a}_{22}\right)-\frac{1}{2}{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\\ {\lambda }_{2}=\frac{1}{2}\left({a}_{11}+{a}_{22}\right)+\frac{1}{2}{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\end{array}$
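These closed-form eigenvalues are straightforward to evaluate numerically.  The sketch below (plain Python; the function name `eig2` is illustrative) assumes the discriminant is nonnegative, as holds for any symmetric matrix, and returns the eigenvalues in the order ${\lambda }_{1}\le {\lambda }_{2}$:

```python
import math

def eig2(A):
    """Eigenvalues of a (2 x 2) matrix from the quadratic formula.

    Assumes the discriminant is nonnegative (always true when A is
    symmetric), so the eigenvalues are real.
    """
    (a11, a12), (a21, a22) = A
    trace = a11 + a22
    disc = math.sqrt(trace * trace - 4.0 * (a11 * a22 - a12 * a21))
    return (trace - disc) / 2.0, (trace + disc) / 2.0

print(eig2([[2.0, 1.0], [1.0, 2.0]]))  # (1.0, 3.0)
```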

The two corresponding eigenvectors may be computed from (2), which shows that

$\left[\begin{array}{cc}{a}_{11}-{\lambda }_{i}&{a}_{12}\\ {a}_{21}&{a}_{22}-{\lambda }_{i}\end{array}\right]\left[\begin{array}{c}{x}_{1}^{\left(i\right)}\\ {x}_{2}^{\left(i\right)}\end{array}\right]=0$

so that, multiplying out the first row of the matrix (you can use the second row instead, if you wish: since we chose $\lambda$ to make the determinant of the matrix vanish, the two equations have the same solutions.  In fact, if ${a}_{12}=0$ you will need to do this, because the first equation reduces to 0=0 for one of the eigenvectors),

$\begin{array}{l}\left(\frac{1}{2}\left({a}_{11}-{a}_{22}\right)+\frac{1}{2}{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\right){x}_{1}^{\left(1\right)}+{a}_{12}{x}_{2}^{\left(1\right)}=0\\ \left(\frac{1}{2}\left({a}_{11}-{a}_{22}\right)-\frac{1}{2}{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\right){x}_{1}^{\left(2\right)}+{a}_{12}{x}_{2}^{\left(2\right)}=0\end{array}$

which are satisfied by any vector of the form

$\begin{array}{l}{x}^{\left(1\right)}=\left[\begin{array}{c}2{a}_{12}\\ \left({a}_{22}-{a}_{11}\right)-{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\end{array}\right]p\\ {x}^{\left(2\right)}=\left[\begin{array}{c}2{a}_{12}\\ \left({a}_{22}-{a}_{11}\right)+{\left\{{\left({a}_{11}+{a}_{22}\right)}^{2}-4\left({a}_{11}{a}_{22}-{a}_{12}{a}_{21}\right)\right\}}^{1/2}\end{array}\right]q\end{array}$

where p and q are arbitrary real numbers.

It is often convenient to normalize eigenvectors so that they have unit length.  For this purpose, choose p and q so that ${x}^{\left(i\right)}\cdot {x}^{\left(i\right)}=1$.  (For vectors of dimension n, the generalized dot product is defined so that $x\cdot x={\sum }_{i=1}^{n}{x}_{i}{x}_{i}$.)
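Normalization amounts to dividing each component by $\sqrt{x\cdot x}$, as in this short sketch (plain Python; the function name `normalize` is illustrative, and the routine works for any dimension n):

```python
import math

def normalize(x):
    """Scale x so that x . x = 1 (divide each component by the length)."""
    length = math.sqrt(sum(xi * xi for xi in x))
    return [xi / length for xi in x]

print(normalize([3.0, 4.0]))  # [0.6, 0.8]
```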

One may calculate explicit expressions for eigenvalues and eigenvectors for any matrix up to order $\left(4×4\right)$, but the results are so cumbersome that, except for the $\left(2×2\right)$ results, they are virtually useless.  In practice, numerical values are computed using iterative techniques.  Packages like Mathematica, Maple or MATLAB make calculations like this easy.

The eigenvalues of a real symmetric matrix are always real, and its eigenvectors are orthogonal, i.e. the ith and jth eigenvectors (with $i\ne j$ ) satisfy ${x}^{\left(i\right)}\cdot {x}^{\left(j\right)}=0$.

The eigenvalues of a skew-symmetric matrix are purely imaginary (or zero).

Spectral and singular value decomposition.  Let $\left[A\right]$ be a real symmetric  $\left(n×n\right)$ matrix. Denote the n (real) eigenvalues of $\left[A\right]$ by ${\lambda }_{i}$, and let ${w}^{\left(i\right)}$ be the corresponding normalized eigenvectors, such that ${w}^{\left(i\right)}\cdot {w}^{\left(i\right)}=1$.  Then, for any arbitrary vector b,

$\left[A\right]b=\sum _{i=1}^{n}{\lambda }_{i}\left({w}^{\left(i\right)}\cdot b\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}{w}^{\left(i\right)}$

Let $\left[\Lambda \right]$ be a diagonal matrix which contains the n eigenvalues of $\left[A\right]$ as elements of the diagonal, and let $\left[Q\right]$ be a matrix consisting of the n eigenvectors as columns, i.e.

$\left[\Lambda \right]=\left[\begin{array}{cccc}{\lambda }_{1}&0&\cdots &0\\ 0&{\lambda }_{2}&\cdots &0\\ ⋮&⋮&\ddots &⋮\\ 0&0&\cdots &{\lambda }_{n}\end{array}\right]\qquad \left[Q\right]=\left[\begin{array}{cccc}{w}_{1}^{\left(1\right)}&{w}_{1}^{\left(2\right)}&\cdots &{w}_{1}^{\left(n\right)}\\ {w}_{2}^{\left(1\right)}&{w}_{2}^{\left(2\right)}&\cdots &{w}_{2}^{\left(n\right)}\\ ⋮&⋮&\ddots &⋮\\ {w}_{n}^{\left(1\right)}&{w}_{n}^{\left(2\right)}&\cdots &{w}_{n}^{\left(n\right)}\end{array}\right]$

Then

$\left[A\right]=\left[Q\right]\left[\Lambda \right]{\left[Q\right]}^{T}\qquad {\left[Q\right]}^{T}\left[Q\right]=\left[Q\right]{\left[Q\right]}^{T}=\left[I\right]\qquad {\left[Q\right]}^{T}\left[A\right]\left[Q\right]=\left[\Lambda \right]$

Note that this gives another (generally quite useless) way to invert $\left[A\right]$

${\left[A\right]}^{-1}=\left[Q\right]{\left[\Lambda \right]}^{-1}{\left[Q\right]}^{T}$

where ${\left[\Lambda \right]}^{-1}$ is easy to compute since $\left[\Lambda \right]$ is diagonal.
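The decomposition can be verified numerically for a small symmetric matrix.  The plain-Python sketch below reconstructs $\left[A\right]=\left[Q\right]\left[\Lambda \right]{\left[Q\right]}^{T}$ for the matrix with rows (2, 1) and (1, 2), whose eigenvalues are 1 and 3 with normalized eigenvectors $\left(1,-1\right)/\sqrt{2}$ and $\left(1,1\right)/\sqrt{2}$ (helper names are illustrative):

```python
import math

def matmul(A, B):
    """Multiply compatible matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

s = 1.0 / math.sqrt(2.0)
Q = [[s, s],
     [-s, s]]                  # normalized eigenvectors stored as columns
Lam = [[1.0, 0.0],
       [0.0, 3.0]]             # eigenvalues on the diagonal
A = matmul(matmul(Q, Lam), transpose(Q))
# A reproduces the original matrix [[2, 1], [1, 2]] up to rounding error.
print(all(abs(A[i][j] - [[2.0, 1.0], [1.0, 2.0]][i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```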

Square root of a matrix.   Let $\left[A\right]$ be a real symmetric  $\left(n×n\right)$ matrix.  Denote the singular value decomposition of $\left[A\right]$ by $\left[A\right]=\left[Q\right]\left[\Lambda \right]{\left[Q\right]}^{T}\text{\hspace{0.17em}}$ as defined above.  Suppose that $\left[S\right]={\left[A\right]}^{1/2}$ denotes the square root of $\left[A\right]$, defined so that

$\left[S\right]\left[S\right]=\left[A\right]$

One way to compute $\left[S\right]$ is through the decomposition of $\left[A\right]$ defined above (for $\left[S\right]$ to be real, the eigenvalues of $\left[A\right]$ must be nonnegative, i.e. $\left[A\right]$ must be positive semi-definite)

$\left[S\right]=\left[Q\right]{\left[\Lambda \right]}^{1/2}{\left[Q\right]}^{T}$

where

${\left[\Lambda \right]}^{1/2}=\left[\begin{array}{cccc}\sqrt{{\lambda }_{1}}&0&\cdots &0\\ 0&\sqrt{{\lambda }_{2}}&\cdots &0\\ ⋮&⋮&\ddots &⋮\\ 0&0&\cdots &\sqrt{{\lambda }_{n}}\end{array}\right]$
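The same machinery gives the square root numerically.  The plain-Python sketch below (the example matrix has rows (2, 1) and (1, 2), with nonnegative eigenvalues 1 and 3, so a real square root exists; helper names are illustrative) forms $\left[S\right]=\left[Q\right]{\left[\Lambda \right]}^{1/2}{\left[Q\right]}^{T}$ and checks that $\left[S\right]\left[S\right]$ reproduces $\left[A\right]$:

```python
import math

def matmul(A, B):
    """Multiply compatible matrices stored as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

s = 1.0 / math.sqrt(2.0)
Q = [[s, s],
     [-s, s]]                              # eigenvectors as columns
sqrt_Lam = [[math.sqrt(1.0), 0.0],
            [0.0, math.sqrt(3.0)]]         # [Lambda]^{1/2}
S = matmul(matmul(Q, sqrt_Lam), transpose(Q))
SS = matmul(S, S)
# [S][S] should reproduce [A] = [[2, 1], [1, 2]] up to rounding error.
print(all(abs(SS[i][j] - [[2.0, 1.0], [1.0, 2.0]][i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```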