

Appendix A
Review of Vectors and Matrices
A.1. VECTORS
A.1.1 Definition
For the purposes of this text, a vector is an object which has magnitude and direction. Examples include forces, electric fields, and the normal to a surface. A vector is often represented pictorially as an arrow, and symbolically by an underlined letter $\underline{a}$ or using bold type $\mathbf{a}$. Its magnitude is denoted $|\mathbf{a}|$ or $a$. There are two special cases of vectors: a unit vector $\mathbf{n}$ has $|\mathbf{n}| = 1$; and the null vector $\mathbf{0}$ has $|\mathbf{0}| = 0$.
A.1.2 Vector Operations
Addition
Let $\mathbf{a}$ and $\mathbf{b}$ be vectors. Then $\mathbf{c} = \mathbf{a} + \mathbf{b}$ is also a vector. The vector $\mathbf{c}$ may be shown diagrammatically by placing arrows representing $\mathbf{a}$ and $\mathbf{b}$ head to tail, as shown in the figure.
Multiplication
1. Multiplication by a scalar. Let $\mathbf{a}$ be a vector, and $\lambda$ a scalar. Then $\mathbf{b} = \lambda\mathbf{a}$ is a vector. The direction of $\mathbf{b}$ is parallel to $\mathbf{a}$ and its magnitude is given by $|\mathbf{b}| = |\lambda||\mathbf{a}|$. Note that you can form a unit vector $\mathbf{n}$ which is parallel to $\mathbf{a}$ by setting $\mathbf{n} = \mathbf{a}/|\mathbf{a}|$.
2. Dot product (also called the scalar product). Let $\mathbf{a}$ and $\mathbf{b}$ be two vectors. The dot product of $\mathbf{a}$ and $\mathbf{b}$ is a scalar denoted by $\mathbf{a}\cdot\mathbf{b}$, and is defined by
$$\mathbf{a}\cdot\mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta$$
where $\theta$ is the angle subtended by $\mathbf{a}$ and $\mathbf{b}$. Note that $\mathbf{a}\cdot\mathbf{b} = \mathbf{b}\cdot\mathbf{a}$, and $\mathbf{a}\cdot\mathbf{a} = |\mathbf{a}|^2$. If $\mathbf{a} \neq \mathbf{0}$ and $\mathbf{b} \neq \mathbf{0}$, then $\mathbf{a}\cdot\mathbf{b} = 0$ if and only if $\theta = 90^\circ$; i.e. $\mathbf{a}$ and $\mathbf{b}$ are perpendicular.
3. Cross product (also called the vector product). Let $\mathbf{a}$ and $\mathbf{b}$ be two vectors. The cross product of $\mathbf{a}$ and $\mathbf{b}$ is a vector denoted by $\mathbf{c} = \mathbf{a}\times\mathbf{b}$. The direction of $\mathbf{c}$ is perpendicular to $\mathbf{a}$ and $\mathbf{b}$, and is chosen so that $(\mathbf{a}, \mathbf{b}, \mathbf{c})$ form a right-handed triad, Fig. 3. The magnitude of $\mathbf{c}$ is given by
$$|\mathbf{c}| = |\mathbf{a}||\mathbf{b}|\sin\theta$$
Note that $\mathbf{a}\times\mathbf{b} = -\mathbf{b}\times\mathbf{a}$ and $\mathbf{a}\times\mathbf{a} = \mathbf{0}$.
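The properties of the dot and cross products can be checked numerically. The following NumPy sketch uses two arbitrary example vectors (not taken from the text) to verify the angle formula, perpendicularity of the cross product, and anticommutativity:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])   # arbitrary example vectors
b = np.array([3.0, 0.0, 4.0])

# Dot product: a . b = |a||b| cos(theta)
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(cos_theta)

# Cross product: c = a x b is perpendicular to both a and b,
# with magnitude |a||b| sin(theta)
c = np.cross(a, b)
assert np.isclose(np.dot(c, a), 0.0) and np.isclose(np.dot(c, b), 0.0)
assert np.isclose(np.linalg.norm(c),
                  np.linalg.norm(a) * np.linalg.norm(b) * np.sin(theta))

# Anticommutativity: a x b = -(b x a)
assert np.allclose(c, -np.cross(b, a))
```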
Some useful vector identities
$$\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}$$
$$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c}) = \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a}) = \mathbf{c}\cdot(\mathbf{a}\times\mathbf{b})$$
A.1.3 Cartesian components of vectors
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be three mutually perpendicular unit vectors which form a right-handed triad, Fig. 4. Then $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are said to form an orthonormal basis. The vectors satisfy
$$\mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}, \qquad \mathbf{e}_1\times\mathbf{e}_2 = \mathbf{e}_3, \quad \mathbf{e}_2\times\mathbf{e}_3 = \mathbf{e}_1, \quad \mathbf{e}_3\times\mathbf{e}_1 = \mathbf{e}_2$$
We may express any vector $\mathbf{a}$ as a suitable combination of the unit vectors $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$. For example, we may write
$$\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3$$
where $a_1, a_2, a_3$ are scalars, called the components of $\mathbf{a}$ in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. The components of $\mathbf{a}$ have a simple physical interpretation. For example, if we evaluate the dot product $\mathbf{a}\cdot\mathbf{e}_1$ we find that
$$\mathbf{a}\cdot\mathbf{e}_1 = (a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3)\cdot\mathbf{e}_1 = a_1$$
in view of the properties of the three vectors $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$. Recall that
$$\mathbf{a}\cdot\mathbf{e}_1 = |\mathbf{a}||\mathbf{e}_1|\cos\theta$$
Then, noting that $|\mathbf{e}_1| = 1$, we have
$$a_1 = |\mathbf{a}|\cos\theta$$
Thus, $a_1$ represents the projected length of the vector $\mathbf{a}$ in the direction of $\mathbf{e}_1$, as illustrated in the figure. Similarly, $a_2$ and $a_3$ may be shown to represent the projection of $\mathbf{a}$ in the directions $\mathbf{e}_2$ and $\mathbf{e}_3$, respectively.
The advantage of representing vectors in a Cartesian basis is that vector addition and multiplication can be expressed as simple operations on the components of the vectors. For example, let $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ be vectors, with components $(a_1, a_2, a_3)$, $(b_1, b_2, b_3)$ and $(c_1, c_2, c_3)$, respectively. Then, it is straightforward to show that
$$\mathbf{a} + \mathbf{b} = (a_1 + b_1)\mathbf{e}_1 + (a_2 + b_2)\mathbf{e}_2 + (a_3 + b_3)\mathbf{e}_3$$
$$\mathbf{a}\cdot\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3$$
$$\mathbf{a}\times\mathbf{b} = (a_2 b_3 - a_3 b_2)\mathbf{e}_1 + (a_3 b_1 - a_1 b_3)\mathbf{e}_2 + (a_1 b_2 - a_2 b_1)\mathbf{e}_3$$
A.1.4 Change of basis
Let $\mathbf{a}$ be a vector, and let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis. Suppose that the components of $\mathbf{a}$ in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are known to be $(a_1, a_2, a_3)$. Now, suppose that we wish to compute the components of $\mathbf{a}$ in a second Cartesian basis, $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$. This means we wish to find components $(b_1, b_2, b_3)$, such that
$$\mathbf{a} = b_1\mathbf{m}_1 + b_2\mathbf{m}_2 + b_3\mathbf{m}_3$$
To do so, note that
$$b_i = \mathbf{a}\cdot\mathbf{m}_i = a_1(\mathbf{e}_1\cdot\mathbf{m}_i) + a_2(\mathbf{e}_2\cdot\mathbf{m}_i) + a_3(\mathbf{e}_3\cdot\mathbf{m}_i), \qquad i = 1, 2, 3$$
This transformation is conveniently written as a matrix operation $[b] = [Q][a]$, where $[a]$ is a $3\times 1$ matrix consisting of the components of $\mathbf{a}$ in the basis $\{\mathbf{e}_i\}$, $[b]$ is a $3\times 1$ matrix consisting of the components of $\mathbf{a}$ in the basis $\{\mathbf{m}_i\}$, and $[Q]$ is a `rotation matrix' as follows
$$Q_{ij} = \mathbf{m}_i\cdot\mathbf{e}_j, \qquad [Q] = \begin{bmatrix} \mathbf{m}_1\cdot\mathbf{e}_1 & \mathbf{m}_1\cdot\mathbf{e}_2 & \mathbf{m}_1\cdot\mathbf{e}_3 \\ \mathbf{m}_2\cdot\mathbf{e}_1 & \mathbf{m}_2\cdot\mathbf{e}_2 & \mathbf{m}_2\cdot\mathbf{e}_3 \\ \mathbf{m}_3\cdot\mathbf{e}_1 & \mathbf{m}_3\cdot\mathbf{e}_2 & \mathbf{m}_3\cdot\mathbf{e}_3 \end{bmatrix}$$
Note that the elements of $[Q]$ have a simple physical interpretation. For example, $Q_{11} = \cos\theta_{11}$, where $\theta_{11}$ is the angle between the $\mathbf{m}_1$ and $\mathbf{e}_1$ axes. Similarly, $Q_{12} = \cos\theta_{12}$, where $\theta_{12}$ is the angle between the $\mathbf{m}_1$ and $\mathbf{e}_2$ axes. In practice, we usually know the angles between the axes that make up the two bases, so it is simplest to assemble the elements of $[Q]$ by putting the cosines of the known angles in the appropriate places.
Index notation provides another convenient way to write this transformation:
$$b_i = Q_{ij} a_j$$
You don't need to know index notation in detail to understand this; all you need to know is that a repeated index implies summation over that index, so that
$$Q_{ij} a_j = Q_{i1} a_1 + Q_{i2} a_2 + Q_{i3} a_3$$
The same approach may be used to find an expression for $[a]$ in terms of $[b]$. If you work through the details, you will find that
$$a_i = Q_{ji} b_j, \qquad [a] = [Q]^T[b]$$
Comparing this result with the formula for $[b]$ in terms of $[a]$, we see that
$$[Q]^{-1} = [Q]^T$$
where the superscript T denotes the transpose (rows and columns interchanged). The transformation matrix $[Q]$ is therefore orthogonal, and satisfies
$$[Q][Q]^T = [Q]^T[Q] = [I]$$
where $[I]$ is the identity matrix.
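These properties can be verified with a concrete rotation matrix. The sketch below (the rotation angle and vector components are arbitrary examples) builds $[Q]$ for a rotation about the $\mathbf{e}_3$ axis, checks orthogonality, and transforms components back and forth:

```python
import numpy as np

# Rotation by angle t about the e3 axis: Q[i, j] = m_i . e_j
t = 0.3  # example angle, in radians
Q = np.array([[ np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [ 0.0,       0.0,       1.0]])

# Q is orthogonal: Q Q^T = Q^T Q = I, so its inverse is its transpose
assert np.allclose(Q @ Q.T, np.eye(3))
assert np.allclose(np.linalg.inv(Q), Q.T)

# Transform the components of a vector from {e_i} to {m_i} and back
a = np.array([1.0, 2.0, 3.0])    # components in the basis {e_i}
b = Q @ a                        # components in the basis {m_i}
assert np.allclose(Q.T @ b, a)   # inverse transformation recovers a
```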
A.1.5 Useful vector operations
Calculating areas. The area of the triangle bounded by vectors $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{b} - \mathbf{a}$ is
$$A = \tfrac{1}{2}|\mathbf{a}\times\mathbf{b}|$$
The area of the parallelogram shown in the picture is $2A$.
Calculating angles. The angle between two vectors $\mathbf{a}$ and $\mathbf{b}$ is
$$\theta = \cos^{-1}\left(\frac{\mathbf{a}\cdot\mathbf{b}}{|\mathbf{a}||\mathbf{b}|}\right)$$
Calculating the normal to a surface. If two vectors $\mathbf{a}$ and $\mathbf{b}$ can be found which are known to lie in the surface, then the unit normal to the surface is
$$\mathbf{n} = \frac{\mathbf{a}\times\mathbf{b}}{|\mathbf{a}\times\mathbf{b}|}$$
If the surface is specified by a parametric equation of the form $\mathbf{r} = \mathbf{r}(s, t)$, where $s$ and $t$ are two parameters and $\mathbf{r}$ is the position vector of a point on the surface, then two vectors which lie in the surface may be computed from
$$\mathbf{a} = \frac{\partial\mathbf{r}}{\partial s}, \qquad \mathbf{b} = \frac{\partial\mathbf{r}}{\partial t}$$
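As an illustration of this procedure, the SymPy sketch below (the unit sphere is an arbitrary example surface) differentiates a parametric surface to get two tangent vectors, forms their cross product, and confirms that for a sphere the resulting normal is radial:

```python
import sympy as sp

s, t = sp.symbols('s t')
# Example surface: unit sphere parameterized by polar angle s and azimuth t
r = sp.Matrix([sp.sin(s)*sp.cos(t), sp.sin(s)*sp.sin(t), sp.cos(s)])

a = r.diff(s)        # dr/ds: a tangent vector lying in the surface
b = r.diff(t)        # dr/dt: a second, independent tangent vector
c = a.cross(b)       # a x b points along the surface normal

# For a sphere the normal is radial, so c must be parallel to r:
assert all(sp.simplify(e) == 0 for e in c.cross(r))
# The unit normal would then be n = c / c.norm()
```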
Calculating volumes
The volume of the parallelepiped defined by three vectors $\mathbf{a}$, $\mathbf{b}$, $\mathbf{c}$ is
$$V = |\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})|$$
The volume of the tetrahedron shown outlined in red is $V/6$.
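The scalar triple product is easy to evaluate numerically. Here is a short NumPy check using three arbitrary example edge vectors (a sheared box whose volume is known by inspection):

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])   # example edge vectors of a parallelepiped
b = np.array([1.0, 3.0, 0.0])
c = np.array([0.0, 1.0, 4.0])

V = abs(np.dot(a, np.cross(b, c)))   # scalar triple product |a.(b x c)|
assert np.isclose(V, 24.0)           # shearing does not change the volume 2*3*4

V_tet = V / 6.0                      # volume of the tetrahedron spanned by a, b, c
```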
A.2. VECTOR FIELDS AND VECTOR CALCULUS
A.2.1. Scalar field.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. A scalar field is a scalar-valued function of position in space. A scalar field is a function of the components of the position vector, and so may be expressed as $\phi = \phi(x_1, x_2, x_3)$. The value of $\phi$ at a particular point in space must be independent of the choice of basis vectors. A scalar field may be a function of time (and possibly other parameters) as well as position in space.
A.2.2. Vector field
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. A vector field is a vector-valued function of position in space. A vector field is a function of the components of the position vector, and so may be expressed as $\mathbf{v} = \mathbf{v}(x_1, x_2, x_3)$. The vector may also be expressed as components in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$:
$$\mathbf{v} = v_1(x_1, x_2, x_3)\mathbf{e}_1 + v_2(x_1, x_2, x_3)\mathbf{e}_2 + v_3(x_1, x_2, x_3)\mathbf{e}_3$$
The magnitude and direction of $\mathbf{v}$ at a particular point in space are independent of the choice of basis vectors. A vector field may be a function of time (and possibly other parameters) as well as position in space.
A.2.3. Change of basis for scalar fields.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Express the position vector of a point relative to $O$ in $\{\mathbf{e}_i\}$ as
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
and let $\phi(x_1, x_2, x_3)$ be a scalar field. Let $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$ be a second Cartesian basis, with origin $P$. Let $\mathbf{c}$ denote the position vector of $P$ relative to $O$. Express the position vector of a point relative to $P$ in $\{\mathbf{m}_i\}$ as
$$\mathbf{p} = p_1\mathbf{m}_1 + p_2\mathbf{m}_2 + p_3\mathbf{m}_3$$
To find $\phi$ as a function of $(p_1, p_2, p_3)$, use the following procedure. First, express $\mathbf{p}$ as components in the basis $\{\mathbf{e}_i\}$, using the procedure outlined in Section 1.4:
$$\mathbf{p} = p_1^{(e)}\mathbf{e}_1 + p_2^{(e)}\mathbf{e}_2 + p_3^{(e)}\mathbf{e}_3$$
where
$$[p^{(e)}] = [Q]^T[p]$$
or, using index notation
$$p_i^{(e)} = Q_{ki} p_k$$
where the transformation matrix $[Q]$ is defined in Sect 1.4. Now, express $\mathbf{c}$ as components in $\{\mathbf{e}_i\}$, and note that $\mathbf{r} = \mathbf{c} + \mathbf{p}$,
so that
$$x_i = c_i + Q_{ki} p_k \qquad \Rightarrow \qquad \phi = \phi\big(x_1(p_1, p_2, p_3),\; x_2(p_1, p_2, p_3),\; x_3(p_1, p_2, p_3)\big)$$
A.2.4. Change of basis for vector fields.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Express the position vector of a point relative to $O$ in $\{\mathbf{e}_i\}$ as
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
and let $\mathbf{v}$ be a vector field, with components
$$\mathbf{v} = v_1(x_1, x_2, x_3)\mathbf{e}_1 + v_2(x_1, x_2, x_3)\mathbf{e}_2 + v_3(x_1, x_2, x_3)\mathbf{e}_3$$
Let $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$ be a second Cartesian basis, with origin $P$. Let $\mathbf{c}$ denote the position vector of $P$ relative to $O$. Express the position vector of a point relative to $P$ in $\{\mathbf{m}_i\}$ as
$$\mathbf{p} = p_1\mathbf{m}_1 + p_2\mathbf{m}_2 + p_3\mathbf{m}_3$$
To express the vector field as components in $\{\mathbf{m}_i\}$ and as a function of the components of $\mathbf{p}$, use the following procedure. First, express the $v_k$ in terms of $(p_1, p_2, p_3)$ using the procedure outlined for scalar fields in the preceding section:
$$v_k = v_k\big(c_1 + Q_{i1} p_i,\; c_2 + Q_{i2} p_i,\; c_3 + Q_{i3} p_i\big)$$
for $k = 1, 2, 3$. Now, find the components of $\mathbf{v}$ in $\{\mathbf{m}_i\}$ using the procedure outlined in Section 1.4. Using index notation, the result is
$$v_i^{(m)} = Q_{ij} v_j$$
A.2.5. Time derivatives of vectors
Let $\mathbf{a}(t)$ be a vector whose magnitude and direction vary with time, $t$. Suppose that $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ is a fixed basis, i.e. independent of time. We may express $\mathbf{a}(t)$ in terms of components $(a_1(t), a_2(t), a_3(t))$ in the basis $\{\mathbf{e}_i\}$ as
$$\mathbf{a}(t) = a_1(t)\mathbf{e}_1 + a_2(t)\mathbf{e}_2 + a_3(t)\mathbf{e}_3$$
The time derivative of $\mathbf{a}$ is defined using the usual rules of calculus
$$\frac{d\mathbf{a}}{dt} = \lim_{\Delta t\to 0}\frac{\mathbf{a}(t + \Delta t) - \mathbf{a}(t)}{\Delta t}$$
or in component form as
$$\frac{d\mathbf{a}}{dt} = \frac{da_1}{dt}\mathbf{e}_1 + \frac{da_2}{dt}\mathbf{e}_2 + \frac{da_3}{dt}\mathbf{e}_3$$
The definition of the time derivative of a vector may be used to show the following rules
$$\frac{d}{dt}(\mathbf{a} + \mathbf{b}) = \frac{d\mathbf{a}}{dt} + \frac{d\mathbf{b}}{dt}, \qquad \frac{d}{dt}(\lambda\mathbf{a}) = \frac{d\lambda}{dt}\mathbf{a} + \lambda\frac{d\mathbf{a}}{dt}$$
$$\frac{d}{dt}(\mathbf{a}\cdot\mathbf{b}) = \frac{d\mathbf{a}}{dt}\cdot\mathbf{b} + \mathbf{a}\cdot\frac{d\mathbf{b}}{dt}, \qquad \frac{d}{dt}(\mathbf{a}\times\mathbf{b}) = \frac{d\mathbf{a}}{dt}\times\mathbf{b} + \mathbf{a}\times\frac{d\mathbf{b}}{dt}$$
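The product rules for the dot and cross products can be confirmed symbolically. The SymPy sketch below uses two arbitrary smooth example vectors $\mathbf{a}(t)$ and $\mathbf{b}(t)$ (my own choices, for illustration only):

```python
import sympy as sp

t = sp.symbols('t')
# Example time-dependent vectors with arbitrary smooth components
a = sp.Matrix([sp.cos(t), sp.sin(t), t])
b = sp.Matrix([t**2, 1, sp.exp(t)])

# Product rule for the dot product: d(a.b)/dt = (da/dt).b + a.(db/dt)
lhs_dot = sp.diff(a.dot(b), t)
rhs_dot = sp.diff(a, t).dot(b) + a.dot(sp.diff(b, t))
assert sp.simplify(lhs_dot - rhs_dot) == 0

# Product rule for the cross product: d(a x b)/dt = (da/dt) x b + a x (db/dt)
lhs_cross = sp.diff(a.cross(b), t)
rhs_cross = sp.diff(a, t).cross(b) + a.cross(sp.diff(b, t))
assert all(sp.simplify(e) == 0 for e in (lhs_cross - rhs_cross))
```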
A.2.6. Using a rotating basis. It is often convenient to express position vectors as components in a basis which rotates with time. To write equations of motion one must evaluate time derivatives of rotating vectors.
Let $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$ be a basis which rotates with instantaneous angular velocity $\boldsymbol{\omega}$. Then,
$$\frac{d\mathbf{m}_i}{dt} = \boldsymbol{\omega}\times\mathbf{m}_i$$
so that, for a vector $\mathbf{a} = a_1\mathbf{m}_1 + a_2\mathbf{m}_2 + a_3\mathbf{m}_3$,
$$\frac{d\mathbf{a}}{dt} = \frac{da_1}{dt}\mathbf{m}_1 + \frac{da_2}{dt}\mathbf{m}_2 + \frac{da_3}{dt}\mathbf{m}_3 + \boldsymbol{\omega}\times\mathbf{a}$$
A.2.7. Gradient of a scalar field.
Let $\phi(\mathbf{r})$ be a scalar field in three-dimensional space. The gradient of $\phi$ is a vector field denoted by $\operatorname{grad}\phi$ or $\nabla\phi$, and is defined so that
$$\nabla\phi\cdot\mathbf{a} = \lim_{\epsilon\to 0}\frac{\phi(\mathbf{r} + \epsilon\mathbf{a}) - \phi(\mathbf{r})}{\epsilon}$$
for every position $\mathbf{r}$ in space and for every vector $\mathbf{a}$.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. Express $\phi$ as a function of the components of $\mathbf{r}$: $\phi = \phi(x_1, x_2, x_3)$. The gradient of $\phi$ in this basis is then given by
$$\nabla\phi = \frac{\partial\phi}{\partial x_1}\mathbf{e}_1 + \frac{\partial\phi}{\partial x_2}\mathbf{e}_2 + \frac{\partial\phi}{\partial x_3}\mathbf{e}_3$$
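The defining property of the gradient (it produces the directional derivative when dotted with any vector) can be checked symbolically. In the SymPy sketch below the scalar field and the direction $\mathbf{a}$ are arbitrary examples:

```python
import sympy as sp

x1, x2, x3, eps = sp.symbols('x1 x2 x3 eps')
phi = x1**2 * x2 + sp.sin(x3)          # example scalar field

# Gradient: vector of partial derivatives
grad_phi = sp.Matrix([phi.diff(x1), phi.diff(x2), phi.diff(x3)])

# Directional derivative: d/d(eps) of phi(r + eps*a) at eps = 0
a = sp.Matrix([1, 2, 3])               # arbitrary fixed direction
phi_shifted = phi.subs({x1: x1 + eps*a[0],
                        x2: x2 + eps*a[1],
                        x3: x3 + eps*a[2]})
directional = sp.diff(phi_shifted, eps).subs(eps, 0)

# grad(phi) . a equals the directional derivative
assert sp.simplify(directional - grad_phi.dot(a)) == 0
```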
A.2.8. Gradient of a vector field
Let $\mathbf{v}$ be a vector field in three-dimensional space. The gradient of $\mathbf{v}$ is a tensor field denoted by $\operatorname{grad}\mathbf{v}$ or $\nabla\mathbf{v}$, and is defined so that
$$(\nabla\mathbf{v})\cdot\mathbf{a} = \lim_{\epsilon\to 0}\frac{\mathbf{v}(\mathbf{r} + \epsilon\mathbf{a}) - \mathbf{v}(\mathbf{r})}{\epsilon}$$
for every position $\mathbf{r}$ in space and for every vector $\mathbf{a}$.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. Express $\mathbf{v}$ as a function of the components of $\mathbf{r}$, so that $\mathbf{v} = \mathbf{v}(x_1, x_2, x_3)$. The gradient of $\mathbf{v}$ in this basis is then given by
$$\nabla\mathbf{v} = \begin{bmatrix} \dfrac{\partial v_1}{\partial x_1} & \dfrac{\partial v_1}{\partial x_2} & \dfrac{\partial v_1}{\partial x_3} \\ \dfrac{\partial v_2}{\partial x_1} & \dfrac{\partial v_2}{\partial x_2} & \dfrac{\partial v_2}{\partial x_3} \\ \dfrac{\partial v_3}{\partial x_1} & \dfrac{\partial v_3}{\partial x_2} & \dfrac{\partial v_3}{\partial x_3} \end{bmatrix}$$
Alternatively, in index notation
$$(\nabla\mathbf{v})_{ij} = \frac{\partial v_i}{\partial x_j}$$
A.2.9. Divergence of a vector field
Let $\mathbf{v}$ be a vector field in three-dimensional space. The divergence of $\mathbf{v}$ is a scalar field denoted by $\operatorname{div}\mathbf{v}$ or $\nabla\cdot\mathbf{v}$. Formally, it is defined as $\nabla\cdot\mathbf{v} = \operatorname{trace}(\nabla\mathbf{v})$ (the trace of a tensor is the sum of its diagonal terms).
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. Express $\mathbf{v}$ as a function of the components of $\mathbf{r}$: $\mathbf{v} = \mathbf{v}(x_1, x_2, x_3)$. The divergence of $\mathbf{v}$ is then
$$\nabla\cdot\mathbf{v} = \frac{\partial v_1}{\partial x_1} + \frac{\partial v_2}{\partial x_2} + \frac{\partial v_3}{\partial x_3}$$
A.2.10. Curl of a vector field.
Let $\mathbf{v}$ be a vector field in three-dimensional space. The curl of $\mathbf{v}$ is a vector field denoted by $\operatorname{curl}\mathbf{v}$ or $\nabla\times\mathbf{v}$. It is best defined in terms of its components in a given basis, although its magnitude and direction are not dependent on the choice of basis.
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis with origin $O$ in three-dimensional space. Let
$$\mathbf{r} = x_1\mathbf{e}_1 + x_2\mathbf{e}_2 + x_3\mathbf{e}_3$$
denote the position vector of a point in space. Express $\mathbf{v}$ as a function of the components of $\mathbf{r}$: $\mathbf{v} = \mathbf{v}(x_1, x_2, x_3)$. The curl of $\mathbf{v}$ in this basis is then given by
$$\nabla\times\mathbf{v} = \left(\frac{\partial v_3}{\partial x_2} - \frac{\partial v_2}{\partial x_3}\right)\mathbf{e}_1 + \left(\frac{\partial v_1}{\partial x_3} - \frac{\partial v_3}{\partial x_1}\right)\mathbf{e}_2 + \left(\frac{\partial v_2}{\partial x_1} - \frac{\partial v_1}{\partial x_2}\right)\mathbf{e}_3$$
Using index notation, this may be expressed as
$$(\nabla\times\mathbf{v})_i = \epsilon_{ijk}\frac{\partial v_k}{\partial x_j}$$
where $\epsilon_{ijk}$ is the permutation symbol.
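The component formula for the curl is easy to implement directly. The SymPy sketch below (the vector field is an arbitrary example) builds the curl term by term and verifies the classical identity $\nabla\cdot(\nabla\times\mathbf{v}) = 0$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
v = sp.Matrix([x2*x3, x1**2, sp.sin(x1)])      # example vector field

# Curl, component by component
curl_v = sp.Matrix([
    v[2].diff(x2) - v[1].diff(x3),
    v[0].diff(x3) - v[2].diff(x1),
    v[1].diff(x1) - v[0].diff(x2),
])

# div(curl v) = 0 for any smooth vector field
div_curl = curl_v[0].diff(x1) + curl_v[1].diff(x2) + curl_v[2].diff(x3)
assert sp.simplify(div_curl) == 0
```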
A.2.11 The Divergence Theorem.
Let $V$ be a closed region in three-dimensional space, bounded by an orientable surface $S$. Let $\mathbf{n}$ denote the unit vector normal to $S$, taken so that $\mathbf{n}$ points out of $V$. Let $\mathbf{u}$ be a vector field which is continuous and has continuous first partial derivatives in some domain containing $V$. Then
$$\int_V \nabla\cdot\mathbf{u}\; dV = \int_S \mathbf{u}\cdot\mathbf{n}\; dS$$
Alternatively, expressed in index notation
$$\int_V \frac{\partial u_i}{\partial x_i}\; dV = \int_S u_i n_i\; dS$$
For a proof of this extremely useful theorem, consult e.g. Kreyszig, Advanced Engineering Mathematics, Wiley.
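The theorem can be verified symbolically on a simple region. The SymPy sketch below (the vector field and the unit-cube region are arbitrary choices for illustration) compares the volume integral of $\nabla\cdot\mathbf{u}$ with the flux through the six faces:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Matrix([x*y, y*z, z*x])        # example vector field

# Volume integral of div(u) over the unit cube [0,1]^3
div_u = u[0].diff(x) + u[1].diff(y) + u[2].diff(z)
vol = sp.integrate(div_u, (x, 0, 1), (y, 0, 1), (z, 0, 1))

# Surface integral of u . n over the six faces (n is the outward normal,
# so each pair of opposite faces contributes a difference)
flux = (sp.integrate(u[0].subs(x, 1) - u[0].subs(x, 0), (y, 0, 1), (z, 0, 1))
      + sp.integrate(u[1].subs(y, 1) - u[1].subs(y, 0), (x, 0, 1), (z, 0, 1))
      + sp.integrate(u[2].subs(z, 1) - u[2].subs(z, 0), (x, 0, 1), (y, 0, 1)))

assert sp.simplify(vol - flux) == 0   # the two sides agree
```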
A.3. MATRICES
A.3.1 Definition
An $m\times n$ matrix $[A]$ is a set of numbers, arranged in $m$ rows and $n$ columns
$$[A] = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{bmatrix}$$
A square matrix has equal numbers of rows and columns. A diagonal matrix is a square matrix with elements such that $A_{ij} = 0$ for $i \neq j$. The identity matrix $[I]$ is a diagonal matrix for which all diagonal elements $I_{ii} = 1$. A symmetric matrix is a square matrix with elements such that $A_{ij} = A_{ji}$. A skew-symmetric matrix is a square matrix with elements such that $A_{ij} = -A_{ji}$.
A.3.2 Matrix operations
Addition. Let $[A]$ and $[B]$ be two matrices of order $m\times n$ with elements $A_{ij}$ and $B_{ij}$. Then
$$([A] + [B])_{ij} = A_{ij} + B_{ij}$$
Multiplication by a scalar. Let $[A]$ be a matrix with elements $A_{ij}$, and let $k$ be a scalar. Then
$$(k[A])_{ij} = k A_{ij}$$
Multiplication by a matrix. Let $[A]$ be a matrix of order $m\times n$ with elements $A_{ij}$, and let $[B]$ be a matrix of order $p\times q$ with elements $B_{ij}$. The product $[C] = [A][B]$ is defined only if $n = p$, and is an $m\times q$ matrix such that
$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$
Note that multiplication is distributive and associative, but in general not commutative, i.e.
$$[A]([B] + [C]) = [A][B] + [A][C], \qquad [A]([B][C]) = ([A][B])[C], \qquad [A][B] \neq [B][A]$$
The multiplication of a vector by a matrix is a particularly important operation. Let $\mathbf{b}$ and $\mathbf{c}$ be two vectors with $n$ components, which we think of as $n\times 1$ matrices. Let $[A]$ be an $n\times n$ matrix. Thus
$$[b] = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}, \qquad [A] = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix}$$
Now, $[c] = [A][b]$,
i.e.
$$c_i = \sum_{j=1}^{n} A_{ij} b_j$$
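The component formula can be spelled out explicitly and compared with a library routine. In this NumPy sketch the matrix and vector entries are arbitrary examples (note the matrix here is rectangular, which the sum formula also covers):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # example 3x2 matrix
b = np.array([7.0, 8.0])            # example 2-component vector

# c_i = sum_j A_ij * b_j, written out explicitly...
c = np.array([sum(A[i, j] * b[j] for j in range(A.shape[1]))
              for i in range(A.shape[0])])

# ...agrees with the built-in matrix-vector product
assert np.allclose(c, A @ b)
assert np.allclose(c, [23.0, 53.0, 83.0])
```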
Transpose. Let $[A]$ be a matrix of order $m\times n$ with elements $A_{ij}$. The transpose of $[A]$ is denoted $[A]^T$. If $[B]$ is an $n\times m$ matrix such that $[B] = [A]^T$, then $B_{ij} = A_{ji}$, i.e. the rows and columns are interchanged.
Note that
$$([A][B])^T = [B]^T[A]^T$$
Determinant. The determinant is defined only for a square matrix. Let $[A]$ be a $2\times 2$ matrix with components $A_{ij}$. The determinant of $[A]$ is denoted by $\det[A]$ or $|[A]|$ and is given by
$$\det[A] = A_{11}A_{22} - A_{12}A_{21}$$
Now, let $[A]$ be an $n\times n$ matrix. Define the minors $M_{ij}$ of $[A]$ as the determinant formed by omitting the $i$th row and $j$th column of $[A]$. For example, the minors $M_{11}$ and $M_{12}$ for a $3\times 3$ matrix are computed as follows. Let
$$[A] = \begin{bmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{bmatrix}$$
Then
$$M_{11} = \det\begin{bmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{bmatrix} = A_{22}A_{33} - A_{23}A_{32}, \qquad M_{12} = \det\begin{bmatrix} A_{21} & A_{23} \\ A_{31} & A_{33} \end{bmatrix} = A_{21}A_{33} - A_{23}A_{31}$$
Define the cofactors $C_{ij}$ of $[A]$ as
$$C_{ij} = (-1)^{i+j} M_{ij}$$
Then, the determinant of the $n\times n$ matrix $[A]$ is computed by expanding along any row $i$:
$$\det[A] = \sum_{j=1}^{n} A_{ij} C_{ij}$$
The result is the same whichever row $i$ is chosen for the expansion. For the particular case of a $3\times 3$ matrix, this gives
$$\det[A] = A_{11}(A_{22}A_{33} - A_{23}A_{32}) - A_{12}(A_{21}A_{33} - A_{23}A_{31}) + A_{13}(A_{21}A_{32} - A_{22}A_{31})$$
The determinant may also be evaluated by summing over rows, i.e.
$$\det[A] = \sum_{i=1}^{n} A_{ij} C_{ij}$$
and as before the result is the same for each choice of column $j$. Finally, note that
$$\det[A]^T = \det[A], \qquad \det([A][B]) = \det[A]\det[B]$$
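The cofactor expansion can be implemented in a few lines and checked against a library determinant. In this NumPy sketch the matrix is an arbitrary example, and the expansion is taken along the first row ($i = 0$ in zero-based indexing):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])     # example 3x3 matrix

def minor(A, i, j):
    """Determinant of A with row i and column j deleted."""
    sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return np.linalg.det(sub)

# Cofactor expansion along row i = 0: det = sum_j A_0j * (-1)^j * M_0j
det = sum((-1) ** j * A[0, j] * minor(A, 0, j) for j in range(3))

assert np.isclose(det, 18.0)
assert np.isclose(det, np.linalg.det(A))
```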
Inversion. Let $[A]$ be an $n\times n$ matrix. The inverse of $[A]$ is denoted by $[A]^{-1}$ and is defined such that
$$[A][A]^{-1} = [A]^{-1}[A] = [I]$$
The inverse of $[A]$ exists if and only if $\det[A] \neq 0$. A matrix which has no inverse is said to be singular. The inverse of a matrix may be computed explicitly, by forming the cofactor matrix $[C]$ with components $C_{ij}$ as defined in the preceding section. Then
$$[A]^{-1} = \frac{1}{\det[A]}[C]^T$$
In practice, it is faster to compute the inverse of a matrix using methods such as Gaussian elimination.
Note that
$$([A][B])^{-1} = [B]^{-1}[A]^{-1}$$
For a diagonal matrix, the inverse is
$$\begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ 0 & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{nn} \end{bmatrix}^{-1} = \begin{bmatrix} 1/A_{11} & 0 & \cdots & 0 \\ 0 & 1/A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1/A_{nn} \end{bmatrix}$$
For a $2\times 2$ matrix, the inverse is
$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \frac{1}{A_{11}A_{22} - A_{12}A_{21}}\begin{bmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{bmatrix}$$
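The explicit $2\times 2$ formula (swap the diagonal, negate the off-diagonal, divide by the determinant) is easy to verify. The matrix below is an arbitrary non-singular example:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])          # example 2x2 matrix

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
assert det != 0                     # A must be non-singular

# Explicit 2x2 inverse: swap diagonal, negate off-diagonal, divide by det
A_inv = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]]) / det

assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv, np.linalg.inv(A))
```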
Eigenvalues and eigenvectors. Let $[A]$ be an $n\times n$ matrix, with coefficients $A_{ij}$. Consider the vector equation
$$[A]\mathbf{x} = \lambda\mathbf{x} \qquad (1)$$
where $\mathbf{x}$ is a vector with $n$ components, and $\lambda$ is a scalar (which may be complex). The $n$ nonzero vectors $\mathbf{x}$ and corresponding scalars $\lambda$ which satisfy this equation are the eigenvectors and eigenvalues of $[A]$.
Formally, eigenvalues and eigenvectors may be computed as follows. Rearrange the preceding equation to
$$([A] - \lambda[I])\mathbf{x} = \mathbf{0} \qquad (2)$$
This has nontrivial solutions for $\mathbf{x}$ only if the determinant of the matrix $[A] - \lambda[I]$ vanishes. The equation
$$\det([A] - \lambda[I]) = 0$$
is an $n$th-order polynomial which may be solved for $\lambda$. In general the polynomial will have $n$ roots, which may be complex. The eigenvectors may then be computed using equation (2). For example, a $2\times 2$ matrix generally has two eigenvectors, which satisfy
$$\begin{bmatrix} A_{11} - \lambda & A_{12} \\ A_{21} & A_{22} - \lambda \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
Solve the quadratic equation $\det([A] - \lambda[I]) = 0$ to see that
$$\lambda_{1,2} = \frac{A_{11} + A_{22}}{2} \pm \sqrt{\left(\frac{A_{11} - A_{22}}{2}\right)^2 + A_{12}A_{21}}$$
The two corresponding eigenvectors may be computed from (2), which shows that
$$(A_{11} - \lambda_i)x_1 + A_{12}x_2 = 0, \qquad A_{21}x_1 + (A_{22} - \lambda_i)x_2 = 0$$
so that, multiplying out the first row of the matrix (you can use the second row too, if you wish; since we chose $\lambda$ to make the determinant of the matrix vanish, the two equations have the same solutions. In fact, if $A_{12} = 0$, you will need to do this, because the first equation will simply give $0 = 0$ when trying to solve for one of the eigenvectors)
$$(A_{11} - \lambda_i)x_1 + A_{12}x_2 = 0$$
which are satisfied by any vector of the form
$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = p\begin{bmatrix} -A_{12} \\ A_{11} - \lambda_1 \end{bmatrix}, \qquad \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = q\begin{bmatrix} -A_{12} \\ A_{11} - \lambda_2 \end{bmatrix}$$
where $p$ and $q$ are arbitrary real numbers.
It is often convenient to normalize eigenvectors so that they have unit `length'. For this purpose, choose $p$ and $q$ so that $\mathbf{x}\cdot\mathbf{x} = 1$. (For vectors of dimension $n$, the generalized dot product is defined such that $\mathbf{x}\cdot\mathbf{y} = \sum_{i=1}^{n} x_i y_i$.)
One may calculate explicit expressions for eigenvalues and eigenvectors for any matrix up to order $4\times 4$, but the results are so cumbersome that, except for the $2\times 2$ results, they are virtually useless. In practice, numerical values may be computed using iterative techniques. Packages like Mathematica, Maple or Matlab make calculations like this easy.
The eigenvalues of a real symmetric matrix are always real, and its eigenvectors are orthogonal, i.e. the $i$th and $j$th eigenvectors (with $i \neq j$) satisfy $\mathbf{x}^{(i)}\cdot\mathbf{x}^{(j)} = 0$.
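These properties can be confirmed numerically. The NumPy sketch below uses an arbitrary real symmetric example matrix; `numpy.linalg.eigh` is the routine specialized for symmetric matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])     # example real symmetric matrix

# eigh returns real eigenvalues and orthonormal eigenvectors
# (the eigenvectors are the columns of V)
lam, V = np.linalg.eigh(A)

assert np.all(np.isreal(lam))               # eigenvalues are real
assert np.allclose(V.T @ V, np.eye(3))      # eigenvectors are orthonormal
for k in range(3):
    assert np.allclose(A @ V[:, k], lam[k] * V[:, k])   # A x = lambda x
```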
The eigenvalues of a real skew-symmetric matrix are pure imaginary (or zero).
Spectral and singular value decomposition. Let $[A]$ be a real symmetric $n\times n$ matrix. Denote the $n$ (real) eigenvalues of $[A]$ by $\lambda_i$, and let $\mathbf{x}^{(i)}$ be the corresponding normalized eigenvectors, such that $\mathbf{x}^{(i)}\cdot\mathbf{x}^{(j)} = \delta_{ij}$. Then, for any arbitrary vector $\mathbf{b}$,
$$[A]\mathbf{b} = \sum_{i=1}^{n} \lambda_i \left(\mathbf{x}^{(i)}\cdot\mathbf{b}\right)\mathbf{x}^{(i)}$$
Let $[\Lambda]$ be a diagonal matrix which contains the $n$ eigenvalues of $[A]$ as elements of the diagonal, and let $[Q]$ be a matrix consisting of the $n$ eigenvectors as columns, i.e.
$$[\Lambda] = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}, \qquad [Q] = \begin{bmatrix} \mathbf{x}^{(1)} & \mathbf{x}^{(2)} & \cdots & \mathbf{x}^{(n)} \end{bmatrix}$$
Then
$$[A] = [Q][\Lambda][Q]^T$$
Note that this gives another (generally quite useless) way to invert $[A]$:
$$[A]^{-1} = [Q][\Lambda]^{-1}[Q]^T$$
where $[\Lambda]^{-1}$ is easy to compute since $[\Lambda]$ is diagonal.
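Both the decomposition and the inversion formula can be checked numerically. In this NumPy sketch the symmetric matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])          # example real symmetric matrix

lam, V = np.linalg.eigh(A)          # eigenvalues; eigenvectors as columns of V
L = np.diag(lam)

# Spectral decomposition: A = Q Lambda Q^T
assert np.allclose(A, V @ L @ V.T)

# ...which also gives the inverse, A^-1 = Q Lambda^-1 Q^T
L_inv = np.diag(1.0 / lam)
assert np.allclose(np.linalg.inv(A), V @ L_inv @ V.T)
```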
Square root of a matrix. Let $[A]$ be a real symmetric $n\times n$ matrix. Denote the singular value decomposition of $[A]$ by $[A] = [Q][\Lambda][Q]^T$ as defined above. Suppose that $[A]^{1/2}$ denotes the square root of $[A]$, defined so that
$$[A]^{1/2}[A]^{1/2} = [A]$$
One way to compute $[A]^{1/2}$ is through the decomposition of $[A]$:
$$[A]^{1/2} = [Q][\Lambda]^{1/2}[Q]^T$$
where
$$[\Lambda]^{1/2} = \begin{bmatrix} \sqrt{\lambda_1} & 0 & \cdots & 0 \\ 0 & \sqrt{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sqrt{\lambda_n} \end{bmatrix}$$
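This construction can be verified directly. The NumPy sketch below uses an arbitrary symmetric positive definite example matrix (positive eigenvalues are needed for a real square root):

```python
import numpy as np

A = np.array([[5.0, 2.0],
              [2.0, 5.0]])          # example symmetric positive definite matrix

lam, V = np.linalg.eigh(A)
assert np.all(lam > 0)              # required for a real square root

# sqrt(A) = Q sqrt(Lambda) Q^T: take the square root of each eigenvalue
sqrt_A = V @ np.diag(np.sqrt(lam)) @ V.T

assert np.allclose(sqrt_A @ sqrt_A, A)   # squares back to A
```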


(c) A.F. Bower, 2008 