Appendix B
A Brief Introduction to Tensors and Their Properties
B.1. BASIC PROPERTIES OF TENSORS
B.1.1 Examples of Tensors
The gradient of a vector field is a good example of a tensor. Visualize a vector field: at every point in space, the field has a vector value $\mathbf{u}(\mathbf{x})$. Let $\mathbf{G} = \nabla\mathbf{u}$ represent the gradient of u. By definition, G enables you to calculate the change in u when you move from a point x in space to a nearby point at $\mathbf{x} + d\mathbf{x}$:
$$ d\mathbf{u} = \mathbf{G}\,d\mathbf{x} $$
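As a numerical sketch of the relation $d\mathbf{u} = \mathbf{G}\,d\mathbf{x}$ (using NumPy; the field u(x) below is an arbitrary illustrative example, not one from the text):

```python
import numpy as np

# For the sample field u(x) = (x1*x2, x2**2, x1*x3), the gradient G has
# components G_ij = d(u_i)/d(x_j), and du = G dx for a small step dx.
def u(x):
    return np.array([x[0] * x[1], x[1] ** 2, x[0] * x[2]])

def grad_u(x):
    # Components of G, evaluated analytically at x
    return np.array([
        [x[1], x[0], 0.0],
        [0.0, 2 * x[1], 0.0],
        [x[2], 0.0, x[0]],
    ])

x = np.array([1.0, 2.0, 3.0])
dx = np.array([1e-6, -2e-6, 1e-6])

du_exact = u(x + dx) - u(x)   # actual change in u
du_tensor = grad_u(x) @ dx    # change predicted by du = G dx
print(np.allclose(du_exact, du_tensor, atol=1e-10))
```

The agreement is exact to first order in dx; the remaining discrepancy is of order $|d\mathbf{x}|^2$.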
G is a second order tensor. From this example, we see that when you multiply a vector by a tensor, the result is another vector.
This is a general property of all second order tensors: a tensor is a linear mapping of a vector onto another vector. Two examples, together with the vectors they operate on, are:
• The stress tensor
$$ \mathbf{t} = \boldsymbol{\sigma}\,\mathbf{n} $$
where n is a unit vector normal to a surface, $\boldsymbol{\sigma}$ is the stress tensor, and t is the traction vector acting on the surface.
• The deformation gradient tensor
$$ d\mathbf{w} = \mathbf{F}\,d\mathbf{x} $$
where dx is an infinitesimal line element in an undeformed solid, and dw is the vector representing the deformed line element.
B.1.2 Matrix representation of a tensor
To evaluate and manipulate tensors, we express them as components in a basis, just as for vectors. We can use the displacement gradient to illustrate how this is done. Let $\mathbf{u}(\mathbf{x})$ be a vector field, and let $\mathbf{G} = \nabla\mathbf{u}$ represent the gradient of u. Recall the definition of G:
$$ d\mathbf{u} = \mathbf{G}\,d\mathbf{x} $$
Now, let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and express both du and dx as components. Then, calculate the components of du in terms of dx using the usual rules of calculus:
$$ du_i = \frac{\partial u_i}{\partial x_j}\,dx_j $$
We could represent this as a matrix product:
$$ \begin{bmatrix} du_1 \\ du_2 \\ du_3 \end{bmatrix} = \begin{bmatrix} \partial u_1/\partial x_1 & \partial u_1/\partial x_2 & \partial u_1/\partial x_3 \\ \partial u_2/\partial x_1 & \partial u_2/\partial x_2 & \partial u_2/\partial x_3 \\ \partial u_3/\partial x_1 & \partial u_3/\partial x_2 & \partial u_3/\partial x_3 \end{bmatrix} \begin{bmatrix} dx_1 \\ dx_2 \\ dx_3 \end{bmatrix} $$
From this we see that G can be represented as a $3\times 3$ matrix. The elements of the matrix are known as the components of G in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$. All second order tensors can be represented in this form. For example, a general second order tensor S could be written as
$$ \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} $$
You
have probably already seen the matrix representation of stress and strain
components in introductory courses.
Since S can be represented as a matrix, all operations that can be performed on a $3\times 3$ matrix can also be performed on S. Examples include sums and products, the transpose, inverse, and determinant. One can also compute eigenvalues and eigenvectors for tensors, and thus define the log of a tensor, the square root of a tensor, and so on. These tensor operations are summarized below.
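As a quick sketch of these operations (using NumPy; the sample components below are arbitrary, chosen to be symmetric and positive definite so a square root exists):

```python
import numpy as np

# Once a tensor's components are stored as a 3x3 array, ordinary
# matrix operations apply directly.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

S_T = S.T                     # transpose
S_inv = np.linalg.inv(S)      # inverse
det_S = np.linalg.det(S)      # determinant
tr_S = np.trace(S)            # trace
evals = np.linalg.eigvals(S)  # eigenvalues

# A square root of this symmetric positive-definite tensor can be built
# from its spectral decomposition.
w, V = np.linalg.eigh(S)
sqrt_S = V @ np.diag(np.sqrt(w)) @ V.T
print(np.allclose(sqrt_S @ sqrt_S, S))
```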
Note that the numbers $S_{11}$, $S_{12}$, … depend on the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, just as the components of a vector depend on the basis used to represent the vector. However, just as the magnitude and direction of a vector are independent of the basis, so the properties of a tensor are independent of the basis. That is to say, if S is a tensor and u is a vector, then the vector
$$ \mathbf{v} = \mathbf{S}\,\mathbf{u} $$
has the same magnitude and direction, irrespective of the basis used to represent u, v, and S.
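This basis independence can be checked numerically (a sketch using NumPy; the components of S and u, and the rotation Q, are arbitrary examples):

```python
import numpy as np

# Rotating all components to a new basis changes the numbers, but the
# vector v = S u keeps its magnitude (and direction in space).
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
u = np.array([1.0, -1.0, 2.0])
v = S @ u

theta = 0.3  # rotation about e3, chosen arbitrarily
Q = np.array([[np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])

u_new = Q @ u          # components of u in the rotated basis
S_new = Q @ S @ Q.T    # components of S in the rotated basis
v_new = S_new @ u_new

print(np.allclose(v_new, Q @ v))                             # same vector, new components
print(np.isclose(np.linalg.norm(v_new), np.linalg.norm(v)))  # same magnitude
```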
B.1.3 The difference between a matrix and a tensor
If a tensor is a matrix, why is a matrix not the same thing as a tensor? Well, although you can multiply the three components of a vector u by any $3\times 3$ matrix, the resulting three numbers $(v_1, v_2, v_3)$ may or may not represent the components of a vector. If they are the components of a vector, then the matrix represents the components of a tensor A; if not, then the matrix is just an ordinary old matrix.
To check whether $(v_1, v_2, v_3)$ are the components of a vector, you need to check how they change due to a change of basis. That is to say, choose a new basis, calculate the new components of u in this basis, and calculate the new matrix in this basis (the new elements of the matrix will depend on how the matrix was defined; the elements may or may not change. If they don't, then the matrix cannot be the components of a tensor). Then, evaluate the matrix product to find a new left-hand side, say $(v'_1, v'_2, v'_3)$. If $(v'_1, v'_2, v'_3)$ are related to $(v_1, v_2, v_3)$ by the same transformation that was used to calculate the new components of u, then $(v_1, v_2, v_3)$ are the components of a vector, and, therefore, the matrix represents the components of a tensor.
B.1.4 Creating a tensor using a dyadic product of two vectors
Let a and b be two vectors. The dyadic product of a and b is a second order tensor S denoted by
$$ \mathbf{S} = \mathbf{a}\otimes\mathbf{b} $$
with the property
$$ \mathbf{S}\,\mathbf{u} = (\mathbf{b}\cdot\mathbf{u})\,\mathbf{a} $$
for all vectors u. (Clearly, this maps u onto a vector parallel to a, with magnitude $|\mathbf{a}|\,|\mathbf{b}\cdot\mathbf{u}|$.)
The components of $\mathbf{a}\otimes\mathbf{b}$ in a basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are
$$ S_{ij} = a_i\,b_j $$
Note that not all tensors can be constructed using a dyadic product of only two vectors (this is because $\mathbf{S}\,\mathbf{u}$ always has to be parallel to a, and therefore the representation cannot map a vector onto an arbitrary vector). However, if a, b, and c are three independent vectors (i.e. they do not all lie in the same plane), then all tensors can be constructed as a sum of scalar multiples of the nine possible dyadic products of these vectors.
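The dyadic product is easy to verify numerically (a sketch using NumPy; the vectors are arbitrary examples):

```python
import numpy as np

# The dyad a⊗b has components S_ij = a_i b_j (np.outer), and satisfies
# (a⊗b) u = (b·u) a for any vector u.
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, 0.0, 3.0])
u = np.array([2.0, 1.0, 1.0])

S = np.outer(a, b)   # S_ij = a_i * b_j
print(np.allclose(S @ u, np.dot(b, u) * a))
print(np.linalg.matrix_rank(S))   # a dyad of two vectors has rank 1
```

The rank-1 result reflects the remark above: a single dyad maps every vector onto a multiple of a, so it cannot represent a general tensor.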
B.2. OPERATIONS ON SECOND ORDER TENSORS
• Tensor components
Let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis, and let S be a second order tensor. The components of S in $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ may be represented as a matrix
$$ \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} $$
where
$$ S_{ij} = \mathbf{e}_i \cdot \mathbf{S}\,\mathbf{e}_j $$
The representation of a tensor in terms of its components can also be expressed in dyadic form as
$$ \mathbf{S} = S_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j $$
This representation is
particularly convenient when using polar coordinates, as described in
Appendix E.
• Addition
Let S and T be two tensors. Then $\mathbf{U} = \mathbf{S} + \mathbf{T}$ is also a tensor.
Denote the Cartesian components of U, S, and T by matrices as defined above. The components of U are then related to the components of S and T by
$$ U_{ij} = S_{ij} + T_{ij} $$
• Product of a tensor and a vector
Let u be a vector and S a second order tensor. Then
$$ \mathbf{v} = \mathbf{S}\,\mathbf{u} $$
is a vector.
Let $u_i$ and $v_i$ denote the components of vectors u and v in a Cartesian basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$, and denote the Cartesian components of S as described above. Then
$$ v_i = S_{ij}\,u_j $$
The product
$$ \mathbf{v} = \mathbf{u}\,\mathbf{S} $$
is also a vector. In component form,
$$ v_i = u_j\,S_{ji} $$
Observe that $\mathbf{S}\,\mathbf{u} \neq \mathbf{u}\,\mathbf{S}$ (unless S is symmetric).
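The difference between the two products is easy to see numerically (a sketch using NumPy; the components of S and u are arbitrary, with S deliberately non-symmetric):

```python
import numpy as np

# v_i = S_ij u_j versus v_i = u_j S_ji: for a non-symmetric S the two differ.
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
u = np.array([1.0, 1.0, 1.0])

Su = S @ u   # S u : v_i = S_ij u_j
uS = u @ S   # u S : v_i = u_j S_ji

print(np.allclose(Su, uS))        # False here, since S is not symmetric
print(np.allclose(uS, S.T @ u))   # u S is the same as S^T u
```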
• Product of two tensors
Let T and S be two second order tensors. Then $\mathbf{U} = \mathbf{S}\,\mathbf{T}$ is also a tensor.
Denote the components of U, S, and T by $3\times 3$ matrices. Then,
$$ U_{ij} = S_{ik}\,T_{kj} $$
Note that tensor products, like matrix products, are not commutative; i.e. $\mathbf{S}\,\mathbf{T} \neq \mathbf{T}\,\mathbf{S}$.
• Transpose
Let S be a tensor. The transpose of S is denoted by $\mathbf{S}^T$ and is defined so that
$$ \mathbf{u}\cdot(\mathbf{S}\,\mathbf{v}) = \mathbf{v}\cdot(\mathbf{S}^T\mathbf{u}) $$
Denote the components of S by a $3\times 3$ matrix. The components of $\mathbf{S}^T$ are then
$$ (S^T)_{ij} = S_{ji} $$
i.e. the rows and columns of the matrix are switched.
Note that, if A and B are two tensors, then
$$ (\mathbf{A}\,\mathbf{B})^T = \mathbf{B}^T\,\mathbf{A}^T $$
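Both transpose identities can be spot-checked numerically (a sketch using NumPy; the components are random samples, not values from the text):

```python
import numpy as np

# Check u·(S v) = v·(S^T u) and (A B)^T = B^T A^T on random components.
rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
u = rng.standard_normal(3)
v = rng.standard_normal(3)

print(np.isclose(u @ (S @ v), v @ (S.T @ u)))  # defining property of the transpose
print(np.allclose((A @ B).T, B.T @ A.T))       # transpose of a product
```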
• Trace
Let S be a tensor, and denote the components of S by a $3\times 3$ matrix. The trace of S is denoted by tr(S) or trace(S), and can be computed by summing the diagonal terms of the matrix of components:
$$ \operatorname{trace}(\mathbf{S}) = S_{11} + S_{22} + S_{33} $$
More formally, let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be any Cartesian basis. Then
$$ \operatorname{trace}(\mathbf{S}) = \mathbf{e}_i \cdot \mathbf{S}\,\mathbf{e}_i $$
The trace of a tensor is an example of an invariant of the tensor: you get the same value for trace(S) whatever basis you use to define the matrix of components of S.
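The invariance of the trace can be verified directly (a sketch using NumPy; the components of S and the rotation Q are arbitrary examples):

```python
import numpy as np

# trace(S) computed in a rotated basis is unchanged.
S = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 1.0],
              [2.0, 0.0, 5.0]])
c, s = np.cos(0.7), np.sin(0.7)
Q = np.array([[c, s, 0.0],
              [-s, c, 0.0],
              [0.0, 0.0, 1.0]])   # arbitrary proper orthogonal matrix

S_new = Q @ S @ Q.T   # components of S in the rotated basis
print(np.isclose(np.trace(S_new), np.trace(S)))   # trace is an invariant
```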
• Contraction
Inner product: Let S and T be two second order tensors. The inner product of S and T is a scalar, denoted by $\mathbf{S} : \mathbf{T}$. Represent S and T by their components in a basis. Then
$$ \mathbf{S} : \mathbf{T} = S_{ij}\,T_{ij} $$
Observe that $\mathbf{S} : \mathbf{T} = \mathbf{T} : \mathbf{S}$, and also that $\mathbf{S} : \mathbf{I} = \operatorname{trace}(\mathbf{S})$, where I is the identity tensor.
Outer product: Let S and T be two second order tensors. The outer product of S and T is a scalar, denoted by $\mathbf{S}\cdot\cdot\,\mathbf{T}$. Represent S and T by their components in a basis. Then
$$ \mathbf{S}\cdot\cdot\,\mathbf{T} = S_{ij}\,T_{ji} $$
Observe that $\mathbf{S}\cdot\cdot\,\mathbf{T} = \operatorname{trace}(\mathbf{S}\,\mathbf{T})$.
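Both contractions are one-liners with `np.einsum` (a sketch using NumPy; the sample components are arbitrary):

```python
import numpy as np

# Inner product S:T = S_ij T_ij and outer product S··T = S_ij T_ji.
S = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [0.0, 2.0, 4.0]])
T = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 2.0]])

inner = np.einsum('ij,ij->', S, T)   # S : T
outer = np.einsum('ij,ji->', S, T)   # S ·· T

print(np.isclose(inner, np.einsum('ij,ij->', T, S)))  # S:T = T:S
print(np.isclose(outer, np.trace(S @ T)))             # S··T = trace(S T)
print(np.isclose(np.einsum('ij,ij->', S, np.eye(3)), np.trace(S)))  # S:I = trace(S)
```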
• Determinant
The determinant of a tensor is defined as the determinant of the matrix of its components in a basis. For a second order tensor,
$$ \det(\mathbf{S}) = S_{11}(S_{22}S_{33} - S_{23}S_{32}) - S_{12}(S_{21}S_{33} - S_{23}S_{31}) + S_{13}(S_{21}S_{32} - S_{22}S_{31}) $$
Note that if S and T are two tensors, then
$$ \det(\mathbf{S}\,\mathbf{T}) = \det(\mathbf{S})\,\det(\mathbf{T}) $$
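The product rule for determinants can be spot-checked on random components (a sketch using NumPy):

```python
import numpy as np

# det(S T) = det(S) det(T), verified on random sample components.
rng = np.random.default_rng(1)
S = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

print(np.isclose(np.linalg.det(S @ T),
                 np.linalg.det(S) * np.linalg.det(T)))
```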
• Inverse
Let S be a second order tensor. The inverse of S exists if and only if $\det(\mathbf{S}) \neq 0$, and is defined by
$$ \mathbf{S}^{-1}\,\mathbf{S} = \mathbf{S}\,\mathbf{S}^{-1} = \mathbf{I} $$
where $\mathbf{S}^{-1}$ denotes the inverse of S and I is the identity tensor.
The inverse of a tensor may be computed by calculating the inverse of the matrix of its components. The result cannot be expressed in a compact form for a general three dimensional second order tensor, and is best computed by methods such as Gaussian elimination.
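In practice this is a library call (a sketch using NumPy, whose `inv` routine uses LU factorization, i.e. Gaussian elimination; the sample components are arbitrary):

```python
import numpy as np

# The inverse of a tensor is the inverse of its component matrix.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

assert not np.isclose(np.linalg.det(S), 0.0)  # inverse exists only if det(S) != 0
S_inv = np.linalg.inv(S)

print(np.allclose(S_inv @ S, np.eye(3)))
print(np.allclose(S @ S_inv, np.eye(3)))
```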
• Eigenvalues and eigenvectors (principal values and directions)
Let S be a second order tensor. The scalars $\lambda$ and unit vectors m which satisfy
$$ \mathbf{S}\,\mathbf{m} = \lambda\,\mathbf{m} $$
are known as the eigenvalues and eigenvectors of S, or the principal values and principal directions of S. Note that $\lambda$ may be complex. For a second order tensor in three dimensions, there are generally three values of $\lambda$ and three unique unit vectors m which satisfy this equation. Occasionally, there may be only two or one value of $\lambda$. If this is the case, there are infinitely many possible vectors m that satisfy the equation. The eigenvalues of a tensor, and the components of the eigenvectors, may be computed by finding the eigenvalues and eigenvectors of the matrix of components (see A.3.2).
The eigenvalues of a symmetric tensor are always real. The eigenvalues of a skew tensor are always pure imaginary or zero.
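Both statements can be checked numerically (a sketch using NumPy; the symmetric and skew components below are arbitrary examples):

```python
import numpy as np

# Eigenvalues of a symmetric tensor are real; eigenvalues of a skew
# tensor are pure imaginary or zero.
sym = np.array([[2.0, 1.0, 0.0],
                [1.0, 3.0, 1.0],
                [0.0, 1.0, 2.0]])
skew = np.array([[0.0, 1.0, -2.0],
                 [-1.0, 0.0, 3.0],
                 [2.0, -3.0, 0.0]])

lam_sym, m_sym = np.linalg.eig(sym)
lam_skew = np.linalg.eigvals(skew)

print(np.allclose(lam_sym.imag, 0.0))   # symmetric: all real
print(np.allclose(lam_skew.real, 0.0))  # skew: pure imaginary or zero

# Each eigenpair satisfies S m = lambda m
for lam, m in zip(lam_sym, m_sym.T):
    assert np.allclose(sym @ m, lam * m)
```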
• Change of basis
Let S be a tensor, and let $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be a Cartesian basis. Suppose that the components of S in the basis $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are known to be
$$ \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{bmatrix} $$
Now, suppose that we wish to compute the components of S in a second Cartesian basis, $\{\mathbf{m}_1, \mathbf{m}_2, \mathbf{m}_3\}$. Denote these components by
$$ \begin{bmatrix} S'_{11} & S'_{12} & S'_{13} \\ S'_{21} & S'_{22} & S'_{23} \\ S'_{31} & S'_{32} & S'_{33} \end{bmatrix} $$
To do so, first compute the components of the transformation matrix [Q]:
$$ Q_{ij} = \mathbf{m}_i \cdot \mathbf{e}_j $$
(this is the same matrix you would use to transform vector components from $\{\mathbf{e}_i\}$ to $\{\mathbf{m}_i\}$). Then,
$$ [S'] = [Q][S][Q]^T $$
or, written out in full,
$$ S'_{ij} = Q_{ik}\,S_{kl}\,Q_{jl} $$
To prove this result, let u and v be vectors satisfying
$$ \mathbf{v} = \mathbf{S}\,\mathbf{u} $$
Denote the components of u and v in the two bases by $u_i, v_i$ and $u'_i, v'_i$, respectively. Recall that the vector components are related by
$$ u'_i = Q_{ij}\,u_j, \qquad v'_i = Q_{ij}\,v_j $$
Now, we could express the tensor-vector product in either basis:
$$ v_i = S_{ij}\,u_j, \qquad v'_i = S'_{ij}\,u'_j \quad (1) $$
Substitute for $u'_j$ from above into the second of these two relations; we see that
$$ v'_i = S'_{ij}\,Q_{jk}\,u_k $$
Recall that
$$ v'_i = Q_{ij}\,v_j $$
so, multiplying both sides by $[Q]^T$ (i.e. by $Q_{im}$, summed over i, using the orthogonality $Q_{im}Q_{ij} = \delta_{mj}$) shows that
$$ v_m = Q_{im}\,S'_{ij}\,Q_{jk}\,u_k $$
so, comparing with the first of equation (1),
$$ S_{mk} = Q_{im}\,S'_{ij}\,Q_{jk} \quad \Leftrightarrow \quad [S'] = [Q][S][Q]^T $$
as stated.
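The transformation rule for tensor components can be checked numerically (a sketch using NumPy; Q and the components of S and u are arbitrary examples):

```python
import numpy as np

# Tensor components transform as [S'] = [Q][S][Q]^T, consistent with
# vector components transforming as v' = Q v.
c, s = np.cos(0.5), np.sin(0.5)
Q = np.array([[c, s, 0.0],
              [-s, c, 0.0],
              [0.0, 0.0, 1.0]])   # transformation matrix between the two bases
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
u = np.array([1.0, 2.0, -1.0])

v = S @ u              # v = S u in the first basis
S_new = Q @ S @ Q.T    # components of S in the second basis
u_new = Q @ u
v_new = S_new @ u_new

print(np.allclose(v_new, Q @ v))   # the relation v = S u holds in the new basis too
```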
• Invariants
Invariants of a tensor are functions of the tensor components which remain constant under a basis change. That is to say, the invariant has the same value when computed in two arbitrary bases $\{\mathbf{e}_i\}$ and $\{\mathbf{m}_i\}$. A symmetric second order tensor always has three independent invariants.
Examples of invariants are:
1. The three eigenvalues
2. The determinant
3. The trace
4. The inner and outer products
These are not all independent: for example, any of 2-4 can be calculated in terms of 1.
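The dependence of the other invariants on the eigenvalues can be illustrated numerically (a sketch using NumPy; the symmetric components are arbitrary):

```python
import numpy as np

# For a symmetric tensor, the determinant, trace, and inner product
# S:S can all be computed from the eigenvalues.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam = np.linalg.eigvalsh(S)

print(np.isclose(np.prod(lam), np.linalg.det(S)))  # det = product of eigenvalues
print(np.isclose(np.sum(lam), np.trace(S)))        # trace = sum of eigenvalues
print(np.isclose(np.sum(lam ** 2),
                 np.einsum('ij,ij->', S, S)))      # S:S = sum of squared eigenvalues
```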
B.3. SPECIAL TENSORS
• Identity tensor: The identity tensor I is the tensor such that, for any tensor S or vector v,
$$ \mathbf{I}\,\mathbf{S} = \mathbf{S}\,\mathbf{I} = \mathbf{S}, \qquad \mathbf{I}\,\mathbf{v} = \mathbf{v} $$
In any basis, the identity tensor has components
$$ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$
• Symmetric tensor: A symmetric tensor S has the property
$$ \mathbf{S} = \mathbf{S}^T, \qquad S_{ij} = S_{ji} $$
The components of a symmetric tensor have the form
$$ \begin{bmatrix} S_{11} & S_{12} & S_{13} \\ S_{12} & S_{22} & S_{23} \\ S_{13} & S_{23} & S_{33} \end{bmatrix} $$
so that there are only six independent components of the tensor, instead of nine.
• Skew tensor: A skew tensor S has the property
$$ \mathbf{S} = -\mathbf{S}^T, \qquad S_{ij} = -S_{ji} $$
The components of a skew tensor have the form
$$ \begin{bmatrix} 0 & S_{12} & S_{13} \\ -S_{12} & 0 & S_{23} \\ -S_{13} & -S_{23} & 0 \end{bmatrix} $$
• Orthogonal tensors: An orthogonal tensor S has the property
$$ \mathbf{S}\,\mathbf{S}^T = \mathbf{S}^T\mathbf{S} = \mathbf{I} $$
An orthogonal tensor must have $\det(\mathbf{S}) = \pm 1$; a tensor with $\det(\mathbf{S}) = +1$ is known as a proper orthogonal tensor.
