
			Linear Algebra

*INTRO This chapter describes the commands for doing linear
algebra. They can be used to manipulate vectors, represented as lists,
and matrices, represented as lists of lists.

*CMD LeviCivita --- totally anti-symmetric Levi-Civita symbol
*STD
*CALL
	LeviCivita(list)

*PARMS

list - a list of integers 1 .. n in some order

*DESC

{LeviCivita} implements the Levi-Civita symbol. This is generally
useful for tensor calculus. {list} should be a list of the integers
1 through n in some order. The function returns 1 if the integers
form an even permutation of {1,2,...,n}, e.g. {LeviCivita( {1,2,3} )}
returns 1. Swapping two elements of the list flips the sign, so
{LeviCivita( {2,1,3} )} evaluates to -1. If any integer occurs more
than once, the result is 0.

*E.G.

	In> LeviCivita({1,2,3})
	Out> 1;
	In> LeviCivita({2,1,3})
	Out> -1;
	In> LeviCivita({2,2,3})
	Out> 0;
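For readers working outside Yacas, the same computation can be sketched in plain Python; the function name is illustrative, not part of Yacas:

```python
# Sign of a permutation of 1..n; 0 if entries repeat or fall outside 1..n.
# A sketch mirroring LeviCivita, not the Yacas implementation.
def levi_civita(perm):
    n = len(perm)
    if sorted(perm) != list(range(1, n + 1)):
        return 0  # repeated or out-of-range entries
    sign = 1
    p = list(perm)
    for i in range(n):
        while p[i] != i + 1:
            j = p[i] - 1
            p[i], p[j] = p[j], p[i]  # each transposition flips the sign
            sign = -sign
    return sign
```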

*SEE Permutations

*CMD Permutations --- get all permutations of a list
*STD
*CALL
	Permutations(list)

*PARMS

list - a list of elements

*DESC

{Permutations} returns a list containing all the permutations of
the original list.

*E.G.

	In> Permutations({a,b,c})
	Out> {{a,b,c},{a,c,b},{c,a,b},{b,a,c},
	{b,c,a},{c,b,a}};
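A minimal Python counterpart using the standard library; the ordering of the results may differ from the Yacas output above:

```python
from itertools import permutations

# All permutations of a list, each returned as a list.
def all_permutations(lst):
    return [list(p) for p in permutations(lst)]
```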

*SEE LeviCivita

*CMD InProduct --- inner product of vectors
*STD
*CALL
	InProduct(a,b)
	a . b (prec. 3)

*PARMS

{a}, {b} -- vectors of equal length

*DESC

The inner product of the two vectors "a" and "b" is returned. The
vectors need to have the same size.

*E.G.

	In> {a,b,c} . {d,e,f};
	Out> a*d+b*e+c*f;
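The same operation for numeric vectors, sketched in Python (illustrative names):

```python
# Inner (dot) product of two equal-length vectors, as InProduct computes.
def in_product(a, b):
    if len(a) != len(b):
        raise ValueError("vectors must have the same length")
    return sum(x * y for x, y in zip(a, b))
```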

*SEE CrossProduct

*CMD CrossProduct --- outer product of vectors
*STD
*CALL
	CrossProduct(a,b)
	a X b  (prec. 3)

*PARMS

{a}, {b} -- three-dimensional vectors

*DESC

The outer product (also called the cross product) of the vectors "a"
and "b" is returned. The result is perpendicular to both "a" and
"b" and its length is the product of the lengths of the vectors
times the sine of the angle between them. Both "a" and "b" have to
be three-dimensional.

*E.G.

	In> {a,b,c} X {d,e,f};
	Out> {b*f-c*e,c*d-a*f,a*e-b*d};
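A numeric Python sketch of the same formula (component order as in the Yacas output above):

```python
# Cross product of two three-dimensional vectors.
def cross_product(a, b):
    if len(a) != 3 or len(b) != 3:
        raise ValueError("vectors must be three-dimensional")
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```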

*SEE InProduct

*CMD ZeroVector --- create a vector with all zeroes
*STD
*CALL
	ZeroVector(n)

*PARMS

n - length of the vector to return

*DESC

This command returns a vector of length "n", filled with zeroes.

*E.G.

	In> ZeroVector(4)
	Out> {0,0,0,0};

*SEE BaseVector, ZeroMatrix, IsZeroVector

*CMD BaseVector --- base vector
*STD
*CALL
	BaseVector(k, n)

*PARMS

k - index of the base vector to construct

n - dimension of the vector

*DESC

This command returns the "k"-th base vector of dimension "n". This
is a vector of length "n" with all zeroes except for the "k"-th
entry, which contains a 1.

*E.G.

	In> BaseVector(2,4)
	Out> {0,1,0,0};

*SEE ZeroVector, Identity

*CMD Identity --- make identity matrix
*STD
*CALL
	Identity(n)

*PARMS

n - size of the matrix

*DESC

This command returns the identity matrix of size "n" by "n". This
matrix has ones on the diagonal while the other entries are zero.

*E.G.

	In> Identity(3)
	Out> {{1,0,0},{0,1,0},{0,0,1}};

*SEE BaseVector, ZeroMatrix, DiagonalMatrix

*CMD ZeroMatrix --- make a zero matrix
*STD
*CALL
	ZeroMatrix(n, m)

*PARMS

n - number of rows

m - number of columns

*DESC

This command returns a matrix with "n" rows and "m" columns,
completely filled with zeroes.

*E.G.

	In> ZeroMatrix(3,4)
	Out> {{0,0,0,0},{0,0,0,0},{0,0,0,0}};

*SEE ZeroVector, Identity

*CMD DiagonalMatrix --- construct a diagonal matrix
*STD
*CALL
	DiagonalMatrix(d)

*PARMS

d - list of values to put on the diagonal

*DESC

This command constructs a diagonal matrix, that is, a square matrix
whose off-diagonal entries are all zero. The elements of the vector
"d" are put on the diagonal.

*E.G.

	In> DiagonalMatrix(1 .. 4)
	Out> {{1,0,0,0},{0,2,0,0},{0,0,3,0},{0,0,0,4}};
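A Python sketch of the same constructor for a list of diagonal values:

```python
# Square matrix with the elements of d on the diagonal, zeroes elsewhere.
def diagonal_matrix(d):
    n = len(d)
    return [[d[i] if i == j else 0 for j in range(n)] for i in range(n)]
```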

*SEE Identity, ZeroMatrix

*CMD IsMatrix --- test for a matrix
*STD
*CALL
	IsMatrix(M)

*PARMS

M - a mathematical object

*DESC

{IsMatrix} returns {True} if {M} is a matrix, {False} otherwise. Something is
considered to be a matrix if it is a list and all the entries of this
list are themselves lists.

*E.G.

	In> IsMatrix(ZeroMatrix(3,4))
	Out> True;
	In> IsMatrix(ZeroVector(4))
	Out> False;
	In> IsMatrix(3)
	Out> False;

*SEE IsVector

*CMD Normalize --- normalize a vector
*STD
*CALL
	Normalize(v)

*PARMS

v - a vector

*DESC

Return the normalized (unit) vector parallel to {v}: a vector having the same
direction but with length 1.

*E.G.

	In> Normalize({3,4})
	Out> {3/5,4/5};
	In> % . %
	Out> 1;
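For numeric vectors the same operation can be sketched in Python (floating-point, unlike Yacas' exact rationals above):

```python
import math

# Scale a vector to unit length while keeping its direction.
def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]
```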

*SEE InProduct, CrossProduct

*CMD Transpose --- get transpose of a matrix
*STD
*CALL
	Transpose(M)

*PARMS

M - a matrix

*DESC

{Transpose} returns the transpose of a matrix $M$. Because matrices are
just lists of lists, this operation is also useful for general lists
of lists.

*E.G.

	In> Transpose({{a,b}})
	Out> {{a},{b}};
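Since a matrix is a list of lists, transposition is a one-liner in Python as well:

```python
# Swap rows and columns of a list-of-lists matrix.
def transpose(m):
    return [list(col) for col in zip(*m)]
```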

*CMD Determinant --- determinant of a matrix
*STD
*CALL
	Determinant(M)

*PARMS

M - a matrix

*DESC

Returns the determinant of the matrix {M}.

*E.G.

	In> DiagonalMatrix(1 .. 4)
	Out> {{1,0,0,0},{0,2,0,0},{0,0,3,0},{0,0,0,4}};
	In> Determinant(%)
	Out> 24;
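The textbook definition by cofactor expansion along the first row can be sketched in Python; this is exponential in the matrix size, so it is only a sketch for small matrices:

```python
# Determinant via cofactor expansion along the first row.
def determinant(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * determinant([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))
```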

*CMD Trace --- trace of a matrix
*STD
*CALL
	Trace(M)

*PARMS

M - a matrix

*DESC

{Trace} returns the trace of a matrix $M$ (defined as the sum of the
elements on the diagonal of the matrix).

*E.G.

	In> DiagonalMatrix(1 .. 4)
	Out> {{1,0,0,0},{0,2,0,0},{0,0,3,0},{0,0,0,4}};
	In> Trace(%)
	Out> 10;

*CMD Inverse --- get inverse of a matrix
*STD
*CALL
	Inverse(M)

*PARMS

M - a matrix

*DESC

{Inverse} returns the inverse of the matrix $M$. The determinant of $M$
should be non-zero. Because this function uses {Determinant} to
calculate the inverse, matrices with symbolic (non-numeric) elements
can also be inverted.

*E.G.

	In> DiagonalMatrix({a,b,c})
	Out> {{a,0,0},{0,b,0},{0,0,c}};
	In> Inverse(%)
	Out> {{(b*c)/(a*b*c),0,0},{0,(a*c)/(a*b*c),0},
	{0,0,(a*b)/(a*b*c)}};
	In> Simplify(%)
	Out> {{1/a,0,0},{0,1/b,0},{0,0,1/c}};

*SEE Determinant

*CMD Minor --- get a minor of a matrix
*STD
*CALL
	Minor(M,i,j)

*PARMS

M - a matrix

i, j - positive integers

*DESC

{Minor} returns the minor of the matrix {M} around the element
($i$, $j$). The minor is the determinant of the matrix obtained by
deleting the "i"-th row and the "j"-th column.

*E.G.

	In> A := {{1,2,3}, {4,5,6}, {7,8,9}};
	Out> {{1,2,3},{4,5,6},{7,8,9}};
	In> PrettyForm(A);
	
	/                    \
	| ( 1 ) ( 2 ) ( 3 )  |
	|                    |
	| ( 4 ) ( 5 ) ( 6 )  |
	|                    |
	| ( 7 ) ( 8 ) ( 9 )  |
	\                    /
	
	Out> True;
	In> Minor(A,1,2);
	Out> -6;
	In> Determinant({{2,3}, {8,9}});
	Out> -6;
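A Python sketch of the same definition (1-based indices as in the Yacas call; `_det` is a small cofactor-expansion helper):

```python
# Cofactor-expansion determinant, used below.
def _det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * _det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# Minor around 1-based position (i, j): delete row i and column j,
# then take the determinant of what remains.
def minor(m, i, j):
    reduced = [row[:j - 1] + row[j:]
               for k, row in enumerate(m) if k != i - 1]
    return _det(reduced)
```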

*SEE CoFactor, Determinant, Inverse

*CMD CoFactor --- cofactor of a matrix
*STD
*CALL
	CoFactor(M,i,j)

*PARMS

M - a matrix

i, j - positive integers

*DESC

{CoFactor} returns the cofactor of the matrix {M} around the element
($i$, $j$). The cofactor is the corresponding minor times
$(-1)^(i+j)$.

*E.G.

	In> A := {{1,2,3}, {4,5,6}, {7,8,9}};
	Out> {{1,2,3},{4,5,6},{7,8,9}};
	In> PrettyForm(A);
	
	/                    \
	| ( 1 ) ( 2 ) ( 3 )  |
	|                    |
	| ( 4 ) ( 5 ) ( 6 )  |
	|                    |
	| ( 7 ) ( 8 ) ( 9 )  |
	\                    /
	
	Out> True;
	In> CoFactor(A,1,2);
	Out> 6;
	In> Minor(A,1,2);
	Out> -6;
	In> Minor(A,1,2) * (-1)^(1+2);
	Out> 6;

*SEE Minor, Determinant, Inverse

*CMD SolveMatrix --- solve a linear system
*STD
*CALL
	SolveMatrix(M,v)

*PARMS

M - a matrix

v - a vector

*DESC

{SolveMatrix} returns the vector $x$ that satisfies
the equation $M*x = v$. The determinant of $M$ should be non-zero.

*E.G.

	In> A := {{1,2}, {3,4}};
	Out> {{1,2},{3,4}};
	In> v := {5,6};
	Out> {5,6};
	In> x := SolveMatrix(A, v);
	Out> {-4,9/2};
	In> A * x;
	Out> {5,6};
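A numeric sketch of the same computation in Python, using exact rational arithmetic via the standard library (Gauss-Jordan elimination; assumes a non-zero determinant, as the text requires):

```python
from fractions import Fraction

# Solve M*x = v exactly, assuming Det(M) != 0.
def solve_matrix(m, v):
    n = len(m)
    # Augmented matrix [M | v] over the rationals.
    a = [[Fraction(x) for x in row] + [Fraction(v[i])]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if a[r][col] != 0)
        a[col], a[pivot] = a[pivot], a[col]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]
```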

*SEE Inverse, Solve, PSolve, Determinant

*CMD CharacteristicEquation --- get characteristic polynomial of a matrix
*STD
*CALL
	CharacteristicEquation(matrix,var)

*PARMS

matrix - a matrix

var - a free variable

*DESC

{CharacteristicEquation} returns the characteristic polynomial of
"matrix" in the variable "var", that is, $Det(matrix - var*Identity)$.
The zeros of this polynomial are the eigenvalues of the matrix.

*E.G.

	In> DiagonalMatrix({a,b,c})
	Out> {{a,0,0},{0,b,0},{0,0,c}};
	In> CharacteristicEquation(%,x)
	Out> (a-x)*(b-x)*(c-x);
	In> Expand(%,x)
	Out> (b+a+c)*x^2-x^3-((b+a)*c+a*b)*x+a*b*c;

*SEE EigenValues, EigenVectors

*CMD EigenValues --- get eigenvalues of a matrix
*STD
*CALL
	EigenValues(matrix)

*PARMS

{matrix} -- a square matrix

*DESC

{EigenValues} returns the eigenvalues of a matrix. The eigenvalues
$x$ of a matrix $M$ are the numbers such that $M*v=x*v$ for some
nonzero vector $v$.

The function first determines the characteristic polynomial
$Det(matrix - x*Identity)$, then factorizes it and returns its roots.

*E.G.

	In> M:={{1,2},{2,1}}
	Out> {{1,2},{2,1}};
	In> EigenValues(M)
	Out> {3,-1};

*SEE EigenVectors, CharacteristicEquation

*CMD EigenVectors --- get eigenvectors of a matrix
*STD
*CALL
	EigenVectors(A,eigenvalues)

*PARMS

{A} -- a square matrix

{eigenvalues} -- list of eigenvalues as returned by {EigenValues}

*DESC

{EigenVectors} returns a list of the eigenvectors of a matrix.
It uses the eigenvalues and the matrix to set up n equations with
n unknowns for each eigenvalue, and then calls {Solve} to determine
the values of each vector.

*E.G.

	In> M:={{1,2},{2,1}}
	Out> {{1,2},{2,1}};
	In> e:=EigenValues(M)
	Out> {3,-1};
	In> EigenVectors(M,e)
	Out> {{-ki2/ -1,ki2},{-ki2,ki2}};

*SEE EigenValues, CharacteristicEquation

*CMD IsHermitian --- test for a Hermitian matrix
*STD
*CALL
	IsHermitian(A)

*PARMS

{A} -- a square matrix

*DESC

{IsHermitian(A)} returns {True} if {A} is Hermitian and {False}
otherwise. $A$ is a Hermitian matrix iff Conjugate(Transpose($A$)) = $A$.
A real matrix is Hermitian iff it is symmetric.

*E.G.

	In> IsHermitian({{0,I},{-I,0}})
	Out> True;
	In> IsHermitian({{0,I},{2,0}})
	Out> False;

*SEE IsUnitary

*CMD IsOrthogonal --- test for an orthogonal matrix
*STD
*CALL
	IsOrthogonal(A)

*PARMS

A - square matrix

*DESC

{IsOrthogonal(A)} returns {True} if {A} is orthogonal and {False}
otherwise. $A$ is orthogonal iff $A$*Transpose($A$) = Identity, or
equivalently, iff Inverse($A$) = Transpose($A$).

*E.G.

	In> A := {{1,2,2},{2,1,-2},{-2,2,-1}};
	Out> {{1,2,2},{2,1,-2},{-2,2,-1}};
	In> PrettyForm(A/3)
	/                      \
	| / 1 \  / 2 \ / 2 \   |
	| | - |  | - | | - |   |
	| \ 3 /  \ 3 / \ 3 /   |
	|                      |
	| / 2 \  / 1 \ / -2 \  |
	| | - |  | - | | -- |  |
	| \ 3 /  \ 3 / \ 3  /  |
	|                      |
	| / -2 \ / 2 \ / -1 \  |
	| | -- | | - | | -- |  |
	| \ 3  / \ 3 / \ 3  /  |
	\                      /
	Out> True;
	In> IsOrthogonal(A/3)
	Out> True;
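The defining check $A$*Transpose($A$) = Identity can be sketched in Python with exact rational arithmetic; the entry ($i$, $j$) of that product is the inner product of rows $i$ and $j$:

```python
from fractions import Fraction

# A is orthogonal iff its rows form an orthonormal system,
# i.e. A * Transpose(A) equals the identity matrix.
def is_orthogonal(a):
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    for i in range(n):
        for j in range(n):
            dot = sum(a[i][k] * a[j][k] for k in range(n))
            if dot != (1 if i == j else 0):
                return False
    return True
```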


*CMD IsSymmetric --- test for a symmetric matrix
*STD
*CALL
	IsSymmetric(A)

*PARMS

{A} -- a square matrix

*DESC

{IsSymmetric(A)} returns {True} if {A} is symmetric and {False} otherwise.
$A$ is symmetric iff Transpose($A$) = $A$. A real Hermitian matrix
is symmetric.

*E.G.

	In> A := {{1,0,0,0,1},{0,2,0,0,0},{0,0,3,0,0},
	  {0,0,0,4,0},{1,0,0,0,5}};
	In> PrettyForm(A)
	/                                \
	| ( 1 ) ( 0 ) ( 0 ) ( 0 ) ( 1 )  |
	|                                |
	| ( 0 ) ( 2 ) ( 0 ) ( 0 ) ( 0 )  |
	|                                |
	| ( 0 ) ( 0 ) ( 3 ) ( 0 ) ( 0 )  |
	|                                |
	| ( 0 ) ( 0 ) ( 0 ) ( 4 ) ( 0 )  |
	|                                |
	| ( 1 ) ( 0 ) ( 0 ) ( 0 ) ( 5 )  |
	\                                /
	Out> True;
	In> IsSymmetric(A)
	Out> True;
		

*SEE IsHermitian, IsSkewSymmetric

*CMD IsSkewSymmetric --- test for a skew-symmetric matrix
*STD
*CALL
	IsSkewSymmetric(A)

*PARMS

{A} -- a square matrix

*DESC

{IsSkewSymmetric(A)} returns {True} if {A} is skew symmetric and {False}
otherwise. $A$ is skew symmetric iff Transpose($A$) = $-A$.

*E.G.

	In> A := {{0,-1},{1,0}}
	Out> {{0,-1},{1,0}};
	In> PrettyForm(%)
	/               \
	| ( 0 ) ( -1 )  |
	|               |
	| ( 1 ) ( 0 )   |
	\               /
	Out> True;
	In> IsSkewSymmetric(A);
	Out> True;

*SEE IsSymmetric, IsHermitian
 
*CMD IsUnitary --- test for a unitary matrix
*STD
*CALL
	IsUnitary(A)

*PARMS

{A} -- a square matrix

*DESC

{IsUnitary(A)} returns {True} if {A} is unitary and {False} otherwise.

A matrix $A$ is unitary iff $A^(-1)$ = Transpose( Conjugate($A$) ). This is
equivalent to the fact that the columns of $A$ form an orthonormal system
(with respect to the scalar product defined by {InProduct}).

*E.G.

	In> IsUnitary({{0,I},{-I,0}})
	Out> True;
	In> IsUnitary({{0,I},{2,0}})
	Out> False;
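A floating-point Python sketch of the same test; entry ($i$, $j$) of $A$ * Conjugate(Transpose($A$)) is the complex inner product of rows $i$ and $j$:

```python
# A is unitary iff A * conjugate-transpose(A) = Identity, i.e. its rows
# are orthonormal under the complex inner product.  Floating-point check.
def is_unitary(a, tol=1e-12):
    n = len(a)
    for i in range(n):
        for j in range(n):
            s = sum(complex(a[i][k]) * complex(a[j][k]).conjugate()
                    for k in range(n))
            if abs(s - (1 if i == j else 0)) > tol:
                return False
    return True
```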

*SEE IsHermitian, IsSymmetric

*CMD IsIdempotent --- test whether a matrix is idempotent
*STD
*CALL
	IsIdempotent(A)

*PARMS

{A} -- a square matrix

*DESC

{IsIdempotent(A)} returns {True} if {A} is idempotent and {False} otherwise.
$A$ is idempotent iff $A^2$ = $A$. Note that this also implies that $A$
raised to any positive power is $A$.

*E.G.

	In> IsIdempotent(ZeroMatrix(10,10));
	Out> True;
	In> IsIdempotent(Identity(20))
	Out> True;
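The check $A^2$ = $A$ is a matrix multiplication away; a Python sketch on lists of lists:

```python
# Plain list-of-lists matrix product.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# A is idempotent iff A*A = A.
def is_idempotent(a):
    return mat_mul(a, a) == a
```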


*CMD JacobianMatrix --- calculate the Jacobian matrix of n functions in n variables
*STD
*CALL
	JacobianMatrix(functions,variables)

*PARMS

{functions} -- An {n} dimensional vector of functions

{variables} -- An {n} dimensional vector of variables

*DESC

The function {JacobianMatrix} calculates the Jacobian matrix
of n functions in n variables.

The {ij}-th element of the Jacobian matrix is defined as the derivative
of the {i}-th function with respect to the {j}-th variable.

*E.G.

	In> JacobianMatrix( {Sin(x),Cos(y)}, {x,y} ); 
	Out> {{Cos(x),0},{0,-Sin(y)}};
	In> PrettyForm(%)
	/                                 \
	| ( Cos( x ) ) ( 0 )              |
	|                                 |
	| ( 0 )        ( -( Sin( y ) ) )  |
	\                                 /
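When the functions are numeric rather than symbolic, the Jacobian can be approximated by central differences; a hedged Python sketch (an approximation, not Yacas' symbolic differentiation):

```python
import math

# Numeric Jacobian of a vector of callables at a point, entry (i, j)
# approximating the derivative of the i-th function w.r.t. variable j.
def jacobian_matrix(funcs, point, h=1e-6):
    n = len(point)
    jac = []
    for f in funcs:
        row = []
        for j in range(n):
            up, dn = list(point), list(point)
            up[j] += h
            dn[j] -= h
            row.append((f(*up) - f(*dn)) / (2 * h))
        jac.append(row)
    return jac
```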


*CMD VandermondeMatrix --- calculate the Vandermonde matrix
*STD
*CALL
	VandermondeMatrix(vector)
*PARMS

{vector} -- An {n} dimensional vector 

*DESC

The function {VandermondeMatrix} calculates the Vandermonde matrix
of a vector.

The {ij}-th element of the Vandermonde matrix is defined as {vector[j]^(i-1)}.

*E.G.
	In> VandermondeMatrix({1,2,3,4})
	Out> {{1,1,1,1},{1,2,3,4},{1,4,9,16},{1,8,27,64}};
	In> PrettyForm(%)
	/                            \
	| ( 1 ) ( 1 ) ( 1 )  ( 1 )   |
	|                            |
	| ( 1 ) ( 2 ) ( 3 )  ( 4 )   |
	|                            |
	| ( 1 ) ( 4 ) ( 9 )  ( 16 )  |
	|                            |
	| ( 1 ) ( 8 ) ( 27 ) ( 64 )  |
	\                            /
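The construction is direct to sketch in Python, matching the layout of the Yacas output above (row $i$ holds each vector entry raised to the power $i-1$):

```python
# Vandermonde matrix: element (i, j) is vector[j] ** (i - 1).
def vandermonde_matrix(v):
    return [[x ** i for x in v] for i in range(len(v))]
```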

*CMD HessianMatrix --- calculate the Hessian matrix
*STD
*CALL
	HessianMatrix(function,var)
*PARMS

{function} -- a function in {n} variables

{var} -- an {n} dimensional vector of variables

*DESC

The function {HessianMatrix} calculates the Hessian matrix
of a function in $n$ variables.

The {ij}-th element of the Hessian matrix is defined as
$ Deriv(var[i]) Deriv(var[j]) function $. If the second
order mixed partials are continuous, then the Hessian
matrix is symmetric.

The Hessian matrix is used in the second derivative test 
to discern if a critical point is a local maximum, local
minimum or a saddle point.


*E.G.

	In> HessianMatrix(3*x^2-2*x*y+y^2-8*y, {x,y} )
	Out> {{6,-2},{-2,2}};
	In> PrettyForm(%)
	/                \
	| ( 6 )  ( -2 )  |
	|                |
	| ( -2 ) ( 2 )   |
	\                /




*CMD WronskianMatrix --- calculate the Wronskian matrix
*STD
*CALL
	WronskianMatrix(func,var)
*PARMS

{func} -- An {n} dimensional vector of functions

{var} -- A variable to differentiate with respect to

*DESC

The function {WronskianMatrix} calculates the Wronskian matrix
of $n$ functions.

The Wronskian matrix is created by putting each function as the
first element of each column, and filling in the rest of each
column by the ($i-1$)-th derivative, where $i$ is the current row.

The Wronskian matrix is used to check whether $n$ functions, usually
solutions to a differential equation, are linearly independent.
If the determinant of the Wronskian matrix is nonzero at some point,
the functions are linearly independent; if the functions are linearly
dependent, the determinant vanishes identically.

*E.G.
	In> WronskianMatrix({Sin(x),Cos(x),x^4},x);
	Out> {{Sin(x),Cos(x),x^4},{Cos(x),-Sin(x),4*x^3},
	  {-Sin(x),-Cos(x),12*x^2}};
	In> PrettyForm(%)
	/                                                 \
	| ( Sin( x ) )      ( Cos( x ) )      /  4 \      |
	|                                     \ x  /      |
	|                                                 |
	| ( Cos( x ) )      ( -( Sin( x ) ) ) /      3 \  |
	|                                     \ 4 * x  /  |
	|                                                 |
	| ( -( Sin( x ) ) ) ( -( Cos( x ) ) ) /       2 \ |
	|                                     \ 12 * x  / |
	\                                                 /
The last element is a linear combination of the first two, so the determinant is zero:
	In> Determinant( WronskianMatrix( {x^4,x^3,2*x^4 
	  + 3*x^3},x ) )
	Out> x^4*3*x^2*(24*x^2+18*x)-x^4*(8*x^3+9*x^2)*6*x
	  +(2*x^4+3*x^3)*4*x^3*6*x-4*x^6*(24*x^2+18*x)+x^3
	  *(8*x^3+9*x^2)*12*x^2-(2*x^4+3*x^3)*3*x^2*12*x^2;
	In> Simplify(%)
	Out> 0;


*CMD SylvesterMatrix --- calculate the Sylvester matrix of two polynomials
*STD
*CALL
	SylvesterMatrix(poly1,poly2,variable)

*PARMS

{poly1} -- polynomial

{poly2} -- polynomial

{variable} -- variable to express the matrix for

*DESC

The function {SylvesterMatrix} calculates the Sylvester matrix
for a pair of polynomials.

The Sylvester matrix is closely related to the resultant, which
is defined as the determinant of the Sylvester matrix. Two polynomials
share common roots only if the resultant is zero.

*E.G.

	In> ex1:= x^2+2*x-a
	Out> x^2+2*x-a;
	In> ex2:= x^2+a*x-4
	Out> x^2+a*x-4;
	In> SylvesterMatrix(ex1,ex2,x)
	Out> {{1,2,-a,0},{0,1,2,-a},
	  {1,a,-4,0},{0,1,a,-4}};
	In> Determinant(%)
	Out> 16-a^2*a- -8*a-4*a+a^2- -2*a^2-16-4*a;
	In> Simplify(%)
	Out> 3*a^2-a^3;

The above example shows that the two polynomials have common
zeros if $a = 3$ or $a = 0$, since the resultant $a^2*(3-a)$ then vanishes.
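For numeric coefficients the Sylvester matrix and the resultant can be sketched in Python; the polynomials are given as coefficient lists in descending order of powers, and all names here are illustrative:

```python
# Cofactor-expansion determinant, used to compute the resultant.
def _det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * _det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

# Sylvester matrix of p (degree m) and q (degree n): n shifted copies
# of p's coefficients stacked over m shifted copies of q's.
def sylvester_matrix(p, q):
    m, n = len(p) - 1, len(q) - 1  # degrees of p and q
    size = m + n
    rows = [[0] * i + p + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + q + [0] * (size - n - 1 - i) for i in range(m)]
    return rows

# The resultant is the determinant of the Sylvester matrix.
def resultant(p, q):
    return _det(sylvester_matrix(p, q))
```

For instance, substituting $a = 3$ into the example above gives the polynomials $x^2+2*x-3$ and $x^2+3*x-4$, whose resultant is zero, consistent with the shared root $x = 1$.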

*SEE Determinant, Simplify, Solve, PSolve


