Part IV. Linear Differential Equations
Section 4. Matrix Methods
In this section, you will learn enough about vectors and matrices in Mathematica to use them to obtain vector solutions to linear systems of the form
x' = a x + b y
y' = c x + d y
where x and y are functions of t and a, b, c, d are constants. Three-dimensional systems will also be considered. Vectors and matrices arise naturally when the scalar equations are replaced by a single vector equation v' = Av, where v is a vector-valued function having the unknown functions as its components and A is a square matrix.
Warning: It is assumed that the reader knows what eigenvectors and eigenvalues are and understands the role they play in formulating solutions to systems of differential equations. (See Ledder, Chapter 6.) Our principal goal is to demonstrate how to handle vectors, matrices, eigenvectors, etc. so that Mathematica can be applied successfully to the analysis of such systems.
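To make the vector formulation concrete, here is a minimal sketch of how the scalar system above becomes the single equation v' = Av in Mathematica. The names Acoef and vfun are placeholders used only here, not elsewhere in this section.
Acoef = {{a, b}, {c, d}};            (* coefficient matrix of the 2 x 2 system *)
vfun = {x[t], y[t]};                 (* vector of unknown functions *)
Thread[D[vfun, t] == Acoef.vfun]     (* reproduces x'[t] == a x[t] + b y[t], y'[t] == c x[t] + d y[t] *)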
Matrices and Vectors
A vector in Mathematica is represented as a list.
In[4]:=
u = {1,2,3}
v = {a,b,c}
Out[4]=
Out[5]=
The dot product is obtained as u.v (put a period between u and v).
In[3]:=
u.v
Out[3]=
A matrix is a list of lists. Each list is a row.
In[1]:=
A = {{1,2,3},{4,5,6},{7,8,9}}
Out[1]=
The function MatrixForm will display A as a matrix.
In[2]:=
MatrixForm[A]
Out[2]//MatrixForm=
The MatrixForm function displays a one-dimensional list (i.e. a vector) as a column. The next entry shows an alternate way to apply the MatrixForm function.
In[6]:=
u//MatrixForm
Out[6]//MatrixForm=
The matrix A and the vector v are multiplied in this order using A.v. Mathematica interprets v as a column vector and outputs the product as a vector (i.e. a list).
In[6]:=
A.v
Out[6]=
In[7]:=
A.v//MatrixForm
Out[7]//MatrixForm=
The matrix A and the vector v can also be multiplied in the opposite order using v.A. Now Mathematica interprets v as a row vector and still outputs the product as a vector (i.e. a list).
In[8]:=
v.A
Out[8]=
The MatrixForm function displays this product as a column also.
In[9]:=
v.A//MatrixForm
Out[9]//MatrixForm=
The Table function can be used to make matrices.
In[10]:=
B = Table[ 1/(m+n^2), {m,1,3}, {n,0,2} ]
Out[10]=
The determinant of B is obtained with the entry
Det[B]
In[11]:=
MatrixForm[B]
Det[B]
Out[11]//MatrixForm=
Out[12]=
Since B's determinant is not zero, it has an inverse. It can be found using
Inverse[B]
In[13]:=
Binverse = Inverse[B]
B.Binverse
Out[13]=
Out[14]=
Be careful: the entries 1/B and B^(-1) output the matrix whose entries are the reciprocals of the entries of B, not the matrix inverse of B.
In[15]:=
1/B
Out[15]=
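As a quick check (a sketch, not part of the worked example above), multiplying B by the element-wise reciprocal does not give the identity matrix, while multiplying by Inverse[B] does.
B.(1/B) // MatrixForm        (* element-wise reciprocal: not the identity matrix *)
B.Inverse[B] // MatrixForm   (* the 3 x 3 identity matrix *)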
To find B's characteristic polynomial, as a function of t, enter
CharacteristicPolynomial[B,t]
In[29]:=
CharacteristicPolynomial[B,t]
Out[29]=
The Euclidean length of a vector v can be found using Norm[v].
In[31]:=
v
Norm[v]
Out[31]=
Out[32]=
The absolute value function is applied to the components a, b, c because they might be complex. If a, b, c are known to be real numbers, then the Euclidean norm of v can also be found as follows.
In[33]:=
Sqrt[v.v]
Out[33]=
The entry Norm[v,∞] calculates what is called the "infinity" or "max" norm of v. This is the maximum of the absolute values of the entries.
In[36]:=
u
Norm[u,∞]
Out[36]=
Out[37]=
Note. The infinity symbol can be entered using the symbol palette or by typing \[Infinity]. Typing the lowercase word "infinity" will not work, but the built-in symbol Infinity (with a capital I) can be used instead, as shown below.
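For example, here is a one-line sketch using the built-in symbol:
Norm[u, Infinity]   (* identical to Norm[u, ∞]; the max norm of u *)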
The entry B[[i,j]] outputs the i,j entry of matrix B. The entry v[[k]] is the kth component of the vector v.
In[43]:=
{B[[2,2]], v[[2]]}
Out[43]=
The jth column of matrix B is obtained using Take[B,All,{j,j}]. The entry Take[B,{i,i}] outputs its ith row. Both results are returned as nested lists (a 3 x 1 and a 1 x 3 matrix, respectively).
In[51]:=
Take[B,All,{3,3}]
Take[B,{1,1}]
Out[51]=
Out[52]=
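If you prefer plain lists, Part (double square brackets) can be used instead. This is a small sketch of the alternative, not something used later in this section.
B[[1]]        (* first row of B, as a plain list *)
B[[All, 3]]   (* third column of B, as a plain list *)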
Augmenting and stacking (not needed later, but kind of neat anyway)
If you would like to stack the matrix B with the row vector {x,y,z}, on the bottom, do it like this.
In[75]:=
Bstacked = Insert[B,{x,y,z},4]
Out[75]=
In[76]:=
MatrixForm[Bstacked]
Out[76]//MatrixForm=
If you would then like to append {a,b,c,d} as a new right-hand column of Bstacked, first transpose Bstacked.
In[79]:=
Transpose[Bstacked]
Out[79]=
Then insert {a,b,c,d} along the bottom.
In[80]:=
Insert[%,{a,b,c,d},4]
Out[80]=
And then transpose again.
In[81]:=
Bstackedaugmented = Transpose[%]
Out[81]=
In[82]:=
%//MatrixForm
Out[82]//MatrixForm=
Finally, if you need to calculate the determinant of the 3 x 3 submatrix of Bstackedaugmented that is obtained by deleting its first row and first column, do it like this.
In[92]:=
Bstackedaugmented[[{2,3,4},{2,3,4}]];
%//MatrixForm
%%//Det
Out[93]//MatrixForm=
Out[94]=
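For what it's worth, here is a sketch of the same stacking and submatrix steps done with Join and Drop; this is an alternative, not the method used above.
Join[B, {{x, y, z}}]                       (* appends the row {x,y,z}; same result as Insert[B, {x,y,z}, 4] *)
Det[ Drop[Bstackedaugmented, {1}, {1}] ]   (* delete the first row and column, then take the determinant *)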
Eigenvectors and Eigenvalues
The matrix A, defined below, is square. It has eigenvectors and eigenvalues. Let's find them.
In[96]:=
A = {{3,2,1},{0,-1,-1},{2,1,1}};
%//MatrixForm
Out[97]//MatrixForm=
First find A's characteristic polynomial.
In[3]:=
cp = CharacteristicPolynomial[A,t]
Out[3]=
Then obtain the eigenvalues as its roots.
In[12]:=
Solve[cp==0,t]
Out[12]=
And use it to make a list of eigenvalues called lambda.
In[13]:=
lambda = Table[t/.%[[j]],{j,1,3}]
Out[13]=
Now use the NullSpace function to obtain bases for the three eigenspaces. Recall that the kth eigenspace is the null space of the matrix
A - lambda[[k]]*Identity
where "Identity" is the 3 x 3 identity matrix. This is obtained in Mathematica with the entry
IdentityMatrix[3].
In[35]:=
v = Table[ NullSpace[ A - lambda[[k]]*IdentityMatrix[3] ],
{k,1,3} ]
Out[35]=
The output of NullSpace is a list containing a basis for the null space, in list form. If further computations are required, it would probably be wise to convert the eigenvalues and eigenvectors to floating-point form.
In[39]:=
N[lambda,3]
N[v,3]
Out[39]=
Out[40]=
Mathematica's direct method
Mathematica has functions that will output eigenvectors and eigenvalues directly. To obtain A's eigenvalues apply the Eigenvalues function.
In[41]:=
Eigenvalues[A]
Out[41]=
The output is a list of A's eigenvalues. The function called Eigenvectors outputs a list with each eigenvector appearing in list form. Observe that the eigenvectors appear in simplified form and in the order corresponding to the output of Eigenvalues.
In[42]:=
Eigenvectors[A]
Out[42]=
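A related function, Eigensystem, returns the eigenvalues and eigenvectors together in matching order. Here is a one-line sketch (the names vals and vecs are placeholders):
{vals, vecs} = Eigensystem[A];   (* vals: list of eigenvalues, vecs: list of eigenvectors, in corresponding order *)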
Eigenvector solutions to v' = Av, the 2 x 2 case
Let's begin by solving a simple 2 x 2 system
x' = x + 2 y
y' = x - y
using eigenvectors. This is equivalent to v' = A v where v = < x , y > and A is the coefficient matrix
A = {{1,2},{1,-1}},
that is, the 2 x 2 matrix whose rows are 1, 2 and 1, -1.
Start with the eigenvectors and eigenvalues of A.
In[108]:=
A = {{1,2},{1,-1}};
%//MatrixForm
lambda = Eigenvalues[A]
v = Eigenvectors[A]
Out[109]//MatrixForm=
Out[110]=
Out[111]=
Let v1 and v2 be the eigenvector solutions; their trajectories are straight lines. Recall that vk is defined using the formula
vk = Exp[lambda_k * t] * wk
where lambda_k is the kth eigenvalue and wk is a corresponding eigenvector.
In[51]:=
v1 = Exp[lambda[[1]]*t]*v[[1]]
v2 = Exp[lambda[[2]]*t]*v[[2]]
Out[51]=
Out[52]=
Now that we have v1 and v2, the easiest way to obtain solutions to the IVP
v' = Av, v(0) = v0
is to use the formula
v(t) = X(t) X(0)^(-1) v0
where X(t) is the fundamental matrix having v1 and v2 as its columns. See Ledder, Section A.5.
In[65]:=
X = Transpose[{v1,v2}]
Out[65]=
For example, the following entries define the vector v as the solution to v' = A v , v(0) = < 1 , 1 > , then show the trajectory and the initial point. The formula for the solution is suppressed because it is a mess.
In[89]:=
v = X.Inverse[X/.t->0].{1,1};
In[86]:=
Pt = ListPlot[{{1,1}}, PlotStyle->PointSize[0.03]];
Traj = ParametricPlot[ v, {t,-1,1}, PlotStyle->RGBColor[0,1,0] ];
SolnTraj = Show[ Pt, Traj, PlotRange->{{-2,4},{-2,4}}, AspectRatio->1/1 ]
Here is the solution vector, simplified.
In[90]:=
Simplify[v];
%//MatrixForm
Out[91]//MatrixForm=
And here is a picture of the solution trajectory and two eigenvector trajectories, one blue, one red.
In[93]:=
Show[ ParametricPlot[ v1, {t,-5,1}, PlotStyle->RGBColor[0,0,1] ],
ParametricPlot[ v2, {t,-5,1}, PlotStyle->RGBColor[1,0,0] ],
SolnTraj, PlotRange->{{-2,4},{-2,4}}, AspectRatio->1/1]
The matrix exponential
The function called MatrixExp, when applied to the square matrix A as
MatrixExp[t*A]
outputs the matrix exponential e^(t A), which equals the matrix X(t) X(0)^(-1) that was found above using the eigenvector solutions. We compute it below, applied to the 2 x 2 matrix A defined above.
In[112]:=
Mexp = MatrixExp[t*A];
Simplify[%]//MatrixForm
Out[113]//MatrixForm=
Compare the matrix Mexp to X(t) X(0)^(-1), simplified and in matrix form.
In[105]:=
X.Inverse[X/.t->0]//Simplify//MatrixForm
Out[105]//MatrixForm=
A 3 dimensional example, approximate solutions
We will now compute three linearly independent eigenvector solutions to v' = A v for the 3 x 3 matrix A defined several pages ago and redefined below.
In[106]:=
A = {{3,2,1},{0,-1,-1},{2,1,1}};
%//MatrixForm
Out[107]//MatrixForm=
Because this is a small system, exact formulas can be obtained. However we will make our calculations in floating point form. We do this for three reasons.
1. Floating point calculations are easier to make, and are more relevant for real world applications.
2. It will give us some experience with error analysis.
3. If exact solution formulas are required, it is easier to get them directly from the matrix exponential or by simply applying DSolve (a sketch follows this list).
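Here is a brief sketch of the exact alternative mentioned in item 3, using the initial vector {2,4,1} that appears later in this example; the name vExact is a placeholder.
vExact = MatrixExp[t*A].{2, 4, 1};   (* exact solution of v' = A v with v(0) = {2,4,1} *)
Simplify[vExact] // MatrixForm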
Here, once more, are A's eigenvalues and eigenvectors, but in approximate decimal form.
In[108]:=
lambda = N[Eigenvalues[A]]
EV = N[Eigenvectors[A]]
Out[108]=
Out[109]=
The following Table constructs three eigenvector solutions.
In[110]:=
v = Table[ Exp[lambda[[k]]*t]*EV[[k]], {k,1,3}]
Out[110]=
The fundamental matrix with these vectors in its columns is called X.
In[111]:=
X = Transpose[v];
%//MatrixForm
Out[112]//MatrixForm=
And here is the solution to v' = Av satisfying v(0) = <2,4,1>.
In[116]:=
v = X.Inverse[X/.t->0].{2,4,1};
Simplify[%]//MatrixForm
Out[117]//MatrixForm=
The next entry checks the initial value.
In[118]:=
v/.t->0
Out[118]=
Error analysis over the interval t = 0 to t = 8
The vector function v does not satisfy the differential equation exactly. Let's see how large the error is over the time interval from 0 to 8. The error can be analyzed as follows. First calculate the "error vector" v' - A v.
In[128]:=
D[v,t] - A.v;
ErrorVector = Simplify[%];
%//MatrixForm
Out[130]//MatrixForm=
Then define Error as a function whose value at t is the largest of the absolute errors in the three components of ErrorVector. This requires the "infinity" norm.
In[131]:=
Error = Norm[ErrorVector,∞]
Out[131]=
Now graph Error over the interval from 0 to 8.
In[164]:=
Plot[Error, {t,0,8}, PlotRange->{0,0.002}]
This looks very good. Although the error clearly grows exponentially, it is less than 0.0015 at t = 8 (years, maybe?). We say years because at t = 8 the approximate solution vector is over 3.6 x 10^12 units long. (The length calculation uses the Euclidean norm.)
In[166]:=
Norm[v/.t->8]
Out[166]=
Thus the maximum relative error over this interval could be as small as 4 parts in 10^16.
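As a rough check (a sketch; the exact digits depend on the floating-point quantities computed above), the ratio of the infinity-norm error to the Euclidean length of the solution at t = 8 can be computed directly.
Norm[ErrorVector /. t -> 8, ∞] / Norm[v /. t -> 8]   (* on the order of 10^-16, i.e. a few parts in 10^16 *)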