2.2 Differential Equations
A differential equation is an equation that describes a relationship among the derivatives of a function. These derivatives may be multiplied by each other, and they may also bear some relationship to the original function. There are many different types of differential equations, so we will begin this section by reviewing how differential equations are classified.
Order
The order of a differential equation is the order of the highest derivative within the differential equation. As an example, please consider equation 2.55.
| (2.55) |
The highest derivative in equation 2.55 is $\frac{d^2y}{dx^2}$. Therefore, this is a second order differential equation.
Partial vs. Ordinary
An ordinary differential equation is one composed of the derivatives of single variable functions. This type of
derivative is introduced in section 2.1.1. Equation 2.56 shows an example of an ordinary differential equation.
| (2.56) |
A partial differential equation is, as the name suggests, composed of partial derivatives. As a reminder from section 2.1.2, this
means that we are dealing with derivatives of a multi-variable function. Equation 2.57 shows an example of a partial
differential equation.
| (2.57) |
Linear vs. Non-Linear
A linear differential equation is a special type of ordinary differential equation that follows the form shown in equation 2.58.
$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1(x)\frac{dy}{dx} + a_0(x)y = f(x)$ | (2.58) |
In equation 2.58, the coefficients $a_0(x), \dots, a_n(x)$ and the right-hand side $f(x)$ are functions of $x$ alone.
The important thing to note in equation 2.58 is that $y$ and all derivatives of $y$ are to the first power. In addition, no derivatives of $y$ are multiplied by $y$. Therefore, this equation is said to be linear in $y$. Please note that equation 2.58 accounts for the fact that the coefficients could be 0, thereby eliminating the derivative of that order from the equation. So an equation that does not have a $\frac{dy}{dx}$ term but has a $\frac{d^2y}{dx^2}$ term is still considered linear. Equation 2.59 shows an example of a linear differential equation.
| (2.59) |
Equation 2.59 is linear in $y$.
A non-linear differential equation is one that does not follow the listed rules. Equation 2.60 shows an example of a non-linear differential equation.
| (2.60) |
Second+ Order Homogeneous vs. Non-Homogeneous
There are two definitions of a homogeneous differential equation. The first definition applies to first order differential
equations. We will not cover this definition, because it is very specific. The second definition of homogeneous differential
equations applies to second and higher order differential equations. A differential equation with the desired solution $y(x)$ is said to be homogeneous if it follows the form shown in equation 2.61.
$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1(x)\frac{dy}{dx} + a_0(x)y = 0$ | (2.61) |
This is a similar form to the one shown for linear differential equations. However, the important thing with homogeneous differential equations is that there is no $f(x)$ term; there are no standalone functions of the independent variable (which, in the $y(x)$ notation used in this book, is $x$).
An example of a homogeneous second order differential equation is shown in equation 2.62.
| (2.62) |
An example of a non-homogeneous second order differential equation is shown in equation 2.63.
| (2.63) |
Solving Differential Equations
As defined previously, a differential equation is simply a relationship of derivatives. Our goal is to decode this differential equation and find a function that satisfies it. In other words, if we are dealing with an ordinary differential equation, we want to find a function $y(x)$ that, when plugged into the differential equation, holds true. If working with a partial differential equation, we want to find a multi-variable function, such as $u(x,y)$, that, when plugged into the differential equation, holds true. I say “a” function because, theoretically, there could be multiple functions that serve as a solution to the differential equation. In the sections below we will go over methods for finding solutions to differential equations.
From section 2.1.1, we know that the derivative of a constant function is 0 (if the function is constant, its change must be 0). In effect, each time we take a derivative of a function we lose information regarding any constant terms within the function. Let us take a step back; we are looking for functions which comply with a set of derivative relationships. There are infinitely many constant terms that can be added to any function while still complying with the derivative relationship defined in the differential equation. For example, $x^2 + 5$ has the same derivative as $x^2$. Typically, we lock these constant terms down by including initial conditions or boundary conditions. Initial conditions, as the name suggests, specify the value of the function or its derivatives when the independent variable is 0. Boundary conditions specify the value of the function at given “boundary” values of the independent variable. The difference between initial conditions and boundary conditions can be nebulous when a boundary condition at 0 is specified. Therefore, the naming of these conditions can be thought of as a technicality and not a critical aspect of solving differential equations.
When we have an $n$th order differential equation, our solution-function is differentiated $n$ times. Again, every time we differentiate we lose a constant term. Therefore, we need $n$ initial conditions/boundary conditions for a differential equation of order $n$.
2.2.1 Direct Integration
Direct integration is a technique which can be used for first order ordinary differential equations which can be manipulated into the form shown in equation 2.64. Again, our goal is to find a function $y(x)$ that satisfies the given differential equation.
$h(y)\,\frac{dy}{dx} = g(x)$ | (2.64) |
In equation 2.64, $g(x)$ is some function of $x$ and $h(y)$ is some function of $y$. In this case, we can just directly integrate equation 2.64 with respect to $x$. Doing this, we get equation 2.65.
$\int h(y)\,dy = \int g(x)\,dx$ | (2.65) |
The integrals in equation 2.65 can be evaluated using methods specified in section 2.1.5. Since this is a first order differential equation, we only need one initial condition to fully define its solution.
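To make the workflow concrete, here is a minimal sketch of direct integration using Python's sympy library. The equation $\frac{dy}{dx} = 2x$ and the condition $y(0) = 1$ are assumed purely for illustration; they are not the book's equation 2.66.

# Direct integration with sympy: a minimal sketch, not the book's example.
# We assume the form h(y)*dy/dx = g(x) with h(y) = 1 and g(x) = 2*x
# (illustrative choices), plus the assumed initial condition y(0) = 1.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), 2*x)             # dy/dx = g(x)
sol = sp.dsolve(ode, y(x), ics={y(0): 1})  # applies the initial condition
print(sol)                                 # Eq(y(x), x**2 + 1)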
Short Example
Let us begin with the differential equation provided in equation 2.66.
| (2.66) |
We will find a solution to equation 2.66 given the initial condition provided in equation 2.67.
| (2.67) |
Using the method outlined in equation 2.65, we get equation 2.68.
| (2.68) |
Integrating equation 2.68, we get equation 2.69.
| (2.69) |
Both integrals generate a constant of integration. However, we can combine them into a single constant, $C = C_2 - C_1$. Doing this, we are left with equation 2.70.
| (2.70) |
Equation 2.70 shows the general solution. It holds true for any constant $C$. In order to specify this constant $C$, we plug in the initial condition specified in equation 2.67. Doing this, we get a value for $C$.
Therefore, our final solution is as shown in equation 2.71.
| (2.71) |
2.2.2 Substitution
The substitution method can be used to solve linear (in the dependent variable, which in this book is $y$) homogeneous differential equations, with the stipulation that the independent variable ($x$) may not appear explicitly within the equation. Therefore, these equations must be of the form shown in equation 2.72.
$a_n\frac{d^ny}{dx^n} + a_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1\frac{dy}{dx} + a_0y = 0$ | (2.72) |
In equation 2.72, each $a_i$ is a known constant. To solve equation 2.72 using the substitution method, we start with the assumption shown in equation 2.73.
$y = e^{\lambda x}$ | (2.73) |
Assuming this, we can find the $n$th derivative of the above function using the chain rule (section 2.1.1.0) as shown in equation 2.74.
$\frac{d^ny}{dx^n} = \lambda^ne^{\lambda x}$ | (2.74) |
Plugging equations generated using equation 2.74 into equation 2.72 we get equation 2.75.
$a_n\lambda^ne^{\lambda x} + a_{n-1}\lambda^{n-1}e^{\lambda x} + \dots + a_1\lambda e^{\lambda x} + a_0e^{\lambda x} = 0$ | (2.75) |
Factoring out $e^{\lambda x}$, we get equation 2.76.
$e^{\lambda x}\left(a_n\lambda^n + a_{n-1}\lambda^{n-1} + \dots + a_1\lambda + a_0\right) = 0$ | (2.76) |
By the zero product property, we can separate the terms in equation 2.76 generating equations 2.77 and 2.78.
$e^{\lambda x} = 0$ | (2.77) |
$a_n\lambda^n + a_{n-1}\lambda^{n-1} + \dots + a_1\lambda + a_0 = 0$ | (2.78) |
Now, we know that $e^{\lambda x}$ can never equal 0. Therefore, we are left with equation 2.78. You may notice that this equation is simply a polynomial which we can solve for $\lambda$. We should get $n$ “roots”, which are values for $\lambda$. In order to find the solution to our differential equation, we plug all of these $\lambda$ values back into equation 2.73. We then add all of these equations together. This is shown in equation 2.79.
$y = C_1e^{\lambda_1x} + C_2e^{\lambda_2x} + \dots + C_ne^{\lambda_nx}$ | (2.79) |
Equation 2.79 is the general solution; it has constants $C_1, C_2, \dots, C_n$. As expected, the number of constants matches the order of the differential equation. In order to find the values of these constants, we use the initial conditions.
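The algebraic core of this method, solving the polynomial in equation 2.78, is easy to sketch numerically. The coefficients below encode the assumed equation $y'' + 3y' + 2y = 0$, chosen for illustration; it is not necessarily the book's example in equation 2.80.

# A sketch of the substitution method's algebraic step: solving the
# characteristic polynomial (equation 2.78) numerically. The coefficients
# encode the assumed equation y'' + 3y' + 2y = 0 (illustrative only).
import numpy as np

coeffs = [1, 3, 2]            # a_2, a_1, a_0 in a_2*l^2 + a_1*l + a_0 = 0
lambdas = np.roots(coeffs)    # roots of the characteristic polynomial
print(lambdas)                # [-2. -1.]

# General solution per equation 2.79: y = C1*exp(-2x) + C2*exp(-x);
# C1 and C2 then come from the initial conditions.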
Short Example
Let us start with the differential equation listed in equation 2.80.
| (2.80) |
The initial conditions are provided in equation 2.81.
| (2.81) |
Following the prescribed methodology and arriving at the equation laid out by equation 2.78, we get equation 2.82.
| (2.82) |
Solving the polynomial in equation 2.82, we get the values for $\lambda$ shown in equation 2.83.
| (2.83) |
Therefore, per equation 2.79, our solution is shown in equation 2.84.
| (2.84) |
Using the initial conditions laid out in the problem statement (equation 2.81), we get values for $C_1$ and $C_2$.
Therefore, our final solution is shown in equation 2.85.
| (2.85) |
2.2.3 The Method of Undetermined Coefficients
The method of undetermined coefficients can be used to solve linear (in $y$) non-homogeneous differential equations, with the stipulation that no functions of the independent variable ($x$) may be multiplied by the dependent variable ($y$) or its derivatives. Therefore, these equations must be of the form shown in equation 2.86.
$a_n\frac{d^ny}{dx^n} + a_{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1\frac{dy}{dx} + a_0y = f(x)$ | (2.86) |
The method of undetermined coefficients is simply a fancy way of saying “guessing”. Basically, we guess a function for $y$, differentiate as appropriate, and plug it into the given differential equation. Our goal is to formulate our guess so as to fulfill the differential equation; we want the left side (the one with all the derivatives) to equal $f(x)$ when all the algebra is worked out. Guessing the appropriate function isn’t as hard as it seems. Table 2.1 shows recommended guesses for $y$ based on $f(x)$.
In Table 2.1, some of the constants are provided by $f(x)$. The remaining constants are the “undetermined coefficients”, which must be solved for by plugging the guess into the given differential equation and ensuring the result equals $f(x)$. The guess that ends up working is referred to as the particular solution and from this point onward will be referred to as $y_p$.
Solving for the particular solution gives us a solution to the differential equation. However, the particular solution alone has no integration constants. Therefore, while our particular solution is a solution to the differential equation, it is possible that it does not abide by the initial conditions. In order to find a solution that abides by the boundary conditions, we must find the homogeneous solution (the solution of equation 2.86 ignoring $f(x)$). This solution can be found using the method outlined above in section 2.2.2. From here on, we will refer to the solution of the homogeneous equation as $y_h$. We then add $y_h$ to $y_p$ as shown below.
$y = y_h + y_p$ | (2.87) |
Solving for the constants of $y_h$ using the boundary conditions, we will be left with an equation that satisfies the original non-homogeneous differential equation and also abides by the boundary conditions.
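The “guess, differentiate, plug in” loop is easy to mechanize. The sketch below assumes the illustrative equation $y'' + 3y' + 2y = 4$ (not necessarily the book's equation 2.88), where $f(x) = 4$ is a constant, so a constant guess is used.

# A sketch of the method of undetermined coefficients for the assumed
# (illustrative) equation y'' + 3y' + 2y = 4, where f(x) = 4 is constant.
# Following the constant-f(x) pattern of Table 2.1, we guess y_p = A.
import sympy as sp

x, A = sp.symbols('x A')

guess = A                                           # constant guess for y_p
residual = guess.diff(x, 2) + 3*guess.diff(x) + 2*guess - 4
A_val = sp.solve(residual, A)[0]                    # A = 2, so y_p = 2
print(A_val)

# Homogeneous solution (section 2.2.2): y_h = C1*exp(-2x) + C2*exp(-x).
# Total solution per equation 2.87: y = y_h + y_p; constants from the ICs.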
Short Example
Let us start with the differential equation shown in equation 2.88.
| (2.88) |
The initial conditions are provided in equations 2.89.
| (2.89) |
Following the laid out order of steps, we start by finding the particular solution. Using equation 2.86 as the template, we identify $f(x)$ and its form. From Table 2.1 we can tell which guess this form calls for. Plugging this guess into equation 2.88, we solve for the undetermined coefficient. This solution becomes our $y_p$, as shown in equation 2.90.
| (2.90) |
You may notice that the homogeneous version of the differential equation at hand was solved in section 2.2.2.0. This solution (equation 2.84) is reused here for the sake of continuity.
Applying equation 2.87 we get equation 2.91 as our total solution.
| (2.91) |
Now, it is time to solve for our constants of integration $C_1$ and $C_2$. Plugging in the initial conditions from equation 2.89, we get values for $C_1$ and $C_2$. Our final solution is shown in equation 2.92.
| (2.92) |
2.2.4 The Laplace Transform
There are many real world uses of the Laplace Transform in electronic circuits, digital signal processing, and nuclear physics.
However, this section is within the differential equation section because, within this book, we will go over the Laplace
Transform simply as a tool to solve differential equations.
The Laplace transform is a way to transform a function $f(t)$, which varies as a function of $t$, into a different function $F(s)$, which varies as a function of a new variable $s$. The Laplace transform of a function $f(t)$ can be performed using equation 2.93.
$F(s) = \mathcal{L}\{f(t)\} = \int_0^\infty f(t)\,e^{-st}\,dt$ | (2.93) |
Table 2.2 shows the Laplace Transform of some common generic functions.
We are interested in taking the Laplace transform of differential equations. Therefore, it is helpful to find the Laplace transform of $\frac{df}{dt}$. This is shown in equation 2.94.
$\mathcal{L}\left\{\frac{df}{dt}\right\} = \int_0^\infty \frac{df}{dt}\,e^{-st}\,dt$ | (2.94) |
We can solve equation 2.94 using the integration by parts method laid out in section 2.1.5.0. Doing this, we get equation
2.95.
$\mathcal{L}\left\{\frac{df}{dt}\right\} = \left[f(t)\,e^{-st}\right]_0^\infty + \int_0^\infty s\,f(t)\,e^{-st}\,dt$ | (2.95) |
Since $s$ does not vary as a function of $t$, we can take $s$ out of the integral in equation 2.95. Doing this, it becomes apparent that the integral term matches the definition of $F(s)$. It also becomes apparent that $\left[f(t)\,e^{-st}\right]_0^\infty$, when evaluated using limits, becomes $-f(0)$. Therefore, equation 2.95 simplifies as shown in equation 2.96.
$\mathcal{L}\left\{\frac{df}{dt}\right\} = s\,F(s) - f(0)$ | (2.96) |
We can repeat the laid out procedure for an $n$th order derivative to get equation 2.97.
$\mathcal{L}\left\{\frac{d^nf}{dt^n}\right\} = s^nF(s) - s^{n-1}f(0) - s^{n-2}\left.\frac{df}{dt}\right|_{t=0} - \dots - \left.\frac{d^{n-1}f}{dt^{n-1}}\right|_{t=0}$ | (2.97) |
The Laplace Transform can be used to solve linear (in $y$) non-homogeneous differential equations, with the stipulation that no functions of the independent variable ($t$) are multiplied by the dependent variable ($y$) or its derivatives. Therefore, these equations must be of the form shown in equation 2.98.
$a_n\frac{d^ny}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \dots + a_1\frac{dy}{dt} + a_0y = f(t)$ | (2.98) |
Taking the Laplace transform of both sides and simplifying, it is possible to get the general solution shown in equation 2.99.
$Y(s) = \dfrac{\mathcal{L}\{f(t)\} + (\text{initial condition terms from equation 2.97})}{a_ns^n + a_{n-1}s^{n-1} + \dots + a_1s + a_0}$ | (2.99) |
Plugging in equation 2.93 for $\mathcal{L}\{f(t)\}$, we get equation 2.100.
| (2.100) |
Now, we want to go in reverse. We want to find what function $y(t)$ gives us the given $Y(s)$. The process of finding the appropriate $y(t)$ is done by guess and check; it is typically referred to as finding the inverse Laplace transform. The $y(t)$ that works is the solution to our differential equation. You may notice that this process gives us the ability to solve the same types of differential equations as the processes presented in sections 2.2.2 and 2.2.3. However, the cool part about using the Laplace transform to solve differential equations is that, as shown in equation 2.97, the initial conditions are built into the method.
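The transform-solve-invert loop can also be sketched symbolically. The first order equation $y' + 2y = 0$ with $y(0) = 3$ is an assumed illustration (not the book's equation 2.101), chosen so that equation 2.96 is the only derivative rule needed.

# A sketch of solving an assumed (illustrative) ODE, y' + 2y = 0 with
# y(0) = 3, via the Laplace transform, mirroring equations 2.96-2.100.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
Y = sp.symbols('Y')

# Transform of y' is s*Y - y(0) per equation 2.96; the ODE becomes algebraic.
alg = sp.Eq(s*Y - 3 + 2*Y, 0)
Ys = sp.solve(alg, Y)[0]                   # Y(s) = 3/(s + 2)
y = sp.inverse_laplace_transform(Ys, s, t)
print(y)  # 3*exp(-2*t)*Heaviside(t); the Heaviside factor is 1 for t > 0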
Short Example
Let us start with the differential equation shown in equation 2.101.
| (2.101) |
With the initial conditions shown in equation 2.102.
| (2.102) |
Our next step is to rewrite equation 2.101 as a function of $s$. In other words, we want to take the Laplace Transform of equation 2.101. We can use Table 2.2 to find the Laplace transform of the right-hand side. We can use equation 2.97 to find the Laplace transform of the derivatives. Doing this, we get equation 2.103.
| (2.103) |
Solving for $Y(s)$, we get equation 2.104.
| (2.104) |
Now, we need to take the inverse Laplace transform of equation 2.104. To do this, we can use partial fraction decomposition (section 2.1.5.0) to simplify $Y(s)$ as shown in equation 2.105.
| (2.105) |
Now, we can easily find the inverse Laplace transform of equation 2.105 using Table 2.2. Doing this, we are left with equation
2.106.
| (2.106) |
You may notice that this is the same differential equation that was presented in section 2.2.3.0. You may also notice that we got the same answer when we used the Laplace Transform to solve the differential equation as when we used the method of undetermined coefficients (2.2.3.0) and the substitution method (2.2.2). However, when using the Laplace Transform method, we did not need a separate step to solve for the constants of integration using the initial conditions.
2.2.5 Series Solutions to Differential Equations
The series solution method can be used to solve linear (in $y$) homogeneous equations. Such an equation is shown in its general form in equation 2.107.
$a_n(x)\frac{d^ny}{dx^n} + a_{n-1}(x)\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1(x)\frac{dy}{dx} + a_0(x)y = 0$ | (2.107) |
As you may have observed, the big difference between this method and the methods outlined in sections 2.2.2, 2.2.3, and 2.2.4 is that we can solve differential equations which have functions of the independent variable ($x$) multiplied by the dependent variable ($y$) or its derivatives.
We begin by assuming that a solution to the differential equation is an infinite series of the form shown in equation 2.108.
$y = \sum_{n=0}^{\infty}a_nx^n$ | (2.108) |
In equation 2.108, each $a_n$ is an arbitrary constant. The next step is to differentiate the series shown in equation 2.108. There is no clean way to present the $n$th derivative of a series. Therefore, only the first two derivatives of equation 2.108 are shown in equations 2.109 and 2.110.
$\frac{dy}{dx} = \sum_{n=1}^{\infty}n\,a_nx^{n-1}$ | (2.109) |
$\frac{d^2y}{dx^2} = \sum_{n=2}^{\infty}n(n-1)\,a_nx^{n-2}$ | (2.110) |
Please note that every time we take the derivative, the resultant series starts at a different $n$ to ensure we have no terms with negative exponents. We can use the shown differentiation pattern to recreate each term within our original differential equation (2.107) as a series. This is shown below in equation 2.111.
| (2.111) |
In equation 2.111, the starting index of each series depends on the desired first term of that series. The coefficient function and the resulting exponent in each series are the simplified output that results from multiplying the known coefficient $a_i(x)$ by the corresponding derivative series. As can be deduced, our goal is to rewrite our differential equation as a series by plugging in these tailored terms. However, before we do this, we must “shift” each series so it matches the format shown in equation 2.112.
| (2.112) |
Notice that each series must be rewritten so that it contains $x^n$, and not, for example, an $x^{n-2}$. Also, notice that the subscript of each coefficient changes from $n$ to $n+m$ when we shift the series, where $m$ is the shift amount. This change of subscript emphasizes that the coefficient term changes as a result of the shift.
Now we can plug each tailored individual series into the differential equation. Since each series has been shifted to contain the term $x^n$, we can rewrite our entire differential equation as one big series by combining the series of each term within the differential equation. Essentially, we are plugging each individual series of the form of equation 2.112 into equation 2.107. Doing this, we will get something which resembles equation 2.113.
$\sum_{n=0}^{\infty}C_nx^n = 0$ | (2.113) |
In equation 2.113, each $C_n$ is the result of all the coefficient terms being added together. A solution to the differential equation is found when the $a_n$ values can be found in terms of a small set of free coefficients, which stand in for the constants of integration. For more details as to how to do this, please reference the following example.
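Before the worked example, here is a minimal numerical sketch of the final step: iterating a recurrence relation (the role played by equation 2.123) to build the coefficients $a_n$. The recurrence below comes from the assumed illustrative equation $y'' + y = 0$, not from the book's equation 2.114.

# A sketch of building series-solution coefficients from a recurrence
# relation like equation 2.123. The recurrence below corresponds to the
# assumed (illustrative) equation y'' + y = 0:
#   a_{n+2} = -a_n / ((n + 2)*(n + 1))
from fractions import Fraction

a0, a1 = Fraction(1), Fraction(0)   # the two free constants of integration
a = [a0, a1]
for n in range(10):                 # iterate the recurrence upward in n
    a.append(Fraction(-1, (n + 2) * (n + 1)) * a[n])

print(a[:6])  # 1, 0, -1/2, 0, 1/24, 0 -> Taylor coefficients of cos(x)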
Short Example
Let us start with the differential equation shown in equation 2.114.
| (2.114) |
Now, following the laid out procedure, we assume equation 2.108. Differentiating per equations 2.109 and 2.110, we obtain equations 2.115 and 2.116.
Now, we want to shift each series so that it is composed of $x^n$ and its coefficients. Doing this, we get equations 2.117 and 2.118.
Plugging equations 2.117 and 2.118 into our original differential equation (equation 2.114), we get equation 2.119.
| (2.119) |
In order to get both sums to start at the same index and combine them, we can take out the first term from the first sum as shown in equation 2.120.
| (2.120) |
Now, we need to find coefficient ($a_n$) values for which equation 2.120 is true. Since equation 2.120 must equal 0 for all values of $x$, we know that the coefficients of each power of $x$ must add up to zero. Let us begin by looking at the coefficient of the lowest power of $x$. Plugging the corresponding $n$ into equation 2.120, we get equation 2.121.
| (2.121) |
Therefore, we know equation 2.122 must be true.
| (2.122) |
Now, for the coefficients of the remaining powers of $x$, we can repeat a similar process to get equation 2.123.
| (2.123) |
Now we plug in sequential values of $n$ to solve for the $a_n$ terms at different values of $n$. We know that this is a second order differential equation. Therefore, we should theoretically be able to find all coefficients in terms of two constants (which would be the constants of integration). Since the series used to define the above relationship starts at $n = 0$, we must start plugging in values at $n = 0$. This is done until a pattern emerges. As you can observe, as we iterate up in $n$ we can plug in coefficients that arise from previous $n$ values. Given the $a_n$ values, our initial assumption of equation 2.108 can be rewritten as equation 2.124.
| (2.124) |
In equation 2.124, every $a_n$ was written as a function of either $a_0$ or $a_1$. As expected, these are our two constants of integration. Consolidating equation 2.124 using the sum symbol, we get equation 2.125.
| (2.125) |
2.2.6 Series Solutions to Cauchy Euler Differential Equations
Cauchy Euler differential equations are linear (in $x$ and $y$), ordinary, homogeneous differential equations of the form shown in equation 2.126.
$a_nx^n\frac{d^ny}{dx^n} + a_{n-1}x^{n-1}\frac{d^{n-1}y}{dx^{n-1}} + \dots + a_1x\frac{dy}{dx} + a_0y = 0$ | (2.126) |
As can be deduced, if we try to solve the above equation using series solutions (section 2.2.5), we will get a bunch of terms all raised to the same power. Therefore, we cannot use this method. Rather, we assume our solution is of the form shown in equation 2.127.
$y = x^{\lambda}$ | (2.127) |
Assuming this, as in section 2.2.2, we can find the derivatives of the function in equation 2.127 as shown in equations 2.128 and 2.129. There is no clean way to present the $n$th derivative of equation 2.127. Therefore, only the first two derivatives are provided.
$\frac{dy}{dx} = \lambda x^{\lambda - 1}$ | (2.128) |
$\frac{d^2y}{dx^2} = \lambda(\lambda - 1)x^{\lambda - 2}$ | (2.129) |
Plugging equations 2.128 and 2.129 into our original differential equation (equation 2.126) and factoring out $x^{\lambda}$, we get equation 2.130.
$x^{\lambda}\left(a_n\lambda(\lambda-1)\cdots(\lambda-n+1) + \dots + a_2\lambda(\lambda-1) + a_1\lambda + a_0\right) = 0$ | (2.130) |
By the zero product property, we can state equations 2.131 and 2.132.
$x^{\lambda} = 0$ | (2.131) |
$a_n\lambda(\lambda-1)\cdots(\lambda-n+1) + \dots + a_2\lambda(\lambda-1) + a_1\lambda + a_0 = 0$ | (2.132) |
Now, we know that $x^{\lambda}$ can never equal 0. However, we can solve equation 2.132, which will be a polynomial in $\lambda$. Doing this, we get values $\lambda_1$, $\lambda_2$, ... $\lambda_n$.
Therefore, our final solution will be of the form shown in equation 2.133.
$y = C_1x^{\lambda_1} + C_2x^{\lambda_2} + \dots + C_nx^{\lambda_n}$ | (2.133) |
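The polynomial in equation 2.132 can be solved symbolically in one line. The equation $x^2y'' + xy' - y = 0$ below is an assumed illustration, not the book's equation 2.134.

# A sketch of the Cauchy Euler workflow for the assumed (illustrative)
# equation x^2*y'' + x*y' - y = 0. Plugging y = x^lam into it yields the
# polynomial lam*(lam - 1) + lam - 1 = 0 (the analogue of equation 2.132).
import sympy as sp

lam = sp.symbols('lam')
roots = sp.solve(lam*(lam - 1) + lam - 1, lam)
print(roots)  # [-1, 1], so per equation 2.133: y = C1*x**(-1) + C2*x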
2.2.7 Short Example
Let us start with the differential equation shown in equation 2.134.
| (2.134) |
With the initial conditions provided in equation 2.135.
| (2.135) |
Assuming equation 2.127 and plugging in in accordance with the above process, we get equation 2.136.
| (2.136) |
Following the process, we solve the polynomial term for $\lambda$. Doing this, we get values for $\lambda$.
Therefore, our general solution is shown in equation 2.137.
| (2.137) |
Plugging in our initial conditions, we find the values of the integration constants.
Therefore, the final solution is shown in equation 2.138.
| (2.138) |
2.2.8 Separation of Variables
The separation of variables method can be used to solve partial differential equations. There is no general rule for the form
these equations must have to be solved using this method. However, within engineering, we use this method to solve
equations of the general format shown in equations 2.139 and 2.140.
$a\,\frac{\partial^2u}{\partial x^2} + b\,\frac{\partial^2u}{\partial y^2} = 0$ | (2.139) |
$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2u}{\partial x^2}$ | (2.140) |
Equation 2.139 is called the 2D Laplace’s equation if we assume $a$ and $b$ to be positive. If $b$ is negative, equation 2.139 resembles the general form of the wave equation, which is not covered in this book. Equation 2.140 resembles the general form of the 1D heat equation, derived in section ??. We will solve this equation using the separation of variables method in the example.
The separation of variables method is based upon, as the name suggests, separating the variables in the partial differential equation. The solution to both equations 2.139 and 2.140 is some function of two variables, such as $u(x,y)$. When we apply the separation of variables method, we assume that the solution is of the form shown in equation 2.141.
$u(x,y) = X(x)\,Y(y)$ | (2.141) |
In other words, we are assuming that the solution to our partial differential equation is some function of just $x$ multiplied by some function of just $y$. In following the pattern of this differential equations section, finding solutions to differential equations is a creative guessing game. We assume equation 2.141 simply because it ends up working.
Now that we have assumed the format of our solution, we can recreate each term within our original partial differential equation by taking the partial derivatives (2.1.2) of equation 2.141 the appropriate number of times. This process is not dissimilar from the one presented in section 2.2.5. Rewriting our differential equation using our assumption, we will get a new differential equation which resembles equation 2.142.
$a\,X''(x)\,Y(y) + b\,X(x)\,Y''(y) = 0$ | (2.142) |
Now, our goal is to algebraically manipulate the equation so that one side of the equation is a function of $X$ and its derivatives and the other side of the equation is a function of $Y$ and its derivatives. Doing this, we will get equation 2.143.
$\frac{X''(x)}{X(x)} = \frac{b}{a}\,\frac{Y''(y)}{Y(y)}$ | (2.143) |
The negative sign was eliminated from equation 2.143 since $a$ and/or $b$ can be positive or negative.
Now, if you look closely at equation 2.143, you might notice something peculiar. We have an output on the left side that is purely a function of $x$. We have an output on the right side that is purely a function of $y$. For every $x$ and $y$, these outputs are equal to each other. How could this be possible? The only way this could be possible is if both of these outputs are equal to a constant. We will call this constant $k$. This concept is shown in equation 2.144.
$\frac{X''(x)}{X(x)} = \frac{b}{a}\,\frac{Y''(y)}{Y(y)} = -k$ | (2.144) |
$k$ can be positive or negative depending on the downstream necessity.
Given this conceptual realization, we can rewrite equation 2.144 as shown in equations 2.145 and 2.146.
$X''(x) + k\,X(x) = 0$ | (2.145) |
$Y''(y) + \frac{a}{b}\,k\,Y(y) = 0$ | (2.146) |
You may notice that we have taken our partial differential equation and turned it into two ordinary differential equations. Equations 2.145 and 2.146 can be solved, given initial conditions/boundary values, using the methods outlined in sections 2.2.2, 2.2.3, and 2.2.4. When we are done solving for $X$ and $Y$, we simply multiply them together in accordance with equation 2.141 to obtain the solution to our partial differential equation.
Short Example: The Heat Equation / Diffusion equation
In this section, we will solve the 1D version of the diffusion equation (equation ??), which is in the same form as the heat
equation (equation ??) assuming no internal heating. The derivation for the diffusion equation can be found in section ??,
while the derivation of the heat equation can be found in section ??. Rewriting equation 2.140 with the relevant variables we
get equation 2.147.
$\frac{\partial u}{\partial t} = \alpha\,\frac{\partial^2u}{\partial x^2}$ | (2.147) |
The initial condition is provided in equation 2.148.
$u(x,0) = f(x)$ | (2.148) |
The boundary conditions are provided in equation 2.149.
$u(0,t) = 0, \quad u(L,t) = 0$ | (2.149) |
In linking equation 2.147 to equation ?? for diffusion, $u$ stands for concentration and $t$ stands for time. $\alpha$ is equal to $D$, the diffusion coefficient. In linking equation 2.147 to equation ?? for heat, $u$ stands for temperature and $t$ stands for time. $\alpha$ is equal to $\frac{k}{c\rho}$, where $k$ is thermal conductivity (section ??), $c$ is specific heat (section ??), and $\rho$ is density. The initial condition states that at time $t = 0$ the object has the concentration/temperature distribution $f(x)$. The boundary conditions state that no matter what the time ($t$), the concentration/temperature at either end of the object is 0.
Following the process laid out, we begin by assuming that the multi-variable solution to our differential equation, $u(x,t)$, will be a product of a function $X(x)$ and a function $T(t)$.
$u(x,t) = X(x)\,T(t)$ | (2.150) |
We then take the partial derivatives (2.1.2) of equation 2.150 so as to plug them into our original differential equation (2.147). Doing this, we get equation 2.151.
$X(x)\,\frac{dT}{dt} = \alpha\,T(t)\,\frac{d^2X}{dx^2}$ | (2.151) |
Reordering so that each side of the equation has like terms, we get equation 2.152.
$\frac{X''(x)}{X(x)} = \frac{1}{\alpha}\,\frac{T'(t)}{T(t)}$ | (2.152) |
Now we can assume the relationship laid out in equation 2.144. Doing this, for the left hand term, we get the ordinary
differential equation shown in equation 2.153.
$X''(x) + k\,X(x) = 0$ | (2.153) |
We know that for the function $u(x,t)$ to abide by the global initial and boundary conditions, $X(x)$ must abide by the boundary conditions shown in equation 2.154.
$X(0) = 0, \quad X(L) = 0$ | (2.154) |
We need to assume that $k$ is positive or negative in accordance with what makes our math later on work out. In this case, $k$ is kept positive. Solving this equation using the method outlined in section 2.2.2, we get equation 2.155.
$X(x) = C_1e^{i\sqrt{k}\,x} + C_2e^{-i\sqrt{k}\,x}$ | (2.155) |
Using equation 2.255, equation 2.155 can be rewritten as equation 2.156.
$X(x) = A\sin\!\left(\sqrt{k}\,x\right) + B\cos\!\left(\sqrt{k}\,x\right)$ | (2.156) |
Now, plugging in the first boundary condition shown in equation 2.149, we get that $B = 0$. Plugging in the second boundary condition shown in equation 2.154, we get that $\sqrt{k} = \frac{n\pi}{L}$. $n$ can equal any positive integer. Therefore, we are left with equation 2.157.
$X(x) = A\sin\!\left(\frac{n\pi x}{L}\right)$ | (2.157) |
From the right hand term of equation 2.152 we get equation 2.158.
$T'(t) = -\alpha\,k\,T(t)$ | (2.158) |
Solving equation 2.158 using the method outlined in section 2.2.2, we get equation 2.159. Please note that we plugged in $k = \left(\frac{n\pi}{L}\right)^2$.
$T(t) = Ce^{-\alpha\left(\frac{n\pi}{L}\right)^2t}$ | (2.159) |
Plugging equations 2.157 and 2.159 into equation 2.150, we get equation 2.160.
$u(x,t) = B_n\sin\!\left(\frac{n\pi x}{L}\right)e^{-\alpha\left(\frac{n\pi}{L}\right)^2t}$ | (2.160) |
Please note that the constants were consolidated into the generalized constant $B_n = A\,C$. Equation 2.160 would theoretically work for any positive integer $n$. The subscript $n$ signifies that the constant $B_n$ will probably be different at every $n$ used.
Now, our last step is to solve for $B_n$. This depends upon our initial condition. Plugging equation 2.148 into equation 2.160, we can see that one obvious solution exists for a matching choice of $B_n$ and $n$. Therefore, our final solution is shown in equation 2.161.
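As a sanity check, a separated product solution of the form derived above can be verified symbolically. In the sketch below, the mode number $n$, the length $L$, and the constant $\alpha$ are assumed symbols/values used for illustration.

# A sketch verifying that a separated product solution of the form above
# satisfies the 1D heat equation u_t = alpha * u_xx. The mode number n,
# length L, and constant alpha are assumed illustrative symbols/values.
import sympy as sp

x, t = sp.symbols('x t')
alpha, L = sp.symbols('alpha L', positive=True)
n = sp.Integer(1)                                # assumed mode number

u = sp.sin(n*sp.pi*x/L) * sp.exp(-alpha*(n*sp.pi/L)**2 * t)
residual = sp.diff(u, t) - alpha*sp.diff(u, x, 2)
print(sp.simplify(residual))                     # 0, so u solves the PDE

# u also vanishes at x = 0 and x = L, matching the boundary conditions
# in equation 2.149.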
2.2.9 Numerical Methods: Finite Difference
The previous differential equation sections overviewed methods that provide us with analytical solutions to differential equations. These methods output a clean equation which defines the solution (symbolized by $y$) as a function of the independent variable (typically $x$). In this section, we will use a method called the finite difference method to generate values that constitute the solution data-set (i.e. a numerical solution). To do this, we must choose values of the independent variable at which we will calculate our solution. Within this section, these chosen values are symbolized by $x_i$, with $i$ being an integer value. The increment between independent variable values at which we calculate the solution to the differential equation is, within this section, symbolized by $h$ (i.e. $h = x_{i+1} - x_i$). As follows, the solution to the differential equation at a given $x_i$ is written as $y(x_i)$. In this section, we will shorten $y(x_i)$ to $y_i$ (i.e. $y_i = y(x_i)$).
In general, generating a solution data set for a differential equation by use of the finite difference method is a three step process. First, we must approximate the differential terms in the differential equation using a finite difference approximation (an algebraic simplification). Second, we must plug our approximation into the differential equation. Lastly, we solve the differential equation based on the initial conditions and the substituted finite difference approximation. In this section, we will first focus on the first step by presenting a number of common finite difference approximations used to generate numerical solutions to differential equations. We will go over the errors for each approximation. We will also go over one advanced approximation called RK4. Then, we will cover the second and third steps of using the finite difference method by plugging in our approximations and numerically solving two example differential equations.
Forward difference
The forward difference approximation for the derivative at $x_i$ is provided in equation 2.162.
$\left.\frac{dy}{dx}\right|_{x_i} \approx \frac{y_{i+1} - y_i}{h}$ | (2.162) |
To find the error associated with the forward difference approximation, we can start out by writing out the Taylor series (the Taylor series will be discussed in section 2.5.1) for $y_{i+1}$ centered at $x_i$. This is shown in equation 2.163.
$y_{i+1} = y_i + h\left.\frac{dy}{dx}\right|_{x_i} + \frac{h^2}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} + \frac{h^3}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} + \dots$ | (2.163) |
Subtracting $y_i$ from both sides and dividing by $h$, we get equation 2.164.
$\frac{y_{i+1} - y_i}{h} = \left.\frac{dy}{dx}\right|_{x_i} + \frac{h}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} + \frac{h^2}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} + \dots$ | (2.164) |
Please note that in equation 2.164, $x_{i+1} - x_i$ was substituted with $h$. Subtracting all terms except for $\left.\frac{dy}{dx}\right|_{x_i}$ from the right hand side of equation 2.164 and then flipping the equation, we get equation 2.165.
$\left.\frac{dy}{dx}\right|_{x_i} = \frac{y_{i+1} - y_i}{h} - \frac{h}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} - \frac{h^2}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} - \dots$ | (2.165) |
Comparing equation 2.165 to equation 2.162, we see that the $h$-dependent terms are all error terms. Therefore, we can rewrite equation 2.165 as equation 2.166.
$\left.\frac{dy}{dx}\right|_{x_i} = \frac{y_{i+1} - y_i}{h} + E$ | (2.166) |
With the error $E$ defined in equation 2.167.
$E = -\frac{h}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} - \frac{h^2}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} - \dots$ | (2.167) |
The first term of $E$ is referred to as the leading truncation term. The order of a given numerical method is defined by the power of the $h$ in the leading truncation term. For forward difference, we see that the $h$ of our leading truncation term is to the 1st power (i.e. $O(h)$). Therefore, the forward difference approximation is considered first order.
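The first order behavior is easy to see numerically. The sketch below applies equation 2.162 to the assumed test function $y = \sin(x)$ at $x = 1$ (whose exact derivative is $\cos(1)$) and halves $h$; the error roughly halves with it.

# A sketch estimating the order of the forward difference approximation
# (equation 2.162) on the assumed test function y = sin(x) at x = 1.
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h      # equation 2.162

exact = math.cos(1.0)
for h in [0.1, 0.05, 0.025]:
    err = abs(forward_diff(math.sin, 1.0, h) - exact)
    print(f"h = {h:.3f}, error = {err:.2e}")  # error shrinks ~linearly in h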
Backward difference
The backward difference approximation for the derivative at $x_i$ is provided in equation 2.168.
$\left.\frac{dy}{dx}\right|_{x_i} \approx \frac{y_i - y_{i-1}}{h}$ | (2.168) |
To find the error associated with the backward difference approximation, we will follow the same process as we did for forward difference by writing out the Taylor series for $y_{i-1}$ centered at $x_i$. Doing this, we get equation 2.169.
$y_{i-1} = y_i - h\left.\frac{dy}{dx}\right|_{x_i} + \frac{h^2}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} - \frac{h^3}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} + \dots$ | (2.169) |
Solving for $\left.\frac{dy}{dx}\right|_{x_i}$ and replacing the trailing terms with the error $E$, we get equation 2.170.
$\left.\frac{dy}{dx}\right|_{x_i} = \frac{y_i - y_{i-1}}{h} + E$ | (2.170) |
With $E$ provided in equation 2.171.
$E = \frac{h}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i} - \frac{h^2}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} + \dots$ | (2.171) |
From equation 2.171 we see that the leading truncation term is $\frac{h}{2!}\left.\frac{d^2y}{dx^2}\right|_{x_i}$. Therefore, just as with forward difference, backward difference is a first order approximation.
Central difference
The central difference approximation for the derivative at $x_i$ is provided in equation 2.172.
$\left.\frac{dy}{dx}\right|_{x_i} \approx \frac{y_{i+1} - y_{i-1}}{2h}$ | (2.172) |
To find the error associated with the central difference approximation, we can subtract the Taylor series for $y_{i-1}$ (equation 2.169) from the Taylor series for $y_{i+1}$ (equation 2.163), with both series centered at $x_i$. Doing this and simplifying, we get equation 2.173.
$\left.\frac{dy}{dx}\right|_{x_i} = \frac{y_{i+1} - y_{i-1}}{2h} + E$ | (2.173) |
With $E$ provided in equation 2.174.
$E = -\frac{h^2}{3!}\left.\frac{d^3y}{dx^3}\right|_{x_i} + \text{H.O.T.}$ | (2.174) |
In equation 2.174, H.O.T. stands for higher order terms. In contrast to forward and backward difference, we see that our leading truncation term has the $h$ raised to the power of 2. Therefore, central difference is a second order approximation.
Second order central difference
The “second order” in second order central difference does not refer to the order of the leading truncation term, but rather to the fact that this approximation is used for the second derivative of $y$, as shown in equation 2.175.
$\left.\frac{d^2y}{dx^2}\right|_{x_i} \approx \frac{y_{i+1} - 2y_i + y_{i-1}}{h^2}$ | (2.175) |
The second order central difference approximation has a second order leading truncation term. Because it is laborious, the steps to get to this conclusion are not provided. However, the reader is invited to prove this to her/himself using the process shown in the previous forward, backward, and central difference approximation sections.
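The two central stencils can be checked on the same assumed test function, $y = \sin(x)$; both errors shrink quadratically in $h$.

# A sketch of the central difference stencils: equation 2.172 for dy/dx and
# equation 2.175 for the second derivative, tested on y = sin(x) at x = 1.
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2*h)          # equation 2.172

def second_central_diff(f, x, h):
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2  # equation 2.175

h = 0.01
err1 = central_diff(math.sin, 1.0, h) - math.cos(1.0)
err2 = second_central_diff(math.sin, 1.0, h) + math.sin(1.0)  # y'' = -sin(x)
print(err1, err2)  # both errors are O(h^2), roughly 1e-5 at this h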
Runge–Kutta 4 (RK4)
Runge-Kutta is a general name used to describe a family of finite difference approximations. These approximations are advanced. Therefore, we will not review the definition and derivation of the Runge-Kutta approximations. Instead, we will present how to utilize a common Runge-Kutta approximation, called Runge-Kutta 4 or RK4 for short. In contrast to the previously presented methods, we will not present an approximation for a derivative of $y$ which then must be plugged into an equation. Rather, we will jump directly to solving for $y_{i+1}$, therefore giving us our solution.
To utilize the RK4 method, we first start by assuming we have a first order differential equation of the form shown in equation 2.176.
$\frac{dy}{dx} = f(x, y)$ | (2.176) |
Essentially, we are stating that the derivative of $y$ is equal to some function of $x$ and $y$. We then state that $y_{i+1}$ can be found using equation 2.177.
$y_{i+1} = y_i + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$ | (2.177) |
With $k_1$, $k_2$, $k_3$, and $k_4$ defined in equation 2.178.
$k_1 = f(x_i, y_i) \qquad k_2 = f\!\left(x_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2}k_1\right) \qquad k_3 = f\!\left(x_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2}k_2\right) \qquad k_4 = f\!\left(x_i + h,\, y_i + hk_3\right)$ | (2.178) |
As the name suggests, RK4 has a fourth order leading truncation term.
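One RK4 step translates directly into code. The test equation $f(x, y) = y$ (exact solution $e^x$) in the sketch below is an assumed example.

# A sketch of one RK4 step (equations 2.177 and 2.178) for dy/dx = f(x, y).
# The test equation f(x, y) = y, with exact solution e^x, is assumed here.
def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)   # equation 2.177

y, x, h = 1.0, 0.0, 0.1
for _ in range(10):                            # march from x = 0 to x = 1
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(y)  # ~2.718279, very close to e = 2.718281...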
Differences between approximations
As touched on when reviewing finite difference approximations, one difference between methods is the order of the leading truncation term. Since the order refers to the exponent of the $h$ in the first term of the error function, approximations with higher order leading truncation terms have a lower error than those with lower order leading truncation terms. Another big difference between methods is stability. Depending on the method used, specific values of $h$ can make the finite difference approximation unstable and produce an erroneous result. Stability analysis is a major topic in numerical methods; to learn more about this, the reader is referred to a numerical methods textbook. The last consideration to make when choosing a given finite difference approximation is whether a given approximation is implicit or explicit. For initial value problems, only the initial condition is provided, meaning that $y_{i+1}$ must be calculated based on $y_i$. Therefore, it is helpful to have an equation of the form $y_{i+1} = \dots$ (i.e. it is possible to solve explicitly for $y_{i+1}$). However, with some methods (such as backward difference) this is impossible, and $y_{i+1}$ must be found using implicit methods.
Example: Ordinary differential equation
In this example, we will use the forward difference approximation to generate a data set for the differential equation provided in equation 2.88. For continuity, this equation is provided here and relabeled as equation 2.179. Please note that, in order to remain consistent with the notation in this section, the independent variable in equation 2.88 has been relabeled.
| (2.179) |
The initial conditions for this example are provided in equations 2.180.
| (2.180) |
It is now time to plug in our finite difference approximations. Since we are using forward difference, we can plug in the forward difference approximation (equation 2.162) for $\frac{dy}{dx}$. We must still plug in an approximation for $\frac{d^2y}{dx^2}$. The reader may be tempted to plug in the second order central difference approximation (equation 2.175). However, the reader will then subsequently notice that the second order central difference approximation has a $y_{i-1}$ term for which we cannot solve. So, we use a different clever trick. We will convert equation 2.179 into a system of two first order coupled differential equations. We can do this by introducing a new function $z$ defined as shown in equation 2.181.
$z = \frac{dy}{dx}$ | (2.181) |
Substituting $z$ (equation 2.181) into equation 2.179, we get the two coupled differential equations shown in equation 2.182.
| (2.182) |
We can find the initial conditions for our coupled first order differential equations by substituting equation 2.181 into the initial conditions shown in equation 2.180. Doing this, we get our new initial conditions as shown in equation 2.183.
| (2.183) |
Now, we can plug in our forward difference approximation (equation 2.162) for $\frac{dy}{dx}$ and $\frac{dz}{dx}$. Doing this, and solving for $y_{i+1}$ and $z_{i+1}$, we get equation 2.184.
| (2.184) |
In following the notation used in this section, $y_i$ is simply a shortened way of writing $y(x_i)$ and $z_i$ is simply a shortened way of writing $z(x_i)$.
Investigating equation 2.184 closely, you may notice that we can solve for $y_1$ and $z_1$ by simply plugging in our initial conditions, which provide values for $y_0$ (same as $y(x_0)$) and $z_0$ (same as $z(x_0)$) per equation 2.183. We can then use $y_1$ and $z_1$ to solve for $y_2$ and $z_2$. This is great news! We can now build a solution by “marching forward”. This is best done using coding software such as MATLAB or Python, but Excel can be used as well. Figure 2.4 shows a solution dataset using the forward Euler approximation for $y$ and $z$. The analytical “ground truth” solution from equation 2.92 is provided as a comparison.
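The marching loop itself is only a few lines. The system below, $y' = z$ and $z' = 4 - 3z - 2y$ (i.e. the assumed illustrative equation $y'' + 3y' + 2y = 4$ recast per equation 2.181, with $y(0) = z(0) = 0$), stands in for the book's equation 2.182; it is not necessarily the actual example.

# A sketch of the "marching forward" scheme of equation 2.184, applied to
# the assumed (illustrative) system y' = z, z' = 4 - 3*z - 2*y, which is
# y'' + 3y' + 2y = 4 recast per equation 2.181, with y(0) = z(0) = 0.
h = 0.01
x, y, z = 0.0, 0.0, 0.0
for i in range(int(3.0 / h)):                 # march from x = 0 to x = 3
    y, z = y + h*z, z + h*(4 - 3*z - 2*y)     # forward difference update
    x += h
print(y)  # approaches the steady value y = 2 as x grows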
Example: PDE - The Heat Equation
We will use this section as an opportunity to provide a numerical solution set for the 2D version of the heat equation (the heat equation is derived in section ?? as equation ??). For continuity, the 2D heat equation is provided here as equation 2.185.
$\frac{\partial T}{\partial t} = \frac{k}{c\rho}\left(\frac{\partial^2T}{\partial x^2} + \frac{\partial^2T}{\partial y^2}\right)$ | (2.185) |
For this example, we will assume that our 2D solid is a rectangle with a height of 50 and a width of 100. $k$, $c$, and $\rho$ are constants that are described in detail in section ??. In this example, these constants are set to fixed values.
For our initial condition, we will assume that the entire solid is at a temperature of 100 at $t = 0$. This is shown in equation 2.186.
$T(x, y, 0) = 100$ | (2.186) |
For our boundary conditions, we will assume that all edges have a temperature of 0 except for the right edge which is 200.
This is shown in equation 2.187.
$T(0, y, t) = 0, \quad T(x, 0, t) = 0, \quad T(x, 50, t) = 0, \quad T(100, y, t) = 200$ | (2.187) |
Now, it is time to apply a finite difference approximation to equation 2.185. In this example, we will use forward difference (equation 2.162) for the $\frac{\partial T}{\partial t}$ term and second order central difference (equation 2.175) for the $\frac{\partial^2T}{\partial x^2}$ and $\frac{\partial^2T}{\partial y^2}$ terms. We will define our $T$ values at selected times $t_n$, selected $x$ locations $x_i$, and selected $y$ locations $y_j$, where $n$, $i$, and $j$ are integer values. To express $T$ at a given $(t_n, x_i, y_j)$ in a concise fashion, we will shorten $T(t_n, x_i, y_j)$ to $T^n_{i,j}$. Implementing this notation using our chosen finite difference approximations, we get equation 2.188.
$\frac{T^{n+1}_{i,j} - T^n_{i,j}}{\Delta t} = \frac{k}{c\rho}\left(\frac{T^n_{i+1,j} - 2T^n_{i,j} + T^n_{i-1,j}}{\Delta x^2} + \frac{T^n_{i,j+1} - 2T^n_{i,j} + T^n_{i,j-1}}{\Delta y^2}\right)$ | (2.188) |
Solving for $T^{n+1}_{i,j}$, we get equation 2.189.
$T^{n+1}_{i,j} = T^n_{i,j} + \frac{k\,\Delta t}{c\rho}\left(\frac{T^n_{i+1,j} - 2T^n_{i,j} + T^n_{i-1,j}}{\Delta x^2} + \frac{T^n_{i,j+1} - 2T^n_{i,j} + T^n_{i,j-1}}{\Delta y^2}\right)$ | (2.189) |
Now, from our initial condition we know all of the $T^0_{i,j}$ points. Using equation 2.189, we can calculate all of the $T^1_{i,j}$ points. In this way, we can calculate the next time step points $T^{n+1}_{i,j}$ based on $T^n_{i,j}$ by iterating at each timestep $n$. This is best done in coding software such as MATLAB or Python. To provide the reader with a visualization of a solution to equation 2.185, equation 2.189 was input into coding software to calculate the temperature distribution of the rectangle at each time step. The result of the numerical solution is shown in Figure 2.5 at select time steps 0, 20, 40, and 60. For the written code, the increments $\Delta t$, $\Delta x$, and $\Delta y$ were set to fixed values.
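A compact version of this marching scheme is sketched below. The combined constant $\frac{k}{c\rho}$ (written as alpha), the grid spacing, the time step, and the step count are assumed illustrative values, not the ones used to generate Figure 2.5; the boundary values follow equation 2.187.

# A sketch of the explicit scheme of equation 2.189 on the 50 x 100 rectangle.
# alpha stands in for k/(c*rho); alpha, dx (= dy), dt, and the number of
# steps are assumed illustrative values. Boundaries follow equation 2.187.
import numpy as np

alpha, dx, dt = 1.0, 1.0, 0.2          # dt kept below the stability limit
T = np.full((52, 102), 100.0)          # initial condition per equation 2.186
for step in range(500):
    T[0, :], T[-1, :], T[:, 0] = 0.0, 0.0, 0.0
    T[:, -1] = 200.0                   # right edge held at 200
    lap = (T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2 \
        + (T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2
    T[1:-1, 1:-1] += dt * alpha * lap  # forward step per equation 2.189
print(T[26, 51])                       # temperature near the center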