Slide 1
Numerical Methods
Modern Engineering
Mathematics (3rd edition)
Glyn James
Prentice-Hall 2001
ISBN 0-13-018319-9
Approx. price: 25
James Grimbleby
Slide 2
Syllabus
Numerical precision, absolute and relative errors, binary
codes, integer and floating-point codes, IEEE floating-point
representations, rounding and truncation errors in arithmetic
operations, underflow and overflow
Real zeros of non-linear equations by repeated bisection,
secant method, iteration method, Newton-Raphson method,
factorisation of polynomials, synthetic division
Curve fitting, linear and quadratic interpolation, Lagrange
polynomials, exact polynomial fit through equally-spaced
points, numerical differentiation, polynomial least-squares fit
Slide 3
Syllabus
Numerical integration, the trapezium approximation, improper
integrals, midpoint approximation, changes of variable to deal
with infinite integration limits
Function minimisation, single-variable approximation methods,
single-variable search methods, Golden-Section search
Numerical solution of first-order non-linear differential
equations; Euler's method, 2nd and 4th order Runge-Kutta
methods
Numerical solution of higher-order differential equations,
reduction to a set of 1st-order differential equations
Slide 4
Numerical Methods
Engineering calculations often involve quantities whose values can vary over a wide range:
Frequencies: 1 Hz to 10 GHz
Mass: μg to many tonnes
Distance: μm to thousands of km
Power: nW to MW
Capacitance: 1 pF to 1 F
Slide 5
Precision
Relative precision: the precision of a number relative to its
magnitude
100±2, 1±0.02 and 0.001±0.00002 all have a relative precision of 0.02 or 2%
Absolute precision: the absolute precisions of these values are respectively 2, 0.02 and 0.00002
In most cases it is the relative precision, rather than the
absolute precision, that is of importance
Slide 6
Binary Codes
Numbers are represented in a computer by means of
binary codes
Each number is stored and processed in the form of a
group of binary digits (bits) known as a word
A binary word consisting of n bits can represent 2^n distinct values
Natural binary code: n-bit word represents the integer values 0 to 2^n − 1
Signed binary code: n-bit word represents the integer values −2^(n−1) to 2^(n−1) − 1
Slide 7
Floating-Point Codes
Binary floating-point codes can represent very large and
very small values
Floating-point numbers consist of two fields, the mantissa
M and the exponent E
X = M × 2^E
The relative precision of a floating-point number is
determined by the number of bits in the mantissa M
The range is determined by the number of bits in the
exponent E
Slide 8
The IEEE double-precision format has an 11-bit exponent and a 52-bit mantissa
Slide 9
(Figure: bit layout of the IEEE double-precision format — sign bit, 11-bit exponent, 52-bit mantissa; the mantissa lies between 1 (decimal) and 2 (decimal), so only its 52 fraction bits are stored)
Slide 10
The exponent runs from −1022 to +1023, so the floating-point range is:
1 × 2^−1022 ≈ 2.3 × 10^−308
2 × 2^+1023 ≈ 1.7 × 10^+308
Slide 11
Slide 12
Truncation
Truncation is the act of simply discarding unwanted digits
from a number
Decimal value 1.2 has a binary representation:
1.00110011001100110011001100110011 ...
This exact binary representation could be truncated to 8 bits:
1.0011001
This has decimal value 1.1953125, an absolute error of
0.0046875 or relative error of 0.39%
Slide 13
Rounding
Rounding involves determining the nearest discrete value of
the appropriate word
Decimal value 1.2 has a binary representation:
1.00110011001100110011001100110011 ...
This exact binary representation could be rounded to 8 bits:
1.0011010
This has decimal value 1.203125 , an absolute error of
0.003125 or relative error of 0.26%
Slide 14
Slide 15
Slide 16
Slide 17
Slide 18
Solving the pair of simultaneous linear equations:
y = a1x + b1
y = a2x + b2
Solution:
a1x + b1 = a2x + b2
x(a1 − a2) = b2 − b1
x = (b2 − b1) / (a1 − a2)
Slide 19
Slide 20
To avoid cancellation the larger-magnitude root is computed first:
q = b + sgn(b) √(b² − 4ac)
x1 = −q / 2a
x2 = −2c / q
It is safe to use the normal quadratic formula if the roots are complex, that is b² < 4ac
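The stable evaluation above can be sketched in Python (a minimal sketch — the function name and structure are my own):

```python
import math

def stable_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, avoiding the cancellation that occurs
    in the standard formula when b*b >> 4*a*c."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("complex roots: the normal formula is safe here")
    q = b + math.copysign(math.sqrt(disc), b)   # q = b + sgn(b)*sqrt(b^2 - 4ac)
    return -q / (2.0 * a), -2.0 * c / q         # x1 = -q/2a, x2 = -2c/q
```

For x² − 10⁸x + 1 = 0 the standard formula loses the small root to cancellation, while this form recovers both roots to full relative precision.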
Slide 21
Slide 22
Unstable Algorithms
Often several algorithms are available to perform some
operation
Examples: solving simultaneous linear equations, curve fitting
and solving differential equations
All of the algorithms will be mathematically correct and will
give the same results if executed on a computer that does not
use rounding or truncation
On a real computer they will not necessarily produce results
of the same precision
Some algorithms are notoriously unstable
Slide 23
Unstable Algorithms
Consider raising φ to an integer power, where:
φ = (√5 − 1) / 2
The (n+1)th power of φ can be obtained from the nth and (n−1)th, since:
φ² = ((√5 − 1)/2)² = (6 − 2√5)/4 = 1 − (√5 − 1)/2 = 1 − φ
so that:
φ^(n+1) = φ^(n−1) · φ² = φ^(n−1) (1 − φ) = φ^(n−1) − φ^n
Slide 24
Unstable Algorithms
n    φ^n (by multiplication)    φ^n (by algorithm)
1    6.1803398874989488×10^-1   6.1803398874989488×10^-1
2    3.8196601125010520×10^-1   3.8196601125010512×10^-1
3    2.3606797749978976×10^-1   2.3606797749978980×10^-1
4    1.4589803375031552×10^-1   1.4589803375031530×10^-1
5    9.0169943749474288×10^-2   9.0169943749474512×10^-2
6    5.5728090000841248×10^-2   5.5728090000840776×10^-2
. . .
32   2.0530310231465800×10^-7   2.0518477406028524×10^-7
33   1.2688429522625586×10^-7   1.2707575436365914×10^-7
34   7.8418807088402160×10^-8   7.8109019696626096×10^-8
35   4.8465488137853720×10^-8   4.8966734667033048×10^-8
36   2.9953318950548452×10^-8   2.9142285029593040×10^-8
37   1.8512169187305276×10^-8   1.9824449637440012×10^-8
38   1.1441149763243180×10^-8   9.3178353921530288×10^-9
39   7.0710194240620984×10^-9   1.0506614245286982×10^-8
40   4.3701303391810832×10^-9   -1.1887788531339538×10^-9
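The two ways of computing φ^n can be compared directly (a minimal sketch; names are my own):

```python
import math

phi = (math.sqrt(5.0) - 1.0) / 2.0   # phi = (sqrt(5) - 1)/2 = 0.618...

def powers_direct(n):
    """phi^1 .. phi^n by repeated multiplication (stable)."""
    return [phi ** k for k in range(1, n + 1)]

def powers_recurrence(n):
    """phi^1 .. phi^n by the recurrence phi^(k+1) = phi^(k-1) - phi^k,
    which amplifies the initial rounding error at every step."""
    vals = [1.0, phi]                 # phi^0, phi^1
    for _ in range(2, n + 1):
        vals.append(vals[-2] - vals[-1])
    return vals[1:]
```

The error in the recurrence grows by a factor of about 1.618 per step, which is why the last rows of the table disagree and the algorithm's value eventually goes negative.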
Slide 25
Slide 26
m = √(x² + y²)
Evaluating this the obvious way limits the values of x and y to half of the available floating-point range
To avoid this the modulus can be evaluated in an alternative way:
m = |x| √(1 + (y/x)²)   for |x| ≥ |y|
m = |y| √(1 + (x/y)²)   for |x| < |y|
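The alternative evaluation can be sketched as (a minimal sketch; the function name is my own):

```python
import math

def safe_mod(x, y):
    """sqrt(x*x + y*y) without overflow when x*x or y*y would exceed
    the floating-point range: factor out the larger magnitude."""
    x, y = abs(x), abs(y)
    if x == 0.0 and y == 0.0:
        return 0.0
    if x >= y:
        return x * math.sqrt(1.0 + (y / x) ** 2)
    return y * math.sqrt(1.0 + (x / y) ** 2)
```

With x = y = 10^200 the naive x*x + y*y overflows, but the factored form does not.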
Slide 27
Diode Circuit
(Figure: a 2.0 V source in series with a 1000 Ω resistor and a diode; the unknown is the diode voltage v1)
Slide 28
The resistor current is i = (2.0 − v1) / 1000.0 and the diode current is i = 1.0 × 10^−14 exp(v1 / 0.026), so equating the two currents:
1.0 × 10^−14 exp(v1 / 0.026) − (2.0 − v1) / 1000.0 = 0
Slide 29
Graphical Solution
(Figure: the diode characteristic i = 1.0 × 10^−14 exp(v1 / 0.026) and the load line i = (2.0 − v1) / 1000.0 plotted for i up to 2 mA and v1 up to 2 V; their intersection is the solution)
Slide 30
Repeated Bisection
(Figure: f(x) crossing zero between a and b; the midpoint of the interval is c = (a + b)/2)
Slide 31
Repeated Bisection
Repeated Bisection algorithm:
If the initial interval is a0 to b0, with midpoint c0, and the
interval for the next iteration is a1 to b1, then :
c0 = (a0 + b0)/2
If f(a0) f(c0) < 0 then:
a1 = a0, b1 = c0
otherwise:
a1 = c0, b1 = b0
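The algorithm above can be sketched in Python, using the diode-circuit equation from Slides 28-29 as the test function (a minimal sketch; names are my own):

```python
import math

def f(x):
    """Diode-circuit equation; root near x = 0.666031."""
    return 1.0e-14 * math.exp(x / 0.026) - (2.0 - x) / 1000.0

def bisect(f, a, b, tol=1e-6):
    """Repeated bisection; f(a) and f(b) must have opposite signs."""
    fa = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = f(c)
        if fa * fc < 0.0:       # root lies in [a, c]
            b = c
        else:                   # root lies in [c, b]
            a, fa = c, fc
    return 0.5 * (a + b)
```

Starting from the interval 0 to 2 this follows the iteration table on the next slide.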
Slide 32
Repeated Bisection
a         c         b         f(a)     f(c)     f(b)
0.000000  1.000000  2.000000  -2.0e-3  5.1e+2   2.6e+19
0.000000  0.500000  1.000000  -2.0e-3  -1.5e-3  5.1e+2
0.500000  0.750000  1.000000  -1.5e-3  3.2e-2   5.1e+2
0.500000  0.625000  0.750000  -1.5e-3  -1.1e-3  3.2e-2
0.625000  0.687500  0.750000  -1.1e-3  1.7e-3   3.2e-2
0.625000  0.656250  0.687500  -1.1e-3  -4.3e-4  1.7e-3
0.656250  0.671875  0.687500  -4.3e-4  3.4e-4   1.7e-3
0.656250  0.664063  0.671875  -4.3e-4  -9.9e-5  3.4e-4
0.664063  0.667969  0.671875  -9.9e-5  1.1e-4   3.4e-4
0.664063  0.666016  0.667969  -9.9e-5  -8.3e-7  1.1e-4
0.666016  0.666992  0.667969  -8.3e-7  5.1e-5   1.1e-4
0.666016  0.666504  0.666992  -8.3e-7  2.5e-5   5.1e-5
0.666016  0.666260  0.666504  -8.3e-7  1.2e-5   2.5e-5
. . .
0.666031  0.666033  0.666035  -3.0e-8  7.0e-8   1.7e-7
0.666031  0.666032  0.666033  -3.0e-8  2.0e-8   7.0e-8
Slide 33
Repeated Bisection
The repeated bisection method is very robust and its rate of
convergence (a factor of 2 per iteration) is fixed
After n iterations the size of the interval (bn − an) is given by:
bn − an = (b0 − a0) / 2^n
so the number of iterations needed to locate the root to within ε is:
n = log2((b0 − a0)/ε) = log10((b0 − a0)/ε) / log10 2
Slide 34
Repeated Bisection
(Figure: two cases that can mislead repeated bisection — a function that touches zero without changing sign, and f(x) = 1/(x − 2), where the sign change at x = 2 is a pole rather than a root)
Slide 35
Secant Method
Two points a and b on the function f(x) are used to estimate
the position c of the root by linear interpolation
(Figure: the chord through (a, f(a)) and (b, f(b)) crosses the x-axis at c)
slope = (f(b) − f(a)) / (b − a) = (0 − f(a)) / (c − a)
so that:
c − a = −f(a) (b − a) / (f(b) − f(a))
c = a − f(a) (b − a) / (f(b) − f(a))
Slide 36
Secant Method
It is not necessary for the root to lie between the points a
and b:
(Figure: a and b on the same side of the root; extrapolating the chord still locates c)
slope = (f(b) − f(a)) / (b − a) = f(a) / (a − c)
so that:
a − c = f(a) (b − a) / (f(b) − f(a))
c = a − f(a) (b − a) / (f(b) − f(a))
Slide 37
Secant Method
The process is now repeated with a1=b0 and b1=c0 to get an
even better approximation c1:
(Figure: the secant through (a1, f(a1)) and (b1, f(b1)) gives the next estimate c1)
Slide 38
Secant Method
a         b         f(a)          f(b)          c
0.700000  1.000000  3.6265×10^-3  5.0539×10^+2  0.699998
1.000000  0.699998  5.0539×10^+2  3.6261×10^-3  0.699996
0.699998  0.699996  3.6261×10^-3  3.6257×10^-3  0.680959
0.699996  0.680959  3.6257×10^-3  1.0495×10^-3  0.673203
0.680959  0.673203  1.0495×10^-3  4.3089×10^-4  0.667802
0.673203  0.667802  4.3089×10^-4  9.5748×10^-5  0.666258
0.667802  0.666258  9.5748×10^-5  1.1914×10^-5  0.666039
0.666258  0.666039  1.1914×10^-5  3.9138×10^-7  0.666031
0.666039  0.666031  3.9138×10^-7  1.6718×10^-9  0.666031
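The secant iteration in the table can be sketched in Python (a minimal sketch; names are my own, with f the diode-circuit equation from Slide 28):

```python
import math

def f(x):
    """Diode-circuit equation; root near x = 0.666031."""
    return 1.0e-14 * math.exp(x / 0.026) - (2.0 - x) / 1000.0

def secant(f, a, b, tol=1e-9, max_iter=50):
    """Secant method: each new estimate c replaces b, and the old b
    becomes a (a1 = b0, b1 = c0), as in the slides."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = a - fa * (b - a) / (fb - fa)
        if abs(c - b) < tol:
            return c
        a, fa = b, fb
        b, fb = c, f(c)
    return b
```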
Slide 39
Iteration Method
The Iteration method is based on manipulating the equation
into the form:
x = g(x)
Given an approximation xn to the root, a better approximation xn+1 is found by using the formula:
xn+1 = g(xn)
The iteration method can often converge to a root very quickly
Unfortunately the iteration method does not guarantee
convergence
Nevertheless, because of its simplicity it is a valuable method
Slide 40
(Figure: cobweb diagrams of y = g(x) against the line y = x — in the convergent case the iterates x0, x1, x2 approach the fixed point x = g(x); in the divergent case they move away from it)
Slide 41
Diode circuit:
1.0 × 10^−14 exp(v1 / 0.026) − (2.0 − v1) / 1000.0 = 0
One possible rearrangement into the form v1 = g(v1) is:
v1 = 2.0 − 1.0 × 10^−11 exp(v1 / 0.026)
Slide 42
At v1 = 1.0 the derivative magnitude is (1.0 × 10^−11 / 0.026) exp(1.0 / 0.026) = 1.9 × 10^7, far greater than 1, so this iteration diverges; successive values of g(x) are:
−5.053965×10^+05, 2.000000×10^+00, −2.554276×10^+22
Slide 43
A better rearrangement is obtained by dividing through:
exp(v1 / 0.026) = (2.0 − v1) / (1.0 × 10^−11)
so that:
v1 / 0.026 = ln((2.0 − v1) / (1.0 × 10^−11))
or:
v1 = 0.026 ln((2.0 − v1) / (1.0 × 10^−11))
Slide 44
Iteration Method
From a starting point of x0 = 1.0:
g(x0) = 0.026 ln((2.0 − 1.0) / (1.0 × 10^−11)) = 0.658539
Here the derivative magnitude |g′(x)| = 0.026 / (2.0 − x) is only 0.026 at x0 = 1.0, so the iteration converges rapidly:
g(x): 0.658539, 0.666177, 0.666029, 0.666032, 0.666031, 0.666031
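The convergent rearrangement from Slide 43 can be iterated directly (a minimal sketch; names are my own):

```python
import math

def g(v):
    """Convergent rearrangement v = 0.026*ln((2.0 - v)/1e-11)."""
    return 0.026 * math.log((2.0 - v) / 1.0e-11)

def fixed_point(g, x0, tol=1e-9, max_iter=100):
    """Iterate x_(n+1) = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```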
Slide 45
Newton-Raphson Method
The Newton-Raphson method can in principle be applied to
any equation of the form:
f(x) = 0
Slide 46
Newton-Raphson Method
Suppose that x0 is an initial estimate of the root
Expand the function as a Taylor series around this point:
f(x0 + h) = f(x0) + h f′(x0) + (h²/2!) f″(x0) + ...
Neglecting terms of order h² and setting f(x0 + h) = 0:
0 ≈ f(x0) + h f′(x0)
or: h = −f(x0) / f′(x0)
Slide 47
Newton-Raphson Method
The correction to the estimate of the root is therefore:
h = −f(x0) / f′(x0)
Slide 48
Newton-Raphson Method
(Figure: the tangent to f(x) at x0 crosses the x-axis at x1; successive tangents give x2, x3, ...)
f′(x0) = slope = f(x0) / (x0 − x1)
or:
x1 = x0 − f(x0) / f′(x0)
Slide 49
Newton-Raphson Method
The Newton-Raphson method requires the derivative of the
function to be evaluated
In the case of the diode circuit the derivative can be obtained
in algebraic form:
f(x) = 1.0 × 10^−14 exp(x / 0.026) − (2.0 − x) / 1000.0
f′(x) = (1.0 × 10^−14 / 0.026) exp(x / 0.026) + 1.0 / 1000.0
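With the derivative available in algebraic form, the Newton-Raphson iteration can be sketched as (names are my own):

```python
import math

def f(x):
    return 1.0e-14 * math.exp(x / 0.026) - (2.0 - x) / 1000.0

def dfdx(x):
    return (1.0e-14 / 0.026) * math.exp(x / 0.026) + 1.0 / 1000.0

def newton(f, dfdx, x0, tol=1e-9, max_iter=100):
    """Newton-Raphson: x_(n+1) = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step
        if abs(step) < tol:
            return x
    return x
```

Starting from x0 = 1.0 this follows the iteration table on the next slide.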
Slide 50
Newton-Raphson Method
xn        f(xn)          f'(xn)         xn+1
1.000000  5.05397×10^+2  1.94384×10^+4  0.974000
0.974000  1.85925×10^+2  7.15100×10^+3  0.948000
0.948000  6.83977×10^+1  2.63072×10^+3  0.922001
0.922001  2.51618×10^+1  9.67805×10^+2  0.896002
0.896002  9.25622×10^+0  3.56052×10^+2  0.870005
0.870005  3.40486×10^+0  1.31001×10^+2  0.844014
0.844014  1.25226×10^+0  4.82095×10^+1  0.818038
0.818038  4.60361×10^-1  1.77526×10^+1  0.792106
0.792106  1.69029×10^-1  6.54858×10^+0  0.766295
0.766295  6.18485×10^-2  2.42724×10^+0  0.740814
0.740814  2.24153×10^-2  9.11558×10^-1  0.716224
0.716224  7.91091×10^-3  3.54642×10^-1  0.693917
0.693917  2.59273×10^-3  1.50954×10^-1  0.676741
0.676741  6.90641×10^-4  7.84577×10^-2  0.667939
0.667939  1.03429×10^-4  5.62112×10^-2  0.666099
0.666099  3.51141×10^-6  5.24390×10^-2  0.666032
0.666032  4.43172×10^-9  5.23066×10^-2  0.666031
Slide 51
Newton-Raphson Method
Newton-Raphson method may fail where two or more of the
roots are coincident
(Figure: a double root — the function touches zero where its derivative is also zero, so both f and f′ vanish at the root)
Slide 52
Newton-Raphson Method
Although Newton-Raphson works well in most cases there is
no guarantee that it will converge:
(Figure: iterates x0, x1, x2, x3 oscillating around a stationary point without approaching the root — the iteration fails to converge)
Slide 53
Polynomial Equations
The Newton-Raphson method is particularly suitable for
polynomial equations because their derivatives can always be
obtained analytically:
f(x) = a0 + a1x + a2x² + ... + anx^n = Σ (i = 0 to n) ai x^i
f′(x) = a1 + 2a2x + ... + n an x^(n−1) = Σ (i = 1 to n) i ai x^(i−1)
Slide 54
Evaluation of Polynomials
The most efficient way of evaluating a polynomial is to use
Horner's rule
This can be derived by rearranging the polynomial into nested
form:
f(x) = a0 + a1x + a2x² + ... + a(n−1)x^(n−1) + anx^n
     = a0 + x(a1 + x(a2 + ... + x(a(n−1) + anx) ... ))
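The nested form evaluates the polynomial with n multiplications and n additions; a minimal Python sketch:

```python
def horner(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n using Horner's nested form;
    coeffs = [a0, a1, ..., an]."""
    result = 0.0
    for a in reversed(coeffs):
        result = result * x + a
    return result
```

For example, the Slide 55 polynomial 1 − 12.1x + 22.2x² − 12.1x³ + x⁴ evaluates to (nearly) zero at its root x = 0.1.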
Slide 55
Polynomial Equations
Polynomial with roots: 10, 1, 1 and 0.1:
f(x) = 1 − 12.1x + 22.2x² − 12.1x³ + x⁴
Newton-Raphson:
xn             f(xn)           f'(xn)          xn+1
8.26446×10^-2  1.44846×10^-1   -8.67625×10^+0  9.93391×10^-2
9.93391×10^-2  5.30788×10^-3   -8.04364×10^+0  9.99990×10^-2
9.99990×10^-2  8.11916×10^-6   -8.01904×10^+0  1.00000×10^-1
1.00000×10^-1  1.90980×10^-11  -8.01900×10^+0  1.00000×10^-1
Slide 56
Polynomial Deflation
Roots are removed from a polynomial by a procedure known
as deflation
Any polynomial can be written in the form of the product of n
factors:
f(x) = a0 + a1x + a2x² + ... + a(n−1)x^(n−1) + anx^n
     = an(x − z1)(x − z2)(x − z3) ... (x − zn)
Slide 57
[G James p 88]
Polynomial Deflation
Dividing out the root 0.1:
          x³ − 12.0x² + 21.0x − 10
x − 0.1 ) x⁴ − 12.1x³ + 22.2x² − 12.1x + 1
          x⁴ −  0.1x³
               −12.0x³ + 22.2x²
               −12.0x³ +  1.2x²
                          21.0x² − 12.1x
                          21.0x² −  2.1x
                                  −10.0x + 1
                                  −10.0x + 1
Slide 58
Polynomial Equations
Deflated polynomial:
f(x) = x³ − 12x² + 21x − 10
Estimate of smallest root:
x0 = −a0/a1 = 10/21 = 0.476190
Newton-Raphson:
xn        f(xn)      f'(xn)     xn+1
0.476190  -2.613109  10.251701  0.731086
0.731086  -0.670281  5.057404   0.863621
. . .
0.997837  -0.000042  0.038940   0.998919
0.998919  -0.000011  0.019469   0.999459
0.999459  -0.000003  0.009734   0.999730
0.999730  -0.000001  0.004867   0.999865
Slide 59
[G James p 88]
Polynomial Deflation
        x² − 11x + 10
x − 1 ) x³ − 12x² + 21x − 10
        x³ −   x²
            −11x² + 21x
            −11x² + 11x
                    10x − 10
                    10x − 10
Slide 60
Cubic Equations
f(x) = a0 + a1x + a2x² + a3x³
(Figure: two cubic curves — one with 3 real roots, one with 1 real root)
Slide 61
Cubic Equations
Example:
f(x) = x³ − 6x² + 13x − 20
Estimate of smallest root: x0 = −a0/a1 = 20/13 = 1.538462
Newton-Raphson:
xn        f(xn)       f'(xn)      xn+1
1.538462  -10.559854  1.639053    7.981116
7.981116  209.948095  108.321259  6.042918
6.042918  60.125160   50.035556   4.841269
4.841269  15.778298   25.218432   4.215604
4.215604  3.091784    15.726702   4.019009
4.019009  0.249297    13.229197   4.000165
4.000165  0.002144    13.001979   4.000000
4.000000  0.000000    13.000000   4.000000
Slide 62
[G James p 88]
Cubic Equations
        x² − 2x + 5
x − 4 ) x³ − 6x² + 13x − 20
        x³ − 4x²
            −2x² + 13x
            −2x² +  8x
                    5x − 20
                    5x − 20
Slide 63
Curve Fitting
Curve fitting consists of obtaining a function f(x) which fits
experimental data points
This may be an exact fit if the number of experimental points
is small
Alternatively if the number of experimental points is large
then the function will normally be an approximate fit
It will be assumed in this course that the function is of
polynomial form and is fitted by adjusting the coefficients ai
f(x) = a0 + a1x + a2x² + ... + a(n−1)x^(n−1) + anx^n
Slide 64
(Figure, left: interpolation — the curve fitted through (x1, y1), (x2, y2), (x3, y3) is evaluated at a point xi inside the data range; right: extrapolation — the same curve is evaluated at a point xe outside the range)
Slide 65
Slide 66
(Figure: data points scattered about the actual function — an approximate fit need not pass through every point)
Slide 67
Function Differentiation
(Figure: inverter efficiency against frequency — a cubic polynomial is fitted to the data points, then differentiated and set to zero to obtain the maximum)
Slide 68
Function Integration
(Figure: a 7th-order polynomial fitted to data points y(x); the polynomial is then integrated up to x = b)
School of Systems Engineering - Electronic Engineering
Slide 69
[G James p 76]
Suppose that the data points are (x1, y1) and (x2, y2):
f(x) = ax + b
(Figure: the straight line through (x1, y1) and (x2, y2))
Slide 70
Requiring y1 = ax1 + b and y2 = ax2 + b gives:
a = (y2 − y1) / (x2 − x1)
b = (x2y1 − x1y2) / (x2 − x1)
Slide 71
so that, for a resistance thermometer with R(Ω) = 511.4 at T = 20 and R(Ω) = 529.1 at T = 40:
a = (529.1 − 511.4) / (40 − 20) = 0.885
b = (40 × 511.4 − 20 × 529.1) / (40 − 20) = 493.7
R = 0.885T + 493.7
Slide 72
Slide 73
Suppose that the data points are (x1, y1), (x2, y2) and (x3, y3):
f(x) = ax² + bx + c
(Figure: the parabola through the three data points)
Slide 74
Slide 75
Example: crystal oscillator frequency F at three temperatures (T = 20, 30, 40 °C, as tabulated on Slide 83):
F (MHz): 1.00040, 1.00107, 1.00243
Fitting F = aT² + bT + c and subtracting the first condition from the other two gives:
500a + 10b = 0.00067
1200a + 20b = 0.00203
Slide 76
Now multiply the 1st equation by 2, and subtract from the 2nd:
200a = 0.00069 or a = 3.450 × 10^−6
Back substitute a (500a = 0.001725):
0.001725 + 10b = 0.00067 or b = −1.055 × 10^−4
Back substitute a and b:
c = 1.00113
Slide 77
Slide 78
Slide 79
[G James p 87]
Lagrange Polynomials
The Lagrange polynomial which fits n data points (x1, y1), (x2, y2) .. (xn, yn) is of order n−1 and has the form:
f(x) = [(x − x2)(x − x3)(x − x4) .. (x − xn)] / [(x1 − x2)(x1 − x3)(x1 − x4) .. (x1 − xn)] · y1
     + [(x − x1)(x − x3)(x − x4) .. (x − xn)] / [(x2 − x1)(x2 − x3)(x2 − x4) .. (x2 − xn)] · y2
     + ...
     + [(x − x1)(x − x2)(x − x3) .. (x − x(n−1))] / [(xn − x1)(xn − x2)(xn − x3) .. (xn − x(n−1))] · yn
Slide 80
Lagrange Polynomials
The Lagrange formula exactly fits each of the data points
Consider the first data point y1=f(x1)
All terms in the formula except the first contain a numerator
factor (x-x1) which is zero at the first data point x=x1
Consequently all terms except the first are zero at the first
data point and the formula simplifies to:
f(x1) = [(x1 − x2)(x1 − x3)(x1 − x4) .. (x1 − xn)] / [(x1 − x2)(x1 − x3)(x1 − x4) .. (x1 − xn)] · y1 = y1
Slide 81
Lagrange Polynomials
Example: the hull drag F of a racing yacht as a function of
the hull speed V:
V (m/s)  F (N)
0.0      0.0
0.5      20.41
1.0      92.75
1.5      181.00
2.0      421.45
2.5      1265.23
Slide 82
Lagrange Polynomials
f(V) = [(V − 0.5)(V − 1.0)(V − 1.5)(V − 2.0)(V − 2.5)] / [(0.0 − 0.5)(0.0 − 1.0)(0.0 − 1.5)(0.0 − 2.0)(0.0 − 2.5)] × 0.0
     + [(V − 0.0)(V − 1.0)(V − 1.5)(V − 2.0)(V − 2.5)] / [(0.5 − 0.0)(0.5 − 1.0)(0.5 − 1.5)(0.5 − 2.0)(0.5 − 2.5)] × 20.41
     + [(V − 0.0)(V − 0.5)(V − 1.5)(V − 2.0)(V − 2.5)] / [(1.0 − 0.0)(1.0 − 0.5)(1.0 − 1.5)(1.0 − 2.0)(1.0 − 2.5)] × 92.75
     + [(V − 0.0)(V − 0.5)(V − 1.0)(V − 2.0)(V − 2.5)] / [(1.5 − 0.0)(1.5 − 0.5)(1.5 − 1.0)(1.5 − 2.0)(1.5 − 2.5)] × 181.00
     + [(V − 0.0)(V − 0.5)(V − 1.0)(V − 1.5)(V − 2.5)] / [(2.0 − 0.0)(2.0 − 0.5)(2.0 − 1.0)(2.0 − 1.5)(2.0 − 2.5)] × 421.45
     + [(V − 0.0)(V − 0.5)(V − 1.0)(V − 1.5)(V − 2.0)] / [(2.5 − 0.0)(2.5 − 0.5)(2.5 − 1.0)(2.5 − 1.5)(2.5 − 2.0)] × 1265.23
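The Lagrange formula can be evaluated directly without expanding it (a minimal sketch; names are my own):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)  # zero at every other data point
        total += term
    return total
```

At each data point all terms but one vanish, so the fit is exact there — e.g. the Slide 83 temperature data is reproduced exactly at T = 20, 30 and 40.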
Slide 83
Lagrange Polynomials
Lagrange polynomials can be converted to explicit polynomial
form
T (°C)  F (MHz)
20      1.00040
30      1.00107
40      1.00243
Lagrange polynomial:
f(T) = [(T − 30)(T − 40)] / [(20 − 30)(20 − 40)] × 1.00040
     + [(T − 20)(T − 40)] / [(30 − 20)(30 − 40)] × 1.00107
     + [(T − 20)(T − 30)] / [(40 − 20)(40 − 30)] × 1.00243
Slide 84
Lagrange Polynomials
f(T) = [(T² − 70T + 1200) / 200] × 1.00040
     + [(T² − 60T + 800) / (−100)] × 1.00107
     + [(T² − 50T + 600) / 200] × 1.00243
Slide 85
[G James p 78]
The error at each data point is:
εi = f(xi) − yi
Slide 86
Summing the squared errors over all n data points gives the quantity to be minimised:
E = Σ (i = 1 to n) εi² = Σ (i = 1 to n) {f(xi) − yi}²
Slide 87
For a straight-line fit f(x) = ax + b:
E = Σ {axi + b − yi}²
Differentiating w.r.t. a:
dE/da = Σ 2xi{axi + b − yi} = 2a Σ xi² + 2b Σ xi − 2 Σ xiyi
Differentiating w.r.t. b:
dE/db = Σ 2{axi + b − yi} = 2a Σ xi + 2b Σ 1 − 2 Σ yi
(all sums running from i = 1 to n)
Slide 88
Setting both derivatives to zero gives the normal equations:
a Σ xi² + b Σ xi = Σ xiyi
a Σ xi + b Σ 1 = Σ yi
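The normal equations can be solved directly for a and b (a minimal sketch; names are my own):

```python
def linear_fit(xs, ys):
    """Least-squares straight line y = a*x + b via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # eliminate b
    b = (sy - a * sx) / n                          # back-substitute
    return a, b
```

Data that lies exactly on a line is recovered exactly, since the residual E is then zero.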
Slide 89
Example: output power P of a device measured at six values of the drive current I:
P (mW): 1.33, 2.08, 2.88, 3.31, 3.83, 4.67
Slide 90
The sums required for the normal equations are:
Σ 1 = 6, Σ xi = 450, Σ yi = 18.10, Σ xi² = 33820, Σ xiyi = 1379.88
Slide 91
Solving the normal equations:
a = 0.319, b = −21.0
so least-squares fit:
P = 0.319 I − 21.0
Slide 92
(Figure: air temperature T (°C), scale 10 to 20 °C, against height above sea level h (m), 0 to 600 m; the data points scatter about the fitted straight line)
Least-squares fit:
T = 19 − 0.01h
Slide 93
For a quadratic fit f(x) = ax² + bx + c:
E = Σ {axi² + bxi + c − yi}²
dE/da = Σ 2xi²{axi² + bxi + c − yi}
      = 2a Σ xi⁴ + 2b Σ xi³ + 2c Σ xi² − 2 Σ xi²yi
Slide 94
Setting the three derivatives to zero gives the normal equations:
a Σ xi⁴ + b Σ xi³ + c Σ xi² = Σ xi²yi
a Σ xi³ + b Σ xi² + c Σ xi = Σ xiyi
a Σ xi² + b Σ xi + c Σ 1 = Σ yi
Slide 95
Slide 96
xi   yi     xi²    xi³        xi⁴        xiyi    xi²yi
0    0.000  0      0          0          0.00    0.0
10   0.391  100    1.00×10^3  1.00×10^4  3.91    39.1
20   0.789  400    8.00×10^3  1.60×10^5  15.78   315.6
30   1.196  900    2.70×10^4  8.10×10^5  35.88   1076.4
40   1.611  1600   6.40×10^4  2.56×10^6  64.44   2577.6
50   2.035  2500   1.25×10^5  6.25×10^6  101.75  5087.5
60   2.467  3600   2.16×10^5  1.296×10^7 148.02  8881.2
70   2.908  4900   3.43×10^5  2.401×10^7 203.56  14249.2
80   3.357  6400   5.12×10^5  4.096×10^7 268.56  21484.8
90   3.813  8100   7.29×10^5  6.561×10^7 343.17  30885.3
100  4.277  10000  1.00×10^6  1.000×10^8 427.70  42770.0
Slide 97
Solving the normal equations gives:
a = 4.145 × 10^−5, b = 3.863 × 10^−2, c = 4.196 × 10^−5
and the least-squares quadratic fit is:
f(x) = 4.145 × 10^−5 x² + 3.863 × 10^−2 x + 4.196 × 10^−5
Slide 98
T        quadratic  error
0.000    0.072      0.072
10.000   10.003     0.003
20.000   19.946     -0.054
30.000   29.940     -0.060
40.000   39.951     -0.049
50.000   49.990     -0.010
60.000   60.024     0.024
70.000   70.063     0.063
80.000   80.073     0.073
90.000   90.021     0.021
100.000  99.917     -0.083
Slide 99
[G James p 569]
Numerical Integration
Numerical integration is the process of evaluating definite integrals by numerical methods:
Z = ∫ (a to b) f(x) dx
Slide 100
Numerical Integration
(Figure: the area under f(x) between x = a and x = b)
area = ∫ (a to b) f(x) dx
Slide 101
Numerical Integration
Analytical integration should be used where possible because
it will be more efficient and more accurate
Many functions cannot be integrated analytically, for example:
Z = ∫ (0 to a) exp(−x²) dx      (Gaussian distribution)
Z = (1/2π) ∫ (0 to ∞) |H(jω)|² dω      (equivalent noise bandwidth)
Z = ∫ (0 to π/2) dθ / √(1 − k² sin²θ)      (elliptic integrals)
Slide 102
Polar Planimeter
(Figure: a polar planimeter — a mechanical instrument for measuring the area enclosed by a plotted curve)
Slide 103
[G James p 570]
Trapezium Method
(Figure: f(x) divided into strips at x0, x1, .. , x6; each strip is approximated by a trapezium with ordinates f0, f1, ..)
Slide 104
Trapezium Method
The area under the curve in each interval is approximated by a
trapezium:
area = h f0 + (1/2) h (f1 − f0) = (h/2)(f0 + f1)     where h = x1 − x0
Normally the n intervals are of equal width:
h = (b − a) / n
so that:
xk = a + kh     where k = 0, 1, 2, .. , n
Slide 105
Trapezium Method
Total area:
Z = Σ (k = 1 to n) (h/2) {f(x(k−1)) + f(xk)}
  = (h/2) Σ (k = 1 to n) {f(x(k−1)) + f(xk)}
Each interior ordinate appears twice, so:
Z = h { (1/2) f(x0) + Σ (k = 1 to n−1) f(xk) + (1/2) f(xn) }
  = h { (1/2) f(a) + Σ (k = 1 to n−1) f(a + kh) + (1/2) f(b) }
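The weighted sum can be sketched in Python (a minimal sketch; names are my own):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n equal intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # end ordinates carry weight 1/2
    for k in range(1, n):
        total += f(a + k * h)
    return h * total
```

With n = 8 this reproduces the worked example on the next slide, and with n = 1000 the converged value on Slide 107.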
Slide 106
Trapezium Method
Example:
Z = ∫ (0 to 2) x⁴ exp(−x²) dx
k   xk    f(xk)
0   0.00  0.000000
1   0.25  0.003670
2   0.50  0.048675
3   0.75  0.180283
4   1.00  0.367879
5   1.25  0.511747
6   1.50  0.533584
7   1.75  0.438657
8   2.00  0.293050
Weighted sum (end ordinates halved): 2.231020
Z = 0.25 × 2.231020 = 0.557755
Slide 107
Trapezium Method
This result is not very accurate because of the small number
of intervals used (8)
To obtain a higher accuracy the integration range must be split
into much larger number of intervals:
n     Z
100   0.560805
200   0.560820
500   0.560824
1000  0.560824
Slide 108
Trapezium Method
Use the trapezium rule with successively larger values of n
until the results agree to within the required accuracy:
(Figure: doubling the number of intervals from n = 2 to n = 4 to n = 8 — the ordinates x0 .. xn already computed are reused, so only the new interior ordinates need be evaluated)
Slide 109
Trapezium Method
Example: increase number of intervals to 16:
k    xk     f(xk)
1    0.125  0.000240
3    0.375  0.017181
5    0.625  0.103246
7    0.875  0.272600
9    1.125  0.451810
11   1.375  0.539663
13   1.625  0.497284
15   1.875  0.367442
Sum of new ordinates: 2.249467
(Even values of k have already been evaluated for n = 8)
Slide 110
Example: acceleration a of a vehicle measured at 0.5 s intervals:
k   t (s)  a (m/s²)
0   0.0    24.1
1   0.5    27.5
2   1.0    31.2
3   1.5    34.4
4   2.0    37.9
5   2.5    41.7
6   3.0    45.1
7   3.5    48.4
8   4.0    51.9
Slide 111
The velocity follows by trapezium integration of the acceleration:
v1 = v0 + ∫ (t0 to t1) a dt ≈ v0 + (1/2)(t1 − t0)(a0 + a1)
k   t (s)  a (m/s²)  v (m/s)
0   0.0    24.1      0.000
1   0.5    27.5      12.900
2   1.0    31.2      27.575
3   1.5    34.4      43.975
4   2.0    37.9      62.050
5   2.5    41.7      81.950
6   3.0    45.1      103.650
7   3.5    48.4      127.025
8   4.0    51.9      152.100
Slide 112
Integrating again gives the distance:
h1 = h0 + ∫ (t0 to t1) v dt ≈ h0 + (1/2)(t1 − t0)(v0 + v1)
k   t (s)  a (m/s²)  v (m/s)  h (m)
0   0.0    24.1      0.000    0.000
1   0.5    27.5      12.900   3.225
2   1.0    31.2      27.575   13.344
3   1.5    34.4      43.975   31.231
4   2.0    37.9      62.050   57.738
5   2.5    41.7      81.950   93.738
6   3.0    45.1      103.650  140.138
7   3.5    48.4      127.025  197.806
8   4.0    51.9      152.100  267.588
Slide 113
[G James p 595]
Improper Integrals
An integral is termed improper if:
1. It has an integrable singularity between its upper and lower limits, for example:
Z = ∫ (−1 to 1) x^(−1/3) dx
2. It has an integrable singularity at one of its limits, for example:
Z = ∫ (0 to 1) x^(−1/3) dx
Slide 114
Improper Integrals
Type 1 improper integrals can always be converted to type 2 integrals by splitting the integration range
Suppose that f(x) has a singularity at c:
Z = ∫ (a to b) f(x) dx = ∫ (a to c) f(x) dx + ∫ (c to b) f(x) dx
Slide 115
Improper Integrals
The trapezium method evaluates the integrand f(x) at each of
the integration limits
It cannot therefore be used with type 2 improper integrals
It is also unsuitable for integrating functions such as:
Z = ∫ (0 to 1) (sin x / x) dx
Here the integrand is perfectly well-behaved at x=0
Nevertheless, any attempt to evaluate the integrand at x=0
on a computer will lead to a divide-by-zero error
Slide 116
[G James p 570]
Midpoint Method
(Figure: f(x) divided into strips; each strip is approximated by a rectangle whose height f(xk) is the ordinate at the strip midpoint)
Slide 117
Midpoint Method
Normally the n intervals will be of equal width h:
h = (b − a) / n
and:
xk = a + (2k − 1)h / 2     where k = 1, 2, .. , n
Total area:
Z = h Σ (k = 1 to n) f(xk) = h Σ (k = 1 to n) f(a + (2k − 1)h/2)
Slide 118
Midpoint Method
Example:
Z = ∫ (0 to 2π) (sin x / x) dx
k   xk        f(xk)
1   0.392699  0.974496
2   1.178097  0.784213
3   1.963496  0.470528
4   2.748894  0.139214
5   3.534292  -0.108277
6   4.319690  -0.213876
7   5.105089  -0.180972
8   5.890487  -0.064966
Sum: 1.800360
Z = (2π/8) × 1.800360 = 0.785398 × 1.800360 = 1.413999
Slide 119
Midpoint Method
This result is not very accurate because of the small number
of intervals used (8)
To obtain a higher accuracy the integration range must be split
into much larger number of intervals:
n     Z
100   1.418125
200   1.418145
500   1.418150
1000  1.418151
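The midpoint rule is a one-line sum in Python (a minimal sketch; names are my own):

```python
import math

def midpoint(f, a, b, n):
    """Composite midpoint rule; the integrand is never evaluated at the
    interval ends, so it copes with integrands like sin(x)/x at x = 0."""
    h = (b - a) / n
    return h * sum(f(a + (2 * k - 1) * h / 2.0) for k in range(1, n + 1))
```

With n = 8 this reproduces the Slide 118 example, and with n = 1000 the converged value above.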
Slide 120
Midpoint Method
Use the midpoint rule with successively larger values of n until
the results agree to within the required accuracy:
(Figure: tripling the number of intervals from n = 1 to n = 3 to n = 9 — the midpoint ordinates already computed are reused)
Slide 121
Z = ∫ (0 to ∞) f(x) dx
Slide 122
Infinite Integration Limits
Z = ∫ (0 to ∞) f(x) dx
Let x = tan y; the limits x = 0 and x = ∞ become y = 0 and y = π/2
dx/dy = d(sin y / cos y)/dy = cos y / cos y + sin²y / cos²y = 1 + tan²y
so:
dx = {1 + tan²y} dy
and the integral becomes:
Z = ∫ (0 to ∞) f(x) dx = ∫ (0 to π/2) f(tan y) {1 + tan²y} dy
Slide 123
Example: the equivalent noise bandwidth of a low-pass filter:
B = (1/2π) ∫ (0 to ∞) |H(jω)|² dω
where H(jω) is the filter transfer function; writing the denominator of H(jx) as dr + j di, the integrand is:
f(x) = |H(jx)|² = 1 / (dr² + di²)
Slide 124
yk        xk = tan(yk)  f(xk)     (1 + xk²) f(xk)
0.098175  0.098491      0.999192  1.008885
0.294524  0.303347      0.992355  1.083671
0.490874  0.534511      0.976417  1.255382
0.687224  0.820679      0.945119  1.581670
0.883573  1.218504      0.882234  2.192133
1.079923  1.870869      0.740300  3.331460
1.276272  3.296560      0.365908  4.342343
1.472622  10.153170     0.030949  3.221357
Sum: 18.016891
Slide 125
Optimisation
Optimisation involves adjusting the parameters of a system to
obtain the best performance
Most applications of computers in engineering are concerned
with analysis
Optimisation differs from these techniques in that it is a true
design method
Essentially optimisation closes the loop around analysis; if a
system can be analysed then it can also be optimised
Optimisation is usually formulated as a function minimisation
problem
Slide 126
Function Minimisation
Many designs can be optimised by minimising the difference
between their behaviour and some target behaviour
If some performance item is to be maximised (for example
efficiency) then function minimisation can still be used, since
the maximum of f occurs at the minimum of −f
The function to be minimised is known as the objective function
It is possible to optimise functions of several variables, but the
discussion here will be limited to single-variable optimisation
Two methods will be considered: approximation methods and
golden-section search
Slide 127
Thermistor Thermometer
(Figure: potential divider — reference voltage Vref applied to thermistor Rth in series with resistor R0; the output Vout is taken across R0)
The thermistor resistance varies with absolute temperature T:
Rth = A e^(B/T)
Vout = Vref R0 / (R0 + Rth) = Vref R0 / (R0 + A e^(B/T))
Slide 128
Thermistor Thermometer
(Figure: Vout against temperature from 0 °C to 50 °C — the actual voltage curve runs from V(0) to V(50); the difference between it and the straight-line approximation is the temperature error)
Slide 129
Thermistor Thermometer
(Figure: temperature error, on a scale of 0 °C to 10 °C, plotted against R0 from 100 Ω to 10 kΩ — the error has a minimum at an intermediate value of R0)
Slide 130
Approximation Methods
Fit a cubic polynomial to the objective function at 4 points:
(Figure: a cubic polynomial fitted to the objective function f(x) at four points around the minimum)
Slide 131
Approximation Methods
Coefficients a, b, c, d of cubic polynomial are chosen so that it
fits f(x) at four x values:
p(x) = ax³ + bx² + cx + d
Differentiating:
dp/dx = 3ax² + 2bx + c = 0
xmin = (−b ± √(b² − 3ac)) / 3a
Select the minimum using the second derivative:
d²p/dx² = 6ax + 2b > 0
Slide 132
Approximation Methods
The minimum xmin of the polynomial is only an approximation
to the minimum of the objective function
The process must be repeated, using xmin in place of the worst
of the original points, until a satisfactory accuracy has been
obtained
Approximation methods are useful in certain applications but
they require the objective function to be well-behaved
The objective function for the thermistor thermometer has a
discontinuous gradient at the minimum, and cannot be
accurately approximated around this point
Slide 133
Search Methods
Given an initial interval containing a minimum split the interval
into two or more sections and determine which of these
contains the minimum
Repeat the process until the minimum has been located to a
sufficient degree of accuracy
Search methods do not require the objective function to be
well-behaved and will find the minimum even if the objective
function is discontinuous.
Since the interval is reduced by a fixed ratio at each iteration,
search methods have an entirely predictable convergence
Slide 134
Binary Search
(Figure: interval a to b with three closely spaced trial points c − h, c and c + h at the centre; comparing the objective at these points shows which half of the interval contains the minimum)
Slide 135
Golden-Section Search
The binary search method has the disadvantage that it is
difficult to choose a suitable value for h
If h is large then the convergence rate is reduced
If h is small then errors can occur because of rounding
These problems are overcome in the Golden-Section
method, where h is large but only one objective evaluation is
required at each iteration
The name Golden-Section derives from the convergence rate
which is 1.618
Slide 136
Golden-Section Search
(Figure: the interval a to b with interior points p and q; after each iteration one of the old interior points becomes an interior point of the new, smaller interval)
Slide 137
Golden-Section Search
Let each iteration reduce the interval by a factor k. If the minimum lies in p to b the new interval is a′ = p, b′ = b, and the old q is reused as the new p, so:
b′ − a′ = b − p = (b − a)/k
b′ − p′ = b − q = (b − p)/k
Slide 138
Combining these gives:
b − q = (b − p)/k = (b − a)/k²
and since by symmetry q − a = b − p = (b − a)/k:
b − q = (b − a) − (q − a) = (b − a)(1 − 1/k)
so that:
1 − 1/k = 1/k²
thus:
k² − k − 1 = 0
k = (1 + √5) / 2 = 1.618
Slide 139
Slide 140
Golden-Section Search
The search is continued until the search interval b − a is less than the required precision
Convergence rate is k = 1.618
If the required precision is ε and the initial search interval is a to b, then the number of iterations n will be:
n = {log(b − a) − log(ε)} / log(k)
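The search can be sketched in Python (a minimal sketch; names are my own — note that only one new objective evaluation is made per iteration):

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for a minimum of f in [a, b]."""
    k = (1.0 + math.sqrt(5.0)) / 2.0   # 1.618...
    p = b - (b - a) / k
    q = a + (b - a) / k
    fp, fq = f(p), f(q)
    while b - a > tol:
        if fp < fq:                    # minimum lies in [a, q]
            b, q, fq = q, p, fp        # old p is reused as the new q
            p = b - (b - a) / k
            fp = f(p)
        else:                          # minimum lies in [p, b]
            a, p, fp = p, q, fq        # old q is reused as the new p
            q = a + (b - a) / k
            fq = f(q)
    return 0.5 * (a + b)
```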
Slide 141
Golden-Section Search
Thermistor thermometer: choose initial range a = 1 kΩ, b = 4 kΩ
q − a = b − p = (b − a)/k = 3 kΩ / 1.618 = 1.854 kΩ
so that:
p = 2.146 kΩ, q = 2.854 kΩ
f(p) = 1.171, f(q) = 2.061
(Figure: the interval a = 1 kΩ to b = 4 kΩ with p and q each 1.854 kΩ from an end)
Slide 142
Golden-Section Search
Next iteration: a′ = 1 kΩ, b′ = 2.854 kΩ
so that:
q′ − a′ = b′ − p′ = (b′ − a′)/k = 1.854 kΩ / 1.618 = 1.146 kΩ
p′ = 1.708 kΩ, q′ = 2.146 kΩ
f(p′) = 2.179, f(q′) = 1.171
(Note that q′ = p, so f(q′) = f(p) is reused from the previous iteration)
Slide 143
Golden-Section Search
a           p           q           b           f(p)        f(q)
1.000×10^3  2.146×10^3  2.854×10^3  4.000×10^3  1.171×10^0  2.061×10^0
1.000×10^3  1.708×10^3  2.146×10^3  2.854×10^3  2.179×10^0  1.171×10^0
1.708×10^3  2.146×10^3  2.416×10^3  2.854×10^3  1.171×10^0  1.257×10^0
1.708×10^3  1.979×10^3  2.146×10^3  2.416×10^3  1.513×10^0  1.171×10^0
1.979×10^3  2.146×10^3  2.249×10^3  2.416×10^3  1.171×10^0  9.810×10^-1
2.146×10^3  2.249×10^3  2.313×10^3  2.416×10^3  9.810×10^-1 1.063×10^0
2.146×10^3  2.210×10^3  2.249×10^3  2.313×10^3  1.052×10^0  9.810×10^-1
2.210×10^3  2.249×10^3  2.274×10^3  2.313×10^3  9.810×10^-1 9.921×10^-1
2.210×10^3  2.234×10^3  2.249×10^3  2.274×10^3  1.008×10^0  9.810×10^-1
2.234×10^3  2.249×10^3  2.259×10^3  2.274×10^3  9.810×10^-1 9.657×10^-1
2.249×10^3  2.259×10^3  2.264×10^3  2.274×10^3  9.657×10^-1 9.753×10^-1
2.249×10^3  2.255×10^3  2.259×10^3  2.264×10^3  9.715×10^-1 9.657×10^-1
2.255×10^3  2.259×10^3  2.261×10^3  2.264×10^3  9.657×10^-1 9.689×10^-1
2.255×10^3  2.257×10^3  2.259×10^3  2.261×10^3  9.679×10^-1 9.657×10^-1
2.257×10^3  2.259×10^3  2.259×10^3  2.261×10^3  9.657×10^-1 9.664×10^-1
2.257×10^3  2.258×10^3  2.259×10^3  2.259×10^3  9.665×10^-1 9.657×10^-1
Slide 144
Bracketing a Minimum
(Figure: three points a, b, c with f(b) below both f(a) and f(c) bracket a minimum; stepping further out in x extends the bracket until this condition is met)
Slide 145
Differential Equations
The time-domain response of electronic systems and the
dynamical behaviour of mechanical systems can be described
by differential equations
Many differential equations can be solved analytically,
particularly those describing idealised systems
The more realistic the mathematical model of a physical
system, the less likely it is that an analytical solution can be
found
In cases where an analytical solution is not available the
differential equations must be solved by numerical methods
and as a consequence the solution will not be exact
Slide 146
Differential Equations
We shall first consider the problem of solving first-order non-linear differential equations of the form:

dx/dt = f(x, t)
Slide 147
Differential Equations
[Circuit: dc source vi = 4V connected through a switch and resistor R = 100k to capacitor C = 10nF; capacitor voltage vC(t), charging current i(t)]

Suppose that prior to t=0 the switch is open and the capacitor discharged: vC = 0

Then at t=0 the switch is closed and the capacitor starts to charge towards the input voltage vi
Slide 148
Differential Equations
[Circuit as before: vi = 4V, R = 100k, C = 10nF]

i(t) = C dvC(t)/dt

R i(t) + vC(t) = vi

RC dvC(t)/dt + vC(t) = vi

vC(t) = vi {1 - e^(-t/RC)}
Slide 149
Euler Integration
A first-order differential equation can be written in the form:

dx/dt = f(x, t)

Provided that Δx and Δt are sufficiently small:

Δx/Δt ≈ dx/dt

or:

Δx ≈ Δt (dx/dt) = Δt f(x, t)
Slide 150
Euler Integration
The equation for the capacitor voltage vC can be rewritten:

dvC(t)/dt = (1/RC) {vi - vC(t)}
Slide 151
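The Euler recurrence is easily checked with a short program. This sketch (my own code, not from the slides) uses the circuit values R = 100k, C = 10nF, vi = 4V and reproduces the Δt = 10^-4 result of the table that follows:

```python
# Euler integration of dvC/dt = (vi - vC)/RC for the RC charging circuit.
R, C, vi = 100e3, 10e-9, 4.0   # RC = 1 ms

def f(x, t):
    # derivative function dvC/dt = (vi - vC)/RC
    return (vi - x) / (R * C)

def euler(x, t, dt, steps):
    for _ in range(steps):
        x += dt * f(x, t)   # Euler step: Δx = Δt·f(x, t)
        t += dt
    return x

vc = euler(0.0, 0.0, 1e-4, 10)   # integrate from t = 0 to t = 1 ms
print(round(vc, 6))              # → 2.605286
```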
Euler Integration
t       x(t)      f(x,t)   Δx=Δt·f(x,t)  x(exact)
0.0000  0.000000  4000.00  0.400000      0.000000
0.0001  0.400000  3600.00  0.360000      0.380650
0.0002  0.760000  3240.00  0.324000      0.725077
0.0003  1.084000  2916.00  0.291600      1.036727
0.0004  1.375600  2624.40  0.262440      1.318720
0.0005  1.638040  2361.96  0.236196      1.573877
0.0006  1.874236  2125.76  0.212576      1.804753
0.0007  2.086812  1913.19  0.191319      2.013659
0.0008  2.278131  1721.87  0.172187      2.202684
0.0009  2.450318  1549.68  0.154968      2.373721
0.0010  2.605286  1394.71  0.139471      2.528482
Slide 152
Euler Integration
Euler integration using smaller time steps:

Δt     Iterations  vc        error
10^-4  10          2.605286  0.076804
10^-5  100         2.535871  0.007389
10^-6  1000        2.529218  0.000736
Slide 153
Euler Integration
[Figure: Euler step — the slope dx/dt = f(x, t) at x(t) is extrapolated over the interval t to t+Δt, giving Δx = Δt f(x, t)]
Slide 154
Heun Integration
The Heun method obtains a better estimate of the slope by
averaging the derivative at the start and end of the interval
k1 = f(x, t)

x1 = x(t) + Δt k1
Slide 155
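A corresponding sketch of the Heun step (again my own code, assuming the same circuit values) gives the Δt = 10^-4 entry of the results table:

```python
# Heun integration: average the slopes at the start and the (predicted)
# end of each step.
R, C, vi = 100e3, 10e-9, 4.0

def f(x, t):
    return (vi - x) / (R * C)

def heun(x, t, dt, steps):
    for _ in range(steps):
        k1 = f(x, t)              # slope at the start of the interval
        x1 = x + dt * k1          # Euler predictor for the end point
        k2 = f(x1, t + dt)        # slope at the predicted end point
        x += dt * (k1 + k2) / 2   # corrected step
        t += dt
    return x

vc = heun(0.0, 0.0, 1e-4, 10)
print(round(vc, 6))   # → 2.525836
```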
Heun Integration
[Figure: Heun step — slope k1 = f(x, t) at the start, slope k2 = f(x1, t+Δt) at the predicted point x1; the averaged slope (k1 + k2)/2 gives Δx = Δt (k1 + k2)/2]
Slide 156
Heun Integration
t       x(t)      k1=f(x,t)  x1(t)     k2       Δx        x(exact)
0.0000  0.000000  4000.00    0.400000  3600.00  0.380000  0.000000
0.0001  0.380000  3620.00    0.742000  3258.00  0.343900  0.380650
0.0002  0.723900  3276.10    1.051510  2948.49  0.311230  0.725077
0.0003  1.035130  2964.87    1.331617  2668.38  0.281663  1.036727
0.0004  1.316792  2683.21    1.585113  2414.89  0.254905  1.318720
0.0005  1.571697  2428.30    1.814527  2185.47  0.230689  1.573877
0.0006  1.802386  2197.61    2.022147  1977.85  0.208773  1.804753
0.0007  2.011159  1988.84    2.210043  1789.96  0.188940  2.013659
0.0008  2.200099  1799.90    2.380089  1619.91  0.170991  2.202684
0.0009  2.371090  1628.91    2.533981  1466.02  0.154746  2.373721
0.0010  2.525836  1474.16    2.673252  1326.75  0.140046  2.528482
Slide 157
Heun Integration
Heun integration using smaller time steps:

Δt     Iterations  vc        error
10^-4  10          2.525836  0.002646
10^-5  100         2.528458  0.000025
10^-6  1000        2.528482  0.000000
Slide 158
Second-Order Runge-Kutta
The second-order Runge-Kutta (midpoint) method uses the initial slope to estimate x at the midpoint of the interval:

k1 = f(x, t)

x1 = x(t) + (Δt/2) k1
Slide 159
[Figure: midpoint step — slope k1 = f(x, t) at the start gives x1 = x(t) + (Δt/2) k1 at t+Δt/2; the midpoint slope k2 = f(x1, t+Δt/2) is then used for the full step, Δx = Δt k2]
Slide 160
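The midpoint update can be sketched the same way (my own code, same assumed circuit values); for this linear f it gives the same result as the Heun method:

```python
# Second-order Runge-Kutta (midpoint) integration.
R, C, vi = 100e3, 10e-9, 4.0

def f(x, t):
    return (vi - x) / (R * C)

def rk2(x, t, dt, steps):
    for _ in range(steps):
        k1 = f(x, t)              # initial slope
        x1 = x + dt / 2 * k1      # estimate of x at the midpoint
        k2 = f(x1, t + dt / 2)    # slope at the midpoint
        x += dt * k2              # full step using the midpoint slope
        t += dt
    return x

vc = rk2(0.0, 0.0, 1e-4, 10)
print(round(vc, 6))   # → 2.525836
```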
t       x(t)      k1=f(x,t)  (Δt/2)k1  x1        k2       Δx=Δt·k2  x(exact)
0.0000  0.000000  4000.00    0.200000  0.200000  3800.00  0.380000  0.000000
0.0001  0.380000  3620.00    0.181000  0.561000  3439.00  0.343900  0.380650
0.0002  0.723900  3276.10    0.163805  0.887705  3112.30  0.311230  0.725077
0.0003  1.035129  2964.87    0.148244  1.183373  2816.63  0.281663  1.036727
0.0004  1.316792  2683.21    0.134160  1.450953  2549.05  0.254905  1.318720
0.0005  1.571697  2428.30    0.121415  1.693112  2306.89  0.230689  1.573877
0.0006  1.802386  2197.61    0.109881  1.912266  2087.73  0.208773  1.804753
0.0007  2.011159  1988.84    0.099442  2.110601  1889.40  0.188940  2.013659
0.0008  2.200099  1799.90    0.089995  2.290094  1709.91  0.170991  2.202684
0.0009  2.371090  1628.91    0.081446  2.452535  1547.47  0.154747  2.373721
0.0010  2.525836  1474.16    0.073708  2.599544  1400.46  0.140046  2.528482
Slide 161
Second-order Runge-Kutta integration using smaller time steps:

Δt     Iterations  vc        error
10^-4  10          2.525836  0.002646
10^-5  100         2.528458  0.000024
10^-6  1000        2.528482  0.000000
Slide 162
Fourth-Order Runge-Kutta
The fourth-order Runge-Kutta method uses four slope evaluations per step:

k1 = f(x, t)

k2 = f(x + (Δt/2) k1, t + Δt/2)

k3 = f(x + (Δt/2) k2, t + Δt/2)

k4 = f(x + Δt k3, t + Δt)
Slide 163
[Figure: fourth-order Runge-Kutta step — slopes k1 at (x(t), t), k2 and k3 at the midpoint t+Δt/2, and k4 at t+Δt; the combined slope is k = (k1 + 2k2 + 2k3 + k4)/6 and Δx = Δt k]
Slide 164
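The four slopes combine as k = (k1 + 2k2 + 2k3 + k4)/6. A sketch (my own code, same assumed circuit values):

```python
# Classical fourth-order Runge-Kutta integration.
R, C, vi = 100e3, 10e-9, 4.0

def f(x, t):
    return (vi - x) / (R * C)

def rk4(x, t, dt, steps):
    for _ in range(steps):
        k1 = f(x, t)
        k2 = f(x + dt / 2 * k1, t + dt / 2)
        k3 = f(x + dt / 2 * k2, t + dt / 2)
        k4 = f(x + dt * k3, t + dt)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6   # weighted average slope
        t += dt
    return x

vc = rk4(0.0, 0.0, 1e-4, 10)
print(round(vc, 6))   # → 2.528481 (exact value 2.528482)
```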
Fourth-order Runge-Kutta integration using smaller time steps:

Δt     Iterations  vc        error
10^-4  10          2.528481  0.000001
10^-5  100         2.528482  0.000000
10^-6  1000        2.528482  0.000000
Slide 165
Higher-Order Differential Equations
An nth-order differential equation can be written in the form:

d^nx/dt^n = f(x, dx/dt, d^2x/dt^2, ..., d^(n-1)x/dt^(n-1), t)
Slide 166
Define new variables:

x1 = dx/dt

x2 = dx1/dt = d^2x/dt^2

x3 = dx2/dt = d^3x/dt^3

. . .

x_{n-1} = dx_{n-2}/dt = d^(n-1)x/dt^(n-1)

so that:

dx_{n-1}/dt = d^nx/dt^n = f(x, dx/dt, d^2x/dt^2, ..., d^(n-1)x/dt^(n-1), t)
Slide 167
Slide 168
Example:

d^3x/dt^3 + 2 d^2x/dt^2 + 2 dx/dt + 8xt = 5

Define:

x1 = dx/dt

x2 = dx1/dt = d^2x/dt^2
dx/dt = x1

dx1/dt = x2

dx2/dt = d^3x/dt^3 = -2x2 - 2x1 - 8xt + 5
Slide 169
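The reduced system can be integrated with Euler steps exactly as in the first-order case, one step per state variable. This sketch (my own code, not from the slides) reproduces the first rows of the table that follows, using Δt = 0.01:

```python
# Euler integration of the third-order equation reduced to three
# first-order equations: dx/dt = x1, dx1/dt = x2,
# dx2/dt = -2*x2 - 2*x1 - 8*x*t + 5.
def derivs(x, x1, x2, t):
    return x1, x2, -2 * x2 - 2 * x1 - 8 * x * t + 5

def euler_system(x, x1, x2, t, dt, steps):
    for _ in range(steps):
        f0, f1, f2 = derivs(x, x1, x2, t)
        # update all state variables from the old values
        x, x1, x2 = x + dt * f0, x1 + dt * f1, x2 + dt * f2
        t += dt
    return x, x1, x2

x, x1, x2 = euler_system(0.0, 0.0, 0.0, 0.0, 0.01, 5)   # five steps: t = 0.05
print(f"{x:.5f} {x1:.5f} {x2:.5f}")   # → 0.00005 0.00490 0.24010
```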
Slide 170
t     x         x1        x2        Δx=Δt·f0  Δx1=Δt·f1  Δx2=Δt·f2
0.00  0.00000   0.00000   0.00000   0.00000   0.00000    0.05000
0.01  0.00000   0.00000   0.05000   0.00000   0.00050    0.04900
0.02  0.00000   0.00050   0.09900   0.00000   0.00099    0.04801
0.03  0.00000   0.00149   0.14701   0.00001   0.00147    0.04703
0.04  0.00002   0.00296   0.19404   0.00003   0.00194    0.04606
0.05  0.00005   0.00490   0.24010   0.00004   0.00240    0.04510
. . .
1.99  -0.04879  -0.04728
2.00  -0.04926  -0.04555
Slide 171
where X(jω) is the Fourier transform of the input x(t), and Y(jω) is the Fourier transform of the output y(t)

Introduce an auxiliary variable z(t) with Fourier transform Z(jω):

Y(jω) = {a0 + a1(jω) + a2(jω)^2 + .. + a_{n-1}(jω)^(n-1) + an(jω)^n} Z(jω)

so that:

{b0 + b1(jω) + b2(jω)^2 + .. + b_{n-1}(jω)^(n-1) + bn(jω)^n} Z(jω) = X(jω)
Slide 172
Multiplication by (jω) in the frequency domain corresponds to d/dt in the time domain, and (jω)^n to d^n/dt^n, so:

{b0 + b1 d/dt + b2 d^2/dt^2 + .. + b_{n-1} d^(n-1)/dt^(n-1) + bn d^n/dt^n} z(t) = x(t)
Slide 173
Slide 174
y(t) = {a0 + a1 d/dt + .. + a_{n-1} d^(n-1)/dt^(n-1) + an d^n/dt^n} z(t)

= a0 z + a1 z1 + .. + a_{n-1} z_{n-1} + an dz_{n-1}/dt

= a0 z + a1 z1 + .. + a_{n-1} z_{n-1} + (an/bn) {-b0 z - b1 z1 - .. - b_{n-1} z_{n-1} + x(t)}

= (a0 - an b0/bn) z + (a1 - an b1/bn) z1 + .. + (a_{n-1} - an b_{n-1}/bn) z_{n-1} + (an/bn) x(t)
Slide 175
[Circuit: passive filter with 10k resistor, 1.67H inductor, 0.01µF and 0.05µF capacitors, input to output]

Transfer function:

(1 + 1.67×10^-8 (jω)^2) / (1 + 5.00×10^-4 (jω) + 1.00×10^-7 (jω)^2 + 8.33×10^-12 (jω)^3)
Slide 176
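The state-variable form above can be integrated numerically to obtain this filter's time-domain response. The following is a sketch only (my own code; simple Euler steps with a small step size chosen for stability, driven by the Gaussian input of the next slide). Since a1 = a3 = 0 here, the output reduces to y = a0 z + a2 z2:

```python
import math

# Transfer-function coefficients from slide 176 (numerator a, denominator b)
b = [1.0, 5.00e-4, 1.00e-7, 8.33e-12]
a = [1.0, 0.0, 1.67e-8, 0.0]

def x_in(t):
    # Gaussian input pulse centred at 2.0 ms with width 0.5 ms
    return math.exp(-(t - 2.0e-3) ** 2 / (2 * (0.5e-3) ** 2))

# State variables z, z1 = dz/dt, z2 = d2z/dt2, integrated by Euler steps
z = z1 = z2 = 0.0
t, dt = 0.0, 1e-7
y_peak = 0.0
while t < 5.0e-3:
    dz2 = (x_in(t) - b[0] * z - b[1] * z1 - b[2] * z2) / b[3]
    z, z1, z2 = z + dt * z1, z1 + dt * z2, z2 + dt * dz2
    t += dt
    y = a[0] * z + a[2] * z2   # output y(t); a1 = a3 = 0
    y_peak = max(y_peak, y)
```

With a dc gain of 1 and a unit-amplitude input pulse, the output peak should be of order 1.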
Slide 177
The input is a Gaussian pulse centred at 2.0ms:

x(t) = exp{-(t - 2.0×10^-3)^2 / 2(0.5×10^-3)^2}

[Figure: output y(t) plotted against t from 0.0ms to 5.0ms, amplitude scale 0.0 to 1.0]
Slide 178
Numerical Methods
J. B. Grimbleby, October 07
Slide 179