
Prof. A. Meher Prasad
Department of Civil Engineering
Indian Institute of Technology Madras
email: prasadam@iitm.ac.in

Direct Integration of the Equations of Motion


Provides the response of the system at discrete intervals of time (which
are usually equally spaced).
A process of marching along the time dimension, in which the response
parameters (i.e., the acceleration, velocity and displacement at a given
time point) are evaluated from their known historic values.

For a SDOF system, this requires three equations to determine three
unknowns.
(a) Two of these equations are usually derived from assumptions
regarding the manner in which the response parameters vary during a
time step.
(b) The third equation is the equation of motion written at a selected
time point.


When the selected point represents the current time (n), the method of
integration is referred to as an explicit method (e.g. the central
difference method).
When the equation of motion is written at the next time point in the
future (n+1), the method is said to be an implicit method (e.g. Newmark's
method, the Wilson-θ method).



For a SDOF system:

m ẍ + c ẋ + k x = P(t)

Let Δt = time interval, so that t_n = nΔt.
x_n, ẋ_n, ẍ_n denote the displacement, velocity and acceleration at time
station n.
P_n is the applied force at time t_n.

[Figure: the forcing function P(t), sampled at the time stations 0, 1, …,
with ordinates P_n and P_n+1 at t_n and t_n+1.]

General expression for the time integration methods:

x_{n+1} = Σ_{l=n-k}^{n} A_l x_l + Σ_{l=n-k}^{n+1} B_l ẋ_l + Σ_{l=n-k}^{n+1} C_l ẍ_l + R    (1)

R is a remainder term representing the error, given by

R = (E_m / m!) (Δt)^m x^(m)(ξ),    (n-k)Δt ≤ ξ ≤ (n+1)Δt

where x^(m) is the value of the m-th derivative of x at t = ξ.
A_l, B_l and C_l are constants (some of which may be equal to zero).

Eq. (1) relates x, ẋ, ẍ at t_{n+1} to their values at the previous time
stations n-k, n-k+1, …, n.
Eq. (1) has m = 5+3k undetermined constants A_l, B_l and C_l.
The equation is employed to represent exactly a polynomial of order p-1,
p being smaller than m.
Then (m-p) constants become available, which can be assigned arbitrarily
chosen values so as to improve the stability or convergence
characteristics of the resulting formula.
Formulas of the type of Eq. (1) for time integration can also be obtained
from physical considerations, such as an assumed variation of the
acceleration, or from finite difference approximations of the
differentials.

Newmark's Method

In 1959, Newmark devised a series of numerical integration formulas
collectively known as Newmark's methods.

The velocity expression is of the form

ẋ_{n+1} = a_1 ẋ_n + a_2 ẍ_n + a_3 ẍ_{n+1}    (1)

The displacement expression is of the form

x_{n+1} = b_1 x_n + b_2 ẋ_n + b_3 ẍ_n + b_4 ẍ_{n+1}    (2)

To determine the constants, make equations (1) & (2) exact for x = 1,
x = t and x = t²; we get

a_1 = 1,    2Δt = 2a_2 + 2a_3
b_1 = 1,    b_2 = Δt,    2b_3 + 2b_4 = (Δt)²

Say a_3 = γΔt and b_4 = β(Δt)².

Then equations (1) & (2) reduce to

ẋ_{n+1} = ẋ_n + Δt(1-γ) ẍ_n + Δt γ ẍ_{n+1} + R    (3)

x_{n+1} = x_n + Δt ẋ_n + (Δt)²(1/2 - β) ẍ_n + (Δt)² β ẍ_{n+1} + R′    (4)

The third relationship is the equation of motion written at time
station n+1:

m ẍ_{n+1} + c ẋ_{n+1} + k x_{n+1} = P_{n+1}    (5)

Substituting eqns. (3) and (4) in eqn. (5), we get an expression for
ẍ_{n+1}.

To begin the time integration, we need to know the values of x_0, ẋ_0
and ẍ_0 at time t = 0.
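A quick way to see that formulas (3) and (4) are exact (R = 0) for polynomials through t² is to step them over one interval in code. A minimal sketch in plain Python; the step size, the time point and the γ, β values are illustrative choices, not from the slides:

```python
# Check that the Newmark updates (3) and (4) reproduce x(t) = t^2
# exactly, for arbitrary gamma and beta (here the average-acceleration
# values are used as an example).
def newmark_step(x, v, a_n, a_n1, dt, gamma, beta):
    v1 = v + dt * (1 - gamma) * a_n + dt * gamma * a_n1
    x1 = x + dt * v + dt**2 * (0.5 - beta) * a_n + dt**2 * beta * a_n1
    return x1, v1

dt, gamma, beta = 0.1, 0.5, 0.25   # illustrative values
t = 0.3
# For x(t) = t^2: velocity = 2t, acceleration = 2 (constant)
x1, v1 = newmark_step(t**2, 2 * t, 2.0, 2.0, dt, gamma, beta)
assert abs(x1 - (t + dt)**2) < 1e-12   # displacement is exact
assert abs(v1 - 2 * (t + dt)) < 1e-12  # velocity is exact
```

For x = t³ and higher the same check fails, which is exactly the truncation error R of eqs. (3) and (4).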

γ = 0, β = 0: Constant Acceleration
The acceleration is held constant at ẍ_n over the step; stable for
Δt/T ≤ 0.636.

γ = 1/2, β = 1/4: Average Acceleration
The acceleration over the step is taken as the constant average of the
end values,

ẍ(τ) = (ẍ_n + ẍ_{n+1})/2

This case is unconditionally stable.

γ = 1/2, β = 1/6: Linear Acceleration
The acceleration varies linearly between ẍ_n and ẍ_{n+1},

ẍ(τ) = ẍ_n + (τ/Δt)(ẍ_{n+1} - ẍ_n)

stable for Δt/T ≤ 0.55.

Algorithm

Enter k, m, c, γ, β and P(t)

ẍ_0 = [P(t_0) - c ẋ_0 - k x_0] / m

Select Δt

k̂ = k + γc/(βΔt) + m/(β(Δt)²) ;
a = m/(βΔt) + γc/β ;    b = m/(2β) + Δt(γ/(2β) - 1)c

i = 0

Δp̂_i = Δp_i + a ẋ_i + b ẍ_i,    where Δp_i = P(t_{i+1}) - P(t_i)

Δx_i = Δp̂_i / k̂

Δẋ_i = (γ/(βΔt)) Δx_i - (γ/β) ẋ_i + Δt(1 - γ/(2β)) ẍ_i ;
Δẍ_i = Δx_i/(β(Δt)²) - ẋ_i/(βΔt) - ẍ_i/(2β)

x_{i+1} = x_i + Δx_i ;    ẋ_{i+1} = ẋ_i + Δẋ_i ;    ẍ_{i+1} = ẍ_i + Δẍ_i

i = i+1; return to the load-increment step.
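The algorithm above can be sketched in plain Python for a SDOF system. A minimal sketch, assuming the incremental form given in the slides; the function name, the free-vibration test problem and the default average-acceleration values γ = 1/2, β = 1/4 are illustrative:

```python
import math

def newmark_sdof(m, c, k, p, dt, x0=0.0, v0=0.0, gamma=0.5, beta=0.25):
    """Incremental Newmark integration of m*x'' + c*x' + k*x = p(t).
    p holds the load values at the time stations 0, dt, 2*dt, ..."""
    a = (p[0] - c * v0 - k * x0) / m                     # initial acceleration
    khat = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    A = m / (beta * dt) + gamma * c / beta
    B = m / (2 * beta) + dt * (gamma / (2 * beta) - 1) * c
    xs, x, v = [x0], x0, v0
    for i in range(len(p) - 1):
        dph = (p[i + 1] - p[i]) + A * v + B * a          # incremental load
        dx = dph / khat
        dv = (gamma / (beta * dt)) * dx - (gamma / beta) * v \
             + dt * (1 - gamma / (2 * beta)) * a
        da = dx / (beta * dt**2) - v / (beta * dt) - a / (2 * beta)
        x, v, a = x + dx, v + dv, a + da
        xs.append(x)
    return xs

# Free vibration check: x(0) = 1, exact solution x(t) = cos(w*t)
w = 2 * math.pi                                          # T = 1
xs = newmark_sdof(m=1.0, c=0.0, k=w**2, p=[0.0] * 101, dt=0.01, x0=1.0)
print(abs(xs[-1] - 1.0))   # small error after one full period
```

With Δt/T = 0.01 the error after one period is dominated by the method's small period elongation, consistent with the accuracy discussion later in these notes.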

Newmark's Method -- Elastoplastic System

Enter k, m, c, R_t, R_c and P(t), where R_t and R_c are the maximum
(yield) resistances in tension and compression.

Set x_0 = 0, ẋ_0 = 0 ;    ẍ_0 = P(t_0)/m

Select Δt

x_t = R_t/k ;    x_c = R_c/k
(the displacements at which yield begins in tension and compression)

Define key = 0 (elastic)
key = -1 (plastic behavior in compression)
key = 1 (plastic behavior in tension)

At each step i, calculate x_i and ẋ_i, then update the state and the
restoring force R:

If key = 0 (elastic): if x_i > x_t, set key = 1 and R = R_t; if
x_i < x_c, set key = -1 and R = R_c; otherwise R = R_t - (x_t - x_i)k.

If key = 1 (plastic in tension) and ẋ_i < 0 (unloading): set key = 0,
x_t = x_i, x_c = x_i - (R_t - R_c)/k, and R = R_t - (x_t - x_i)k.

If key = -1 (plastic in compression) and ẋ_i > 0 (unloading): set
key = 0, x_c = x_i, x_t = x_i + (R_t - R_c)/k, and R = R_t - (x_t - x_i)k.

ẍ_{i+1} = [P(t_{i+1}) - c ẋ_{i+1} - R] / m

Central Difference Method

The method is based on finite difference approximations of the time
derivatives of the displacement (velocity and acceleration) at selected
time intervals:

ẋ_n = (x_{n+1} - x_{n-1}) / (2Δt)

[Figure: the displacement history, with ordinates x_{n-1} and x_{n+1}
at times (n-1)Δt and (n+1)Δt.]

Algorithm

Enter k, m, c and P(t)

ẍ_0 = [P(t_0) - c ẋ_0 - k x_0] / m

x_{-1} = x_0 - Δt ẋ_0 + 0.5 (Δt)² ẍ_0

k̂ = m/(Δt)² + c/(2Δt) ;    a = m/(Δt)² - c/(2Δt) ;    b = k - 2m/(Δt)²

i = 0

p̂_i = p_i - a x_{i-1} - b x_i

x_{i+1} = p̂_i / k̂

ẋ_i = (x_{i+1} - x_{i-1}) / (2Δt) ;    ẍ_i = (x_{i+1} - 2x_i + x_{i-1}) / (Δt)²

i = i+1; return to the load step.
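This explicit marching scheme is short enough to sketch directly in Python. A minimal sketch of the algorithm above; the function name and the free-vibration test problem are illustrative:

```python
import math

def central_difference_sdof(m, c, k, p, dt, x0=0.0, v0=0.0):
    """Central difference integration of m*x'' + c*x' + k*x = p(t).
    p holds the load values at the time stations 0, dt, 2*dt, ..."""
    a0 = (p[0] - c * v0 - k * x0) / m
    x_prev = x0 - dt * v0 + 0.5 * dt**2 * a0   # fictitious station x_{-1}
    khat = m / dt**2 + c / (2 * dt)
    a = m / dt**2 - c / (2 * dt)
    b = k - 2 * m / dt**2
    xs, x = [x0], x0
    for i in range(len(p) - 1):
        phat = p[i] - a * x_prev - b * x       # effective load at station i
        x_prev, x = x, phat / khat             # advance one station
        xs.append(x)
    return xs

# Free vibration check: x(0) = 1, exact solution x(t) = cos(w*t)
w = 2 * math.pi                                # T = 1
xs = central_difference_sdof(1.0, 0.0, w**2, [0.0] * 101, 0.01, x0=1.0)
print(abs(xs[-1] - 1.0))   # close to the exact value cos(2*pi) = 1
```

Note the explicit character: each new displacement follows from already-known values, so no equation solving beyond the scalar division by k̂ is needed, but the Δt/T stability limit discussed later applies.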

Wilson-θ Method

This method is similar to the linear acceleration method and is based on
the assumption that the acceleration varies linearly over an extended
interval θΔt.
θ, which is always greater than 1, is selected to give the desired
characteristics of accuracy and stability.

[Figure: the acceleration history, linear between ẍ_n and the
extrapolated ẍ_{n+θ}, with ẍ_{n+1} obtained at Δt within the extended
interval θΔt.]

Algorithm

Enter k, m, c, θ, Δt and P(t)

Specify the initial conditions.

a_1 = 6/(θΔt)² ;    a_2 = θΔt/2 ;    a_3 = 3/(θΔt) ;    a_4 = 6/(θΔt)

n = 0

p_{n+θ} = p_n (1-θ) + θ p_{n+1}

k̂ = a_1 m + a_3 c + k

a_5 = a_1 x_n + a_4 ẋ_n + 2 ẍ_n ;    a_6 = a_3 x_n + 2 ẋ_n + a_2 ẍ_n

x_{n+θ} = (p_{n+θ} + m a_5 + c a_6) / k̂

ẍ_{n+θ} = a_1 { x_{n+θ} - x_n - (θΔt) ẋ_n - ((θΔt)²/3) ẍ_n }

ẍ_{n+1} = ẍ_n + (ẍ_{n+θ} - ẍ_n)/θ

ẋ_{n+1} = ẋ_n + (ẍ_n + ẍ_{n+1}) Δt/2

x_{n+1} = x_n + Δt ẋ_n + ((Δt)²/3) ẍ_n + ((Δt)²/6) ẍ_{n+1}

n = n+1; return to the load step.
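The steps above translate almost line for line into Python. A minimal sketch; the function name, the default θ = 1.4 and the free-vibration test problem are illustrative choices:

```python
import math

def wilson_theta_sdof(m, c, k, p, dt, theta=1.4, x0=0.0, v0=0.0):
    """Wilson-theta integration of m*x'' + c*x' + k*x = p(t).
    p holds the load values at the time stations 0, dt, 2*dt, ..."""
    a = (p[0] - c * v0 - k * x0) / m
    td = theta * dt                              # extended interval
    a1, a2, a3, a4 = 6 / td**2, td / 2, 3 / td, 6 / td
    khat = a1 * m + a3 * c + k
    xs, x, v = [x0], x0, v0
    for n in range(len(p) - 1):
        p_th = (1 - theta) * p[n] + theta * p[n + 1]   # extrapolated load
        a5 = a1 * x + a4 * v + 2 * a
        a6 = a3 * x + 2 * v + a2 * a
        x_th = (p_th + m * a5 + c * a6) / khat
        a_th = a1 * (x_th - x - td * v - td**2 * a / 3)
        a_new = a + (a_th - a) / theta           # interpolate back to n+1
        v_new = v + (a + a_new) * dt / 2
        x_new = x + dt * v + dt**2 * (2 * a + a_new) / 6
        x, v, a = x_new, v_new, a_new
        xs.append(x)
    return xs

# Free vibration check: x(0) = 1, exact solution x(t) = cos(w*t)
w = 2 * math.pi                                  # T = 1
xs = wilson_theta_sdof(1.0, 0.0, w**2, [0.0] * 101, 0.01, x0=1.0)
print(abs(xs[-1] - 1.0))   # small period error and amplitude decay
```

Unlike the linear acceleration method itself, the extrapolation to θΔt introduces some algorithmic damping, which is what buys the improved stability discussed next.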

Errors involved in the Numerical Integration

Round-off errors
Introduced by repeated computation using a small step size.
Random in nature.
To reduce them, use higher precision.

Truncation errors
Involved in representing x_{n+1} and ẋ_{n+1} by a finite number of terms
in the Taylor series expansion.
Represented by R in the previous slides.
Accumulated locally at each step.
If the integration method is stable, the truncation error indicates the
accuracy.

Propagated error
Introduced by replacing the differential equation by a finite difference
equivalent.

Stability of the Integration Method

The effect of the error introduced at one step on the computations at the
next step determines the stability.
If the error grows, the solution becomes unbounded and meaningless.
Spectral radius of a matrix: ρ(A) = max of the magnitudes of the
eigenvalues of A, where A is the amplification matrix relating the state
at step n+1 to the state at step n:

{ x_{n+1},  Δt ẋ_{n+1},  (Δt)² ẍ_{n+1} }ᵀ = [A] { x_n,  Δt ẋ_n,  (Δt)² ẍ_n }ᵀ

ρ(A) > 1 : unstable.

If θ ≥ 1.37, Wilson-θ is unconditionally stable.
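As an illustration of this criterion, the amplification matrix of the undamped average-acceleration method (γ = 1/2, β = 1/4) can be assembled in a reduced 2×2 form with state {x_n, Δt ẋ_n} and its spectral radius evaluated. A sketch, with the matrix entries derived here from eqs. (3) and (4) by substituting ẍ = -ω²x:

```python
import cmath

def rho_avg_accel(Omega):
    """Spectral radius of the 2x2 amplification matrix of the undamped
    average-acceleration method; Omega = w * dt."""
    q = Omega**2 / 4
    D = 1 + q
    # From eqs. (3)-(4) with gamma=1/2, beta=1/4 and xddot = -w^2 x:
    #   x_{n+1}      = [ (1-q) x_n + dt*v_n ] / D
    #   dt*v_{n+1}   = dt*v_n - 2q (x_n + x_{n+1})
    a11, a12 = (1 - q) / D, 1 / D
    a21, a22 = -2 * q * (1 + a11), 1 - 2 * q * a12
    tr, det = a11 + a22, a11 * a22 - a12 * a21
    lam1 = (tr + cmath.sqrt(tr**2 - 4 * det)) / 2
    lam2 = (tr - cmath.sqrt(tr**2 - 4 * det)) / 2
    return max(abs(lam1), abs(lam2))

# The spectral radius stays at 1 for any step size: unconditional stability.
for Omega in (0.1, 1.0, 10.0, 100.0):
    assert abs(rho_avg_accel(Omega) - 1.0) < 1e-9
```

Here det(A) works out to exactly 1 and the eigenvalues are a unit-modulus complex pair, so ρ(A) = 1 for every Δt, which is the unconditional-stability statement in matrix form.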

Attributes required for a good Direct Integration method

1. Unconditional stability when applied to linear problems
2. Not more than one set of implicit equations to be solved at each step
3. Second-order accuracy
4. Controllable algorithmic dissipation in the higher modes
5. Self-starting

Wilson-θ is reasonably good on these counts.

* For MDOF systems, the scalar equations of the SDOF systems become
matrix equations.

[Figure: spectral radii for α-methods, optimal collocation schemes and
the Houbolt, Newmark, Park and Wilson methods.]

Selection of a numerical integration method

[Figures: period elongation vs. Δt/T and amplitude decay vs. Δt/T.]

* For the numerical integration of SDOF systems, the linear acceleration
method, which gives no amplitude decay and the lowest period elongation,
is the most suitable of the methods presented.

Selection of time step Δt

Δt must be small enough to give good accuracy, yet large enough to be
computationally efficient.
ωΔt < 1, i.e. Δt/T ≤ 0.16 (arrived at from truncation errors for a free
vibration case).
Typically Δt/T ≤ 0.1 is acceptable.
Sampling of the exciting function at intervals equal to the selected Δt
should be checked by inspection of the forcing function.

Mass Condensation or Guyan Reduction

Extensively used to reduce the number of d.o.f. for eigenvalue
extraction.
Unless properly used, it is detrimental to accuracy.
This method is never used when optimal damping is used for the mass
matrix.

[K - ω²M] {u} = {0}

Let 'm' represent the d.o.f. to be retained (master d.o.f.).
Let 's' represent the d.o.f. to be condensed (slave d.o.f.).

K = [ K_mm  K_ms ; K_sm  K_ss ],    M = [ M_mm  M_ms ; M_sm  M_ss ],
{u} = { u_m ; u_s }

Assumption: the slave d.o.f. do not have masses; only their elastic
forces are important:

M_ms = M_sm = M_ss = 0

Eliminating the slave d.o.f. (Gauss elimination scheme):

u_s = -K_ss⁻¹ K_sm u_m

{ u_m ; u_s } = [T] {u_m},    T = [ I ; -K_ss⁻¹ K_sm ]

K_r = Tᵀ K T ,    M_r = Tᵀ M T

Reduced eigenproblem (master d.o.f.):

K_r {u_m} = ω² M_r {u_m}

Slave d.o.f. recovery:

{u_s}_i = -[K_ss - λ_i M_ss]⁻¹ [K_msᵀ - λ_i M_msᵀ] {u_m}_i

Choice of slave d.o.f.:
All rotational d.o.f.
Find the K_ii/m_ii ratio and condense (neglect as masters) those d.o.f.
having large values of this ratio.

If [M_ss] = 0 and [M] is diagonal, [K_r] is the same as in static
condensation and there is no loss of accuracy.
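A minimal numeric illustration, for a hypothetical two-d.o.f. chain (mass m at d.o.f. 1, a massless d.o.f. 2, two springs of stiffness k) in which every partition is a scalar, so no matrix library is needed:

```python
# Guyan reduction of a 2-d.o.f. chain: K = [2k -k; -k k], M = diag(m, 0).
# d.o.f. 1 is the master (carries the mass); d.o.f. 2 is the massless slave.
k, m = 100.0, 2.0                       # illustrative values

K_mm, K_ms = 2 * k, -k                  # partitioned stiffness
K_sm, K_ss = -k, k
M_mm = m                                # M_ms = M_sm = M_ss = 0 by assumption

T_s = -K_sm / K_ss                      # u_s = T_s * u_m (static condensation)
K_r = K_mm + K_ms * T_s + T_s * (K_sm + K_ss * T_s)   # T' K T, scalar case
M_r = M_mm                              # slave carries no mass

lam = K_r / M_r                         # reduced eigenproblem K_r u = lam M_r u
print(K_r, M_r, lam)
```

Because the slave here genuinely carries no mass and M is diagonal, the reduction is static condensation and the reduced eigenvalue λ = K_r/M_r is exact, consistent with the no-loss-of-accuracy remark.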

Subspace Iteration Method

The most powerful method for obtaining the first few eigenvalues /
eigenvectors.
Minimum storage is necessary, as the subroutine can be implemented as an
out-of-core solver.

Basic Steps
Establish p starting vectors, where p is the number of eigenvalues /
vectors required, p << n.
Use simultaneous inverse iteration on the p vectors and Ritz analysis to
extract the best eigenvalues / vectors.
After the iteration converges, use a Sturm sequence check on the
factorization [K - μM] = [L][D][L]ᵀ to verify that no eigenvalues have
been missed.

The method is called subspace iteration because it is equivalent to
iterating on the whole subspace of dimension p (rather than n), and not
to simultaneous iteration of p individual vectors.
Two issues govern the method: the starting vectors and the Sturm
sequence property.
For better convergence of the lower eigenvalues, it is better if the
subspace is increased to q > p such that q = min(2p, p+8).
The smallest eigenvalue is better approximated than the largest value in
the subspace of dimension q.

Starting Vectors

(1) When some masses are zero: each starting vector has a single unit
entry at one of the d.o.f. carrying non-zero mass, and zeros elsewhere.

(2) Take the k_ii/m_ii ratio. The element that has the minimum value
will have 1, and the rest zero, in the starting vector; e.g. for ratios
(3/2, …, 1, 8), the unit entry goes at the d.o.f. with ratio 1.

Starting vectors can also be generated by the Lanczos algorithm, which
converges fast.
In dynamic optimisation, where the structure is modified, the previous
vectors can be good starting values.
Eigenvalue problem

[K][Φ] = [M][Φ][Λ],    [K] : n×n,    [Φ] : n×p    (1)

[Φ]ᵀ[K][Φ] = [Λ]    (p×p)    (2)

[Φ]ᵀ[M][Φ] = [I]    (3)

The values satisfying Eq. (2) are not the true eigenvalues unless p = n.
If [Φ] satisfies (2) and (3), it cannot be said that its columns are true
eigenvectors; if [Φ] satisfies (1), then they are true eigenvectors.
Since we have reduced the space from n to p, it is only necessary that
the subspace of dimension p converge as a whole, not the individual
vectors.

Algorithm:
Pick starting vectors [X_1] of size n×p.
For k = 1, 2, …

[K][X̄_{k+1}] = [M][X_k]    (static solution)

[K]_{k+1} = [X̄]ᵀ_{k+1} [K] [X̄_{k+1}]    (p×p)

[M]_{k+1} = [X̄]ᵀ_{k+1} [M] [X̄_{k+1}]    (p×p)

[K]_{k+1}[Q]_{k+1} = [M]_{k+1}[Q]_{k+1}[Λ]_{k+1}
(smaller eigenvalue problem: Jacobi)

[X]_{k+1} = [X̄]_{k+1}[Q]_{k+1}

As k → ∞, [X]_{k+1} → [Φ] and [Λ]_{k+1} → [Λ].
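For p = 1 the scheme collapses to inverse iteration with a scalar Ritz projection (the p×p Jacobi solve becomes a single division), which allows a compact sketch without a matrix library. The 2×2 K and M below are hypothetical illustration values:

```python
# Subspace iteration with p = 1 on K x = lam M x, pure Python.
K = [[2.0, -1.0], [-1.0, 1.0]]
M = [[1.0, 0.0], [0.0, 1.0]]

def solve2(A, b):            # 2x2 linear solve by Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = [1.0, 1.0]                       # starting vector
for _ in range(20):
    xb = solve2(K, matvec(M, x))     # K xbar_{k+1} = M x_k  (static step)
    k1 = dot(xb, matvec(K, xb))      # projected 1x1 stiffness
    m1 = dot(xb, matvec(M, xb))      # projected 1x1 mass
    lam = k1 / m1                    # 1x1 eigen solution (Ritz value)
    s = m1 ** 0.5
    x = [xb[0] / s, xb[1] / s]       # M-normalize the iterate

# lam converges to the smallest eigenvalue of K, which is (3 - sqrt(5))/2
print(lam)
```

With several vectors the same pattern holds, except that the projected p×p problem is solved (e.g. by Jacobi) and the iterate block is rotated by [Q]_{k+1} each pass.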

Operation counts (n = order, m = half-bandwidth, q = subspace dimension):

Factorization [K] = [L][D][L]ᵀ : (1/2)nm² + (3/2)nm

Subspace iteration, per iteration:
[K][X̄_{k+1}] = [Y_k] : nq(2m+1)
[K]_{k+1} = [X̄]ᵀ_{k+1}[Y_k] : (nq/2)(q+1)
[M]_{k+1} = [X̄]ᵀ_{k+1}[Ȳ_{k+1}] : (nq/2)(q+1)
[K]_{k+1}[Q]_{k+1} = [M]_{k+1}[Q]_{k+1}[Λ]_{k+1}
[Y]_{k+1} = [Ȳ]_{k+1}[Q]_{k+1} : nq²

Sturm sequence check:
[K̂] = [K] - μ[M] : n(m+1)
[K̂] = [L][D][L]ᵀ : (1/2)nm² + (3/2)nm
Error measure [K][φ_i]_{k+1} - λ_i [M][φ_i]_{k+1} : 4nm + 5n

Total for the p lowest vectors, at about 10 iterations with
q = min(2p, p+8):

nm² + nm(4+4p) + 5np + 20np(2m + q + 3/2)

This count increases as the number of iterations increases.
Example: n = 70000, b = 1000, p = 100, q = 108 : time = 17 hours.

Example
Use subspace iteration to calculate the eigenpairs (λ_1, φ_1) and
(λ_2, φ_2) of the problem KΦ = λMΦ, where

K = [  2  -1   0   0
      -1   2  -1   0
       0  -1   2  -1
       0   0  -1   1 ],    M = diag(0, 2, 0, 1)

Starting from two vectors with unit entries at the d.o.f. carrying mass,
the first iteration produces in turn the improved vectors X̄_2, the
projected matrices K_2 = X̄_2ᵀ K X̄_2 and M_2 = X̄_2ᵀ M X̄_2, the subspace
eigen solution Q_2, Λ_2, and the updated vectors X_2 = X̄_2 Q_2.