L. Quartapelle, Editor Dipartimento di Ingegneria Aerospaziale Politecnico di Milano Via La Masa 34, 20158 Milano, Italy January 9, 2013
Contents

1 Introduction (V. Lucifredi)

2 Traffic equation: a scalar example (S. Bianchi and A. Savorani)
  2.1 The equation of the traffic model
  2.2 Rankine–Hugoniot condition and entropy condition
  2.3 Riemann problem and jump solutions
  2.4 Rarefaction solutions
  2.5 General solution of the Riemann problem
  2.6 Program of exact Riemann solver for traffic equation

3 Weak solutions (M. Tagliabue and M. Valentini)
  3.1 Conservation law equation in weak form
  3.2 Weak equations in discrete form
  3.3 Rankine–Hugoniot jump condition (F. Villa)
  3.4 Principles of Godunov method

4 P-system for the isothermal ideal gas (V. Ronchi)
  4.1 The fluid-dynamic model
  4.2 Governing equations
  4.3 Eigenvalue problem
  4.4 Genuine nonlinearity

5 Simple solutions (S. Scala and C. Truscello)
  5.1 Shock wave solutions: Hugoniot locus
  5.2 Lax entropy condition for a system
  5.3 Riemann invariants
  5.4 Rarefaction waves: integral curves
  5.5 Shock wave reflection

6 Riemann problem for the isothermal ideal gas (F. Villa)
  6.1 Riemann problem
    6.1.1 Explicit analytical rarefaction–rarefaction solution
    6.1.2 Analytical shock–shock solution
  6.2 Transitions between different types of solution
  6.3 Limiting relative velocities
  6.4 A new parametrization for the shock
  6.5 Program of Riemann solver as a system of two equations

7 Godunov method for the isothermal ideal gas
  7.1 Finite volumes and cell average
  7.2 Variable update and interface flux
  7.3 Godunov numerical flux
  7.4 Program of the Godunov flux for the isothermal gas

8 Roe linearization (L. Alimonti and D. De Santis)
  8.1 Conservative linearization
  8.2 Linearization in Jacobian form
  8.3 Linearization for the isothermal ideal gas
  8.4 Numerical method for Roe vertical flux
  8.5 Entropy fix for a scalar equation
  8.6 Program for computing the Roe numerical flux

9 Lax–Wendroff scheme and conservation form (G.L. Cucchi and P. Scioscia)
  9.1 The scheme for linear advection
  9.2 The scheme for a conservation law
  9.3 The scheme for nonlinear systems of conservation laws
  9.4 Isothermal ideal gas system with linearization
  9.5 Boundary conditions in the conservative LW scheme
  9.6 Program of the Lax–Wendroff numerical flux

10 High Resolution Methods (Randall J. LeVeque)
  10.1 Artificial viscosity
  10.2 Flux-limiter methods
    10.2.1 Linear systems
    10.2.2 Nonlinear systems
  10.3 Slope-limiter methods
    10.3.1 Linear systems
    10.3.2 Nonlinear scalar equation
    10.3.3 Nonlinear systems
  10.4 Program of the numerical flux for the high resolution method

11 Conclusion

A P-system and its Riemann problem
  A.1 The P-system and its eigenstructure
  A.2 Riemann invariants
  A.3 Rarefaction waves
  A.4 Vacuum formation
  A.5 Shock waves
  A.6 The Riemann problem as a two-equation system
  A.7 Existence and uniqueness theorem
  A.8 Roe linearization (A. Guardone and L. Vigevano)

B Boundary conditions in nonlinear hyperbolic systems
  B.1 Introduction
  B.2 Conservative, characteristic and physical variables
  B.3 The boundary values for a scalar unknown
  B.4 Steps of the boundary procedure for a system
The wise man knows he is a fool; it is the fool who believes he is wise.
William Shakespeare

There is no book so bad that some part of it may not be of use.
Pliny the Elder

Master the subject: the words will follow.
Cato the Censor

The real problem of writing is not so much knowing what we must put on the page, but what we must take away from it.
Gustave Flaubert
Abstract

This report describes numerical methods and techniques for solving hyperbolic problems in one dimension. The solution of nonlinear hyperbolic equations can be discontinuous and is very difficult to determine in a discrete setting. Very specialized methods have been developed in the last decades to deal with possibly discontinuous solutions to conservation laws and systems, for instance methods based on exact or approximate Riemann solvers, following the seminal ideas of Godunov. In the present work we introduce the basic mathematical concepts and numerical tools for solving this kind of problem. The classical examples of the equation for traffic flow and the system of gasdynamic equations for an isothermal ideal gas are considered. We describe how low order discretization schemes, such as the Godunov and Roe methods, can be combined with the second order Lax–Wendroff discretization to build conservative high resolution methods. These schemes have been implemented and tested successfully against a few simple representative examples, including the reflection of a shock wave by a fixed wall. In an appendix, the issue of existence and uniqueness of the solution to the Riemann problem of the P-system is also addressed. An original way of formulating the Riemann problem as a system of two equations is proposed, which suggests a guideline for proving existence and uniqueness of the solution in the convex case with arbitrary (large) data, by exploiting the very nature of the nonlinearity of the Riemann problem and its symmetric double-faced character.
1 Introduction
Transonic and supersonic flow simulation is a challenge that has a hell-like taste; however, it also has great appeal for aeronautical and aerospace research. Some examples of this kind of flows are: transonic streams around airliners, external jet streams behind nozzles, flows around helicopters or spacecraft reentering the atmosphere. Other flows with features similar or identical to aeronautical jets are encountered in astrophysical observations, and can be simulated by means of the same numerical tools developed for gasdynamics, as described, for instance, in the Saas-Fee summer school [10]. In this report, we focus on equations associated with conservation laws and on their numerical solution, especially in the presence of discontinuities. In this situation, local approximation methods based upon Taylor series fail. Several alternative methods have been developed in recent years and they have proved to be valuable design tools in aerospace engineering. These new methods are characterized by a degree of complexity far greater than in methods used to solve viscous problems, typically the incompressible Navier–Stokes equations. It can therefore be useful to provide an elementary description gathering a great part of the fundamental elements that allow one to solve discontinuous flow problems. The crucial points are, on the one hand, the nonlinearity of conservation laws and, on the other hand, the fact that the gasdynamic equations form a system. So, from a didactic viewpoint, it is sufficient to describe the problem by examining first the scalar traffic equation and only subsequently the system of gasdynamic equations for an isothermal ideal gas. The mathematical problems dealt with in this report are only one-dimensional (1D), to ease the approach to this difficult topic. The very important and interesting case of the multidimensional equations is intentionally neglected in this work and is left for further study.
Boundary conditions are instead discussed in the following, because of their fundamental importance in the actual numerical solution of real problems. In particular, an appendix contains a detailed description of a new procedure proposed by Alberto Guardone to determine the boundary values, including the correct number of boundary data, in nonlinear hyperbolic systems. The structure of this report parallels that of the insightful Numerical Methods for Conservation Laws by Randall LeVeque [8], which highlights the basic elements of the mathematical theory and then introduces the numerical methods. The first topic discussed in the present work is the scalar example of the traffic equation and its numerical approximation by means of Godunov's method. Subsequently, the gasdynamic equations for the isothermal ideal gas, a particular example of the P-system category, are considered and their approximation by the same type of method is developed. The last part is centered on a different
sort of numerical methods used to solve nonlinear hyperbolic problems: Roe's linearization, Lax–Wendroff conservation schemes and high resolution methods. In the end, a conclusion section summarizes all the results achieved. A couple of appendices are included, dealing with a new theorem of existence and uniqueness for the P-system, and with the aforementioned Guardone's procedure for the boundary values. The list of the programs implemented in Fortran 90 is also given.
Such a relationship allows one to represent the typical behaviour of drivers on a highway: they tune the car speed according to the local car density. For instance, when the highway is empty, the car velocity reaches its maximum value u_max; in heavy traffic, instead, cars slow down, in other words, the velocity decreases as the density increases.
To define a unique traffic flow model, a definite function for the velocity must be chosen. A sufficiently realistic model is obtained by assuming the simple linear relation

    u(ρ) = u_max (1 − ρ/ρ_jam),    0 ≤ ρ ≤ ρ_jam,    (2.3)

where the bounded domain of variation of the unknown variable ρ must be noticed, see plot (a) in figure 2.1. Thus, the velocity decreases from u_max to zero as the density increases from zero to ρ_jam. The flux of cars f depends on the car density according to the standard definition

    f(ρ) = ρ u(ρ),    0 ≤ ρ ≤ ρ_jam.    (2.4)
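As a quick illustration of relations (2.3)–(2.4), the model can be coded in a few lines. The following Python sketch is not part of the report (whose programs are in Fortran 90); it uses the normalized values u_max = ρ_jam = 1 that also appear in the program of section 2.6.

```python
# Linear traffic model: velocity u(rho), flux f(rho) and the
# characteristic speed a(rho) = f'(rho) used later in the text.
u_max, rho_jam = 1.0, 1.0   # normalized values, as in the report's program

def u(rho):
    """Car velocity: u_max on an empty road, zero at the jam density."""
    return u_max * (1.0 - rho / rho_jam)

def f(rho):
    """Flux of cars, f(rho) = rho * u(rho)."""
    return rho * u(rho)

def a(rho):
    """Characteristic speed f'(rho) = u_max * (1 - 2*rho/rho_jam)."""
    return u_max * (1.0 - 2.0 * rho / rho_jam)
```

Note that the flux is maximal at ρ = ρ_jam/2, precisely where the characteristic speed a(ρ) vanishes.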
The graph of the flux function is drawn as plot (b) of figure 2.1. The representation with the vertical axis for the independent variable ρ and the horizontal one for the function f(ρ) is adopted since it allows one to interpret the straight lines in this diagram as the characteristic lines of the space–time plane. In fact, the slope in the first diagram represents a velocity, and positive or negative slopes correspond to propagation to the left or to the right in the second diagram. In terms of the flux function, the conservation law (2.1) can also be rewritten as

    ∂_t ρ + ∂_x f(ρ) = 0    (2.5)

and, for differentiable solutions ρ(x, t), it is equivalent to the form

    ∂_t ρ + f′(ρ) ∂_x ρ = 0    (2.6)

that is called quasi-linear, a nomenclature usually employed for systems.
Figure 2.1: (a) linear relation u(ρ), (b) flux f(ρ) = ρ u(ρ), (c) characteristic speed a(ρ) at the two sides of the moving discontinuity.

In fact, the speed s is given by the Rankine–Hugoniot relation

    s = [f(ρ_l) − f(ρ_r)] / (ρ_l − ρ_r)    (2.7)
which is deduced from the definition of weak solutions. Unfortunately, with the new definition of weak solutions it turns out that any initial value problem for a nonlinear conservation law admits infinitely many (weak) solutions. In other words, to include discontinuous solutions the space of solutions has been enlarged so much that the uniqueness of the solution is completely lost. However, uniqueness among the infinitely many possible weak solutions is recovered by including a supplementary condition, called the entropy condition. This condition selects the only physically admissible solution, namely the one which is the zero-viscosity limit of the (unique) solution to the conservation law supplemented by a dissipative term, that is, a term involving the second spatial derivative of the unknown. For instance

    ∂_t ρ + ∂_x f(ρ) = ε ∂²ρ/∂x²    (2.8)

where ε > 0 is a diffusion coefficient. For a conservation law with a convex flux, i.e., a flux with second derivative always positive or always negative, the entropy condition reads

    f′(ρ_l) > s > f′(ρ_r).    (2.9)
In particular, for the flux of the traffic model it is easy to see that condition (2.9) reduces to

    ρ_l < ρ_r.    (2.10)

If the entropy condition is verified, a jump (discontinuity) will propagate with the speed s given by the Rankine–Hugoniot condition (2.7); otherwise the jump must evolve in a different pattern and will give rise to a continuous solution, to be described below.
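A minimal numeric check of condition (2.9), and of its reduction (2.10) for the concave traffic flux, can be written as follows (an illustrative Python sketch, not part of the report):

```python
# Check the Lax entropy condition f'(rho_l) > s > f'(rho_r), where s is
# the Rankine-Hugoniot jump speed (2.7), for the linear traffic model.
u_max, rho_jam = 1.0, 1.0

def f(rho):  return u_max * rho * (1.0 - rho / rho_jam)
def df(rho): return u_max * (1.0 - 2.0 * rho / rho_jam)

def is_entropic_jump(rho_l, rho_r):
    """True if the jump (rho_l, rho_r) satisfies the entropy condition (2.9)."""
    s = (f(rho_l) - f(rho_r)) / (rho_l - rho_r)   # Rankine-Hugoniot speed (2.7)
    return df(rho_l) > s > df(rho_r)

# For the concave traffic flux the condition reduces to rho_l < rho_r:
assert is_entropic_jump(0.2, 0.8)       # compression: admissible shock
assert not is_entropic_jump(0.8, 0.2)   # expansion: rarefaction instead
```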
More generally, if the initial jump is located at x = x₀, the solution of the corresponding Riemann problem is

    ρ(x, t) = { ρ_l    for x < x₀ + s t
              { ρ_r    for x > x₀ + s t
a solution ρ(x, t) which depends on the two independent variables x and t only through a single variable, a combination of x and t of the form

    ξ = x/t,    (2.13)

which is called the similarity variable. The unknown of the traffic equation then becomes a new function of only one variable, as follows:

    ρ̂(ξ) = ρ̂(x/t) = ρ(x, t).    (2.14)
By substituting ρ̂(ξ) into the traffic equation (2.6), it becomes ∂_t ρ̂ + f′(ρ̂) ∂_x ρ̂ = 0. By evaluating the time and space derivatives by means of the chain rule, the traffic equation assumes the form

    [ −x/t² + (1/t) f′(ρ̂) ] ρ̂′ = 0

where the primes denote differentiation of each function with respect to its own argument. It must be noted that the search for a similarity solution has transformed a partial differential equation into an ordinary differential one. After simplifying the common factor 1/t, the equation reduces to

    ρ̂′ [ f′(ρ̂) − ξ ] = 0.

The equation ρ̂′ = 0 gives ρ̂ = constant, but this solution is useless since it cannot satisfy the two initial values, except in the trivial situation ρ_l = ρ_r. Therefore, we obtain the equation

    f′(ρ̂(ξ)) = ξ    (2.15)
Remarkably enough, the equation for the new unknown ρ̂ = ρ̂(ξ) has become a simple functional, i.e., not differential, relationship. In fact, the determination of the solution requires only the inversion of the function f′(ρ), which is known and, for the traffic model at hand, is the straight line shown in plot (c) of figure 2.1. Analytically, we have the explicit relation

    f′(ρ̂(ξ)) = u_max [1 − 2ρ̂(ξ)/ρ_jam] = ξ

that can be solved with respect to the new unknown ρ̂(ξ) to give

    ρ̂(ξ) = (ρ_jam/2) (1 − ξ/u_max).    (2.16)
We have now to impose the initial condition of the Riemann problem. This means that for the left value it must be ξ ≤ f′(ρ_l) and for the right value ξ ≥ f′(ρ_r): in other words, taking the initial values into account selects a well defined interval of values for the similarity variable ξ. It must be emphasized that the variable ξ must increase going from ρ_l to ρ_r, and this means that the rarefaction solution is possible only if f′(ρ_l) < f′(ρ_r), i.e., only if ρ_l > ρ_r. In the opposite case, the solution will be a propagating jump. Having so determined the actual range of the similarity solution, with the definitions ξ_l ≡ f′(ρ_l) and ξ_r ≡ f′(ρ_r), the complete solution of the equation for the unknown ρ̂ is

    ρ̂(ξ) = { ρ_l                          for ξ < ξ_l
            { (ρ_jam/2) (1 − ξ/u_max)     for ξ_l < ξ < ξ_r
            { ρ_r                         for ξ_r < ξ

The solution for the unknown ρ̂ can finally be translated into the language of the original unknown, the car density in space and time, which assumes the form

    ρ(x, t) = { ρ_l                             for x < f′(ρ_l) t
              { (ρ_jam/2) [1 − x/(u_max t)]     for f′(ρ_l) t < x < f′(ρ_r) t    (2.17)
              { ρ_r                             for f′(ρ_r) t < x
The general solution of this Riemann problem will comprise either a propagating jump or a rarefaction fan, emanating from x = x₀, according to the ordering of the two initial values of density. Thus the solution will be given by the following function

    ρ(x, t) = { ρ_jump(x, t)    for ρ_l < ρ_r
              { ρ_fan(x, t)     for ρ_l > ρ_r
By exploiting the solutions obtained explicitly in the previous sections and by shifting the initial discontinuity to the position x = x₀, we obtain the final complete
solution

    ρ(x, t) = { ρ_l    for x < x₀ + s t            }  for ρ_l < ρ_r
              { ρ_r    for x > x₀ + s t            }

              { ρ_l                                        for x < x₀ + a_l t              }
              { (ρ_jam/2) [1 − (x − x₀)/(u_max t)]         for x₀ + a_l t < x < x₀ + a_r t }  for ρ_l > ρ_r
              { ρ_r                                        for x₀ + a_r t < x              }

where a_l = f′(ρ_l) and a_r = f′(ρ_r). For completeness, the program in Fortran 90 implementing these relations, for use in a Godunov method based on the exact Riemann solver of the traffic equation, is provided below.
!  Exact solution rho(x,t) for the traffic equation
!
!     d_t rho + d_x f(rho) = 0,    f(rho) = vel_max * rho * (1 - rho/rho_jam),
!
!  with Riemann initial data:  rho = rho_l for x < x_0,  rho = rho_r for x > x_0.

SUBROUTINE exact_traffic (xx, x_0, rho_l, rho_r, t, rho)

   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(:), INTENT(IN)  :: xx
   REAL(KIND=8),               INTENT(IN)  :: x_0, rho_l, rho_r, t
   REAL(KIND=8), DIMENSION(:), INTENT(OUT) :: rho

   REAL(KIND=8) :: s, al, ar
   REAL(KIND=8), PARAMETER :: vel_max = 1.0d0,  rho_jam = 1.0d0

   IF (t <= 0) THEN  ! initial condition

      WHERE (xx < x_0)
         rho = rho_l
      ELSEWHERE
         rho = rho_r
      END WHERE

      RETURN

   ENDIF

   IF (rho_l == rho_r) THEN  ! uniform state

      rho = rho_l;   RETURN

   ENDIF

   IF (rho_l < rho_r) THEN  ! propagating jump

      s = vel_max * (1 - (rho_l + rho_r)/rho_jam)  ! Rankine-Hugoniot speed (2.7)

      WHERE (xx < x_0 + s*t)
         rho = rho_l
      ELSEWHERE
         rho = rho_r
      END WHERE

   ELSE  ! rarefaction wave

      al = char_speed(rho_l);   ar = char_speed(rho_r)

      WHERE (xx < x_0 + al*t)
         rho = rho_l
      ELSEWHERE (xx > x_0 + ar*t)
         rho = rho_r
      ELSEWHERE  ! inside the rarefaction fan, from (2.16)
         rho = (rho_jam/2) * (1 - (xx - x_0)/(vel_max*t))
      END WHERE

   ENDIF

CONTAINS

   FUNCTION char_speed(rho) RESULT(s)

      IMPLICIT NONE
      REAL(KIND=8), INTENT(IN) :: rho
      REAL(KIND=8) :: s

      IF (rho < 0  .OR.  rho > rho_jam) THEN
         WRITE(*,*) 'car density rho outside the permitted range'
         WRITE(*,*) 'in FUNCTION char_speed.  STOP.'
         WRITE(*,*) 'rho = ', rho
         STOP
      ENDIF

      s = vel_max * (1 - 2*rho/rho_jam)  ! characteristic speed a(rho) = f'(rho)

   END FUNCTION char_speed

END SUBROUTINE exact_traffic
3 Weak solutions
The Godunov method is a first-order numerical scheme that can be used to compute weak solutions to hyperbolic problems, a class of time-dependent partial differential equation systems. To simplify the exposition, we first describe the method in connection with a scalar conservation law, taking as example the traffic equation described in the previous section. The basic relations will then be extended to a system, as an anticipation of the general discussion of the Godunov method for the isothermal gas equations that will be given in section 7.
The Cauchy problem for the scalar conservation law reads

    ∂_t ρ + ∂_x f(ρ) = 0,    ρ(x, 0) = ρ₀(x),    (3.1)

where f(ρ) is the flux function previously considered and ρ₀(x) is an initial distribution of the car density. By means of the chain rule, equation (3.1) can be rewritten in the quasi-linear form

    ∂_t ρ + f′(ρ) ∂_x ρ = 0    (3.2)

where the derivative f′(ρ) is an advection velocity dependent on the unknown. For smooth (i.e., differentiable) solutions the two equations (3.1) and (3.2) are entirely equivalent. On the contrary, if ρ has a jump, the second term of (3.2) contains the product of the discontinuous function f′(ρ) with the derivative of a discontinuous function [1]. While the latter can in itself be given a meaning in the sense of the theory of distributions, the product of the discontinuous factor with the Dirac delta function located at the point of the jump is not well defined. This means that the equation in the quasi-linear form (3.2) is meaningful only within the class of continuous functions. It is important to make clear that the difficulty caused by discontinuous solutions is not related to a lack of smoothness of the initial condition. In fact, for nonlinear equations, even infinitely smooth initial data can lead to discontinuous solutions, which can develop after some finite time. In other words, the initial data are not responsible for the breakdown of the continuous solutions: the true culprit is the nonlinearity of the hyperbolic equation. To include the possibility
of discontinuous solutions we are therefore obliged to introduce a new definition of solution, and this requires considering an integral form of the conservation law (3.1). The integral form of equation (3.1) is obtained by multiplying each term of the equation by a smooth compact-support function φ(x, t) ∈ C¹_c, where C¹_c denotes the space of continuously differentiable functions with compact support, to give

    ∫₀^∞ ∫_{−∞}^{+∞} φ(x, t) [∂_t ρ + ∂_x f(ρ)] dx dt = 0.    (3.3)

After integrating by parts, both in time and in space, the integral relation above becomes

    ∫₀^∞ ∫_{−∞}^{+∞} [ρ ∂_t φ(x, t) + f(ρ) ∂_x φ(x, t)] dx dt = −∫_{−∞}^{+∞} φ(x, 0) ρ₀(x) dx.    (3.4)

A function ρ(x, t) which satisfies this integral relationship for any φ(x, t) ∈ C¹_c is called a weak solution of the Cauchy problem (3.1).
The test function φ(x, t) is taken as a brick-shaped function, whose support is a finite rectangular region [x₁, x₂] × [t₁, t₂] of the space–time domain:

    φ(x, t) = [H(x − x₁) − H(x − x₂)] [H(t − t₁) − H(t − t₂)],

H denoting the Heaviside step function. The derivatives of φ(x, t) with respect to its two variables are

    ∂_x φ(x, t) = [δ(x − x₁) − δ(x − x₂)] [H(t − t₁) − H(t − t₂)]
    ∂_t φ(x, t) = [δ(t − t₁) − δ(t − t₂)] [H(x − x₁) − H(x − x₂)]

δ(x) being the Dirac delta (function). We can now insert this function and its derivatives into the weak statement (3.4). The first term of the equation, after performing the time integration, yields

    ∫₀^∞ ∫_{−∞}^{+∞} ρ ∂_t φ(x, t) dx dt = ∫_{x₁}^{x₂} [ρ(x, t₁) − ρ(x, t₂)] dx

since the Dirac delta is a sampling function and the spatial Heaviside factor reduces the x integration to the interval [x₁, x₂]. Similar operations are performed on the second term of equation (3.4), with the roles of the space and time variables interchanged, to give

    ∫₀^∞ ∫_{−∞}^{+∞} f(ρ) ∂_x φ(x, t) dx dt = ∫_{t₁}^{t₂} [f(ρ(x₁, t)) − f(ρ(x₂, t))] dt.

By exploiting these two results in the weak equation (3.4), and since t₁ > 0, so that there is no contribution from the initial condition, we obtain

    ∫_{x₁}^{x₂} [ρ(x, t₁) − ρ(x, t₂)] dx + ∫_{t₁}^{t₂} [f(ρ(x₁, t)) − f(ρ(x₂, t))] dt = 0    (3.5)

or, equivalently,

    ∫_{x₁}^{x₂} ρ(x, t₂) dx = ∫_{x₁}^{x₂} ρ(x, t₁) dx − ∫_{t₁}^{t₂} [f(ρ(x₂, t)) − f(ρ(x₁, t))] dt    (3.6)
which shows explicitly that the increase of the number of cars within the segment [x₁, x₂] of the highway in the time interval from t₁ to t₂ is due to the net flux of cars entering through the two extremes in the same interval. Notice that at the left end x₁ the flux contribution has positive sign, since the cars enter the highway segment [x₁, x₂], while at the right end x₂ the sign is negative, since the cars leave the considered highway segment.
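The car-count balance (3.6) can also be verified numerically for the exact jump solution of the traffic equation. The following Python sketch is illustrative and not part of the report: the data ρ_l = 0.1, ρ_r = 0.6 and the control rectangle are values chosen here, and the integrals are approximated by midpoint quadrature.

```python
# Numeric check of the integral balance (3.6) for the exact jump solution
# rho(x,t) = rho_l for x < s*t, rho_r otherwise (jump initially at x = 0).
u_max, rho_jam = 1.0, 1.0

def f(rho): return u_max * rho * (1.0 - rho / rho_jam)

rho_l, rho_r = 0.1, 0.6                         # illustrative Riemann data
s = (f(rho_l) - f(rho_r)) / (rho_l - rho_r)     # Rankine-Hugoniot speed (2.7)

def rho(x, t):
    return rho_l if x < s * t else rho_r

x1, x2, t1, t2, n = -1.0, 1.0, 0.0, 1.0, 20000
dx, dt = (x2 - x1) / n, (t2 - t1) / n

# left-hand side of (3.6): change of the number of cars in [x1, x2]
cars = sum((rho(x1 + (i + 0.5) * dx, t2) - rho(x1 + (i + 0.5) * dx, t1)) * dx
           for i in range(n))

# right-hand side of (3.6): net flux of cars through the two end points
net_flux = sum((f(rho(x1, t1 + (i + 0.5) * dt)) - f(rho(x2, t1 + (i + 0.5) * dt))) * dt
               for i in range(n))

assert abs(cars - net_flux) < 1e-3   # balance holds up to quadrature error
```

With these data s = 0.3, and both sides of the balance equal −0.15: the segment loses cars because the outgoing flux at x₂ exceeds the incoming flux at x₁.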
    s (ρ_r − ρ_l) = f(ρ_r) − f(ρ_l)    (3.7)

and is called the Rankine–Hugoniot jump condition. Being a scalar relation, it allows the propagation speed s of the density jump in the traffic flow to be expressed explicitly as

    s = [f(ρ_r) − f(ρ_l)] / (ρ_r − ρ_l).    (3.8)

We now demonstrate that the Rankine–Hugoniot condition is a consequence of the weak formulation of the conservation law. Let us consider a small domain D of the space–time half plane ℝ × ℝ⁺, namely D ⊂ ℝ × ℝ⁺, and suppose that a weak solution ρ(x, t) of the traffic equation is discontinuous inside D. For simplicity, we assume that D is small enough that there is only one line of discontinuity inside D and that this line is accurately described by a segment S of a straight line crossing the domain D. Since ρ(x, t) is a weak solution in the space–time half plane, the integral equation (3.4) can be written also over the small domain D and becomes

    ∫∫_D [ρ ∂_t φ + f(ρ) ∂_x φ] dx dt = 0

where φ(x, t) is any C¹_c function with support in D; the integral at the initial time is absent since D does not intersect the x axis. The discontinuity divides the domain into two parts, denoted by D_l and D_r, in each of which the solution ρ(x, t) is a classical solution: these two solutions will be indicated by ρ_l(x, t) and ρ_r(x, t), respectively. Thus, the integral of the equation above can be split into two contributions, as follows:
    ∫∫_{D_l} [ρ_l ∂_t φ + f(ρ_l) ∂_x φ] dx dt + ∫∫_{D_r} [ρ_r ∂_t φ + f(ρ_r) ∂_x φ] dx dt = 0.
In order to simplify the interpretation of the mathematical expressions, we change the time variable t into a second spatial coordinate y defined by y ≡ ct, where c is some velocity, so that y has the physical dimension of a length. The choice of the velocity c is completely arbitrary and, just to fix the idea, we can take c to be the value of our typical strolling velocity, when we are relaxed. With the new variable, the unknown car density function is changed as follows, ρ(x, t) → ρ(x, y), tolerating a slight abuse of mathematical notation. Then the traffic equation for the new unknown ρ(x, y) can be written as

    c ∂_y ρ + ∂_x f(ρ) = 0    ⟺    ∂_x [f(ρ)/c] + ∂_y ρ = 0.

Therefore, after introducing the gradient operator ∇ ≡ (∂_x, ∂_y) in the plane (x, y) and the two-component vector R ≡ (f(ρ)/c, ρ), the transformed traffic equation assumes the compact form

    ∇ · R = 0

at all points (x, y) of the upper half plane (x ∈ ℝ, y > 0). Any classical solution ρ(x, y) satisfies this equation with R = (f(ρ)/c, ρ). According to the new, purely spatial representation, the integral weak equation is rewritten as

    ∫_{D_l} (∇φ) · R_l dD + ∫_{D_r} (∇φ) · R_r dD = 0,

where dD = dx dy, R_l = (f(ρ_l)/c, ρ_l) and R_r = (f(ρ_r)/c, ρ_r). From the elementary differential identity ∇·(φR) = (∇φ)·R + φ ∇·R, for a classical solution we have ∇·(φR) = (∇φ)·R, since ∇·R = 0. But ρ_l(x, t) in R_l and ρ_r(x, t) in R_r are classical solutions in their respective domains D_l and D_r, so the integral equation is equivalent to

    ∫_{D_l} ∇·(φ R_l) dD + ∫_{D_r} ∇·(φ R_r) dD = 0.

By the divergence theorem, the two area integrals become integrals along the boundaries:

    ∮_{∂D_l} φ N_l · R_l dl + ∮_{∂D_r} φ N_r · R_r dl = 0,

where dl is the length element of the boundaries ∂D_l and ∂D_r and N denotes the outwardly directed unit normal vector. The test function φ ∈ C¹_c is now chosen such that φ|_{∂D} = 0. It follows that the line integrals in the weak equation reduce only to the segment S of intersection between the domain D and the discontinuity of the solution, to give

    ∫_{S ∩ ∂D_l} φ N_l · R_l dl + ∫_{S ∩ ∂D_r} φ N_r · R_r dl = 0.

On S the two outward normals are opposite, N_r = −N_l ≡ −N, so that

    ∫_S φ N · (R_l − R_r) dl = 0.

By the complete arbitrariness of the function φ on S, the integral can vanish only provided

    N · (R_l − R_r) = 0.

This is equivalent to

    N_x [f(ρ_l)/c − f(ρ_r)/c] + N_y (ρ_l − ρ_r) = 0.

But −c N_y / N_x is the speed associated with the slope of the segment S, so the last relation can be rewritten as

    s (ρ_l − ρ_r) = f(ρ_l) − f(ρ_r)

or equivalently

    s = [f(ρ_l) − f(ρ_r)] / (ρ_l − ρ_r)
which completes the proof. Let us now consider a system of hyperbolic equations for a vector unknown u, with two or more components, and flux vector f(u), and let us suppose there is an initial discontinuity of this vector variable between the two states u_l and u_r. If this discontinuity could move in space unaltered, its speed s would be obliged to satisfy the vector counterpart of the Rankine–Hugoniot condition, namely the relation

    s (u_r − u_l) = f(u_r) − f(u_l)

which can be established by applying the previous argument to the vector conservation law, namely to the hyperbolic system

    ∂_t u + ∂_x f(u) = 0.

However, for initial data u_l and u_r specified arbitrarily, the Rankine–Hugoniot (vector) relation cannot be satisfied, since the direction of u_r − u_l is in general different from that of f(u_r) − f(u_l), the flux f(u) being an independent function. As a consequence, in general the initial jump of the vector variable of a system cannot propagate rigidly with some velocity and must instead disintegrate into two or more distinct waves, each propagating in some definite manner. In fact, for a hyperbolic system the (vector) Rankine–Hugoniot condition can be satisfied only for a restricted set of left and right states u_l and u_r. In other words, only when u_l and u_r fulfill some definite constraint can the jump u_r − u_l propagate in space unaltered.
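The failure of the vector Rankine–Hugoniot relation for arbitrary states can be seen numerically. The sketch below uses the isothermal gas flux f(u) = (m, m²/ρ + c²ρ) in the conservative variables u = (ρ, m), a standard form consistent with the isothermal ideal gas considered in this report; the two states and the sound speed c = 1 are illustrative choices made here.

```python
# For a 2x2 system, s*(ur - ul) = f(ur) - f(ul) requires the jump vectors
# ur - ul and f(ur) - f(ul) to be parallel; for generic data they are not.
c = 1.0  # isothermal sound speed (illustrative value)

def flux(rho, m):
    """Flux of the isothermal gas system in conservative variables (rho, m)."""
    return (m, m * m / rho + c * c * rho)

rho_l, m_l = 1.0, 0.0     # arbitrary left state
rho_r, m_r = 0.5, 0.3     # arbitrary right state

du = (rho_r - rho_l, m_r - m_l)
dfu = tuple(fr - fl for fr, fl in zip(flux(rho_r, m_r), flux(rho_l, m_l)))

# The two jump vectors are parallel only if their 2x2 determinant vanishes;
# here it does not, so no single speed s can satisfy the vector RH relation.
det = du[0] * dfu[1] - du[1] * dfu[0]
assert abs(det) > 1e-6
```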
with end points x_{j+1/2} = (x_j + x_{j+1})/2 located in the middle of two consecutive grid points. The length of each cell is

    |C_j| = Δx_j = x_{j+1/2} − x_{j−1/2}    (3.10)

and each mid point x_{j+1/2} represents the interface between the two consecutive cells C_j and C_{j+1}. Let ρⁿ(x, t) be a solution which depends on the continuous variables x and t ≥ t_n but has been obtained from a given discrete initial condition at time t = t_n. From ρⁿ(x, t) we can define its cell average

    R_j(t) ≡ (1/|C_j|) ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t) dx    (3.11)

over the cell C_j and at time t ≥ t_n, provided that t − t_n is suitably small. In particular, the cell average at the initial discrete time level t_n will be denoted by

    R_j^n ≡ (1/|C_j|) ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t_n) dx.    (3.12)
Let us assume that the discrete set of values R_j^n has been calculated at time t = t_n. They provide a piecewise constant representation of the solution, i.e., they give an approximation consisting of cell averages of the solution. Our aim is to determine the discrete set of values R_j^{n+1} at the later time t_{n+1} = t_n + Δt. The idea of the Godunov method is to consider the various initial value problems associated with the different jumps at all the interfaces and to solve these Riemann
problems exactly. This amounts to finding the kind of wave emanating from each interface. For the traffic equation, the solutions are provided by the Riemann solver described in section 2.5. By patching these local solutions together, we can build a solution, denoted by ρⁿ(x, t), which is defined for any x and for t ≥ t_n. Note the superscript n, a reminder that this solution depends on the set of initial data {R_j^n, ∀j}. More precisely, this solution will exist over the time interval allowing a juxtaposition of the local Riemann solutions, which is limited by the requirement that interactions can occur only between contacting cells. This introduces a limit on the maximum allowable time step Δt for a given distribution of the initial data R_j^n, expressed by the following condition

    Δt max_j [ |f′(R_j^n)| / |C_j| ] < 1    (3.13)
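The restriction (3.13) translates into a simple bound on the time step; an illustrative Python sketch (not part of the report), for the traffic equation on a uniform grid:

```python
# Largest time step allowed by the CFL condition (3.13), with a safety
# factor cfl < 1, for given cell averages R on cells of uniform size dx.
u_max, rho_jam = 1.0, 1.0

def df(rho): return u_max * (1.0 - 2.0 * rho / rho_jam)   # f'(rho)

def max_time_step(R, dx, cfl=0.9):
    """Return dt such that dt * max_j |f'(R_j)| / dx = cfl < 1."""
    return cfl * dx / max(abs(df(r)) for r in R)

R = [0.1, 0.4, 0.8, 0.6]          # illustrative cell averages
dt = max_time_step(R, dx=0.01)
assert dt * max(abs(df(r)) for r in R) / 0.01 < 1.0
```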
known as the CFL condition, after Courant, Friedrichs and Lewy. Provided this condition is satisfied, we can consider the weak equation (3.6) and particularize it to the space–time rectangle [x_{j−1/2}, x_{j+1/2}] × [t_n, t_{n+1}], to give

    ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t_{n+1}) dx = ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t_n) dx
        − ∫_{t_n}^{t_{n+1}} [f(ρⁿ(x_{j+1/2}, t)) − f(ρⁿ(x_{j−1/2}, t))] dt.    (3.14)
This relation is identically satisfied since, in the considered strip t_n ≤ t ≤ t_{n+1}, the function ρⁿ(x, t) is an exact weak solution. Moreover, it respects the entropy condition, since only entropic solutions of the Riemann problems are employed. This equation is remarkable in that it involves integrals only on the boundary of the space–time rectangle [x_{j−1/2}, x_{j+1/2}] × [t_n, t_{n+1}]. According to the interpretation (3.12), the term ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t_n) dx is the cell average at the initial time, multiplied by |C_j|. In the same spirit, the first integral can be taken to give the cell average on the same cell but at the new time, according to the definition
    R_j^{n+1} ≡ (1/|C_j|) ∫_{x_{j−1/2}}^{x_{j+1/2}} ρⁿ(x, t_{n+1}) dx.    (3.15)
Moreover, the other two terms of equation (3.14) are integrals over a time interval Δt but with constant integrand, since the solution of the Riemann problem at each interface is a similarity solution and hence assumes the same value on any ray issuing from the interface point. The rays involved in the two integrals are vertical, and thus it is convenient to introduce the following special notation to denote the value for ξ = 0 of the solution of the interface Riemann problem, i.e., along the vertical ray:

    ρ*(R_j, R_{j+1}) ≡ ρ_Riemann(0, R_j, R_{j+1})    (3.16)

where ρ_Riemann(ξ, R_j, R_{j+1}) is the solution of the Riemann problem with initial data R_j and R_{j+1}.
The time integrals can therefore be evaluated most simply as follows:

    ∫_{t_n}^{t_{n+1}} f(ρⁿ(x_{j+1/2}, t)) dt = ∫_{t_n}^{t_{n+1}} f(ρ*(R_j, R_{j+1})) dt = Δt f(ρ*(R_j, R_{j+1})).    (3.17)

This leads to the definition of the numerical flux

    F_{j+1/2} = F(R_j, R_{j+1}) ≡ f(ρ*(R_j, R_{j+1}))    (3.18)
in terms of which the discrete weak equation (3.14) assumes the remarkably simple form

    R_j^{n+1} = R_j^n − (Δt/|C_j|) (F_{j+1/2} − F_{j-1/2})        (3.19)

This is the celebrated method introduced by Godunov [5] to solve the Euler equations, particularized here to the very simple case of a scalar conservation law. It is the basis for the numerical methods called Finite Volumes for solving nonlinear hyperbolic systems, to be described in the next sections.
As a matter of fact, it is more convenient to implement the method in a different form, by accounting for the contribution of the flux at each interface to the two neighbouring cells, as follows. One considers a cycle over the interfaces, after initializing the new cell averages R_j^{n+1} ← R_j^n. The flux contribution F_{j+1/2} to the averages of the conserved quantity in the two cells j and j+1 is taken into account by the double update instructions
    R_j^{n+1} ← R_j^{n+1} − (Δt/|C_j|) F_{j+1/2}    and    R_{j+1}^{n+1} ← R_{j+1}^{n+1} + (Δt/|C_{j+1}|) F_{j+1/2}        (3.20)
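The scalar Godunov update (3.19), with the interface bookkeeping of (3.20), can be sketched in a few lines of Python. This is only a minimal illustration, not the program of section 2.6: the convex Burgers flux f(R) = R²/2 stands in for the traffic flux, and the closed-form Godunov interface value for a convex flux replaces the call to an exact Riemann solver.

```python
import numpy as np

def f(R):
    # convex model flux (Burgers, f = R^2/2), a stand-in for the traffic flux
    return 0.5 * R**2

def godunov_flux(Rl, Rr):
    # exact Godunov interface flux for a convex flux with minimum at R = 0:
    # min of f over [Rl, Rr] if Rl <= Rr, max over [Rr, Rl] otherwise
    if Rl <= Rr:
        if Rl <= 0.0 <= Rr:
            return f(0.0)          # sonic point inside the fan
        return min(f(Rl), f(Rr))
    return max(f(Rl), f(Rr))

def godunov_step(R, dx, dt):
    # one update (3.19) on a uniform mesh with |C_j| = dx,
    # using transmissive (zero-gradient) ghost cells at the two ends
    Rext = np.concatenate(([R[0]], R, [R[-1]]))
    F = np.array([godunov_flux(Rext[i], Rext[i + 1])
                  for i in range(len(Rext) - 1)])
    return R - dt / dx * (F[1:] - F[:-1])

# right-moving shock: R = 1 for x < 0.5, R = 0 beyond; shock speed 1/2
dx, dt = 0.01, 0.005               # CFL number dt*max|f'(R)|/dx = 0.5 < 1
x = np.arange(100) * dx + 0.5 * dx
R = np.where(x < 0.5, 1.0, 0.0)
for _ in range(40):                # integrate up to t = 0.2
    R = godunov_step(R, dx, dt)    # exact shock position is then x = 0.6
```

Note that each interface flux is evaluated exactly once per step, as the discussion above requires.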
In this way a single evaluation of the flux at each interface is guaranteed algorithmically.
The scheme can be extended to hyperbolic systems. For the time being, as an anticipation, it is sufficient to highlight that the vector counterpart of the discrete weak equations (3.14) for the system ∂_t w + ∂_x f(w) = 0 assumes the form

    ∫_{x_{j-1/2}}^{x_{j+1/2}} w^n(x, t_{n+1}) dx = ∫_{x_{j-1/2}}^{x_{j+1/2}} w^n(x, t_n) dx
        + ∫_{t_n}^{t_{n+1}} f(w^n(x_{j-1/2}, t)) dt − ∫_{t_n}^{t_{n+1}} f(w^n(x_{j+1/2}, t)) dt        (3.21)
and can be recast as the following update of the vector counterpart of the Godunov method

    W_j^{n+1} = W_j^n − (Δt/|C_j|) (F_{j+1/2} − F_{j-1/2})        (3.22)

Here

    F_{j+1/2} = F(W_j, W_{j+1}) ≡ f(w*(W_j, W_{j+1}))        (3.23)

with w*(W_j, W_{j+1}) = w_Riemann(0; W_j, W_{j+1}) the solution for ξ = 0 of the Riemann problem with data W_j and W_{j+1} at the interface x = x_{j+1/2}. Thus, most of the difficulty of a finite volume method based on Godunov's ideas lies in the solution of the Riemann problem for the considered hyperbolic system.
Of course the scheme can (and will) be implemented in the form of the cycle over the interfaces, after the initialization W_j^{n+1} ← W_j^n:

    W_j^{n+1} ← W_j^{n+1} − (Δt/|C_j|) F_{j+1/2}    and    W_{j+1}^{n+1} ← W_{j+1}^{n+1} + (Δt/|C_{j+1}|) F_{j+1/2}        (3.24)
An even more convenient manner of implementing the Godunov method (as any other conservative scheme) is to introduce the total flux contribution R_j to any cell, through the simple accumulation, after the initialization R_j = 0,

    R_j ← R_j − F_{j+1/2}    and    R_{j+1} ← R_{j+1} + F_{j+1/2}        (3.25)
The cell-cumulated total flux is indicated by R_j since for steady-state solutions it is proportional to the cell residual. In the complete algorithm, the quantity R_j must include the contribution of all cell interfaces together with those due to the flux evaluated on the boundary, not discussed here. After the final total flux R_j has been accumulated for all cells, the update step for the cell averages W_j of the conservative variables is most simply achieved by considering the global array W = {W_j, j = 1, 2, . . . , J} of all cell averages and the analogous array R = {R_j, j = 1, 2, . . . , J} of all cell-cumulated fluxes (residuals). In terms of these global arrays, the update step for the conservation-law system will read

    W^{n+1} = W^n + Δt R / |C|        (3.26)
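The accumulation (3.25) followed by the global update (3.26) translates into an interface loop plus a single array operation. In the sketch below (an illustration under our own assumptions, not the notes' program) a first-order upwind flux for linear advection stands in for an interface Riemann-solver flux, and the boundary fluxes are included with transmissive ghost states:

```python
import numpy as np

def interface_flux(Wl, Wr):
    # placeholder numerical flux: upwind flux for linear advection
    # w_t + c w_x = 0 with c > 0 (a stand-in for a Riemann-solver flux)
    c = 1.0
    return c * Wl

def residual_update(W, dx, dt):
    # residual accumulation (3.25) and global update (3.26),
    # with transmissive ghost states at the two boundary interfaces
    J = len(W)
    R = np.zeros(J)                     # initialization R_j = 0
    for j in range(J - 1):              # cycle over the internal interfaces
        F = interface_flux(W[j], W[j + 1])
        R[j]     -= F                   # contribution to the left cell
        R[j + 1] += F                   # contribution to the right cell
    R[0]  += interface_flux(W[0], W[0])    # boundary flux contributions
    R[-1] -= interface_flux(W[-1], W[-1])
    return W + dt / dx * R              # update step (3.26)

x = np.linspace(0.0, 1.0, 101)[:-1] + 0.005    # 100 cell midpoints
W = np.exp(-100.0 * (x - 0.3) ** 2)            # smooth initial profile
for _ in range(20):                            # advect up to t = 0.1
    W = residual_update(W, dx=0.01, dt=0.005)
```

The pattern generalizes verbatim to systems: W and R become arrays of state vectors and the interface flux becomes the Godunov flux (3.23).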
after having introduced the isothermal speed of sound a according to the definition

    a² ≡ P/ρ = RT = constant        (4.3)
In the light of our present knowledge of thermodynamics, the concept of a speed of sound at constant temperature may appear somewhat artificial and physically implausible. However, this gas model corresponds exactly to Isaac Newton's ideas¹ when he conceived the propagation of acoustic waves as an isothermal phenomenon; should we not call it the Newton gas? Another interesting example of a P-system is the flow of a nearly incompressible fluid in a tube when the transient phenomenon called water hammer (in Italian, colpo d'ariete) occurs. This very important, highly unsteady hydrodynamic phenomenon can be studied in a rigorous mathematical form only if placed in the context of Riemann problems; for a detailed discussion see [11].
The governing equations, adopting as unknowns the conservative variables mass density ρ and momentum density m ≡ ρu, where u is the velocity of the gas, t the time and x the spatial coordinate, read

    ∂_t ρ + ∂_x m = 0
    ∂_t m + ∂_x (m²/ρ + a²ρ) = 0        (4.5)

This is a hyperbolic system of nonlinear equations written in conservative form. To write the system compactly, we introduce the vector unknown w and the associated flux vector f, defined as follows,

    w ≡ (ρ, m)    and    f(w) ≡ (f_ρ(w), f_m(w)) = (m, m²/ρ + a²ρ)        (4.6)

¹ The interested reader can benefit from the scholarly interpretation of Newton's Principia by Chandrasekhar [2, pp. 586 and 591].
For the purpose of the present study, it is convenient to write the system above also in the so-called quasilinear form. This requires introducing the Jacobian matrix

    A(w) ≡ ∂f(w)/∂w = |      0           1   |
                      | a² − m²/ρ²     2m/ρ  |        (4.8)

which allows us to write the considered hyperbolic system in quasilinear form

    ∂_t w + A(w) ∂_x w = 0        (4.9)
where capital letters should be used to distinguish the unknown functions dependent on a single variable (the similarity variable ξ) from those dependent on the two variables x and t. However, to have more easily readable expressions, the unknown of the ODE system for the similarity problem is denoted by the same symbol w used for the unknown of the original PDE system. In other words, the relation above will be rewritten as follows

    w(x, t) = w(x/t) = w(ξ) = (ρ(ξ), m(ξ))        (4.10)

although this represents a slight abuse of mathematical notation. With the adopted notation, the time and space derivatives of a solution of similarity type can be easily evaluated.
For similarity solutions, the quasilinear system becomes

    −ξ w′(ξ) + A(w) w′(ξ) = 0,    i.e.,    [A(w) − ξ I] w′(ξ) = 0        (4.13)
and therefore the original partial differential system has been reduced to an ordinary differential one, consisting of two coupled scalar equations. The new system is a first-order nonlinear system with a zero right-hand side. It admits the simple solution w′ = 0, which implies w(ξ) = constant: however, this solution cannot represent a wave and is therefore useless for our purposes. Other solutions can exist only when the matrix A(w) − ξ I is singular, since only in this case can the system (4.13) be satisfied by a w(ξ) different from a constant vector. We are therefore led to consider the eigenvalue problem associated with the matrix A(w), which amounts to solving the determinantal equation det[A(w) − λ(w) I] = 0, namely,

    |      −λ            1      |
    | a² − m²/ρ²      2m/ρ − λ  | = 0        (4.14)

This yields the characteristic equation

    λ² − (2m/ρ) λ + m²/ρ² − a² = 0        (4.15)
which is a second-order equation; the solutions are provided by the quadratic formula, to give the eigenvalues of A(w),

    λ_{1,2}(w) = m/ρ ∓ a        (4.16)
where the two eigenvalues λ₁ and λ₂ are written in increasing order. Once the identification of the similarity variable ξ with the eigenvalue λ(w) is recalled, this relation implies

    m/ρ ∓ a = ξ        (4.17)

which represents an algebraic (i.e., not differential) relationship between the two components ρ and m of the similarity solution w = (ρ, m) and the independent variable ξ.
The (right) eigenvectors corresponding to the eigenvalues are found by replacing the eigenvalues λ_{1,2}(w) = m/ρ ∓ a in the matrix A(w) − λ(w) I and writing the (now singular) system:

    | −(m/ρ ∓ a)       1      | | R_{1,2} |
    | a² − m²/ρ²    m/ρ ± a   | | M_{1,2} | = 0        (4.18)

where R₁ and M₁ are the components of the first eigenvector r₁, while R₂ and M₂ are those of the second, r₂. The two rows of the system are linearly dependent, so it is sufficient to consider only one equation, for example the first one:

    −(m/ρ ∓ a) R_{1,2} + M_{1,2} = 0        (4.19)

The solution of (4.19) defines uniquely only the ratio between the two components of each eigenvector, while the eigenvector length remains arbitrary. To obtain an eigenvector, an arbitrary real value can be specified for one of its components: for example, if the first component is fixed as R_{1,2} = 1, the second one results to be M_{1,2} = m/ρ ∓ a. With this choice, the two eigenvectors will be

    r₁(w) = (1, m/ρ − a)    and    r₂(w) = (1, m/ρ + a)        (4.20)
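The analytic eigenpairs (4.16) and (4.20) can be cross-checked numerically against the Jacobian (4.8). The sketch below uses arbitrary sample values of ρ, m and a (our choice, not from the text) and verifies that A(w) r_{1,2} = λ_{1,2} r_{1,2}:

```python
import numpy as np

a = 1.5                      # arbitrary isothermal sound speed
rho, m = 0.8, 0.3            # arbitrary sample state w = (rho, m)

# Jacobian (4.8) of the isothermal-gas flux f(w) = (m, m^2/rho + a^2 rho)
A = np.array([[0.0, 1.0],
              [a**2 - m**2 / rho**2, 2.0 * m / rho]])

# analytic eigenvalues (4.16) and eigenvectors (4.20)
lam1, lam2 = m / rho - a, m / rho + a
r1 = np.array([1.0, m / rho - a])
r2 = np.array([1.0, m / rho + a])

assert np.allclose(A @ r1, lam1 * r1)
assert np.allclose(A @ r2, lam2 * r2)
assert np.allclose(np.sort(np.linalg.eigvals(A)), [lam1, lam2])
```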
Here ρ̂ and m̂ represent the unit vectors in the (ρ, m) plane [actually a half-plane, since ρ > 0] of the variables, and the gradient ∇ is taken with respect to ρ and m. Evaluating the partial derivatives of λ_{1,2}(w) = m/ρ ∓ a, we obtain

    ∇λ_{1,2}(w) = −(m/ρ²) ρ̂ + (1/ρ) m̂        (4.22)
In general there are as many gradient fields of this kind as the number of distinct eigenvalues. For the considered hyperbolic system the two gradient fields ∇λ₁(w) and ∇λ₂(w) are actually coincident.
The definition of nonlinearity for a system of hyperbolic equations involves the vector field of the eigenvectors. In fact, the definition is centered on the following scalar product between the eigenvector fields r_i(w) and the gradient fields ∇λ_i(w), namely,

    r_i(w) · ∇λ_i(w),    i = 1, 2        (4.23)

The value of this scalar product depends on the point w, and when it is zero at w the two vector fields are orthogonal at that point. According to Lax's definition, the wave associated with the i-th eigenvalue is said to be genuinely nonlinear whenever

    r_i(w) · ∇λ_i(w) ≠ 0    ∀ρ, m        (4.24)

while it is said to be linearly degenerate whenever

    r_i(w) · ∇λ_i(w) = 0    ∀ρ, m        (4.25)

In other words, the genuine nonlinearity of an eigenvalue implies that the two vector fields ∇λ_i(w) and r_i(w) are nowhere orthogonal, while linear degeneracy means that they are orthogonal everywhere. We can collect these definitions in the following scheme

    r_i(w) · ∇λ_i(w) ≠ 0  ∀ρ, m      genuine nonlinearity        (4.26)
    r_i(w) · ∇λ_i(w) = 0  ∀ρ, m      linear degeneracy
The expression "linearly degenerate" is confusing because the attribute "degenerate" is normally employed to denote a multiple eigenvalue, namely, when two or more eigenvalues are coincident, whereas here we are considering a nondegenerate eigenvalue. In the case of a single (i.e., not multiple) eigenvalue, linearly degenerate means simply that the eigenvalue is linear. The nomenclature of linear degeneracy comes from a very important theorem due to Boillat (1972), see Godlewski and Raviart [4, p. 85], which states that every degenerate (i.e., multiple) eigenvalue of a hyperbolic system is also necessarily linear. In other words, degeneracy implies linearity.
Coming to the case of the isothermal gas system of interest here, a direct calculation gives

    r_{1,2}(w) · ∇λ_{1,2}(w) = ∓a/ρ        (4.27)
Therefore, both waves of the isothermal gas system are genuinely nonlinear. As a consequence, in the physical process of the flow along the isothermal tube, the ideal gas can develop both rarefaction waves and shock wave solutions, which are the simple solutions that will be described in the next section.
It is important to observe that for genuinely nonlinear eigenvalues, the nonvanishing of the scalar product r_i(w) · ∇λ_i(w) can be exploited to normalize the eigenvectors in a unique way. For instance, since a/ρ is always different from zero, the eigenvectors r_{1,2}(w) can be divided by ∓a/ρ and we obtain the two scaled eigenvectors:

    r₁^norm(w) = (−ρ/a, ρ − m/a)    and    r₂^norm(w) = (ρ/a, ρ + m/a)        (4.28)

which are normalized in the sense that

    r_{1,2}^norm(w) · ∇λ_{1,2}(w) = 1        (4.29)
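A quick numerical check of (4.27)-(4.29), again at an arbitrary sample state of our own choosing:

```python
import numpy as np

a, rho, m = 1.5, 0.8, 0.3          # arbitrary sample values

grad_lam = np.array([-m / rho**2, 1.0 / rho])   # (4.22), same for both modes
r1 = np.array([1.0, m / rho - a])               # eigenvectors (4.20)
r2 = np.array([1.0, m / rho + a])

assert np.isclose(r1 @ grad_lam, -a / rho)      # (4.27), first mode
assert np.isclose(r2 @ grad_lam, +a / rho)      # (4.27), second mode

r1n = np.array([-rho / a, rho - m / a])         # scaled eigenvectors (4.28)
r2n = np.array([ rho / a, rho + m / a])
assert np.isclose(r1n @ grad_lam, 1.0)          # normalization (4.29)
assert np.isclose(r2n @ grad_lam, 1.0)
```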
5 Simple solutions
In this section we concentrate on simple (weak) solutions of the equations for the isothermal ideal gas. We first consider the weak solution which represents a shock wave propagating with a constant velocity between two uniform states of the fluid. Then we describe the rarefaction wave by integrating the ordinary differential system of equations that was introduced in section 4. These elementary solutions allow us to formulate the Riemann problem for this gasdynamic model as well as to write the basic nonlinear equation for this problem. The section ends with the determination of the analytical solution of a special Riemann problem consisting of a single shock wave that is reflected by a stationary wall. This solution will be used to check the performance of a procedure for satisfying the boundary condition that has been proposed by Alberto Guardone and that will be described in appendix B.
For the isothermal gas equations described in section 4, the vector relation above gives a system of 2 equations in 2 + 1 = 3 unknowns: the two components of w and the scalar value s. The solution consists of a one-parameter family of solutions whose description is given in terms of a parameter that can be chosen arbitrarily. Recalling the definition of the vector unknown w and flux f given in (4.6) for the considered gas model, the Rankine-Hugoniot conditions (5.1) become

    s (ρ − ρ̄) = m − m̄,
    s (m − m̄) = m²/ρ + a²ρ − (m̄²/ρ̄ + a²ρ̄).        (5.2)

This is a system of two nonlinear equations in the three unknowns s, ρ and m. One possible choice of the parameter is to take the first unknown ρ itself to parametrize the curve. The idea is to eliminate the variable s in order to obtain one relationship between the other two variables ρ and m. The first equation of the system (5.2) gives

    s = (m − m̄)/(ρ − ρ̄)        (5.3)
The substitution of s into the second equation reduces it to a second-order equation in the unknown m, with coefficients dependent on ρ, of the form

    m² − 2 (ρ/ρ̄) m̄ m + (ρ/ρ̄)² m̄² − a² (ρ/ρ̄)(ρ − ρ̄)² = 0.        (5.4)

The quadratic formula gives the two solutions

    m_{1,2}(ρ) = (ρ/ρ̄) m̄ ∓ a (ρ − ρ̄) √(ρ/ρ̄),        (5.5)

and the substitution of m(ρ) into relation (5.3) gives the shock speed

    s_{1,2}(ρ) = m̄/ρ̄ ∓ a √(ρ/ρ̄)        (5.7)

where the two solutions s₁ and s₂ have been taken in increasing order. More precisely, making explicit the dependence on the pivotal state w̄, the complete solution of the Rankine-Hugoniot conditions is written as

    m_{1,2}(ρ, w̄) = (ρ/ρ̄) m̄ ∓ a (ρ − ρ̄) √(ρ/ρ̄),
    s_{1,2}(ρ, w̄) = m̄/ρ̄ ∓ a √(ρ/ρ̄).        (5.8)

The functions m₁(ρ, w̄) and m₂(ρ, w̄) represent two curves that are called the Hugoniot loci, while s₁(ρ, w̄) and s₂(ρ, w̄) represent the velocities of the corresponding discontinuities propagating between the uniform states w̄ = (ρ̄, m̄) and w = (ρ, m).
Among the infinitely many weak solutions of the nonlinear hyperbolic system, the entropy condition selects the only one that can be obtained as the limit of a viscous solution for a vanishingly small viscosity. In the present context, the selection mechanism is capable of dropping half of the possible weak solutions, and it selects the piece of the Hugoniot locus not contradicting irreversibility. According to Lax, the version of the entropy condition for a system must be expressed with reference to a given eigenvalue and considering a discontinuity moving with a known speed s. If the discontinuity propagates between a left state w_L and a right state w_R, this shock is admissible only if

    λ_k(w_L) > s > λ_k(w_R)        (5.9)
where λ_k(w) is one of the eigenvalues of the hyperbolic system. Consider for instance the wave that can be connected with the left state w_ℓ of the Riemann problem, that is, the first discontinuous solution satisfying the Rankine-Hugoniot condition. This solution is given by m = m₁(ρ, w_ℓ) and s = s₁(ρ, w_ℓ). We take this solution since it is related to the first eigenvalue of the problem, which must be considered for the left wave. Then, the Lax entropy condition for a possible shock wave on the left reads

    λ₁(w_ℓ) > s₁(ρ, w_ℓ) > λ₁(w₁(ρ, w_ℓ))        (5.10)

where w₁(ρ, w_ℓ) = (ρ, m₁(ρ, w_ℓ)). Recalling the expression of the first eigenvalue and substituting the first solution in (5.8) with w̄ = w_ℓ, we find

    m_ℓ/ρ_ℓ − a > m_ℓ/ρ_ℓ − a √(ρ/ρ_ℓ) > m₁(ρ, w_ℓ)/ρ − a

The left part of this inequality reduces to

    √(ρ/ρ_ℓ) > 1    ⟹    ρ > ρ_ℓ

The second part, upon substitution of m₁(ρ, w_ℓ) from (5.8) with w̄ = w_ℓ, becomes

    m_ℓ/ρ_ℓ − a √(ρ/ρ_ℓ) > m_ℓ/ρ_ℓ − a (1 − ρ_ℓ/ρ) √(ρ/ρ_ℓ) − a

which can be simplified to give

    √(ρ_ℓ/ρ) < 1

leading to the same condition ρ > ρ_ℓ provided by the left inequality. Thus, due to the entropy condition, the admissible shock waves associated with the first nonlinear field in the isothermal ideal gas have a final state that is compressed.
For the second nonlinear wave, and always with reference to the left state w_ℓ, the entropy condition reads

    λ₂(w_ℓ) > s₂(ρ, w_ℓ) > λ₂(w₂(ρ, w_ℓ))        (5.11)

An analogous calculation shows that both inequalities require ρ < ρ_ℓ: thus, the shock wave connecting the state w = (ρ, m₂(ρ, w_ℓ)) with w_ℓ has the compressed (post-shock) fluid in the latter and the lower-density (pre-shock) fluid in the former. This shock wave is however useless for the solution of the Riemann problem, since the left state w_ℓ must be connected with states associated only with the first wave.
Coming now to the right state w_r of the Riemann problem and to the states that can be connected with it by the second wave, the Lax condition reads

    λ₂(w₂(ρ, w_r)) > s₂(ρ, w_r) > λ₂(w_r)        (5.12)

By the same reasoning we deduce the condition ρ > ρ_r. Thus, the states that can be connected with w_r by the second wave are shock waves with the compressed fluid in the final state, exactly as found for the final states connected to the left state w_ℓ by means of the first nonlinear wave.
For a hyperbolic system with m equations there are m − 1 Riemann invariants for each mode. In the case of the P-system m = 2, and therefore there is only one Riemann invariant for each mode.
Let us determine the Riemann invariants of the P-system for the isothermal gas. Consider the first mode with normalized eigenvector r₁^norm(w), see section 4.4,

    r₁^norm(w) = (−ρ/a, ρ − m/a)        (5.14)

The Riemann invariant ψ₁ = ψ₁(w) = ψ₁(ρ, m) is defined as the solution of the equation r₁^norm(w) · ∇ψ₁ = 0, namely

    −(ρ/a) ∂ψ₁/∂ρ + (ρ − m/a) ∂ψ₁/∂m = 0        (5.15)

which reduces immediately to

    ∂ψ₁/∂ρ + (m/ρ − a) ∂ψ₁/∂m = 0        (5.16)
This is a first-order partial differential equation, linear but with variable coefficients. To determine its solution, let us consider the change of independent variable

    u = m/ρ

and the corresponding change of the unknown

    ψ₁(ρ, m) → I(ρ, u),    I(ρ, u) ≡ ψ₁(ρ, ρu)

The partial derivatives of the Riemann invariant with respect to the original independent variables ρ and m can be expressed in terms of those with respect to the new variables:

    ∂ψ₁/∂ρ = ∂I/∂ρ − (m/ρ²) ∂I/∂u    and    ∂ψ₁/∂m = (1/ρ) ∂I/∂u

The substitution into the equation for ψ₁ yields, after the cancellation of two terms, the partial differential equation for the new unknown I(ρ, u)

    ∂I/∂ρ − (a/ρ) ∂I/∂u = 0        (5.17)
The form of this equation suggests looking for the Riemann invariant I(ρ, u) as the sum of two functions of one variable each, as follows

    I(ρ, u) = A(ρ) + B(u)        (5.18)

Then the equation becomes

    A′(ρ) − (a/ρ) B′(u) = 0        (5.19)

where the prime denotes differentiation with respect to the independent variable of any function of a single variable. Rewritten as (ρ/a) A′(ρ) = B′(u), the first member is a function only of the variable ρ whereas the second is a function only of u. Thus the equation can be satisfied only provided the two members are equal to one and the same constant, as is typical when a PDE is solved by the technique of separation of variables. As a consequence, the two functions A(ρ) and B(u) must satisfy the two independent first-order ordinary differential equations

    A′(ρ) = K a/ρ    and    B′(u) = K        (5.20)

where K is the separation constant. The integration of the two equations is immediate and gives

    A(ρ) = C + K a ln ρ    and    B(u) = D + K u
where C and D are integration constants. Therefore the solution is found to be

    I(ρ, u) = H + K (a ln ρ + u)

where H = C + D is a single constant. The constant H is better replaced by a reference density ρ₀, which also makes the argument of the logarithm dimensionless:

    I(ρ, u) = K (a ln(ρ/ρ₀) + u)        (5.21)

The Riemann invariant as a function of the original variables reads

    ψ₁(ρ, m) = K (a ln(ρ/ρ₀) + m/ρ)        (5.22)

The Riemann invariant associated with the second mode can be obtained in the same way.
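As a check, the invariant (5.22) (taking K = 1 and ρ₀ = 1, our arbitrary normalization) should be constant along the integral curves of the first eigenvector; a small numerical integration of dw/ds = r₁(w) confirms this:

```python
import math

a = 1.0                                        # arbitrary sound speed

def psi1(rho, m):
    # first Riemann invariant (5.22) with K = 1, rho0 = 1
    return a * math.log(rho) + m / rho

def r1(rho, m):
    # first eigenvector (4.20): r1 = (1, m/rho - a)
    return 1.0, m / rho - a

# integrate dw/ds = r1(w) with small RK2 steps from an arbitrary state
rho, m = 1.0, 0.4
psi0 = psi1(rho, m)
ds = 1e-4
for _ in range(2000):
    k1 = r1(rho, m)
    k2 = r1(rho + 0.5 * ds * k1[0], m + 0.5 * ds * k1[1])
    rho, m = rho + ds * k2[0], m + ds * k2[1]

assert abs(psi1(rho, m) - psi0) < 1e-6         # invariant is preserved
```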
where μ(q) is an arbitrary function representing a scalar factor that fixes the parametrization of the curve. The integral curves are very important in the solution of the hyperbolic system since they are strictly related to the solutions of similarity type. In order to prove this, let us recall the results obtained in section 4.3 for the quasilinear system ∂_t w + A(w) ∂_x w = 0 and look for solutions of similarity type w = w(ξ), where ξ = x/t. As already remarked in section 4, it would be notationally more appropriate to denote the unknown of the similarity problem by a letter different from w, for instance by the capital letter W, to avoid confusion with the original unknown w(x, t), which is a function of two variables. However, as before, the simpler lower-case letter w is retained, although at the price of a slight abuse of mathematical notation. For similarity solutions, this system was shown to lead to the following system

    [A(w) − ξ I] dw/dξ = 0        (5.24)
of ordinary differential equations for the unknown w = w(ξ), where I denotes the 2 × 2 identity matrix. Nontrivial (i.e., nonconstant) solutions of this system require solving the eigenvalue problem for the matrix A(w). For the isothermal gas equations the two eigenvalues, written in increasing order, and the associated eigenvectors were found to be

    λ_{1,2}(w) = m/ρ ∓ a    and    r_{1,2}(w) = (1, m/ρ ∓ a)        (5.25)

Thus, equation (5.24) says that dw/dξ must be proportional to some eigenvector r_k(w) of A(w) and that the independent (similarity) variable ξ must be the corresponding eigenvalue λ_k(w). Therefore, nontrivial rarefaction solutions will be solutions of the following ordinary differential problem

    dw_k/dξ = μ(ξ) r_k(w_k),    ξ = λ_k(w_k)        (5.26)

where μ(ξ) is a function which depends on the parametrization. To avoid a too cumbersome notation, let w_k be indicated simply by w, so that the previous system simplifies to

    dw/dξ = μ(ξ) r_k(w),    ξ = λ_k(w)        (5.27)

The function μ(ξ) is actually fixed by the eigenvalue relation, and can be determined by observing that this relation can be written in general as ξ = λ_k(w(ξ)). This relation can be differentiated by means of the chain rule to give

    1 = ∇λ_k(w) · dw/dξ = (∂λ_k/∂w₁)(dw₁/dξ) + (∂λ_k/∂w₂)(dw₂/dξ)

and, on account of the first equation of (5.27),

    1 = μ(ξ) ∇λ_k(w(ξ)) · r_k(w(ξ))

so that, thanks to the genuine nonlinearity of the waves, we can obtain

    μ(ξ) = 1 / [∇λ_k(w(ξ)) · r_k(w(ξ))].

Using this result, the ordinary differential system becomes

    dw/dξ = r_k(w) / [∇λ_k(w) · r_k(w)] = r_k^norm(w)        (5.28)

where r_k^norm(w) is the normalized eigenvector that for the isothermal gas has been calculated in section 4.4:

    r_{1,2}^norm(w) = (∓ρ/a, ρ ∓ m/a)        (5.29)

For this gasdynamic model the ODE system (5.28) assumes the form

    dρ/dξ = ∓ρ/a,
    dm/dξ = ρ ∓ m/a        (5.30)

This system must be supplemented by initial conditions, giving the initial value problem

    dρ/dξ = ∓ρ/a,     ρ(ξ̄) = ρ̄,
    dm/dξ = ρ ∓ m/a,  m(ξ̄) = m̄,        (5.31)

where

    ξ̄ = λ_{1,2}(w̄) = m̄/ρ̄ ∓ a        (5.32)
It must be stressed that, differently from most initial value problems of mathematical physics, here the initial value ξ̄ of the independent variable cannot be chosen at will, but is fixed by the relation (5.32) for the eigenvalue. The solution of the first equation of the system is quite simple and is

    ρ(ξ) = Ā e^{∓ξ/a}

where Ā is a constant determined by imposing the first initial condition of (5.31), to give

    ρ(ξ̄) = Ā e^{∓ξ̄/a} = ρ̄    ⟹    Ā = ρ̄ e^{±ξ̄/a}        (5.33)

This result can be replaced in the right-hand side of the second equation of (5.31), which becomes

    dm/dξ ± m/a = ρ̄ e^{∓(ξ−ξ̄)/a}        (5.34)

This equation is a linear, nonhomogeneous, first-order ODE and its solution can be sought as the sum of the general solution of the homogeneous equation and a particular solution of the complete equation, as follows

    m(ξ) = m_homo(ξ) + m_part(ξ)

with an obvious meaning of the two components. The particular solution is sought in the form m_part(ξ) = K ξ e^{∓ξ/a}, with the value of the constant K obtained by substituting m_part(ξ) into equation (5.34), to give K = ρ̄ e^{±ξ̄/a}, so that

    m_part(ξ) = ρ̄ ξ e^{∓(ξ−ξ̄)/a}

The solution of the homogeneous equation is easily found to be m_homo(ξ) = C e^{∓ξ/a}, with the constant C to be determined by imposing on the general solution

    m(ξ) = C e^{∓ξ/a} + ρ̄ ξ e^{∓(ξ−ξ̄)/a}

the initial condition m(ξ̄) = m̄. One finds C = ±a ρ̄ e^{±ξ̄/a}, so that

    m_{1,2}(ξ) = ρ̄ (ξ ± a) e^{∓(ξ−ξ̄)/a}

The complete solution of system (5.31) is therefore written by making explicit the dependence on the initial state w̄

    ρ_{1,2}(ξ, w̄) = ρ̄ e^{∓(ξ−ξ̄)/a}        (5.35)
    m_{1,2}(ξ, w̄) = ρ̄ (ξ ± a) e^{∓(ξ−ξ̄)/a}        (5.36)
where, for the left wave, ξ̄_ℓ = m_ℓ/ρ_ℓ − a. This solution holds only for ξ > ξ̄_ℓ, since the similarity variable must increase starting from the left state w_ℓ moving to the right. From this solution it is possible to eliminate ξ and solve for m as a function of ρ. This gives an explicit expression for the integral curves in the phase plane. If we solve the first equation of (5.35) for ξ and use the expression found in the second, we obtain the integral curve

    m₁(ρ, w_ℓ) = ρ (m_ℓ/ρ_ℓ − a ln(ρ/ρ_ℓ))    for ρ < ρ_ℓ        (5.37)

which is now parametrized by the density variable.
Coming to the right wave, we can denote the similarity variable by η to distinguish this solution from the left wave. Choosing the second sign in the expressions (5.35)-(5.36), since the right wave involves the second eigenvalue, we obtain the right rarefaction wave

    ρ(η, w_r) = ρ_r e^{(η−ξ̄_r)/a},
    m₂(η, w_r) = ρ_r (η − a) e^{(η−ξ̄_r)/a},        (5.38)

where ξ̄_r = m_r/ρ_r + a, and where η < ξ̄_r, since the similarity variable must decrease moving from the right state w_r toward the left. The representation of the integral curve of the right rarefaction wave, using ρ as parameter, is

    m₂(ρ, w_r) = ρ (m_r/ρ_r + a ln(ρ/ρ_r))    for ρ < ρ_r        (5.39)
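A brief numerical sanity check of the left rarefaction (5.35)-(5.37), with an arbitrary left state: along the fan the similarity variable must coincide with the first eigenvalue of the local state, and the parametrizations in ξ and in ρ must agree.

```python
import math

a = 1.0
rho_l, m_l = 1.0, 0.2                      # arbitrary left state
xi_bar = m_l / rho_l - a                   # (5.32), first mode

def left_fan(xi):
    # left rarefaction (5.35)-(5.36), first mode (upper signs)
    rho = rho_l * math.exp(-(xi - xi_bar) / a)
    m = rho_l * (xi + a) * math.exp(-(xi - xi_bar) / a)
    return rho, m

for xi in (xi_bar + 0.1, xi_bar + 0.5, xi_bar + 1.0):
    rho, m = left_fan(xi)
    # the similarity variable coincides with lambda_1 = m/rho - a
    assert math.isclose(m / rho - a, xi, abs_tol=1e-12)
    # integral curve (5.37): m = rho (u_l - a ln(rho/rho_l)), with rho < rho_l
    assert rho < rho_l
    assert math.isclose(m, rho * (m_l / rho_l - a * math.log(rho / rho_l)),
                        abs_tol=1e-12)
```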
As calculated before, shock wave solutions are given by the following relations for the Hugoniot locus and the shock-wave speed:

    m_{1,2}(ρ, w̄) = (ρ/ρ̄) m̄ ∓ a (ρ − ρ̄) √(ρ/ρ̄)        (5.41)
    s_{1,2}(ρ, w̄) = m̄/ρ̄ ∓ a √(ρ/ρ̄)        (5.42)
Consider the data of a Riemann problem where the fluid is at rest on the right of the initial discontinuity: between w_ℓ = (ρ_ℓ, m_ℓ) and w_r = w_w ≡ (ρ_w, m_w = 0). In order that this discontinuity may produce only a shock propagating to the right into the fluid at rest, the value of the momentum m_ℓ behind the shock must be given by the momentum function above for the second eigenvalue, with the pivot state represented by the right state, i.e., ρ̄ = ρ_w and m̄ = m_w = 0. Thus, by the first equation of (5.8) we are led to the condition

    m_ℓ = m₂(ρ_ℓ; ρ_w, 0) = a (ρ_ℓ − ρ_w) √(ρ_ℓ/ρ_w)        (5.43)

or equivalently, in dimensionless form,

    m_ℓ/(a ρ_w) = (ρ_ℓ/ρ_w − 1) √(ρ_ℓ/ρ_w)        (5.44)

The value of m_ℓ depends on the two initial densities. Thus, the solution of the Riemann problem with left data (ρ_ℓ, m₂(ρ_ℓ; ρ_w, 0)) and right data (ρ_w, 0), with ρ_ℓ > ρ_w, consists of a single shock wave propagating to the right with speed

    s_incid = s₂(ρ_ℓ; ρ_w, 0) = a √(ρ_ℓ/ρ_w)        (5.45)
This shock wave reaches the rigid wall and is there reflected, to propagate in the opposite direction. Just after the reflection, the left state of the fluid (ρ_ℓ, m_ℓ) becomes the state in front of a shock propagating to the left. Thus the solution above of the Hugoniot locus can be used with the pivot state being now the left state, i.e., ρ̄ = ρ_ℓ and m̄ = m_ℓ, and the first solution, with the minus sign, must be considered. But the fluid behind the reflected wave is at rest due to the velocity boundary condition on the fixed wall. Therefore, indicating by ρ_w′ the fluid density behind the reflected wave, we must have m₁(ρ_w′, w_ℓ) = 0; in other words, by the first equation of system (5.8) we have the condition:

    m₁(ρ_w′, w_ℓ) = (ρ_w′/ρ_ℓ) m_ℓ − a (ρ_w′ − ρ_ℓ) √(ρ_w′/ρ_ℓ) = 0        (5.46)

Substituting the expression (5.43) of m_ℓ gives

    (ρ_w′/ρ_ℓ)(ρ_ℓ − ρ_w) √(ρ_ℓ/ρ_w) = (ρ_w′ − ρ_ℓ) √(ρ_w′/ρ_ℓ)        (5.47)

Introducing the unknown x = ρ_w′/ρ_ℓ and the density ratio r = ρ_w/ρ_ℓ, the equation becomes

    1/√r − √r = √x − 1/√x        (5.48)

or equivalently

    (1 − r) √x = √r (x − 1)        (5.49)

whose squaring gives the quadratic equation (1 − r)² x = r (x − 1)²
whose solutions are x₁ = r and x₂ = 1/r. The first solution x₁ = r gives ρ_w′ = ρ_w and is a spurious solution introduced by the squaring, to be discarded. The second solution x₂ = 1/r gives instead the density behind the reflected shock wave, ρ_w′ = ρ_ℓ²/ρ_w > ρ_ℓ, which corresponds to a gas compression, as required.
By the second equation of system (5.8) the reflected wave propagates at the speed

    s_re = s₁(ρ_w′, w_ℓ) = m_ℓ/ρ_ℓ − a √(ρ_w′/ρ_ℓ)        (5.51)

Substituting the value of m_ℓ just established and ρ_w′ = ρ_ℓ²/ρ_w, we obtain, in dimensionless form,

    s_re/a = −√(ρ_w/ρ_ℓ)        (5.52)

The absolute value of this quantity is < 1, but we must observe that the reflection is described in the reference frame of the wall, in which the velocity of the fluid entering the shock front sums with that of the moving front of the reflected shock.
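The reflection formulas above can be packaged in a short script (with an arbitrary density ratio of our choosing), checking that the incident-shock momentum (5.43), the reflected density ρ_w′ = ρ_ℓ²/ρ_w and the reflected speed (5.52) are mutually consistent with the Hugoniot relations (5.8):

```python
import math

a = 1.0
rho_w = 1.0                 # fluid at rest ahead of the incident shock
rho_l = 2.5                 # compressed left state (arbitrary, rho_l > rho_w)

# incident shock (5.43)-(5.45): pivot state (rho_w, 0), second family
m_l = a * (rho_l - rho_w) * math.sqrt(rho_l / rho_w)
s_incid = a * math.sqrt(rho_l / rho_w)

# reflected shock: density (second root x2 = 1/r) and speed (5.51)
rho_w2 = rho_l**2 / rho_w
s_re = m_l / rho_l - a * math.sqrt(rho_w2 / rho_l)

# checks: wall condition m_1(rho_w2, w_l) = 0 and the closed form (5.52)
m1 = rho_w2 / rho_l * m_l - a * (rho_w2 - rho_l) * math.sqrt(rho_w2 / rho_l)
assert math.isclose(m1, 0.0, abs_tol=1e-12)
assert math.isclose(s_re, -a * math.sqrt(rho_w / rho_l), abs_tol=1e-12)
assert rho_w2 > rho_l > rho_w
```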
section 5, for the left state we have

    m₁(ρ, w_ℓ) = { ρ (m_ℓ/ρ_ℓ − a ln(ρ/ρ_ℓ)),            ρ < ρ_ℓ
                 { (ρ/ρ_ℓ) m_ℓ − a (ρ − ρ_ℓ) √(ρ/ρ_ℓ),   ρ > ρ_ℓ        (6.2)

while for the right state

    m₂(ρ, w_r) = { ρ (m_r/ρ_r + a ln(ρ/ρ_r)),            ρ < ρ_r
                 { (ρ/ρ_r) m_r + a (ρ − ρ_r) √(ρ/ρ_r),   ρ > ρ_r        (6.3)
The Riemann problem consists in finding the point (ρ, m) of intersection of the two functions above, and therefore boils down to the solution of the following equation

    m₁(ρ, w_ℓ) = m₂(ρ, w_r)        (6.4)

whose solution gives the density of the intermediate state. This equation is in general nonlinear, and for its resolution an iterative method is generally needed. The solution of the Riemann problem consists of two waves: a left wave which can be either a shock or a rarefaction, and a right wave which can similarly be either a shock or a rarefaction. Therefore, the complete solution consists of a combination of either two shocks, or one shock together with one rarefaction, or two rarefactions. The kind of waves of the solution depends on the initial data w_ℓ and w_r of the left and right states of the Riemann problem and is discovered after the nonlinear equation above has been solved. However, it is also possible to determine the kind of waves of the Riemann solution before solving this nonlinear equation, starting from some considerations about the states w_ℓ and w_r of the initial condition.

6.1.1 Explicit analytical rarefaction-rarefaction solution
The particularly simple form of the pressure relation of the isothermal gas allows one to determine the rarefaction-rarefaction solution of the Riemann problem by an exact analytical formula. When both waves emerging from the initial discontinuity are rarefaction waves, the density of the intermediate state between them is the solution of the equation m₁^ic(ρ, w_ℓ) = m₂^ic(ρ, w_r), which reads in explicit form, after simplifying the common factor ρ > 0,

    m_ℓ/ρ_ℓ − a ln(ρ/ρ_ℓ) = m_r/ρ_r + a ln(ρ/ρ_r)        (6.5)

Solving for ρ, the density ρ = ρ_i of the intermediate state is obtained

    ρ_i = √(ρ_ℓ ρ_r) e^{(u_ℓ − u_r)/(2a)}.        (6.6)
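A direct verification of the explicit formula (6.6) on arbitrary data producing two rarefactions (u_r − u_ℓ sufficiently large and positive):

```python
import math

a = 1.0
rho_l, u_l = 1.0, -0.3          # arbitrary left data
rho_r, u_r = 1.2, 0.4           # arbitrary right data, receding streams

# explicit two-rarefaction intermediate density (6.6)
rho_i = math.sqrt(rho_l * rho_r) * math.exp((u_l - u_r) / (2.0 * a))

# it must satisfy (6.5) and lie below both initial densities
lhs = u_l - a * math.log(rho_i / rho_l)
rhs = u_r + a * math.log(rho_i / rho_r)
assert math.isclose(lhs, rhs, abs_tol=1e-12)
assert rho_i < min(rho_l, rho_r)
```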
6.1.2 Analytical shock-shock solution

The shock-shock solution of the Riemann problem for the isothermal gas can also be obtained in closed analytical form. When both waves issuing from the initial discontinuity are shock waves, the density of the intermediate state between them is the solution of the equation m₁^Hl(ρ, w_ℓ) = m₂^Hl(ρ, w_r), which reads in explicit form

    (ρ/ρ_ℓ) m_ℓ − a (ρ − ρ_ℓ) √(ρ/ρ_ℓ) = (ρ/ρ_r) m_r + a (ρ − ρ_r) √(ρ/ρ_r)        (6.7)

By simple algebra, we obtain the equation for the density ρ = ρ_i of the intermediate state

    (ρ − ρ_ℓ)/√(ρ ρ_ℓ) + (ρ − ρ_r)/√(ρ ρ_r) = (u_ℓ − u_r)/a        (6.8)

which reduces to the following quadratic equation

    (1/√ρ_ℓ + 1/√ρ_r) z² − ((u_ℓ − u_r)/a) z − (√ρ_ℓ + √ρ_r) = 0,        (6.9)

for the unknown z = √ρ.
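The quadratic (6.9) gives the two-shock intermediate state in closed form; a sketch on symmetric colliding streams (our sample data), where the positive root is the physical one:

```python
import math

a = 1.0
rho_l, u_l = 1.0, 0.5           # colliding streams: u_l > u_r,
rho_r, u_r = 1.0, -0.5          # strong enough for a two-shock solution

# quadratic (6.9): A z^2 + B z + C = 0 with z = sqrt(rho)
A = 1.0 / math.sqrt(rho_l) + 1.0 / math.sqrt(rho_r)
B = -(u_l - u_r) / a
C = -(math.sqrt(rho_l) + math.sqrt(rho_r))
z = (-B + math.sqrt(B**2 - 4.0 * A * C)) / (2.0 * A)   # positive root
rho_i = z**2

# the intermediate density must satisfy (6.8) and exceed both data densities
res = (rho_i - rho_l) / math.sqrt(rho_i * rho_l) \
    + (rho_i - rho_r) / math.sqrt(rho_i * rho_r)
assert math.isclose(res, (u_l - u_r) / a, abs_tol=1e-12)
assert rho_i > max(rho_l, rho_r)
```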
is sufficiently high, it is intuitive that two rarefaction fans will form between the two fluids, emerging from the initial discontinuity. Then, it is expected that there will be a limiting value of the relative velocity Δu ≡ u_r − u_ℓ above which the solution exhibits two rarefaction fans; this value will be indicated by Δu_{2f}. On the other side, we can suppose a different specification of the initial velocities, in which the fluid on the right is pushing to the left at u_r < 0 and at the same time the fluid on the left is pushing to the right at u_ℓ > 0. For a sufficiently high negative Δu, the initial discontinuity will generate two shock waves. The limiting relative velocity for producing a two-shock solution will be denoted by Δu_{2s}. For a relative velocity falling inside the interval Δu_{2s} < Δu < Δu_{2f}, the solution of the Riemann problem will consist of one rarefaction and one shock.
Our target is now to find the analytic expressions of Δu_{2s} and Δu_{2f} in terms of the data w_ℓ and w_r. For that purpose it is convenient to determine preliminarily the behaviour of the velocity along the shock and rarefaction solutions connected to the left and right states. More precisely, we define

    u_i(ρ, w̄) ≡ m_i(ρ, w̄)/ρ,    i = 1, 2,        (6.11)
and similarly for the right wave. From equations (6.2) it follows immediately that the velocity along the left wave is

    u₁(ρ, w_ℓ) = { u_ℓ − a ln(ρ/ρ_ℓ),            ρ < ρ_ℓ
                 { u_ℓ − a (ρ − ρ_ℓ)/√(ρ ρ_ℓ),   ρ > ρ_ℓ        (6.12)

where u_ℓ ≡ m_ℓ/ρ_ℓ, and from (6.3) the velocity along the right wave is

    u₂(ρ, w_r) = { u_r + a ln(ρ/ρ_r),            ρ < ρ_r
                 { u_r + a (ρ − ρ_r)/√(ρ ρ_r),   ρ > ρ_r        (6.13)

where u_r ≡ m_r/ρ_r. In the light of the considerations above, the Riemann problem can be recast in terms of the velocity as follows:

    u₁(ρ, w_ℓ) = u₂(ρ, w_r)        (6.14)

which is still a single nonlinear equation in the unknown ρ.
By equations (6.12) and (6.13), when the two states are connected with w_ℓ and w_r by a shock wave this difference is found to be

    Δu^{sw-sw}(ρ, w_ℓ, w_r) = u_r − u_ℓ + a [(ρ − ρ_ℓ)/√(ρ ρ_ℓ) + (ρ − ρ_r)/√(ρ ρ_r)].        (6.16)

By inspection, this function is seen to be monotonically increasing with ρ. Therefore, the limiting value of Δu for obtaining a solution with two shocks, marking the transition to solutions with one rarefaction wave, will correspond to the maximum value of the independent variable ρ. This value, due to the double constraint ρ > ρ_ℓ and ρ > ρ_r for a solution consisting of two shock waves, will be the greater of ρ_ℓ and ρ_r. We then define ρ_max = max(ρ_ℓ, ρ_r) and ρ_min = min(ρ_ℓ, ρ_r), and substitute ρ = ρ_max in the expression (6.16) of the relative velocity; in any case, one of the two terms within parentheses vanishes. Imposing Δu^{sw-sw}(ρ_max, w_ℓ, w_r) = 0, we find the limiting value

    Δu_{2s}(ρ_ℓ, ρ_r) ≡ −a (√(ρ_max/ρ_min) − √(ρ_min/ρ_max))        (6.17)

which can also be written as follows

    Δu_{2s}(ρ_ℓ, ρ_r) = −a (ρ_max − ρ_min)/√(ρ_max ρ_min) = −a |ρ_ℓ − ρ_r|/√(ρ_ℓ ρ_r)        (6.18)
When the solution consists of two rarefaction waves, equations (6.12) and (6.13) select the branches satisfying $\rho < \rho_\ell$ and $\rho < \rho_r$, in accordance with the entropy condition. Consequently, the velocity difference $\Delta u_{\mathrm{rar\text{-}rar}}(\rho, w_\ell, w_r)$ for a two-rarefaction solution is given by
$$\Delta u_{\mathrm{rar\text{-}rar}}(\rho, w_\ell, w_r) = u_r - u_\ell + a \ln\frac{\rho^2}{\rho_\ell\rho_r}, \tag{6.19}$$
and this function is manifestly a monotonically increasing function of $\rho$. Therefore the limiting value of $\Delta u$ for a solution with two rarefaction waves, marking the transition to solutions with one shock wave, is obtained for the largest value of $\rho$ compatible with the double constraint $\rho < \rho_\ell$ and $\rho < \rho_r$: the sought-for value of the density must be the minimum between $\rho_\ell$ and $\rho_r$. Imposing $\Delta u_{\mathrm{rar\text{-}rar}}(\rho_{\min}, w_\ell, w_r) = 0$, it follows that the limiting relative velocity for a two-rarefaction solution of the Riemann problem is
$$(\Delta u)_{2f}(\rho_\ell, \rho_r) \equiv -a \ln\frac{\rho_{\min}^2}{\rho_\ell\rho_r} = a \ln\frac{\rho_{\max}}{\rho_{\min}}. \tag{6.20}$$
For $u_r - u_\ell < (\Delta u)_{2f}(\rho_\ell, \rho_r)$ the solution of the Riemann problem comprises one shock wave and one rarefaction wave, while for greater values both waves are expansion waves. We have thus defined the two relative velocities that separate the three kinds of solution of the Riemann problem for the isothermal ideal gas:
$$\begin{array}{ll} \Delta u < (\Delta u)_{2s}(\rho_\ell, \rho_r) & \text{two-shock zone} \\ (\Delta u)_{2s}(\rho_\ell, \rho_r) < \Delta u < (\Delta u)_{2f}(\rho_\ell, \rho_r) & \text{shock--rarefaction zone} \\ (\Delta u)_{2f}(\rho_\ell, \rho_r) < \Delta u & \text{two-rarefaction zone} \end{array}$$
Limiting values of $\Delta u$ exist also for P-systems different from the isothermal ideal gas considered here. The expressions valid for the general case will depend on the form of the pressure function $P = P(\rho)$. They are derived in appendix A, where the possibility of solutions with very strong rarefaction waves, with a region of vacuum between their receding tails, will also be analyzed.

To conclude this section on the isothermal ideal gas, it is worthwhile to describe how the type of the solution of the Riemann problem changes with the intermediate density $\rho$ of the solution itself, relative to the interval $[\rho_{\min}, \rho_{\max}]$. From what we have found in the previous two sections, for $\rho < \rho_{\min}$ the solution consists of two rarefaction waves, for $\rho_{\min} < \rho < \rho_{\max}$ one wave is a rarefaction and the other is a shock, while for $\rho > \rho_{\max}$ the two waves are both shocks. In particular, the intermediate case can occur with either $\rho_\ell < \rho_r$ or $\rho_r < \rho_\ell$. In the first case, by equations (6.12) and (6.13) the mixed-wave solution of the Riemann problem is defined by the equation
$$u_\ell - a\,\frac{\rho - \rho_\ell}{\sqrt{\rho\rho_\ell}} = u_r + a \ln\frac{\rho}{\rho_r}.$$
Since $\rho_\ell = \rho_{\min}$ and $\rho_r = \rho_{\max}$, this equation for $\rho$ can be recast as the following relation involving the relative velocity:
$$u_r - u_\ell = -a\,\frac{\rho - \rho_{\min}}{\sqrt{\rho\rho_{\min}}} - a \ln\frac{\rho}{\rho_{\max}}.$$
In the second case, with $\rho_r < \rho_\ell$, the mixed-wave solution is given by the equation
$$u_\ell - a \ln\frac{\rho}{\rho_\ell} = u_r + a\,\frac{\rho - \rho_r}{\sqrt{\rho\rho_r}}.$$
Now $\rho_\ell = \rho_{\max}$ and $\rho_r = \rho_{\min}$, and therefore the previous equation for $\rho$ can be recast in the following form involving the relative velocity:
$$u_r - u_\ell = -a \ln\frac{\rho}{\rho_{\max}} - a\,\frac{\rho - \rho_{\min}}{\sqrt{\rho\rho_{\min}}}.$$
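As a check of these formulas, the two limiting relative velocities (6.17) and (6.20) can be evaluated numerically and used to classify the wave pattern. The following Python sketch is ours, purely for illustration (it is not part of the Fortran library of this chapter); the function names are our own:

```python
import math

def du_2s(rho_l, rho_r, a):
    """Limiting relative velocity for a two-shock solution, eq. (6.17)."""
    rmax, rmin = max(rho_l, rho_r), min(rho_l, rho_r)
    return -a * (rmax - rmin) / math.sqrt(rmax * rmin)

def du_2f(rho_l, rho_r, a):
    """Limiting relative velocity for a two-rarefaction solution, eq. (6.20)."""
    rmax, rmin = max(rho_l, rho_r), min(rho_l, rho_r)
    return a * math.log(rmax / rmin)

def classify(rho_l, u_l, rho_r, u_r, a):
    """Wave pattern of the Riemann problem for the isothermal ideal gas."""
    du = u_r - u_l
    if du < du_2s(rho_l, rho_r, a):
        return "two shocks"
    elif du > du_2f(rho_l, rho_r, a):
        return "two rarefactions"
    else:
        return "shock and rarefaction"
```

For equal densities both thresholds vanish, so any approach velocity produces two shocks and any separation velocity two rarefactions, as expected from (6.18) and (6.20).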
defined in the interval $\xi < m_\ell/\rho_\ell - a$. To parametrize the states of the shock wave that can be connected to the right state $w_r$ we introduce a different symbol $\eta$, so that the shock solution will assume the form $\rho_2(\eta, w_r)$, $m_2(\eta, w_r)$, where $\eta_r \equiv \lambda_2(w_r) = m_r/\rho_r + a$, for $\eta > m_r/\rho_r + a$.

We can therefore combine the rarefaction and shock solutions in a single function of the same variable. For the left wave, with $\xi_\ell \equiv m_\ell/\rho_\ell - a$, the function giving the density is
$$\rho_1(\xi, w_\ell) = \begin{cases} \rho_\ell\, e^{-(\xi - \xi_\ell)/a}, & \xi > \xi_\ell, \\[0.5ex] \rho_\ell \left[1 - (\xi - \xi_\ell)/a\right], & \xi < \xi_\ell, \end{cases} \tag{6.21}$$
and that for the momentum is
$$m_1(\xi, w_\ell) = \begin{cases} \rho_\ell\,(\xi + a)\, e^{-(\xi - \xi_\ell)/a}, & \xi > \xi_\ell, \\[0.5ex] m_\ell \left[1 - (\xi - \xi_\ell)/a\right] + \rho_\ell\,(\xi - \xi_\ell)\sqrt{1 - (\xi - \xi_\ell)/a}, & \xi < \xi_\ell. \end{cases} \tag{6.22}$$
The analogous expressions for the functions pertaining to the right wave are, for the density,
$$\rho_2(\eta, w_r) = \begin{cases} \rho_r\, e^{(\eta - \eta_r)/a}, & \eta < \eta_r, \\[0.5ex] \rho_r \left[1 + (\eta - \eta_r)/a\right], & \eta > \eta_r, \end{cases} \tag{6.23}$$
and, for the momentum,
$$m_2(\eta, w_r) = \begin{cases} \rho_r\,(\eta - a)\, e^{(\eta - \eta_r)/a}, & \eta < \eta_r, \\[0.5ex] m_r \left[1 + (\eta - \eta_r)/a\right] + \rho_r\,(\eta - \eta_r)\sqrt{1 + (\eta - \eta_r)/a}, & \eta > \eta_r. \end{cases}$$
The Riemann problem can therefore be formulated as the following system
$$\rho_1(\xi, w_\ell) = \rho_2(\eta, w_r), \qquad m_1(\xi, w_\ell) = m_2(\eta, w_r), \tag{6.24}$$
of two nonlinear equations in the two unknowns $\xi$ and $\eta$.
MODULE two_eqs_riemann_solver

   ! Conservative variables:  w1 = rho,  w2 = m

   USE isothermal_pig

   IMPLICIT NONE

CONTAINS

!================================================================

SUBROUTINE two_eqs_riemann_solver_isot_pig  &
          (w_L, w_R,  w_I,  rel_err, iterations)
   !===============================================================
   !
   !   Solution of the Riemann Problem for the
   !   isothermal version of the polytropic ideal gas
   !
   !   Iterative solution by Newton method in incremental form
   !   for the SYSTEM of 2 EQUATIONS
   !
   !      phi_LR(csi, eta) = 0
   !      psi_LR(csi, eta) = 0
   !
   !   where
   !
   !      phi_LR(csi, eta) <=== rho(csi; L) - rho(eta; R)
   !      psi_LR(csi, eta) <=== mom(csi; L) - mom(eta; R)
   !
   !===============================================================
   !---------------------------------------------------------------
   !  INPUT
   !
   !     w_L : left state  (density, momentum)
   !     w_R : right state (density, momentum)
   !
   !  OPTIONAL
   !
   !     rel_err    : relative error (of depth) to stop the
   !                  iterations  [default = 1.0d-7]
   !
   !     iterations : maximum number of iterations
   !                  [default = 100]
   !
   !---------------------------------------------------------------
   !  OUTPUT
   !
   !     w_I : intermediate state between the waves
   !
   !  OPTIONAL
   !
   !     iterations : number of actually computed iterations
   !===============================================================
   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(:), INTENT(IN)    :: w_L, w_R
   REAL(KIND=8), DIMENSION(:), INTENT(OUT)   :: w_I
   REAL(KIND=8), OPTIONAL,     INTENT(IN)    :: rel_err
   INTEGER,      OPTIONAL,     INTENT(INOUT) :: iterations

   REAL(KIND=8), DIMENSION(2,2) :: AA
   REAL(KIND=8), DIMENSION(2)   :: w1, w2
   REAL(KIND=8) :: lambda_L, lambda_R, det, dr, dm,  &
                   csi, eta, d_csi, d_eta
   INTEGER      :: max_It_Int  = 100      ! default
   REAL(KIND=8) :: rel_err_Int = 1.0d-07  ! default
   INTEGER      :: it
   IF (PRESENT(iterations))  max_It_Int  = iterations
   IF (PRESENT(rel_err))     rel_err_Int = rel_err
   lambda_L = w_L(2)/w_L(1) - a
   lambda_R = w_R(2)/w_R(1) + a

   ! initial guess for the curve parameters

   csi = lambda_L;   eta = lambda_R
   d_csi = csi;      d_eta = eta
   ! solve the Riemann problem by Newton iteration

   WRITE(*,*)
   WRITE(*,*) 'Riemann solver start'

   DO it = 1, max_It_Int

      ! convergence check
      IF ((d_csi**2 + d_eta**2)/(csi**2 + eta**2) < rel_err_Int) THEN

         CALL loci (1, csi, w_L,  w_I)

         IF (PRESENT(iterations))  iterations = it

         WRITE(*,*)
         WRITE(*,*) 'Successful convergence in Newton iteration!'

         RETURN

      ENDIF

      CALL loci (1, csi, w_L,  w1, AA(:,1))
      CALL loci (2, eta, w_R,  w2, AA(:,2));   AA(:,2) = - AA(:,2)

      det = AA(1,1) * AA(2,2)  -  AA(1,2) * AA(2,1)
      IF (ABS(det) < 1.0d-13) THEN
         WRITE(*,*)
         WRITE(*,*) 'Vanishing determinant of the Newton matrix'
         WRITE(*,*) 'in two_eqs_riemann_solver_isot_pig.'
         WRITE(*,*)
         WRITE(*,*) 'STOP.'
         STOP
      ENDIF

      dr = w1(1) - w2(1)
      dm = w1(2) - w2(2)

      d_csi = - (AA(2,2) * dr  -  AA(1,2) * dm)/det
      d_eta = - (AA(1,1) * dm  -  AA(2,1) * dr)/det

      csi = csi + d_csi
      eta = eta + d_eta

   ENDDO
END SUBROUTINE two_eqs_riemann_solver_isot_pig
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
SUBROUTINE loci (i_12, csi, w_,  w, Dw)

   ! Isothermal polytropic ideal gas in one dimension
   !
   ! Evaluation of the bifurcating functions and their derivatives
   ! defining the states w = (rho, m) that can be connected with a
   ! given pivotal state w_ = (rho_, m_) for a given value of the
   ! parameter variable csi along the curve.
   !
   ! The variable csi has the physical dimensions of a velocity.
   ! For the rarefaction wave, csi is the standard similarity
   ! variable, while for the shock wave it is a curve parameter
   ! assuring the continuity, at the pivotal point, of the
   ! functions and of their derivatives along the curve.

   IMPLICIT NONE

   INTEGER,                    INTENT(IN)  :: i_12
   REAL(KIND=8),               INTENT(IN)  :: csi
   REAL(KIND=8), DIMENSION(:), INTENT(IN)  :: w_
   REAL(KIND=8), DIMENSION(:), INTENT(OUT) :: w
   REAL(KIND=8), DIMENSION(:), OPTIONAL    :: Dw

   REAL(KIND=8) :: s, rho_, m_, csi_, c_c, s_c_ca, ops, t

   s = (-1)**i_12   ! s = -1  for  i_12 == 1
                    ! s = +1  for  i_12 == 2

   rho_ = w_(1)
   m_   = w_(2)

   csi_ = m_/rho_ + s * a

   c_c    = csi - csi_
   s_c_ca = s * c_c/a

   IF (s * c_c > 0) THEN   ! shock wave

      ops = 1 + s_c_ca;   t = SQRT(ops)

      w(1) = rho_ * ops
      w(2) = m_ * ops  +  rho_ * c_c * t

      IF (PRESENT(Dw)) THEN
         Dw(1) = s * rho_/a
         Dw(2) = s * m_/a  +  rho_ * (t + s_c_ca/(2*t))
      ENDIF

   ELSE   ! rarefaction wave

      t = EXP(s_c_ca)

      w(1) = rho_ * t
      w(2) = w(1) * (csi - s*a)

      IF (PRESENT(Dw)) THEN
         Dw(1) = s * w(1)/a
         Dw(2) = Dw(1) * csi
      ENDIF

   ENDIF

END SUBROUTINE loci

END MODULE two_eqs_riemann_solver
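The structure of the solver above can be mirrored in a few lines of Python. The sketch below is an illustrative translation (ours, with our own function names, assuming a sound speed a = 1): it reproduces the bifurcating functions of SUBROUTINE loci and the incremental Newton iteration for the system (6.24):

```python
import math

a = 1.0  # isothermal sound speed (assumed value for this example)

def loci(i_12, csi, w_piv):
    """States w = (rho, m) connected to the pivot w_piv and their derivatives
    with respect to the curve parameter csi (cf. SUBROUTINE loci above)."""
    rho_, m_ = w_piv
    s = (-1.0)**i_12          # s = -1 for the left wave, s = +1 for the right
    csi_ = m_/rho_ + s*a      # parameter value at the pivotal state
    c = csi - csi_
    if s*c > 0:               # shock branch
        ops = 1 + s*c/a
        t = math.sqrt(ops)
        w  = (rho_*ops, m_*ops + rho_*c*t)
        Dw = (s*rho_/a, s*m_/a + rho_*(t + s*c/(2*a*t)))
    else:                     # rarefaction branch
        t = math.exp(s*c/a)
        w  = (rho_*t, rho_*t*(csi - s*a))
        Dw = (s*rho_*t/a, s*rho_*t*csi/a)
    return w, Dw

def riemann_two_eqs(w_L, w_R, tol=1e-10, max_it=100):
    """Incremental Newton iteration for the system (6.24)."""
    csi = w_L[1]/w_L[0] - a   # initial guess: lambda_1(w_L)
    eta = w_R[1]/w_R[0] + a   # initial guess: lambda_2(w_R)
    for _ in range(max_it):
        w1, D1 = loci(1, csi, w_L)
        w2, D2 = loci(2, eta, w_R)
        dr, dm = w1[0] - w2[0], w1[1] - w2[1]      # residuals of (6.24)
        det = -D1[0]*D2[1] + D1[1]*D2[0]           # det of Jacobian [D1, -D2]
        d_csi = -(-D2[1]*dr + D2[0]*dm)/det
        d_eta = -( D1[0]*dm - D1[1]*dr)/det
        csi, eta = csi + d_csi, eta + d_eta
        if d_csi**2 + d_eta**2 < tol**2 * (csi**2 + eta**2 + 1):
            break
    return loci(1, csi, w_L)[0]                    # intermediate state (rho, m)
```

For the symmetric data $w_\ell = (1, 0.5)$, $w_r = (1, -0.5)$ the iteration converges to the two-shock intermediate state $m^* = 0$, $\rho^* \approx 1.640$, the value found analytically from (6.12)-(6.14).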
$$C_j \equiv \big(x_{j-\frac12},\; x_{j+\frac12}\big), \tag{7.1}$$
with end points $x_{j+\frac12} = (x_j + x_{j+1})/2$ located in the middle of two consecutive grid points. The length of each cell is
$$|C_j| = \Delta x_j = x_{j+\frac12} - x_{j-\frac12} \tag{7.2}$$
and each midpoint $x_{j+\frac12}$ represents the interface between the two consecutive cells $C_j$ and $C_{j+1}$. In two- or three-dimensional problems the cells are also called finite volumes.

In the Godunov method the numerical solution $W_j^n$ provides a piecewise constant representation over the cells of the solution at time $t_n$. In other words, $W_j^n$ is the cell average of the variable $w$ over the cell $x_{j-\frac12} < x < x_{j+\frac12}$. This piecewise constant representation of the solution at time $t = t_n$ is used as initial data for the hyperbolic system, which is now solved for $t > t_n$. The equations are solved exactly over a short time interval by solving the Riemann problems defined at all the interfaces. The Riemann solutions can be pieced together at least up to a final time $t_{n+1}$ at which the waves from the Riemann problems of adjacent cells begin to interact. In this way, one obtains a solution $\tilde w^n(x, t)$, for any $x$ and for any time in the
interval $t_n < t < t_{n+1}$. This solution is the exact weak solution to the initial value problem with piecewise constant initial data and can be exploited to obtain the new numerical solution $W_j^{n+1}$ by a cell averaging process:
$$W_j^{n+1} = \frac{1}{|C_j|} \int_{x_{j-\frac12}}^{x_{j+\frac12}} \tilde w^n(x, t_{n+1})\, dx, \tag{7.3}$$
where the time $t_{n+1}$ in the integrand must be noticed. Thus a piecewise constant representation of the unknown at the final time $t_{n+1}$ has been determined.
Since $\tilde w^n(x,t)$ is an exact weak solution, it satisfies the integral form of the conservation law over each cell:
$$\int_{x_{j-\frac12}}^{x_{j+\frac12}} \tilde w^n(x, t_{n+1})\, dx = \int_{x_{j-\frac12}}^{x_{j+\frac12}} \tilde w^n(x, t_n)\, dx + \int_{t_n}^{t_{n+1}} f\big(\tilde w^n(x_{j-\frac12}, t)\big)\, dt - \int_{t_n}^{t_{n+1}} f\big(\tilde w^n(x_{j+\frac12}, t)\big)\, dt. \tag{7.4}$$
By dividing this relation by $|C_j|$ and reminding that the integral of $\tilde w^n(x, t_n)$ gives the initial cell averages $W_j^n$, the equation above reduces to
$$W_j^{n+1} = W_j^n - \frac{\Delta t}{|C_j|} \left[ F\big(W_j^n, W_{j+1}^n\big) - F\big(W_{j-1}^n, W_j^n\big) \right], \tag{7.5}$$
where the numerical flux is defined by
$$F\big(W_j^n, W_{j+1}^n\big) \equiv \frac{1}{\Delta t} \int_{t_n}^{t_{n+1}} f\big(\tilde w^n(x_{j+\frac12}, t)\big)\, dt. \tag{7.6}$$
The time integral is trivial since $\tilde w^n(x_{j+\frac12}, t)$ is constant over the time interval. This follows from the fact that the solution of the Riemann problem at the interface $x_{j+\frac12}$ is a similarity solution, constant along each ray $(x - x_{j+\frac12})/t = \text{constant}$.
The similarity solution of the Riemann problem with data $W_j$ and $W_{j+1}$, $w_{\text{Riemann}}(\xi, W_j, W_{j+1})$, is well defined for any value of the similarity variable $\xi$. The solution for $\xi = 0$ gives the vertical state, that will be denoted as follows:
$$\bar w(W_j, W_{j+1}) \equiv w_{\text{Riemann}}(0, W_j, W_{j+1}). \tag{7.7}$$
The corresponding numerical flux for the Godunov method is defined by evaluating the flux vector analytically at the vertical state and will be given by
$$F_{j+\frac12} = F(W_j, W_{j+1}) \equiv f\big(\bar w(W_j, W_{j+1})\big). \tag{7.8}$$
In terms of this numerical flux, the time advancement of the Godunov method will be expressed in the following form:
$$W_j^{n+1} = W_j^n - \frac{\Delta t}{|C_j|} \left[ F_{j+\frac12}^n - F_{j-\frac12}^n \right]. \tag{7.9}$$
The scheme is implemented in the form of a cycle over the interfaces: after the initialization $W_j^{n+1} \leftarrow W_j^n$, at each interface the cell averages of the two neighbouring cells $j$ and $j+1$ are updated as follows:
$$W_j^{n+1} \leftarrow W_j^{n+1} - \frac{\Delta t}{|C_j|}\, F_{j+\frac12}^n \qquad \text{and} \qquad W_{j+1}^{n+1} \leftarrow W_{j+1}^{n+1} + \frac{\Delta t}{|C_{j+1}|}\, F_{j+\frac12}^n. \tag{7.10}$$
The time step $\Delta t$ is subject to a restriction to guarantee numerical stability in the discrete time integration, which assumes the classical form of the CFL condition
$$\Delta t\, \max_{k=1,2;\; j} \frac{\big|\lambda_k\big(W_j^n\big)\big|}{|C_j|} < 1. \tag{7.11}$$
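The interface cycle (7.10) with the CFL restriction (7.11) can be illustrated on a scalar conservation law, for which the exact interface Riemann flux has a simple closed form: the minimum of $f$ between the two cell values when they are increasing, the maximum when they are decreasing. The following Python sketch is ours, purely illustrative; Burgers' flux $f(u) = u^2/2$ is assumed in place of the systems treated in this chapter:

```python
import numpy as np

def f(u):                        # scalar flux: Burgers' equation, as an example
    return 0.5*u*u

def godunov_flux(ul, ur):
    """Exact interface flux of the Godunov method for a scalar convex flux."""
    if ul <= ur:                 # minimum of f over [ul, ur] (rarefaction side)
        return min(f(ul), f(ur), f(0.0) if ul < 0.0 < ur else f(ul))
    return max(f(ul), f(ur))     # maximum of f over [ur, ul] (shock side)

def godunov_step(W, dx, cfl=0.9):
    """One time step of (7.10) on a periodic grid, with dt from the CFL (7.11)."""
    n = len(W)
    dt = cfl*dx/max(np.max(np.abs(W)), 1e-14)   # max |f'(u)| = max |u| for Burgers
    Wn = W.copy()
    for j in range(n):                          # cycle over the interfaces j+1/2
        F = dt/dx * godunov_flux(W[j], W[(j + 1) % n])
        Wn[j]           -= F                    # update of cell j ...
        Wn[(j + 1) % n] += F                    # ... and of cell j+1, as in (7.10)
    return Wn, dt
```

Because each interface flux is subtracted from one cell and added to its neighbour, the total mass is conserved to rounding error, which is the discrete counterpart of the conservation form (7.4)-(7.5).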
SUBROUTINE god_num_flux (xx, ww,  FF_G)   ! ORDERED GRID

   ! Given the vector xx of nodal points and the vector ww of
   ! the cell averages of the conservative variables, the
   ! program calculates the Godunov vertical flux for the
   ! Euler equations of gasdynamics for a polytropic ideal gas
   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN)  :: ww
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: FF_G

   REAL(KIND=8), DIMENSION(SIZE(ww,1)) :: wl, wr, wi, ws,  &
                                          lambda_l, lambda_r, lambda_i

   REAL(KIND=8) :: vl, ul, vi, ui, vr, ur, ss   ! ss = shock speed
   INTEGER :: i, j
   j = SIZE(xx)

   DO i = 1, SIZE(FF_G, 2);   j = i

      ! Special situations in which the solution
      ! of the Riemann problem can be avoided

      wl = ww(:, j)
      lambda_l = eigenvalue(wl)

      IF (lambda_l(1) > 0) THEN
         FF_G(:, i) = flux(wl)
         CYCLE
      ENDIF
      wr = ww(:, j+1)
      lambda_r = eigenvalue(wr)

      IF (lambda_r(2) < 0) THEN
         FF_G(:, i) = flux(wr)
         CYCLE
      ENDIF

      ! Intermediate state wi between the two waves,
      ! from the exact Riemann solver
      CALL two_eqs_riemann_solver_isot_pig (wl, wr,  wi)

      ! Examine the three waves, starting from left

      vl = 1/wl(1);   ul = wl(2)/wl(1)
      vi = 1/wi(1);   ui = wi(2)/wi(1)
      ! Is the first wave a shock wave or a rarefaction fan?

      IF (vi < vl) THEN   ! The first wave is a shock wave

         ! Compute the shock speed
         ss = (vl*ui - vi*ul)/(vl - vi)

         IF (ss > 0) THEN
            ! The shock propagates to the right:
            ! the vertical state is wl
            FF_G(:, i) = flux(wl)
            CYCLE
         ENDIF

      ELSE   ! The first wave is a rarefaction fan

         IF (lambda_l(1) >= 0) THEN
            ! The rarefaction wave propagates all to the right:
            ! the vertical state is wl
            FF_G(:, i) = flux(wl)
            CYCLE
         ELSE
            ! Is the left rarefaction fan transonic?
            lambda_i = eigenvalue(wi)
            IF (lambda_i(1) > 0) THEN
               ! Yes, it is transonic.
               ! Sonic values of the similarity solution at xi = 0:
               CALL transonic_rarefaction (1, wl,  ws)
               FF_G(:, i) = flux(ws)
               CYCLE
            ENDIF
         ENDIF   ! The first wave is a NON-transonic rarefaction

      ENDIF

      ! The first wave propagates to the left and it is
      ! either a shock wave or a (non-transonic) rarefaction
      vr = 1/wr(1);   ur = wr(2)/wr(1)

      ! Is the second wave a shock wave or a rarefaction fan?

      IF (vi < vr) THEN   ! The second wave is a shock wave

         ! Compute the shock speed
         ss = (vr*ui - vi*ur)/(vr - vi)

         IF (ss > 0) THEN
            ! The shock propagates to the right:
            ! the vertical state is wi
            FF_G(:, i) = flux(wi)
            CYCLE
         ENDIF

      ELSE   ! The second wave is a rarefaction fan

         lambda_i = eigenvalue(wi)

         IF (lambda_i(2) >= 0) THEN
            ! The rarefaction wave propagates all to the right:
            ! the vertical state is wi
            FF_G(:, i) = flux(wi)
            CYCLE
         ELSE
            ! Is the right rarefaction fan transonic?
            IF (lambda_r(2) > 0) THEN
               ! Yes, it is transonic.
               ! Sonic values of the similarity solution at xi = 0:
               CALL transonic_rarefaction (2, wr,  ws)
               FF_G(:, i) = flux(ws)
               CYCLE
            ENDIF
         ENDIF   ! The second wave is a NON-transonic rarefaction
                 ! which propagates all to the left

      ENDIF

      ! The second wave propagates to the left and it is
      ! a non-transonic rarefaction fan or a shock wave:
      ! therefore, in both cases the vertical state is wr

      FF_G(:, i) = flux(wr)

      ! Process now the next interface

   ENDDO
END SUBROUTINE god_num_flux
8 Roe linearization
As explained in the previous section 7, the application of the Godunov method to a system of hyperbolic equations requires knowing the flux $\bar F$ at each cell interface. This information is only a part of the entire solution to the Riemann problem and is simply the flux vector evaluated at the vertical state $\bar w$ embedded within the similarity solution: $\bar F = f(\bar w)$. In general, the computational effort needed to find the exact solution of the Riemann problem can be high, due to its nonlinear character. However, the full knowledge of the Riemann solution is redundant insofar as a cell average is performed by the Godunov method at the end of the time step. For this reason one can expect that the essential elements of the information can be provided also by a suitable reduction or approximation of the Riemann problem. In other words, the exact Riemann problem might be replaced by some simpler approximate counterpart which is capable of respecting fundamental properties of the nonlinear governing equations and, at the same time, of reducing the computational effort substantially.
where $\hat A = \hat A(w_\ell, w_r)$ is a matrix which depends on the left and right states and has to satisfy the following conditions:

i) $\hat A(w_\ell, w_r)\,(w_r - w_\ell) = f(w_r) - f(w_\ell)$  (conservation)

ii) $\hat A(w_\ell, w_r)$ is diagonalizable with real eigenvalues  (hyperbolicity)

iii) $\hat A(w_\ell, w_r) \to A(\bar w)$ as $w_\ell, w_r \to \bar w$  (consistency)
The first condition assures that the resulting scheme is conservative; for details see [8, section 14.1]. Another effect is that, when $w_\ell$ and $w_r$ are connected by a single discontinuity (shock or contact discontinuity) satisfying the Rankine-Hugoniot jump condition, the approximate Riemann solution agrees with the exact Riemann solution. In fact, let us suppose that the Rankine-Hugoniot jump condition
$$f(w_r) - f(w_\ell) = s\,(w_r - w_\ell) \tag{8.2}$$
is satisfied, for some $s$. By taking into account condition i) in the last relation, we obtain
$$\hat A(w_\ell, w_r)\,(w_r - w_\ell) = s\,(w_r - w_\ell), \tag{8.3}$$
showing that $w_r - w_\ell$ must be an eigenvector of $\hat A$ with eigenvalue $s$, and so the solution to the approximate linear Riemann problem also consists of a single jump $w_r - w_\ell$ propagating with speed $s$. Condition ii) assures that the linearized problem is still hyperbolic and solvable. Finally, condition iii) guarantees that smooth solutions are represented correctly, in the sense that the solutions to the approximate problem correspond to those of the original exact one. Summarizing the three conditions, the Roe method represents a conservative and consistent linearization of the hyperbolic system.

The problem now, for any given pair of states $w_\ell$ and $w_r$, is to determine the linearizing matrix $\hat A$, namely its $p \times p$ elements $\hat a_{ij}(w_\ell, w_r)$, for $i, j = 1, \dots, p$, under Roe's conditions above. For a system with $p$ equations, condition i) represents $p$ algebraic relations, while conditions ii) and iii) are merely qualitative. Therefore we have to solve a system of $p$ equations in $p^2$ unknowns, which implies that the solution is a $(p^2 - p)$-parameter family of solutions. For example, in the P-system $p = 2$ and there is a two-parameter family of solutions.

The approximate Riemann problem associated with the Roe method is linear and its solution consists only of propagating discontinuities, with no rarefaction waves. This circumstance does not introduce any difficulty in a Godunov method, with the only exception of solutions to the Riemann problem characterized by the occurrence of a transonic rarefaction. In this case the numerical flux provided by the approximate Riemann solution leads to a violation of the entropy condition and it is therefore necessary to introduce an entropy fix.
where the notation $\tilde w = \tilde w(w_\ell, w_r)$ emphasizes the dependence of the unknown intermediate state on the states at the interface. If the linearizing matrix is chosen in Jacobian form, the unknown becomes the state vector $\tilde w$ and therefore one has apparently a system with an equal number of equations and unknowns, see below. It must be noticed that the nonlinearity of the hyperbolic system prevents a simple choice like $\frac12(w_\ell + w_r)$ from being a solution. For subsequent reference, it is convenient to state the Roe conditions for the particular case of a solution sought in Jacobian form:

i) $A(\tilde w)\,(w_r - w_\ell) = f(w_r) - f(w_\ell)$  (conservation)

ii) $A(\tilde w)$ is diagonalizable with real eigenvalues  (hyperbolicity, automatically satisfied)

iii) $\tilde w(w_\ell, w_r) \to \bar w$ as $w_\ell \to \bar w$ and $w_r \to \bar w$  (consistency).
The flux function of the isothermal gas system is $f(w) = \big(m,\; m^2/\rho + a^2\rho\big)^{T}$, from which the Jacobian matrix of the isothermal gas (4.8) is easily derived:
$$A(w) = \begin{pmatrix} 0 & 1 \\[0.5ex] a^2 - \dfrac{m^2}{\rho^2} & \dfrac{2m}{\rho} \end{pmatrix}. \tag{8.6}$$
Denoting by $\tilde\rho$ and $\tilde m$ the two components of the unknown intermediate state $\tilde w$, the linearization in Jacobian form of the considered system amounts to determining the intermediate state $\tilde w$ such that the first condition i) of Roe's linearization,
$$A(\tilde w)\,(w_r - w_\ell) = f(w_r) - f(w_\ell), \tag{8.8}$$
is satisfied. This condition leads to a system of two equations in the two unknowns $\tilde\rho$ and $\tilde m$, namely
$$\begin{aligned} m_r - m_\ell &= m_r - m_\ell, \\[0.5ex] -\frac{\tilde m^2}{\tilde\rho^2}\,(\rho_r - \rho_\ell) + \frac{2\tilde m}{\tilde\rho}\,(m_r - m_\ell) &= \frac{m_r^2}{\rho_r} - \frac{m_\ell^2}{\rho_\ell}. \end{aligned} \tag{8.9}$$
The first equation is identically satisfied and therefore we have a single equation in the two components of the vector $\tilde w$. Thus we have a one-parameter family of solutions, which is determined as follows. First we note that if we rewrite the second equation of (8.9) in terms of the speed $\tilde u = \tilde m/\tilde\rho$, we obtain the following quadratic equation in $\tilde u$:
$$(\rho_r - \rho_\ell)\,\tilde u^2 - 2\,(m_r - m_\ell)\,\tilde u + \frac{m_r^2}{\rho_r} - \frac{m_\ell^2}{\rho_\ell} = 0, \tag{8.10}$$
that is,
$$\tilde u^2 - \frac{2\,(m_r - m_\ell)}{\rho_r - \rho_\ell}\,\tilde u + \frac{m_r^2/\rho_r - m_\ell^2/\rho_\ell}{\rho_r - \rho_\ell} = 0. \tag{8.11}$$
The discriminant of this equation,
$$\frac{\Delta}{4} = (m_r - m_\ell)^2 - (\rho_r - \rho_\ell)\left(\frac{m_r^2}{\rho_r} - \frac{m_\ell^2}{\rho_\ell}\right) = \left(m_\ell\sqrt{\frac{\rho_r}{\rho_\ell}} - m_r\sqrt{\frac{\rho_\ell}{\rho_r}}\right)^{\!2},$$
is a perfect square, so that there are two real distinct solutions
$$\tilde u_{1,2} = \frac{m_r - m_\ell}{\rho_r - \rho_\ell} \pm \frac{m_\ell\sqrt{\rho_r/\rho_\ell} - m_r\sqrt{\rho_\ell/\rho_r}}{\rho_r - \rho_\ell}. \tag{8.12}$$
Using the elementary identity $\rho_r - \rho_\ell = \big(\sqrt{\rho_r} - \sqrt{\rho_\ell}\big)\big(\sqrt{\rho_r} + \sqrt{\rho_\ell}\big)$, the expressions within the parentheses can be simplified with either of the two possible signs, and this yields the two solutions
$$\tilde u_1 = \frac{\sqrt{\rho_\ell}\,u_\ell + \sqrt{\rho_r}\,u_r}{\sqrt{\rho_\ell} + \sqrt{\rho_r}} \tag{8.13}$$
and
$$\tilde u_2 = \frac{\sqrt{\rho_r}\,u_r - \sqrt{\rho_\ell}\,u_\ell}{\sqrt{\rho_r} - \sqrt{\rho_\ell}}, \tag{8.14}$$
where $u_\ell = m_\ell/\rho_\ell$ and $u_r = m_r/\rho_r$.
An alternative way of obtaining the two solutions of (8.10), suggested by Marco Bernasconi, is to separate the terms associated with the left and right states on the two sides of the equation, to give
$$\rho_\ell\,\tilde u^2 - 2m_\ell\,\tilde u + \frac{m_\ell^2}{\rho_\ell} = \rho_r\,\tilde u^2 - 2m_r\,\tilde u + \frac{m_r^2}{\rho_r},$$
and interpreting each side as a square, so that
$$\rho_\ell\left(\tilde u - \frac{m_\ell}{\rho_\ell}\right)^{\!2} = \rho_r\left(\tilde u - \frac{m_r}{\rho_r}\right)^{\!2}$$
and therefore
$$\sqrt{\rho_\ell}\left(\tilde u - \frac{m_\ell}{\rho_\ell}\right) = \pm\,\sqrt{\rho_r}\left(\tilde u - \frac{m_r}{\rho_r}\right),$$
which gives the two solutions already found. Only the first solution, with the positive sign, satisfies the consistency requirement iii) under the request of matching the solution $\tilde u = (u_\ell + u_r)/2$ in the special case $\rho_\ell = \rho_r$. In conclusion, the only physically admissible solution for the velocity of the intermediate state is
$$\tilde u = \frac{\sqrt{\rho_\ell}\,u_\ell + \sqrt{\rho_r}\,u_r}{\sqrt{\rho_\ell} + \sqrt{\rho_r}}, \tag{8.15}$$
which is called the Roe average, $\tilde u^{\text{Roe}}$. It gives the speed of the unknown state as the speed averaged between the left and right states using the square root of the density as the weight.

We now recall that the solution to the linearization problem is actually a one-parameter family of solutions, while $\tilde u$ provides neither the density nor the momentum of the intermediate state. Therefore we need to specify one of the two unknowns to determine this state completely. For example, we can assign $\tilde\rho$ and obtain $\tilde m = \tilde\rho\,\tilde u$ through relation (8.15). Since $A = A(w) = A(\rho, m)$ depends on $w$ only through the velocity $u = m/\rho$, the Jacobian matrix of the linearized system can also be expressed as
$$\tilde A = A(\tilde w) = A(\tilde u) = \begin{pmatrix} 0 & 1 \\ a^2 - \tilde u^2 & 2\tilde u \end{pmatrix}, \tag{8.16}$$
namely as a function only of the intermediate velocity $\tilde u$, even though with a slight abuse of mathematical notation, since the same symbol $A$ is used to indicate two different functions, one of a vector variable and the other of a scalar variable. The eigenvalues of $\tilde A$ are given very simply by
$$\tilde\lambda_1(\tilde u) = \tilde u - a \qquad \text{and} \qquad \tilde\lambda_2(\tilde u) = \tilde u + a, \tag{8.17}$$
with the corresponding eigenvectors
$$\tilde r_1(\tilde u) = \begin{pmatrix} 1 \\ \tilde u - a \end{pmatrix} \qquad \text{and} \qquad \tilde r_2(\tilde u) = \begin{pmatrix} 1 \\ \tilde u + a \end{pmatrix}. \tag{8.18}$$
The matrices of right and left eigenvectors are given by
$$R(\tilde u) = \begin{pmatrix} 1 & 1 \\ \tilde u - a & \tilde u + a \end{pmatrix} \qquad \text{and} \qquad L(\tilde u) = R^{-1}(\tilde u) = \frac{1}{2a}\begin{pmatrix} \tilde u + a & -1 \\ a - \tilde u & 1 \end{pmatrix}. \tag{8.19}$$
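The key property of the Roe average, condition i), can be verified numerically. The short Python check below is ours, for illustration (NumPy, sound speed a = 1 assumed): it builds $A(\tilde u)$ of (8.16) with the average (8.15) and confirms that it maps the jump of the states exactly onto the jump of the fluxes:

```python
import numpy as np

a = 1.0  # isothermal sound speed (assumed value)

def flux(w):
    rho, m = w
    return np.array([m, m*m/rho + a*a*rho])

def roe_matrix(wl, wr):
    """Jacobian A(u_tilde) of (8.16) evaluated at the Roe average (8.15)."""
    sl, sr = np.sqrt(wl[0]), np.sqrt(wr[0])
    u = (sl*wl[1]/wl[0] + sr*wr[1]/wr[0])/(sl + sr)   # Roe-averaged velocity
    return np.array([[0.0, 1.0],
                     [a*a - u*u, 2*u]])

wl = np.array([1.0, 0.3])
wr = np.array([4.0, -2.0])

# Condition i): A(w_tilde) (w_r - w_l) = f(w_r) - f(w_l), satisfied exactly
lhs = roe_matrix(wl, wr) @ (wr - wl)
rhs = flux(wr) - flux(wl)
```

The identity holds exactly, not just approximately, for any pair of states; this is what makes the average (8.15) special among all interpolations between $u_\ell$ and $u_r$.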
Within the linearized problem the flux is an affine function of the state, so the flux at the vertical state can be computed from either side of the interface:
$$\bar F(w_\ell, w_r) = f(w_\ell) + \tilde A\,\big[\bar w - w_\ell\big] = f(w_r) - \tilde A\,\big[w_r - \bar w\big]. \tag{8.20}$$
Recall that a generic linear hyperbolic system with $p$ equations,
$$\partial_t w + A\,\partial_x w = 0, \tag{8.21}$$
can be reduced to the diagonal form
$$\partial_t v + \Lambda\,\partial_x v = 0, \tag{8.22}$$
where
$$\Lambda = \operatorname{diag}(\lambda_1, \lambda_2, \dots, \lambda_p) \tag{8.23}$$
and $v$ is the vector of the new variables, called characteristic variables,
$$v \equiv R^{-1} w = L\,w, \tag{8.24}$$
where $R$ is the matrix of right eigenvectors and $L = R^{-1}$ is the matrix of left eigenvectors. Consider now a Riemann problem with initial data $w_\ell$ and $w_r$. We can decompose the jump $w_r - w_\ell$ of the initial condition onto the basis of the right eigenvectors $\tilde r_k$ of the linearized problem:
$$w_r - w_\ell = \sum_{k=1}^{p} \Delta v_k\, \tilde r_k = \tilde R\, \Delta v, \tag{8.25}$$
where $\Delta v_k$, $k = 1, \dots, p$, represent the characteristic components of the considered jump. Here we have used the fact that the matrix multiplication of a vector is a linear combination of the column vectors of the matrix, with the vector components being the coefficients of the linear combination (see Trefethen). Since $\tilde L\,\tilde R = \tilde R^{-1}\tilde R = I$, the inverse of the relation above is
$$\Delta v = \tilde L\,[w_r - w_\ell]. \tag{8.26}$$
From [8, Chap. 6, p. 58], the similarity solution of the linear Riemann problem can be expressed either in the form
$$\tilde w(\xi) = w_\ell + \sum_{\tilde\lambda_k < \xi} \Delta v_k\, \tilde r_k$$
or, equivalently, as
$$\tilde w(\xi) = w_r - \sum_{\tilde\lambda_k > \xi} \Delta v_k\, \tilde r_k,$$
where $\xi \equiv x/t$ is the similarity variable and where the notation $\tilde w$ is used to remind that this solution pertains to a linearized hyperbolic system. The vertical state corresponds to $\xi = 0$ and therefore we have
$$\bar w = \tilde w(0) = w_\ell + \sum_{\tilde\lambda_k < 0} \Delta v_k\, \tilde r_k \tag{8.27}$$
or, alternatively,
$$\bar w = \tilde w(0) = w_r - \sum_{\tilde\lambda_k > 0} \Delta v_k\, \tilde r_k.$$
We are now able to compute the numerical flux, by exploiting either of the two alternative expressions above in the relations (8.20), to give
$$\bar F(w_\ell, w_r) = f(w_\ell) + \tilde A \sum_{\tilde\lambda_k < 0} \Delta v_k\, \tilde r_k = f(w_\ell) + \sum_{\tilde\lambda_k < 0} \Delta v_k\, \tilde\lambda_k\, \tilde r_k \tag{8.28}$$
or, equivalently,
$$\bar F(w_\ell, w_r) = f(w_r) - \tilde A \sum_{\tilde\lambda_k > 0} \Delta v_k\, \tilde r_k = f(w_r) - \sum_{\tilde\lambda_k > 0} \Delta v_k\, \tilde\lambda_k\, \tilde r_k, \tag{8.29}$$
where we have used the fact that $\tilde r_k$ is an eigenvector of $\tilde A$ with eigenvalue $\tilde\lambda_k$. If we introduce the operator that selects the negative part of its argument,
$$\lambda^- = N(\lambda) = \begin{cases} \lambda & \text{if } \lambda < 0, \\ 0 & \text{if } \lambda > 0, \end{cases} \tag{8.30}$$
equation (8.28) becomes
$$\bar F(w_\ell, w_r) = f(w_\ell) + \sum_{k=1}^{p} \Delta v_k\, \tilde\lambda_k^-\, \tilde r_k. \tag{8.31}$$
Alternatively, we could introduce the operator for the positive part,
$$\lambda^+ = P(\lambda) = \begin{cases} \lambda & \text{if } \lambda > 0, \\ 0 & \text{if } \lambda < 0, \end{cases} \tag{8.32}$$
so that relation (8.29) becomes
$$\bar F(w_\ell, w_r) = f(w_r) - \sum_{k=1}^{p} \Delta v_k\, \tilde\lambda_k^+\, \tilde r_k. \tag{8.33}$$
It is also possible to write the equation for the vertical flux in a symmetric form by averaging the two forms above, and we obtain
$$\bar F(w_\ell, w_r) = \frac12\big[f(w_\ell) + f(w_r)\big] + \frac12 \sum_{k=1}^{p} \Delta v_k\, \tilde\lambda_k^-\, \tilde r_k - \frac12 \sum_{k=1}^{p} \Delta v_k\, \tilde\lambda_k^+\, \tilde r_k = \frac12\big[f(w_\ell) + f(w_r)\big] - \frac12 \sum_{k=1}^{p} \Delta v_k\, \big(\tilde\lambda_k^+ - \tilde\lambda_k^-\big)\, \tilde r_k. \tag{8.34}$$
But the difference of the two quantities dependent on the eigenvalues is simply the absolute value of the eigenvalue, $\tilde\lambda_k^+ - \tilde\lambda_k^- = |\tilde\lambda_k|$, so the expression of the numerical flux of the Roe method is
$$\bar F(w_\ell, w_r) = \frac12\big[f(w_\ell) + f(w_r)\big] - \frac12 \sum_{k=1}^{p} \Delta v_k\, \big|\tilde\lambda_k\big|\, \tilde r_k. \tag{8.35}$$
It is classical to use also another form of the Roe numerical flux, in terms of the variation of the conservative variables $w_r - w_\ell = \tilde R\,\Delta v$. Since $\sum_k \Delta v_k\,|\tilde\lambda_k|\,\tilde r_k = \tilde R\,|\tilde\Lambda|\,\tilde L\,(w_r - w_\ell)$, this gives
$$\bar F(w_\ell, w_r) = \frac12\big[f(w_\ell) + f(w_r)\big] - \frac12\,\big|\tilde A\big|\,(w_r - w_\ell), \tag{8.36}$$
where $|\tilde A| \equiv \tilde R\,|\tilde\Lambda|\,\tilde L$.

Let us now particularize the Roe numerical flux to the isothermal gas system, with only two components. In this case, the expression of the vertical flux is given by the alternative expressions
$$\bar F(w_\ell, w_r) = f(w_\ell) + \Delta v_1\,\tilde\lambda_1^-\,\tilde r_1 + \Delta v_2\,\tilde\lambda_2^-\,\tilde r_2$$
and
$$\bar F(w_\ell, w_r) = f(w_r) - \Delta v_1\,\tilde\lambda_1^+\,\tilde r_1 - \Delta v_2\,\tilde\lambda_2^+\,\tilde r_2.$$
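Formula (8.36) translates directly into code. The Python sketch below is ours, purely illustrative (NumPy, isothermal gas with a = 1 assumed, no entropy fix): it assembles $|\tilde A| = \tilde R\,|\tilde\Lambda|\,\tilde L$ from the eigenstructure (8.17)-(8.19) and returns the Roe numerical flux:

```python
import numpy as np

a = 1.0  # isothermal sound speed (assumed)

def flux(w):
    rho, m = w
    return np.array([m, m*m/rho + a*a*rho])

def roe_flux(wl, wr):
    """Roe numerical flux (8.36) for the isothermal P-system (no entropy fix)."""
    sl, sr = np.sqrt(wl[0]), np.sqrt(wr[0])
    u = (sl*wl[1]/wl[0] + sr*wr[1]/wr[0])/(sl + sr)    # Roe average (8.15)
    lam = np.array([u - a, u + a])                     # eigenvalues (8.17)
    R = np.array([[1.0, 1.0],
                  [u - a, u + a]])                     # right eigenvectors (8.19)
    L = np.array([[u + a, -1.0],
                  [a - u, 1.0]])/(2*a)                 # left eigenvectors, L = R^-1
    abs_A = R @ np.diag(np.abs(lam)) @ L               # |A~| = R |Lambda| L
    return 0.5*(flux(wl) + flux(wr)) - 0.5*abs_A @ (wr - wl)
```

By construction $F(w, w) = f(w)$, and when both Roe eigenvalues are positive the flux reduces exactly to the upwind value $f(w_\ell)$, by virtue of the conservation condition i).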
For a scalar conservation law, such as the traffic equation, applying the condition i) of Roe's linearization (see section 8.1) we obtain that the initial jump propagates with a speed given by
$$\tilde a = \tilde a(\rho_\ell, \rho_r) = \frac{f(\rho_r) - f(\rho_\ell)}{\rho_r - \rho_\ell}. \tag{8.39}$$
Note that in the scalar case the linear approximation provides a propagation speed $\tilde a = \tilde a(\rho_\ell, \rho_r)$ of the jump coincident with that of the Rankine-Hugoniot condition. Thus the linearization gives the exact solution, but only when the entropy condition is satisfied, i.e., only when the discontinuous solution is entropic. On the contrary, if the entropic solution is a rarefaction fan and it is transonic, the solution of the linearized Riemann problem must be modified. A possibility consists in replacing the single jump by two jumps, propagating with velocities $s_1$ and $s_2$, that can be determined by the following argument. Consider a Riemann problem where the two states $\rho_\ell$ and $\rho_r$ cause a rarefaction wave, and represent the initial condition in the $(\rho, f(\rho))$ plane. Now we can draw the two tangent lines with slopes $f'(\rho_\ell) = a_\ell$ and $f'(\rho_r) = a_r$, and these values are the propagation speeds of the two artificial jumps, namely $s_1 = a_\ell$ and $s_2 = a_r$. Therefore, in the case of a transonic rarefaction, namely when $a_\ell < 0$ and $a_r > 0$, the linearized flux
$$\hat f(\rho;\, \rho_\ell, \rho_r) = f(\rho_\ell) + \tilde a\,(\rho - \rho_\ell) \tag{8.40}$$
will be replaced by the entropy-fixed flux
$$\hat f^{\text{e.f.}}(\rho;\, \rho_\ell, \rho_r) = \begin{cases} f(\rho_\ell) + a_\ell\,(\rho - \rho_\ell), & \rho_\ell \le \rho \le \rho_i, \\[0.5ex] f(\rho_r) + a_r\,(\rho - \rho_r), & \rho_i \le \rho \le \rho_r, \end{cases} \tag{8.41}$$
where the value $\rho_i$ corresponds to the intersection point of the two straight lines and is therefore obtained by solving the equation $f(\rho_\ell) + a_\ell(\rho_i - \rho_\ell) = f(\rho_r) + a_r(\rho_i - \rho_r)$. A direct calculation gives
$$\rho_i = \frac{f(\rho_\ell) - f(\rho_r) + a_r\,\rho_r - a_\ell\,\rho_\ell}{a_r - a_\ell}. \tag{8.42}$$
The value $\rho_i$ represents an approximation of the exact sonic value $\rho_s$, which is the root of the equation $f'(\rho_s) = 0$ and which would be used to evaluate the vertical flux in the absence of linearization.
Figure 8.2: Construction for the entropy fix in the $(\rho, f(\rho))$ plane

Therefore, the modified solution to the linearized Riemann problem including the entropy fix reads
$$\rho^{\text{e.f.}}(x, t) = \begin{cases} \rho_\ell, & x < a_\ell\, t, \\ \rho_i, & a_\ell\, t < x < a_r\, t, \\ \rho_r, & x > a_r\, t, \end{cases} \tag{8.43}$$
a correction to be applied only when $a_\ell < 0$ and $a_r > 0$. The entropy-fixed vertical
flux resulting from this modification will be, using $\rho_i$ expressed by relation (8.42),
$$\bar F(\rho_\ell, \rho_r) = \hat f^{\text{e.f.}}(\rho_i;\, \rho_\ell, \rho_r) = f(\rho_\ell) + a_\ell\,(\rho_i - \rho_\ell) = f(\rho_\ell) + \frac{a_\ell\,(a_r - \tilde a)}{a_r - a_\ell}\,(\rho_r - \rho_\ell) \tag{8.44}$$
when $a_\ell < 0$ and $a_r > 0$. Equivalently, we have the expression based on the right branch of the fixed flux function,
$$\bar F(\rho_\ell, \rho_r) = f(\rho_r) + \frac{a_r\,(a_\ell - \tilde a)}{a_r - a_\ell}\,(\rho_r - \rho_\ell). \tag{8.45}$$
By averaging the two expressions we obtain the symmetric expression
$$\bar F^{\text{e.f.}}(\rho_\ell, \rho_r) = \frac12\big[f(\rho_\ell) + f(\rho_r)\big] - \frac12\,\frac{(a_\ell + a_r)\,\tilde a - 2\,a_\ell a_r}{a_r - a_\ell}\,(\rho_r - \rho_\ell), \tag{8.46}$$
always under the assumption $a_\ell < 0$ and $a_r > 0$. Otherwise, for non-transonic rarefactions, the linearized flux given in (8.40) must be used to construct the numerical flux at the vertical state $\bar F(\rho_\ell, \rho_r)$, whenever $a_\ell > 0$ or $a_r < 0$. Flux (8.40) describes a jump propagating with speed $\tilde a$, which can be either an entropic or a non-entropic jump, but the transonic rarefaction case is excluded by the assumed condition $a_\ell > 0$ or $a_r < 0$. Thus the entropy-fixed version of the absolute value of the speed will be defined by
$$|\tilde a|^{\text{e.f.}} = \begin{cases} |\tilde a|, & a_\ell > 0 \ \text{or}\ a_r < 0, \\[1ex] \dfrac{(a_\ell + a_r)\,\tilde a - 2\,a_\ell a_r}{a_r - a_\ell}, & a_\ell < 0 \ \text{and}\ a_r > 0. \end{cases} \tag{8.47}$$
The entropy-fixed version of the symmetric expression of the numerical flux will be, in any case,
$$\bar F^{\text{e.f.}}(\rho_\ell, \rho_r) = \frac12\big[f(\rho_\ell) + f(\rho_r)\big] - \frac12\,|\tilde a|^{\text{e.f.}}\,(\rho_r - \rho_\ell). \tag{8.48}$$
This expression holds whenever the subscript $\ell$ denotes a true left position and $r$ a true right position. By contrast, if the two spatial states correspond to the opposite spatial direction, the definition of the entropy-fixed speed absolute value becomes
$$|\tilde a|^{\text{e.f.}} = \begin{cases} |\tilde a|, & a_r > 0 \ \text{or}\ a_\ell < 0, \\[1ex] \dfrac{(a_\ell + a_r)\,\tilde a - 2\,a_\ell a_r}{a_\ell - a_r}, & a_r < 0 \ \text{and}\ a_\ell > 0. \end{cases} \tag{8.49}$$
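The scalar entropy-fixed speed (8.47) and the symmetric entropy-fixed flux are straightforward to implement. In the sketch below, ours and purely illustrative, the normalized traffic flux $f(\rho) = \rho(1 - \rho)$ is assumed (so that $a(\rho) = 1 - 2\rho$); in the transonic case the symmetric form coincides with the left-branch value $f(\rho_\ell) + a_\ell(\rho_i - \rho_\ell)$ of (8.44):

```python
def f(rho):                      # normalized traffic flux (u_max = rho_max = 1 assumed)
    return rho*(1.0 - rho)

def fp(rho):                     # characteristic speed a(rho) = f'(rho)
    return 1.0 - 2.0*rho

def abs_a_ef(rho_l, rho_r):
    """Entropy-fixed |a~| of eq. (8.47)."""
    if rho_l == rho_r:
        return abs(fp(rho_l))    # degenerate case: no jump at all
    a_t = (f(rho_r) - f(rho_l))/(rho_r - rho_l)    # Roe speed (8.39)
    a_l, a_r = fp(rho_l), fp(rho_r)
    if a_l < 0.0 < a_r:                            # transonic rarefaction
        return ((a_l + a_r)*a_t - 2.0*a_l*a_r)/(a_r - a_l)
    return abs(a_t)

def roe_flux_ef(rho_l, rho_r):
    """Symmetric entropy-fixed numerical flux."""
    return 0.5*(f(rho_l) + f(rho_r)) - 0.5*abs_a_ef(rho_l, rho_r)*(rho_r - rho_l)
```

For a non-transonic pair the function simply returns $|\tilde a|$, recovering the plain Roe flux of the jump.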
Figure 8.3: Evaluation of the car number in the presence of a moving jump

As discussed previously, the linearization has been built so as to respect conservation, in order not to contradict the basic physical property of the nonlinear hyperbolic equation. We now show that the entropy fix just described, namely the replacement of the entropy-violating jump by two jumps propagating at speeds $s_1 = a_\ell$ and $s_2 = a_r$, is indeed exactly conservative. Let us evaluate the total number of cars contained between two arbitrary boundary points $-X$ and $X$ at time $t$. The solution containing the rarefaction wave is
$$\rho(x, t) = \begin{cases} \rho_\ell, & x < a_\ell\, t, \\ a^{-1}(x/t), & a_\ell\, t < x < a_r\, t, \\ \rho_r, & x > a_r\, t. \end{cases} \tag{8.50}$$
The total count is then
$$C^{\text{tot}}_{[-X, X]} = \int_{-X}^{X} \rho(x, t)\, dx = \int_{-X}^{a_\ell t} \rho_\ell\, dx + \int_{a_\ell t}^{a_r t} a^{-1}(x/t)\, dx + \int_{a_r t}^{X} \rho_r\, dx$$
$$= \rho_\ell\,[a_\ell t + X] + t \int_{a_\ell}^{a_r} a^{-1}(\xi)\, d\xi + \rho_r\,[X - a_r t],$$
where a change of variable has been used in the integral over the rarefaction wave. But, thanks to the relation $a^{-1}(a(\rho)) = \rho$ and also $d\xi/d\rho = a'(\rho)$, the indefinite integral can be evaluated by first changing the integration variable to $\rho$, as follows:
$$\int a^{-1}(\xi)\, d\xi = \int a^{-1}(a(\rho))\, a'(\rho)\, d\rho = \int \rho\, a'(\rho)\, d\rho = \rho\, a(\rho) - f(\rho) + \text{const},$$
having integrated by parts and used $a(\rho) = f'(\rho)$. Evaluating between $\rho_\ell$ and $\rho_r$ and collecting all the terms gives
$$C^{\text{tot}}_{[-X, X]} = (\rho_\ell + \rho_r)\,X + \big[f(\rho_\ell) - f(\rho_r)\big]\,t.$$
The same result can be obtained more easily also by considering the non-entropic jump propagating with speed $\tilde a = [f(\rho_r) - f(\rho_\ell)]/(\rho_r - \rho_\ell)$. In fact, this solution is in any case a weak solution and guarantees the exact conservation of cars. In terms of this non-physical weak solution, the total number of cars contained between the two arbitrary boundary points $-X$ and $X$ at time $t$ can be calculated as follows, see figure 8.3:
$$C^{\text{tot}}_{[-X, X]} = \int_{-X}^{X} \rho(x, t)\, dx = \int_{-X}^{\tilde a t} \rho_\ell\, dx + \int_{\tilde a t}^{X} \rho_r\, dx.$$
Since in the Riemann problem the left and right states are uniform, we can carry $\rho_\ell$ and $\rho_r$ out of each integral. Finally we have
$$C^{\text{tot}}_{[-X, X]} = \rho_\ell\,[\tilde a t + X] + \rho_r\,[X - \tilde a t] = (\rho_\ell + \rho_r)\,X + (\rho_\ell - \rho_r)\,\tilde a\, t = (\rho_\ell + \rho_r)\,X + \big[f(\rho_\ell) - f(\rho_r)\big]\,t.$$
Consider now the entropy-fixed solution (8.43), containing a uniform intermediate state $\rho_i$ between the left and right states. The total number of cars contained in the same interval considered above is
$$C^{\text{tot, entropy fixed}}_{[-X, X]} = \int_{-X}^{X} \rho^{\text{e.f.}}(x, t)\, dx = \int_{-X}^{a_\ell t} \rho_\ell\, dx + \int_{a_\ell t}^{a_r t} \rho_i\, dx + \int_{a_r t}^{X} \rho_r\, dx = (\rho_\ell + \rho_r)\,X + \big[f(\rho_\ell) - f(\rho_r)\big]\,t,$$
which coincides with the previous results. Thus, the entropy fix that has been introduced is conservative: it does not lose or create cars on the highway.

Finally, figure 8.4 shows solutions of the Riemann problem for the traffic equation in the case of a sonic rarefaction wave. The exact solution is compared with two numerical solutions provided by the approximate Riemann solver with linearization, without the entropy fix and including the entropy fix. The improvement achieved by the entropy fix is substantial. Note that the residual error present in the entropy-fixed solution in the transonic zone is similar to the error of the Godunov method and is of order $\Delta x$.
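The bookkeeping above can be repeated numerically. In the Python check below, ours and purely illustrative, the normalized traffic flux $f(\rho) = \rho(1 - \rho)$ is assumed, for which $a(\rho) = 1 - 2\rho$ and $a^{-1}(\xi) = (1 - \xi)/2$; the three weak solutions are integrated over $[-X, X]$ with a midpoint rule and all reproduce the same total $(\rho_\ell + \rho_r)X + [f(\rho_\ell) - f(\rho_r)]\,t$:

```python
import numpy as np

def f(rho):                                    # normalized traffic flux (assumed)
    return rho*(1.0 - rho)

rho_l, rho_r, t, X = 0.9, 0.1, 1.0, 2.0
a_l, a_r = 1 - 2*rho_l, 1 - 2*rho_r            # characteristic speeds a(rho)
a_t = (f(rho_r) - f(rho_l))/(rho_r - rho_l)    # Rankine-Hugoniot / Roe speed
rho_i = (f(rho_l) - f(rho_r) + a_r*rho_r - a_l*rho_l)/(a_r - a_l)   # eq. (8.42)

N = 400_000
xe = np.linspace(-X, X, N + 1)                 # cell edges
x = 0.5*(xe[:-1] + xe[1:])                     # midpoints for the quadrature
dx = xe[1] - xe[0]

def total(rho_vals):                           # midpoint-rule integral over [-X, X]
    return rho_vals.sum()*dx

rar = np.where(x < a_l*t, rho_l,
      np.where(x > a_r*t, rho_r, (1 - x/t)/2))         # entropic rarefaction (8.50)
jump = np.where(x < a_t*t, rho_l, rho_r)               # non-entropic single jump
fixed = np.where(x < a_l*t, rho_l,
        np.where(x > a_r*t, rho_r, rho_i))             # entropy-fixed solution (8.43)

exact_total = (rho_l + rho_r)*X + (f(rho_l) - f(rho_r))*t
```

All three totals agree to quadrature accuracy, confirming that the entropy fix is exactly conservative.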
Figure 8.4: Solution of the Riemann problem for the traffic equation with a sonic rarefaction: exact solution compared with the linearized solutions with and without the entropy fix
SUBROUTINE Roe_num_flux (xx, ww,  FF_R)   ! including ENTROPY FIX
                                          ! ORDERED GRID

   ! Given the vector xx of nodal points and the vector ww of
   ! the cell averages of the conservative variables, the
   ! program calculates the Roe vertical flux for the equations
   ! of the P-system of the isothermal ideal gas
   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN)  :: ww
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: FF_R

   LOGICAL, PARAMETER :: ENTROPY_FIX = .TRUE.

   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,2)) :: ff

   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,1)) :: R, L,  &
                                        ABS_A_ef, ABS_lambda_ef

   REAL(KIND=8), DIMENSION(SIZE(ww,1)) :: lambda, A_lambda_ef,  &
                                          lambda_l, lambda_r,   &
                                          lambda_i, dv, wi

   REAL(KIND=8) :: N_l, P_l, N_r, P_r
   INTEGER :: i, j, p
j = SIZE(xx);
ff = flux(ww) ! Store the nodal fluxes to avoid a double ! evaluation of the flux at the grid nodes DO i = 1, SIZE(FF_R, 2); j = i
90
lambda, L, R)
! LeVeque version of Harten and Hyman ENTROPY FIX ! implemented by means of Maricas symmetric expression ! states and TRUE characteristic speeds A_lambda_ef = ABS(lambda)
IF (ENTROPY_FIX) THEN ! LeVeque version of Harten and Hyman ENTROPY FIX ! Implementation of the symmetric expression by Marika Pelanti ! see Pelanti, Quartapelle and Vigevano dv = MATMUL(L, ww(:,j+1)  ww(:,j)) wi = ww(:,j) + R(:,1) * dv(1)
lambda_l = eigenvalue(ww(:,j)) lambda_r = eigenvalue(ww(:,j+1)) lambda_i = eigenvalue(wi) N_l = MIN(lambda_l(1), 0.0d0); P_r = MAX(lambda_r(2), 0.0d0); P_l = MAX(lambda_i(1), 0.0d0) N_r = MIN(lambda_i(2), 0.0d0)
IF (P_l /= N_l) & A_lambda_ef(1) = ((P_l + N_l)*lambda(1)  2*P_l*N_l)/(P_l  N_l) IF (P_r /= N_r) & A_lambda_ef(2) = ((P_r + N_r)*lambda(2)  2*P_r*N_r)/(P_r  N_r) ENDIF DO p = 1, SIZE(lambda) ABS_lambda_ef(p,p) = A_lambda_ef(p) ENDDO ABS_A_ef = MATMUL(R, MATMUL(ABS_lambda_ef, L))
FF_R(:, i) = (ff(:,j) + ff(:,j+1)) / 2 &  MATMUL(ABS_A_ef, ww(:,j+1)  ww(:,j)) / 2 ENDDO END SUBROUTINE Roe_num_flux ! with entropy fix
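The core of the entropy fix above is the symmetric replacement of |λ| by ((P + N)λ − 2PN)/(P − N) whenever P ≠ N. A small Python sketch (names are illustrative) showing that the fix increases the numerical viscosity only in the transonic case N < 0 < P and reduces to the plain characteristic speed otherwise:

```python
def entropy_fixed_abs(lmbda, N, P):
    """LeVeque/Harten-Hyman entropy fix in Pelanti's symmetric form.

    N = min(left wave speed, 0), P = max(right wave speed, 0).
    Returns the modified |lambda| used in the Roe flux."""
    if P == N:
        return abs(lmbda)
    return ((P + N) * lmbda - 2.0 * P * N) / (P - N)

# Non-transonic wave (N = 0): the formula reduces to lambda itself
print(entropy_fixed_abs(0.4, 0.0, 0.6))    # ~ 0.4

# Transonic wave (N < 0 < P): viscosity is increased above |lambda|
print(entropy_fixed_abs(0.1, -0.5, 0.6))   # ~ 0.5545, larger than 0.1
```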
where the advection velocity a is constant. We want to compute a numerical solution on a defined spatial grid and at specific time instants, namely the values u(x_j, t_n), for j = …, −1, 0, 1, 2, … and n = 1, 2, 3, …, starting from a known initial condition. We analyze first the temporal discretization of equation (9.1) by considering a Taylor expansion of the solution u(x, t_n + Δt) in the time step Δt around the value at time t_n. We have

u(x, t_n + Δt) = u(x, t_n) + ∂_t u(x, t_n) Δt + ½ ∂_t² u(x, t_n) (Δt)² + O((Δt)³).   (9.2)
By introducing the definitions

u^n(x) ≡ u(x, t_n),   u^{n+1}(x) ≡ u(x, t_n + Δt),

and recalling that ∂_t u = −a ∂_x u, so that ∂_t² u = a² ∂_x² u, equation (9.2) is equivalent to

u^{n+1} = u^n − a Δt ∂_x u^n + ½ a² (Δt)² ∂_x² u^n + O((Δt)³),

namely

(u^{n+1} − u^n)/Δt = −a ∂_x u^n + ½ a² Δt ∂_x² u^n + O((Δt)²).   (9.3)

The left-hand side of equation (9.3) is an approximation of the time derivative, while its right-hand side is equal to the second term of the original advection equation plus a corrective term. Thus the idea of using the Taylor series leads to the occurrence of a new term that was not in the advection equation and that would not have occurred if the time derivative had been approximated simply by means of an Euler method. Now we introduce the spatial discretization for the semidiscrete equation (9.3) and indicate by U_j^n the value of the fully discrete solution at the point x_j of a uniform grid and at time t_n. The first spatial derivative is approximated by means of a centered approximation, to give

∂_x u ≈ (U_{j+1} − U_{j−1})/(2 Δx),   (9.4)

and similarly the second spatial derivative is discretized using again a centered approximation of the first derivatives evaluated at the midpoints of the two cells adjacent to x_j:

∂_x² u ≈ [ (∂_x u)_{j+½} − (∂_x u)_{j−½} ]/Δx = [ (U_{j+1} − U_j)/Δx − (U_j − U_{j−1})/Δx ]/Δx = (U_{j+1} − 2U_j + U_{j−1})/(Δx)².   (9.5)

Notice that, to obtain a discrete representation of the second derivative, spatial points in the middle of each cell of the grid have been used, as shown in Figure 9.1.
Thus the Lax–Wendroff scheme for the linear advection equation in one dimension, based on finite differences, is

(U_j^{n+1} − U_j^n)/Δt = −a (U_{j+1}^n − U_{j−1}^n)/(2 Δx) + ½ a² Δt (U_{j+1}^n − 2U_j^n + U_{j−1}^n)/(Δx)².   (9.6)

This scheme has second order accuracy both in space and in time.
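As an illustration (not part of the original notes), the scheme (9.6) can be coded in a few lines of Python; on a periodic grid with Courant number ν = aΔt/Δx = 1 the scheme propagates the profile exactly by one cell, which gives a simple correctness check:

```python
import math

def lax_wendroff_step(u, nu):
    """One step of the Lax-Wendroff scheme (9.6) for u_t + a u_x = 0
    on a periodic grid; nu = a*dt/dx is the Courant number."""
    n = len(u)
    return [u[j]
            - 0.5 * nu * (u[(j + 1) % n] - u[j - 1])
            + 0.5 * nu**2 * (u[(j + 1) % n] - 2.0 * u[j] + u[j - 1])
            for j in range(n)]

u0 = [math.sin(2.0 * math.pi * j / 50) for j in range(50)]
u1 = lax_wendroff_step(u0, 1.0)
shifted = u0[-1:] + u0[:-1]            # exact solution: one-cell right shift
err = max(abs(a - b) for a, b in zip(u1, shifted))
print(err)   # of the order of round-off
```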
Thus, the two time derivatives of the unknown are given by

∂_t u = −∂_x f(u),   ∂_t² u = ∂_x [ f′(u) ∂_x f(u) ],

where f′(u) = d f(u)/du. By introducing the definitions

u^n(x) ≡ u(x, t_n),   u^{n+1}(x) ≡ u(x, t_n + Δt),

the Taylor expansion gives

u^{n+1} = u^n − Δt ∂_x f(u^n) + ½ (Δt)² ∂_x [ f′(u^n) ∂_x f(u^n) ] + O((Δt)³),

namely

(u^{n+1} − u^n)/Δt = −∂_x f(u^n) + (Δt/2) ∂_x [ f′(u^n) ∂_x f(u^n) ] + O((Δt)²).   (9.8)
Now, in the same way as in the previous case, we discretize in space the semidiscrete equation (9.8) by means of centered approximations of the first and second derivatives, to give

(U_j^{n+1} − U_j^n)/Δt = −[ f(U_{j+1}^n) − f(U_{j−1}^n) ]/(2 Δx) + (Δt/2) { [ f′(u^n) ∂_x f(u^n) ]_{j+½} − [ f′(u^n) ∂_x f(u^n) ]_{j−½} } / Δx.   (9.9)

The centered approximation of the internal derivative is easily obtained using the relation

[ f′(u) ∂_x f(u) ]_{j+½} ≈ f′( ½(U_j + U_{j+1}) ) [ f(U_{j+1}) − f(U_j) ]/Δx = f′(U_{j+½}) [ f(U_{j+1}) − f(U_j) ]/Δx,   (9.10)

where the average value U_{j+½} = ½(U_j + U_{j+1}) has been introduced. Thus, the fully centered spatial discretization gives the Lax–Wendroff scheme for the scalar conservation law (9.7) in one dimension:

(U_j^{n+1} − U_j^n)/Δt = −[ f(U_{j+1}^n) − f(U_{j−1}^n) ]/(2 Δx)
+ [Δt/(2(Δx)²)] { f′(U_{j+½}^n) [ f(U_{j+1}^n) − f(U_j^n) ] − f′(U_{j−½}^n) [ f(U_j^n) − f(U_{j−1}^n) ] }.   (9.11)
The keystone to the conservation form of this scheme is adding and subtracting the same quantity f(U_j^n) inside the first derivative term on the right-hand side of the equation, to give

(U_j^{n+1} − U_j^n)/Δt = −[ f(U_{j+1}^n) + f(U_j^n) − f(U_j^n) − f(U_{j−1}^n) ]/(2 Δx)
+ [Δt/(2(Δx)²)] { f′(U_{j+½}^n) [ f(U_{j+1}^n) − f(U_j^n) ] − f′(U_{j−½}^n) [ f(U_j^n) − f(U_{j−1}^n) ] }.   (9.12)
Now the terms associated respectively with the right and left parts of the computational molecule can be regrouped to obtain the discrete equation

(U_j^{n+1} − U_j^n)/Δt = −(1/Δx) { ½ [ f(U_j^n) + f(U_{j+1}^n) ] − [Δt/(2Δx)] f′(U_{j+½}^n) [ f(U_{j+1}^n) − f(U_j^n) ] }
+ (1/Δx) { ½ [ f(U_{j−1}^n) + f(U_j^n) ] − [Δt/(2Δx)] f′(U_{j−½}^n) [ f(U_j^n) − f(U_{j−1}^n) ] }.   (9.13)

In this form, the complete expression occurring in the first line on the right-hand side is seen to be obtainable from that in the second line by a simple right shift of one grid point, leaving aside the sign change. As a consequence, it is straightforward to introduce the Lax–Wendroff numerical flux

F_{j+½}^{LW} ≡ F^{LW}(U_j, U_{j+1}) ≡ ½ [ f(U_j) + f(U_{j+1}) ] − [Δt/(2Δx)] f′(U_{j+½}) [ f(U_{j+1}) − f(U_j) ],   (9.14)

which allows one to recast the Lax–Wendroff scheme above for a conservation law in the following conservative form:

(U_j^{n+1} − U_j^n)/Δt = −( F_{j+½}^{LW,n} − F_{j−½}^{LW,n} )/Δx.   (9.15)

When a conservation law is discretized as in (9.15) for some numerical flux F(U_j, U_{j+1}), the discretization is said to be in conservation form. The scheme is implemented in the usual way by considering the cycle over the interfaces and by taking into account the contribution of each flux to the two neighbouring cells, after the initialization U_j^{n+1} ← U_j^n, as follows:

U_j^{n+1} ← U_j^{n+1} − (Δt/Δx) F_{j+½}^{LW,n}   and   U_{j+1}^{n+1} ← U_{j+1}^{n+1} + (Δt/Δx) F_{j+½}^{LW,n}.   (9.16)

Clearly, this form assures the exact conservation of the quantity associated with the variable u, since the same amount of flux is attributed, with opposite signs, to the two neighbouring cells. In particular, the update of the average quantity U_0 in the first cell, as well as of the quantity U_J in the last cell, must take into account the reduced size of the extremal cells, which is half that of any other internal cell, the internal cells being all equal here since the grid is uniform. As a consequence, a more general form of the update relation, valid for nonuniform grids with cells of sizes Δx_j, would be, after the initialization U_j^{n+1} ← U_j^n:

U_j^{n+1} ← U_j^{n+1} − (Δt/Δx_j) F_{j+½}^{LW,n}   and   U_{j+1}^{n+1} ← U_{j+1}^{n+1} + (Δt/Δx_{j+1}) F_{j+½}^{LW,n}.   (9.17)
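The interface loop (9.16) can be sketched in Python as follows (an illustration, not from the notes), here with the Burgers flux f(u) = u²/2 standing in for a generic scalar flux; because each interface flux is added and subtracted exactly once, the sum of the cell averages on a periodic grid is conserved to round-off:

```python
def lw_flux(ul, ur, f, df, dt, dx):
    """Lax-Wendroff numerical flux (9.14) for a scalar conservation law."""
    return (0.5 * (f(ul) + f(ur))
            - dt / (2.0 * dx) * df(0.5 * (ul + ur)) * (f(ur) - f(ul)))

def lw_update(u, f, df, dt, dx):
    """Conservative update (9.15)-(9.16) on a periodic grid."""
    n = len(u)
    unew = list(u)                       # initialization U^{n+1} <- U^n
    for j in range(n):                   # cycle over the interfaces j+1/2
        F = lw_flux(u[j], u[(j + 1) % n], f, df, dt, dx)
        unew[j] -= dt / dx * F
        unew[(j + 1) % n] += dt / dx * F
    return unew

f = lambda u: 0.5 * u * u                # Burgers flux (illustrative choice)
df = lambda u: u
u = [0.1 * j for j in range(10)]
unew = lw_update(u, f, df, dt=0.05, dx=0.1)
print(abs(sum(unew) - sum(u)))           # conserved up to round-off
```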
In this form, the correct treatment of the two special half-sized cells at the extremes of the integration interval is obtained automatically, since Δx_1 = Δx/2 and Δx_J = Δx/2. By virtue of the weak form of the conservation law, a discretization in conservation form guarantees that only weak solutions can be obtained (in the limit) and therefore also that the shocks propagate at the correct speed. On the other hand, for a solution provided by a scheme in conservation form it is impossible to guarantee anything about the satisfaction of the entropy condition: in other words, nonentropic weak solutions can be produced by a conservative scheme. All these considerations are contained in rigorous form in a theorem due to Lax and Wendroff, which is however mathematically too sophisticated for our analysis; the interested reader is referred to LeVeque [8, Theorem 12.1, pp. 130–133]. For the present purposes it suffices to recall that the technical difficulty with this fundamental theorem lies in the fact that there are always infinitely many weak solutions to any Cauchy problem for a nonlinear hyperbolic equation, for whatsoever data. As a consequence, it is not possible to speak of the convergence of the discrete solution to an exact weak solution in the same manner as is done for other initial value problems with unique solutions. What the Lax–Wendroff theorem states is that, if a converging subsequence can be extracted, then its limit is a weak solution; in other words, the theorem cannot guarantee the convergence to a weak solution. Moreover, the Lax–Wendroff theorem does not say anything about the satisfaction of the entropy condition. To this aim it is necessary to extend the theorem, in the sense that one must characterize the scheme also by its ability to respect the entropy condition at the discrete level. We refer the interested reader to the dissertation of Marica Pelanti [12].
For a system of conservation laws the same construction leads to the Lax–Wendroff scheme

(U_j^{n+1} − U_j^n)/Δt = −[ f(U_{j+1}^n) − f(U_{j−1}^n) ]/(2 Δx)
+ [Δt/(2(Δx)²)] { A(U_{j+½}^n) [ f(U_{j+1}^n) − f(U_j^n) ] − A(U_{j−½}^n) [ f(U_j^n) − f(U_{j−1}^n) ] },   (9.18)
where A(u) = ∂f(u)/∂u represents the Jacobian matrix of the hyperbolic system. By introducing the Lax–Wendroff numerical flux for the system

F_{j+½}^{LW} ≡ F^{LW}(U_j, U_{j+1}) ≡ ½ [ f(U_j) + f(U_{j+1}) ] − [Δt/(2Δx)] A(U_{j+½}) [ f(U_{j+1}) − f(U_j) ],   (9.19)

it is a simple matter to verify that the Lax–Wendroff scheme (9.18) can be recast in the following conservative or conservation form:

U_j^{n+1} = U_j^n − (Δt/Δx) ( F_{j+½}^{LW,n} − F_{j−½}^{LW,n} ).   (9.20)
As in the scalar case, the scheme is implemented by considering the cycle over the interfaces and by taking into account the contribution of each flux to the two neighbouring cells, after the initialization U_j^{n+1} ← U_j^n:

U_j^{n+1} ← U_j^{n+1} − (Δt/Δx) F_{j+½}^{LW,n}   and   U_{j+1}^{n+1} ← U_{j+1}^{n+1} + (Δt/Δx) F_{j+½}^{LW,n}.   (9.21)

For systems of conservation laws the Lax–Wendroff scheme meets the same difficulties in assuring the convergence and the respect of the entropy conditions discussed for the scalar case.
The Lax–Wendroff method for the isothermal gas is therefore simply

(W_j^{n+1} − W_j^n)/Δt = −[ f(W_{j+1}^n) − f(W_{j−1}^n) ]/(2 Δx)
+ [Δt/(2(Δx)²)] { A(W_{j+½}^n) [ f(W_{j+1}^n) − f(W_j^n) ] − A(W_{j−½}^n) [ f(W_j^n) − f(W_{j−1}^n) ] }.   (9.22)

This scheme is of course in conservation form

W_j^{n+1} = W_j^n − (Δt/Δx) ( F_{j+½}^{LW,n} − F_{j−½}^{LW,n} ),   (9.23)

with the Lax–Wendroff numerical flux

F_{j+½}^{LW} ≡ ½ [ f(W_j) + f(W_{j+1}) ] − [Δt/(2Δx)] A(W_{j+½}) [ f(W_{j+1}) − f(W_j) ].   (9.24)

The scheme is implemented in the usual way by considering the cycle over the interfaces and by taking into account the contribution of each flux to the two neighbouring cells, after the initialization W_j^{n+1} ← W_j^n:

W_j^{n+1} ← W_j^{n+1} − (Δt/Δx) F_{j+½}^{LW,n}   and   W_{j+1}^{n+1} ← W_{j+1}^{n+1} + (Δt/Δx) F_{j+½}^{LW,n}.   (9.25)
In figure 9.2 we report the numerical results obtained in the solution of the isothermal gas equations by means of the conservative Lax–Wendroff scheme. The initial condition is w_ℓ = (0.6, 0) and w_r = (0.6, 0.8). The numerical solution on a uniform grid of 200 points at time t = 0.2 is compared with the exact solution.
[Figure 9.2: density ρ and momentum m at time t = 0.2, computed by the conservative Lax–Wendroff scheme on 200 grid points and compared with the exact solution.]
In accordance with the Lax–Wendroff theorem, the shock fronts propagate with the correct velocity, as must be the case for any weak solution provided by a scheme in conservation form. On the other hand, strong oscillations are produced behind the fronts by the second order scheme. This is a manifestation of the Gibbs phenomenon and reflects Godunov's theorem [8, Theorem 15.6, p. 170] that a monotonicity preserving scheme must be at most first order accurate. For the purpose of using this scheme in connection with a low order scheme of Godunov type in a high resolution method, as will be discussed in the next section, we must consider also the possibility of evaluating the Jacobian in a different form. In fact, as described in section 8, it is necessary to build a conservative linearization in order to obtain a numerical flux capable of yielding a valid approximate solution to the Riemann problem. In particular, according to the Roe linearization, the Jacobian of the linearized problem for the left state w_ℓ = W_j and the right state w_r = W_{j+1} can be defined as Â_{j+½} = A(ŵ(W_j, W_{j+1})), where the intermediate state ŵ(W_j, W_{j+1}) is defined by the Roe average. The Lax–Wendroff scheme for the linearized problem would read

(W_j^{n+1} − W_j^n)/Δt = −[ f(W_{j+1}^n) − f(W_{j−1}^n) ]/(2 Δx)
+ [Δt/(2(Δx)²)] { Â_{j+½} [ f(W_{j+1}^n) − f(W_j^n) ] − Â_{j−½} [ f(W_j^n) − f(W_{j−1}^n) ] }.   (9.26)

Also this scheme can be written in conservation form

W_j^{n+1} = W_j^n − (Δt/Δx) ( F̂_{j+½}^{LW,n} − F̂_{j−½}^{LW,n} ),   (9.27)

with the Lax–Wendroff numerical flux, including the Roe linearization, defined by

F̂_{j+½}^{LW} ≡ ½ [ f(W_j) + f(W_{j+1}) ] − [Δt/(2Δx)] Â_{j+½} [ f(W_{j+1}) − f(W_j) ].   (9.28)
Again, the scheme is implemented in the usual way by considering the cycle over the interfaces and by taking into account the contribution of each flux to the two neighbouring cells, after the initialization W_j^{n+1} ← W_j^n:

W_j^{n+1} ← W_j^{n+1} − (Δt/Δx) F̂_{j+½}^{LW,n}   and   W_{j+1}^{n+1} ← W_{j+1}^{n+1} + (Δt/Δx) F̂_{j+½}^{LW,n}.   (9.29)
the hyperbolic problem and to the presence of the second spatial derivative in the numerical scheme. Since the fully discrete equations are in conservative form, the update of the cell averages is based on evaluating the numerical fluxes at all interfaces, which are inside the computational domain, by definition. However, the presence of a boundary also requires one to consider the flux of the conserved quantity through the extremes of the integration interval. Therefore, the Lax–Wendroff numerical fluxes at the internal interfaces as well as the flux evaluated on the boundary provide their own contribution to the cell averages. In particular, the first and last cells, which contain the end points, must be analyzed with special attention for two reasons: they involve the two fluxes through the interval ends and, at the same time, the second-order correction term of the Lax–Wendroff scheme cannot be accounted for directly, since the two end points of the interval lack the three-point stencil needed to evaluate the second spatial derivative discretely. Another difference which must be considered is that, even in the most simple situation of a uniform grid, the size of the first and last cells is half that of all the other internal cells, and this difference must be included when updating the cell averages of the extreme cells. To discuss all these aspects we consider for simplicity the case of a scalar conservation law for a variable u and examine the treatment necessary at the left extreme x = x_0 of the integration interval. The extension of the results to nonlinear hyperbolic systems will be considered at the end of the section, while the treatment required at the right end point can be easily deduced from the analysis presented here. Consider first the numerical flux (9.14) associated with the first internal grid point x = x_1 = x_0 + Δx, namely

F_½^{LW} = ½ [ f(U_0) + f(U_1) ] − [Δt/(2Δx)] f′(U_½) [ f(U_1) − f(U_0) ],   (9.30)

which will provide the partial contribution to the cell average of the first cell, of half size equal to Δx/2, according to

U_0 ← U_0 − [Δt/(Δx/2)] F_½^{LW} = U_0 − (Δt/Δx) [ f(U_0) + f(U_1) ] + [(Δt)²/(Δx)²] f′(U_½) [ f(U_1) − f(U_0) ].   (9.31)

The second term contributing to the average quantity of the first cell is due to the flux F_0 evaluated at the left end point and time integrated over the temporal interval Δt, to give

U_0 ← U_0 + [Δt/(Δx/2)] F_0,   (9.32)

where F_0 is calculated from the boundary value u* of the unknown variable by means of the relation F_0 = f(u*). The quantity u* is defined in terms of either the old value U_0 of the variable at the left end point or a left boundary value ū(t) specified from outside, according to the relation

u* = U_0 if f′(U_0) ≤ 0,   u* = ū(t) if f′(U_0) > 0.   (9.33)
The reasons for this definition are discussed in detail in appendix B. There is however a third contribution to the cell average of the first half-sized cell, which is due to the second-order spatial derivative term of the Lax–Wendroff scheme. In fact the numerical flux of this scheme cannot account for the presence of this term up to the boundary: the numerical scheme has been established only at the internal grid points, whereas for the two end points it is necessary to resort to an integration by parts, as explained below. Consider the second-order correction term of the Lax–Wendroff scheme in conservation form:

[(Δt)²/2] (d/dx) [ f′(u) (d f(u)/dx) ].   (9.34)

The integration of this term over the half-sized first cell, from x = x_0 to x = x_0 + Δx/2, yields, after division by the cell size Δx/2,

[1/(Δx/2)] ∫_{x_0}^{x_0 + Δx/2} [(Δt)²/2] (d/dx) [ f′(u) (d f(u)/dx) ] dx = [(Δt)²/Δx] [ f′(u) (d f(u)/dx) ]_{x_0}^{x_0 + Δx/2},   (9.35)

where the derivative d f(u)/dx on the boundary has been approximated by a one-sided formula. The expression can be recast in the following form:

[(Δt)²/(Δx)²] f′(U_½) [ f(U_1) − f(U_0) ] − [(Δt)²/(Δx)²] f′(U_0) [ f(U_1) − f(U_0) ].   (9.36)

The first term is seen to be equal to the second piece included in the Lax–Wendroff numerical flux, while the other term is new and stems from the lower boundary of the integration. This term can be included as a third contribution of the numerical flux in the relation for updating the cell average of the first cell, by introducing the definition of a Second-Order Boundary Correction flux

F_{SOBC}^{LW} = −[Δt/(2Δx)] f′(U_0) [ f(U_1) − f(U_0) ].   (9.37)

The final complete expression of the first cell update will be

U_0 ← U_0 + [Δt/(Δx/2)] f(u*) − [Δt/(Δx/2)] F_½^{LW} + [Δt/(Δx/2)] F_{SOBC}^{LW}.   (9.38)
More precisely, all quantities refer to a well defined time level, and the expression with the respective time levels explicitly indicated is

U_0^{n+1} = U_0^n + [Δt/(Δx/2)] f(u*^n) − [Δt/(Δx/2)] F_½^{LW,n} + [Δt/(Δx/2)] F_{SOBC}^{LW,n}.   (9.39)

By substituting the two expressions (9.30) and (9.37) and reordering the terms we obtain

U_0^{n+1} = U_0^n + (2Δt/Δx) f(u*^n) − (Δt/Δx) [ f(U_0^n) + f(U_1^n) ] + [(Δt)²/(Δx)²] [ f′(U_½^n) − f′(U_0^n) ] [ f(U_1^n) − f(U_0^n) ].   (9.40)
The extension of this result to a system of hyperbolic equations is immediate, and the expression for updating the vector unknown reads

U_0^{n+1} = U_0^n + [Δt/(Δx/2)] f(u*^n) − [Δt/(Δx/2)] F_½^{LW,n} + [Δt/(Δx/2)] F_{SOBC}^{LW,n},   (9.41)

where the definition of u* is given in Appendix B and

F_½^{LW} = ½ [ f(U_0) + f(U_1) ] − [Δt/(2Δx)] A(U_½) [ f(U_1) − f(U_0) ],
F_{SOBC}^{LW} = −[Δt/(2Δx)] A(U_0) [ f(U_1) − f(U_0) ].   (9.42)

Summarizing, the update of the variables requires including three contributions: the first one is associated with the flux on the boundary, which may include the effect of external boundary values; the second one is the contribution of the standard numerical flux at all interfaces; and the third is the correction stemming from the boundary term provided by the integration over the first cell of the second spatial derivative.
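As an illustration (not in the original notes), the three contributions to the first-cell update can be assembled in Python for the scalar case, together with the inflow/outflow selection (9.33); for the linear flux f(u) = au with a > 0 and data already equal to the boundary value, the first cell average must remain unchanged:

```python
def update_first_cell(U0, U1, ubar, f, df, dt, dx):
    """Update of the half-sized first cell, eq. (9.38): boundary flux
    + interface Lax-Wendroff flux + second-order boundary correction."""
    ustar = ubar if df(U0) > 0 else U0          # inflow/outflow selection (9.33)
    F0 = f(ustar)                               # boundary flux
    Uh = 0.5 * (U0 + U1)
    F_LW = 0.5 * (f(U0) + f(U1)) - dt / (2 * dx) * df(Uh) * (f(U1) - f(U0))  # (9.30)
    F_SO = -dt / (2 * dx) * df(U0) * (f(U1) - f(U0))                         # (9.37)
    return U0 + dt / (dx / 2) * (F0 - F_LW + F_SO)

a = 1.5
f, df = (lambda u: a * u), (lambda u: a)
print(update_first_cell(0.7, 0.7, 0.7, f, df, dt=0.01, dx=0.1))   # stays 0.7
```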
SUBROUTINE LWc_num_flux (dt, xx, ww, FF_LW, F_SO)

! ORDERED GRID
!
! Given the vector xx of nodal points and the vector ww of the
! cell averages of the conservative variables, the program
! calculates the Lax-Wendroff vertical flux at each interface
! inside the grid.

IMPLICIT NONE

REAL(KIND=8),                 INTENT(IN)  :: dt
REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
REAL(KIND=8), DIMENSION(:,:), INTENT(IN)  :: ww
REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: FF_LW
REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: F_SO   ! (:,2)

REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,2)) :: ff
REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,1)) :: A_i
REAL(KIND=8) :: dx, rat
INTEGER :: i, j, Np

dx = xx(2) - xx(1)   ! UNIFORM ORDERED GRID
rat = dt/dx

Np = SIZE(ww, 2)

ff = flux(ww)   ! Store the nodal fluxes to avoid a double
                ! evaluation of the flux at the grid nodes

DO i = 1, SIZE(FF_LW, 2);  j = i

   A_i = AA((ww(:,j) + ww(:,j+1))/2)

   FF_LW(:, i) = (ff(:,j) + ff(:,j+1)) / 2  &
               - rat * MATMUL(A_i, ff(:,j+1) - ff(:,j)) / 2

ENDDO

! Second-order boundary correction fluxes at the two extreme cells,
! following (9.42)  (reconstructed: these statements are missing
! in the source of the listing)

F_SO(:,1) = - rat * MATMUL(AA(ww(:,1)),  ff(:,2)  - ff(:,1))    / 2
F_SO(:,2) = - rat * MATMUL(AA(ww(:,Np)), ff(:,Np) - ff(:,Np-1)) / 2

END SUBROUTINE LWc_num_flux
for Hyperbolic Problems, Birkhäuser, Basel, 1992, except for only some very minor notational changes. The figures referred to in these pages are the same as the originals.
As an example, consider the advection equation ∂_t u + a ∂_x u = 0 and suppose we modify the standard Lax–Wendroff method from Table 10.1 as follows:

U_j^{n+1} = U_j^n − (ν/2)(U_{j+1}^n − U_{j−1}^n) + (ν²/2)(U_{j−1}^n − 2U_j^n + U_{j+1}^n) + k Q (U_{j−1}^n − 2U_j^n + U_{j+1}^n),   (16.1)

where ν = ak/h is the Courant number and Q is the newly-introduced artificial viscosity. Clearly the truncation error L(x, t) for this method can be written in terms of the truncation error L_{LW}(x, t) of the Lax–Wendroff method as

L(x, t) = L_{LW}(x, t) − Q [ u(x − h, t) − 2u(x, t) + u(x + h, t) ]
= L_{LW}(x, t) − Q h² ∂_x² u(x, t) + O(h⁴)
= O(k²)   as k → 0,

since L_{LW}(x, t) = O(k²) and h² = O(k²) as k → 0. The method remains second order accurate for any choice of Q = constant. The modified equation for (16.1) is similarly related to the modified equation (11.7) for Lax–Wendroff. The method (16.1) produces a third order accurate approximation to the solution of the PDE

∂_t u + a ∂_x u = h² Q ∂_x² u + (1/6) h² a (ν² − 1) ∂_x³ u.   (16.2)
The dispersive term that causes oscillations in Lax–Wendroff now has competition from a dissipative term, and one might hope that for Q sufficiently large the method would be nonoscillatory. Unfortunately, this is not the case. The method (16.1) with constant Q is still a linear method, and is second order accurate, and so the following theorem due to Godunov shows that it cannot be monotonicity preserving.

Theorem 16.1 (Godunov). A linear, monotonicity preserving method is at most first order accurate.

Proof. We will show that any linear, monotonicity preserving method is monotone, and then the result follows from Theorem 15.6 (p. 170). Let U^n be any grid function and let V_j^n = U_j^n for j ≠ J while V_J^n > U_J^n. Then we need to show that V_j^{n+1} ≥ U_j^{n+1} for all j, which implies that the method is monotone. Let W^n be the monotone Riemann data defined by

W_j^n = U_J^n for j < J,   W_j^n = V_J^n for j ≥ J.   (16.3)
Then

W_j^n = W_{j−1}^n + (V_j^n − U_j^n)   (16.4)

for all j, since the last term is zero except when j = J. Since the method is linear, we also have from (16.4) that

W_j^{n+1} = W_{j−1}^{n+1} + (V_j^{n+1} − U_j^{n+1}),   (16.5)

so that

V_j^{n+1} = U_j^{n+1} + (W_j^{n+1} − W_{j−1}^{n+1}).   (16.6)

But since the method is monotonicity preserving and W^n is monotone with W_{j−1}^n ≤ W_j^n, we have that W_j^{n+1} − W_{j−1}^{n+1} ≥ 0, and so (16.6) gives V_j^{n+1} ≥ U_j^{n+1}. This shows that the method is monotone and hence first order.

Nonlinear methods. To have any hope of achieving a monotonicity preserving method of the form (16.1), we must let Q depend on the data U^n, so that the method is nonlinear, even for the linear advection equation. As already noted, allowing this dependence makes sense from the standpoint of accuracy as well: where the solution is smooth, adding more dissipation merely increases the truncation error and should be avoided. Near a discontinuity we should increase Q to maintain monotonicity (and also, one hopes, enforce the entropy condition). The idea of adding a variable amount of artificial viscosity, based on the structure of the data, goes back to some of the earliest work on the numerical solution of fluid dynamics equations, notably the paper by von Neumann and Richtmyer [95]. Lax and Wendroff [46] also suggested this in their original presentation of the Lax–Wendroff method. Note that, in order to maintain conservation form, one should introduce variable viscosity by replacing the final term of (16.1) by a term of the form

k [ Q(U^n; j+½)(U_{j+1}^n − U_j^n) − Q(U^n; j−½)(U_j^n − U_{j−1}^n) ],   (16.7)

where Q(U^n; j+½) is the artificial viscosity, which now depends on some finite set of values of U^n, say U_{j−p}^n, …, U_{j+p}^n. More generally, given any high order flux function F_H(U; j+½) for a general conservation law, the addition of artificial viscosity replaces this by the modified numerical flux function

F(U; j+½) = F_H(U; j+½) − h Q(U; j+½)(U_{j+1} − U_j).   (16.8)

The difficulty with the artificial viscosity approach is that it is hard to determine an appropriate form for Q that introduces just enough dissipation to preserve monotonicity without causing unnecessary smearing. Typically these goals are not achieved very reliably.
For this reason, the high resolution methods developed more recently are based on very different approaches, in which the nonoscillatory requirement can be imposed more directly. It is generally possible to rewrite the resulting method as a high order method plus some artificial viscosity, but the resulting viscosity coefficient is typically very complicated and not at all intuitive. There are now a wide variety of approaches that can be taken, and very often there are close connections between the methods developed by quite different means. We will concentrate on just two classes of methods that are quite popular: flux-limiter methods and slope-limiter methods. More comprehensive reviews of many high resolution methods are given by Colella and Woodward [98] and Zalesak [102].
Flux-limiter methods. In this approach the high order flux is viewed as the low order flux plus a correction:

F_H(U; j+½) = F_L(U; j+½) + [ F_H(U; j+½) − F_L(U; j+½) ].   (16.9)

In a flux-limiter method, the magnitude of this correction is limited depending on the data, so the flux becomes

F(U; j+½) = F_L(U; j+½) + Φ(U; j+½) [ F_H(U; j+½) − F_L(U; j+½) ],   (16.10)

where Φ(U; j+½) is the limiter. If the data U is smooth near U_j then Φ(U; j+½) should be near 1, while in the vicinity of a discontinuity we want Φ(U; j+½) to be near zero. (In practice, allowing a wider range of values for Φ often works better.) Note that we can rewrite (16.10) as

F(U; j+½) = F_H(U; j+½) − [ 1 − Φ(U; j+½) ] [ F_H(U; j+½) − F_L(U; j+½) ],   (16.11)

and comparison with (16.8) gives the equivalent artificial viscosity for this type of method.
One of the earliest high resolution methods, the flux-corrected transport (FCT) method of Boris and Book [2], can be viewed as a flux-limiter method. They refer to the correction term in (16.11) as the antidiffusive flux, since the low order flux F_L contains too much diffusion for smooth data and the correction compensates. The FCT strategy is to add in as much of this antidiffusive flux as possible without increasing the variation of the solution, leading to a simple and effective algorithm. Hybrid methods of this form were also introduced by Harten and Zwas [36] at roughly the same time. More recently, a wide variety of methods of this form have been proposed, e.g., [27], [60], [65], [101]. A reasonably large class of flux-limiter methods has been studied by Sweby [83], who derived algebraic conditions on the limiter function which guarantee second order accuracy and the TVD property. The discussion here closely follows his presentation. To introduce the main ideas in a simple setting, we first consider the linear advection equation and take F_H to be the Lax–Wendroff flux while F_L is the first order upwind flux. If we assume a > 0, then we can rewrite Lax–Wendroff to look like the upwind method plus a correction as follows:

U_j^{n+1} = U_j^n − ν (U_j^n − U_{j−1}^n) − (ν/2)(1 − ν)(U_{j−1}^n − 2U_j^n + U_{j+1}^n).   (16.12)

The corresponding flux can be written as

F_{LW}(U; j+½) = a U_j + (a/2)(1 − ν)(U_{j+1} − U_j).   (16.13)

The first term is the upwind flux for a > 0 and the second term is the Lax–Wendroff correction, so that this gives a splitting of the flux of the form (16.9). To define a flux-limiter method, we replace (16.13) by

F(U; j+½) = a U_j + (a/2)(1 − ν)(U_{j+1} − U_j) φ_{j+½},   (16.14)

where φ_{j+½} is a shorthand for Φ(U; j+½), and represents the flux limiter. There are various ways one might measure the smoothness of the data. One possibility is to let the limiter be a function of the ratio of consecutive gradients, and consider the variable

θ_{j+½} = Θ(U; j+½) = (U_j − U_{j−1})/(U_{j+1} − U_j)   (16.15)

in the case a > 0. If θ_{j+½} is near 1 then the data is presumably smooth near U_j and U_{j+1}; if θ_{j+½} is far from 1 then there is some sort of kink in the data near U_j. We then set

Φ(U; j+½) = φ(θ_{j+½}),   (16.16)

where φ(θ) is some given function. Note that this measure of smoothness breaks down near extreme points of U, where the denominator may be close to zero and θ_{j+½} arbitrarily large, or negative, even if the solution is smooth. As we will see, maintaining second order accuracy at extreme points is impossible with TVD methods. For the time being, we will be content with second order accuracy away from these points, and the following theorem gives conditions on φ which guarantee this.

Theorem 16.2. The flux-limiter method with flux (16.14) (where φ_{j+½} is given by (16.16)) is consistent with the advection equation provided φ(θ) is a bounded function. It is second order accurate (on smooth solutions with ∂_x u bounded away from zero) provided φ(1) = 1, with φ Lipschitz continuous at θ = 1.

Exercise 16.1. Prove this theorem.

To see what conditions are required to give a TVD method, we use (16.14) to obtain the following method (dropping the superscript n on U for clarity):
U_j^{n+1} = U_j − ν (U_j − U_{j−1}) − (ν/2)(1 − ν) [ φ_{j+½} (U_{j+1} − U_j) − φ_{j−½} (U_j − U_{j−1}) ]
= U_j − [ ν − (ν/2)(1 − ν) φ_{j−½} ] (U_j − U_{j−1}) − (ν/2)(1 − ν) φ_{j+½} (U_{j+1} − U_j).   (16.17)

If the method can be put in the general form

U_j^{n+1} = U_j − C_{j−1}(U_j − U_{j−1}) + D_j (U_{j+1} − U_j),   (16.18)

then the following theorem of Harten [27] can be used to give constraints on the φ_{j+½}.

Theorem 16.2 (Harten). In order for the method (16.18) to be TVD, the following conditions on the coefficients are sufficient:

C_{j−1} ≥ 0 ∀j,   D_j ≥ 0 ∀j,   C_j + D_j ≤ 1 ∀j.   (16.19)

Proof. From (16.18) we obtain

U_{j+1}^{n+1} − U_j^{n+1} = (1 − C_j − D_j)(U_{j+1} − U_j) + D_{j+1}(U_{j+2} − U_{j+1}) + C_{j−1}(U_j − U_{j−1}).

We now sum |U_{j+1}^{n+1} − U_j^{n+1}| over j and use the nonnegativity of each coefficient, as in previous arguments of this type, to show that TV(U^{n+1}) ≤ TV(U^n).
Exercise 16.2. Complete this proof.

The form (16.17) suggests that we try taking

C_{j−1} = ν − (ν/2)(1 − ν) φ_{j−½},   D_j = −(ν/2)(1 − ν) φ_{j+½}.

Unfortunately, there is no hope of satisfying the condition (16.19) using this, since D_j < 0 when φ_{j+½} is near 1. At the expense of extending the stencil for defining C_j, we can also obtain (16.17) by taking

C_{j−1} = ν + (ν/2)(1 − ν) [ φ_{j+½}(U_{j+1} − U_j) − φ_{j−½}(U_j − U_{j−1}) ]/(U_j − U_{j−1}),   D_j = 0.

The conditions (16.19) are then satisfied provided

0 ≤ C_{j−1} ≤ 1.   (16.20)

Using (16.15) and (16.16), we can rewrite the expression for C_{j−1} as

C_{j−1} = ν [ 1 + ½ (1 − ν) ( φ(θ_{j+½})/θ_{j+½} − φ(θ_{j−½}) ) ].   (16.21)

The condition (16.20) is satisfied provided the CFL condition 0 ≤ ν ≤ 1 holds, along with the bound

| φ(θ_{j+½})/θ_{j+½} − φ(θ_{j−½}) | ≤ 2   for all θ_{j+½}, θ_{j−½}.   (16.22)
If θ_{j+½} ≤ 0 then the slopes at neighboring points have opposite signs. The data then has an extreme point near U_j and U_{j+1}, and the local variation will certainly increase if the value at this extreme point is accentuated. For this reason, it is safest to take φ(θ) = 0 for θ ≤ 0 and use the upwind method alone. Note that this is unsatisfying since, if the data is smooth near the extreme point, we would really like to take φ near 1 so that the high order method is being used. However, the total variation will generally increase if we do this. Osher and Chakravarthy [60] prove that TVD methods must in fact degenerate to first order accuracy at extreme points.
More recently, it has been shown by Shu [75] that a slight modification of these methods, in which the variation is allowed to increase by O(k) in each time step, can eliminate this difficulty. The methods are no longer TVD but are total variation stable, since over a finite time domain uniform bounds on the total variation can be derived. If we have

TV(U^{n+1}) ≤ (1 + α k) TV(U^n),   (16.23)

where α is independent of U^n, then

TV(U^n) ≤ (1 + α k)^n TV(U^0) ≤ e^{αT} TV(u^0)   (16.24)

for nk ≤ T, and hence the method is total variation stable. For simplicity, here we will only consider TVD methods and assume that

φ(θ) = 0 for θ ≤ 0.   (16.25)

Then (16.22) will be satisfied provided

0 ≤ φ(θ) ≤ 2   and   0 ≤ φ(θ) ≤ 2θ   (16.26)
for all θ ≥ 0. This region is shown in Figure 16.1. To obtain second order accuracy, the function φ must pass smoothly through the point φ(1) = 1. Sweby found, moreover, that it is best to take φ to be a convex combination of the φ for Lax–Wendroff (which is simply φ ≡ 1) and the φ for Beam–Warming (which is easily seen to be φ(θ) = θ). Other choices apparently give too much compression, and smooth data such as a sine wave tends to turn into a square wave as time evolves. Imposing this additional restriction gives the second order TVD region of Sweby, which is also shown in Figure 16.1.

Example 16.1. If we define φ(θ) by the upper boundary of the second order TVD region shown in Figure 16.1, i.e.,

φ(θ) = max(0, min(1, 2θ), min(θ, 2)),   (16.27)

then we obtain the so-called superbee limiter of Roe [67]. A smoother limiter function, used by van Leer [89], is given by

φ(θ) = (θ + |θ|)/(1 + |θ|).   (16.28)

These limiters are shown in Figure 16.2.
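A quick Python sketch (an added illustration, not part of the original text) of the two limiters (16.27)-(16.28), together with a numerical check that the flux-limited scheme (16.14) with the superbee limiter does not increase the total variation of step data:

```python
def superbee(th):
    # superbee limiter of Roe, eq. (16.27)
    return max(0.0, min(1.0, 2.0 * th), min(th, 2.0))

def van_leer(th):
    # van Leer limiter, eq. (16.28)
    return (th + abs(th)) / (1.0 + abs(th))

def tv(u):
    # discrete total variation
    return sum(abs(u[j + 1] - u[j]) for j in range(len(u) - 1))

def limited_step(u, nu, phi):
    """Flux-limited method (16.14) for u_t + a u_x = 0 with a > 0,
    nu = a k/h; the two end values are held fixed for simplicity."""
    n = len(u)
    def F(j):                        # flux at interface j+1/2, divided by a
        du = u[j + 1] - u[j]
        th = (u[j] - u[j - 1]) / du if du != 0.0 else 1.0
        return u[j] + 0.5 * (1.0 - nu) * du * phi(th)
    return ([u[0]]
            + [u[j] - nu * (F(j) - F(j - 1)) for j in range(1, n - 1)]
            + [u[n - 1]])

u = [1.0] * 10 + [0.0] * 10          # step data, TV = 1
for _ in range(8):
    u = limited_step(u, 0.8, superbee)
print(tv(u) <= 1.0 + 1e-12)          # TVD: total variation not increased
```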
Sweby [83] gives several other examples and presents some numerical comparisons. More extensive comparisons for the linear advection equation are presented in Zalesak [102]. Sweby also discusses the manner in which this approach is extended to nonlinear scalar conservation laws. The basic idea is to replace ν = ak/h by a local ν defined at each interface by

  ν_{j+1/2} = (k/h) [f(U_{j+1}) − f(U_j)] / (U_{j+1} − U_j).    (16.29)

The resulting formulas are somewhat complicated and will not be presented here. They are similar to the nonlinear generalization of the slope-limiter methods which will be presented below.

Generalization to nonuniform wave speeds. In the above description we assumed a > 0. Obviously, a similar method can be defined when a < 0 by again viewing Lax-Wendroff as a modification of the upwind method, which is now one-sided in the opposite direction. It is worth noting that we can unify these methods into a single formula. This is useful in generalizing to linear systems and nonlinear problems, where both positive and negative wave speeds can exist simultaneously. Recall that the upwind method for a linear system can be written in the form (13.15), which in the scalar case reduces to

  F_L(U; j+1/2) = ½ a (U_j + U_{j+1}) − ½ |a| (U_{j+1} − U_j)    (16.30)

and is now valid for a of either sign. Also notice that the Lax-Wendroff flux can be written as

  F_H(U; j+1/2) = ½ a (U_j + U_{j+1}) − ½ ν a (U_{j+1} − U_j).    (16.31)

This can be viewed as a modification of F_L in (16.30), and introducing a limiter as in (16.10) gives the high resolution flux

  F(U; j+1/2) = F_L(U; j+1/2) + ½ [sgn(ν) − ν] a (U_{j+1} − U_j) φ_{j+1/2}.    (16.32)

Note that we have used the fact that |a| = sgn(a) a = sgn(ν) a, since ν = ak/h and k, h > 0. The flux limiter φ_{j+1/2} is again of the form φ_{j+1/2} = φ(θ_{j+1/2}), but we now take θ_{j+1/2} to be a ratio of consecutive slopes in the upwind direction, which depends on sgn(ν), as follows:

  θ_{j+1/2} = θ(U; j+1/2) = (U_j − U_{j−1}) / (U_{j+1} − U_j)      for ν > 0,
  θ_{j+1/2} = θ(U; j+1/2) = (U_{j+2} − U_{j+1}) / (U_{j+1} − U_j)  for ν < 0.
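To make the scalar construction concrete, here is a minimal sketch (illustrative only; the function name and the periodic-grid choice are ours, and Python stands in for the report's Fortran) of one step of the flux-limiter method built from (16.30)–(16.32):

```python
def advect_step(U, a, k, h, phi):
    # One step of the flux-limiter method for u_t + a*u_x = 0 on a
    # periodic grid: F = F_upwind + 0.5*(sgn(nu) - nu)*a*dU*phi(theta)
    nu = a * k / h
    m = len(U)
    F = [0.0] * m                                # F[j]: flux at x_{j+1/2}
    for j in range(m):
        jp = (j + 1) % m
        dU = U[jp] - U[j]
        # upwind flux (16.30), valid for a of either sign
        FL = 0.5 * a * (U[j] + U[jp]) - 0.5 * abs(a) * dU
        # ratio of consecutive differences in the upwind direction
        upw = (U[j] - U[j - 1]) if a > 0 else (U[(j + 2) % m] - U[jp])
        theta = upw / dU if dU != 0.0 else 0.0
        sgn = 1.0 if nu >= 0.0 else -1.0
        F[j] = FL + 0.5 * (sgn - nu) * a * dU * phi(theta)   # (16.32)
    return [U[j] - (k / h) * (F[j] - F[j - 1]) for j in range(m)]
```

With φ ≡ 0 the step reduces to the upwind method, and with φ ≡ 1 to Lax-Wendroff.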
This choice can also be indicated compactly by setting I = j − sgn(ν) (= j ∓ 1), so that

  θ_{j+1/2} = (U_{I+1} − U_I) / (U_{j+1} − U_j).    (16.33)

16.2.1 Linear systems
The natural generalization to linear systems is obtained by diagonalizing the system and applying the flux-limiter method to each of the resulting scalar equations. We can re-express this in terms of the full system as follows. Suppose A = RΛR⁻¹ with R = [r₁ | r₂ | ··· | r_m], and let α_{j+1/2} be the vector with components α_{p,j+1/2} for p = 1, 2, ..., m. Then we have

  α_{j+1/2} = R⁻¹ (U_{j+1} − U_j),    (16.34)

so that

  U_{j+1} − U_j = Σ_{p=1}^m α_{p,j+1/2} r_p.    (16.35)

We also set ν_p = λ_p k/h and

  θ_{p,j+1/2} = α_{p,I_p+1/2} / α_{p,j+1/2},    (16.37)

with I_p = j − sgn(ν_p).
Recall that the upwind flux has the form (13.15),

  F_L(U; j+1/2) = ½ A (U_j + U_{j+1}) − ½ |A| (U_{j+1} − U_j),    (16.38)

where |A| = R |Λ| R⁻¹, while the Lax-Wendroff flux is

  F_H(U; j+1/2) = ½ A (U_j + U_{j+1}) − (k/2h) A² (U_{j+1} − U_j).    (16.39)

The difference between these is

  F_H(U; j+1/2) − F_L(U; j+1/2) = ½ [|A| − (k/h) A²] (U_{j+1} − U_j)
                                = ½ Σ_{p=1}^m [sgn(ν_p) − ν_p] λ_p α_{p,j+1/2} r_p,

and so the flux-limiter method for the linear system has the form

  F(U; j+1/2) = F_L(U; j+1/2) + ½ Σ_{p=1}^m [sgn(ν_p) − ν_p] λ_p α_{p,j+1/2} φ(θ_{p,j+1/2}) r_p.    (16.40)
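The decomposition (16.34)–(16.35) is just one small linear solve per interface. A sketch for the 2×2 case (illustrative; the function name is ours, and Python stands in for the report's Fortran):

```python
def wave_decomposition(r1, r2, dU):
    # Solve R*alpha = dU with R = [r1 | r2] by Cramer's rule, Eq. (16.34),
    # so that dU = alpha1*r1 + alpha2*r2, Eq. (16.35)
    det = r1[0] * r2[1] - r2[0] * r1[1]
    alpha1 = (dU[0] * r2[1] - r2[0] * dU[1]) / det
    alpha2 = (r1[0] * dU[1] - dU[0] * r1[1]) / det
    return alpha1, alpha2
```

Each coefficient α_{p,j+1/2} then feeds the scalar limiter machinery of its own characteristic field.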
16.2.2 Nonlinear systems
For nonlinear systems of hyperbolic equations a similar form of the numerical flux is possible, based on the linearization provided by Roe's matrix. For completeness we give the details of this generalization, although the resulting method is similar to the slope-limiter method presented below. The natural way to generalize this method to a nonlinear system is to linearize the equations in a neighborhood of each cell interface x_{j+1/2} and apply the method of the previous section to some linearized system³

  ∂_t u + Â_{j+1/2} ∂_x u = 0.

Following our discussion of Roe's approximate Riemann solution (Section 14.2), we take Â_{j+1/2} = Â(U_j, U_{j+1}), where Â(U_j, U_{j+1}) is some Roe matrix satisfying condition (14.19). We denote the eigenvalues and eigenvectors of Â_{j+1/2} by λ̂_{p,j+1/2} and r̂_{p,j+1/2} respectively, so that

  Â_{j+1/2} r̂_{p,j+1/2} = λ̂_{p,j+1/2} r̂_{p,j+1/2}   for p = 1, 2, ..., m.
Recall that the flux function for Godunov's method with Roe's approximate Riemann solver is given by (14.22), which we rewrite as

  F̂_L(U; j+1/2) = ½ [f(U_j) + f(U_{j+1})] − ½ |Â_{j+1/2}| (U_{j+1} − U_j),

where |Â_{j+1/2}| = R̂_{j+1/2} |Λ̂_{j+1/2}| R̂⁻¹_{j+1/2}. The Lax-Wendroff flux for the linearized system is

  F̂_H(U; j+1/2) = ½ [f(U_j) + f(U_{j+1})] − (k/2h) Â²_{j+1/2} (U_{j+1} − U_j).

The difference between these is

  F̂_H(U; j+1/2) − F̂_L(U; j+1/2) = ½ [|Â_{j+1/2}| − (k/h) Â²_{j+1/2}] (U_{j+1} − U_j)
    = ½ Σ_{p=1}^m [sgn(λ̂_{p,j+1/2}) − ν̂_{p,j+1/2}] λ̂_{p,j+1/2} α̂_{p,j+1/2} r̂_{p,j+1/2},

with ν̂_{p,j+1/2} = k λ̂_{p,j+1/2} / h, where α̂_{p,j+1/2} is the coefficient of r̂_{p,j+1/2} in an eigenvector expansion of the difference U_{j+1} − U_j,

  U_{j+1} − U_j = Σ_{p=1}^m α̂_{p,j+1/2} r̂_{p,j+1/2}.

³ The superscript n has been dropped from the Jacobian matrix of the more appropriate linearized system ∂_t u + Â^n_{j+1/2} ∂_x u = 0.
Thus, the flux of the high-resolution method for the linearized system has the form

  F(U; j+1/2) = F̂_L(U; j+1/2)
               + ½ Σ_{p=1}^m [sgn(λ̂_{p,j+1/2}) − ν̂_{p,j+1/2}] λ̂_{p,j+1/2} α̂_{p,j+1/2} φ(θ̂_{p,j+1/2}) r̂_{p,j+1/2},

where

  θ̂_{p,j+1/2} = α̂_{p,J_p+1/2} / α̂_{p,j+1/2}

with

  J_p = j − 1   if λ̂_{p,j+1/2} > 0,
  J_p = j + 1   if λ̂_{p,j+1/2} < 0.
A simple method for evaluating the vector α̂_{J_p+1/2} of the upwinded characteristic variations is given by the relation

  α̂_{p,J_p+1/2} = ½ (α̂_{p,j−1/2} + α̂_{p,j+3/2}) + ½ (α̂_{p,j−1/2} − α̂_{p,j+3/2}) sgn(λ̂_{p,j+1/2}).

In practice, this relation is difficult to implement, since it requires the characteristic variations α̂_{j−1/2} and α̂_{j+3/2}, which are evaluated by means of different linearizations at the interfaces x_{j−1/2} and x_{j+3/2}, and not that at x_{j+1/2}. To have an algorithm that processes each interface independently from any other, we can evaluate the two characteristic variations approximately by employing the local linearization at x_{j+1/2}, as follows:

  α̂_{j−1/2} ≈ R̂⁻¹_{j+1/2} (U_j − U_{j−1})   and   α̂_{j+3/2} ≈ R̂⁻¹_{j+1/2} (U_{j+2} − U_{j+1}),

and then use the previous formula to generate an approximate upwinded characteristic variation.
16.3 Slope-limiter methods

To generalize this procedure, we replace Step 1 by a more accurate reconstruction, taking for example the piecewise linear function

  ũⁿ(x, t_n) = U_j^n + σ_j^n (x − x_j)   on the cell [x_{j−1/2}, x_{j+1/2}].    (16.41)

Here σ_j^n is a slope on the j-th cell which is based on the data Uⁿ. For a system of equations, σ_j^n ∈ R^m is a vector of slopes for each component of uⁿ. Note that taking σ_j^n = 0 for all j and n recovers Godunov's method.

The cell average of ũⁿ(x, t_n) from (16.41) over [x_{j−1/2}, x_{j+1/2}] is equal to U_j^n for any choice of σ_j^n. Since Steps 2 and 3 are also conservative, the overall method is conservative for any choice of σ_j^n. For nonlinear problems we will generally not be able to perform Step 2 exactly. The construction of the exact solution ũⁿ(x, t) based on solving Riemann problems no longer works when ũⁿ(x, t_n) is piecewise linear. However, it is possible to approximate the solution in a suitable way, as will be discussed below.

The most interesting question is how we choose the slopes σ_j^n. We will see below that for the linear advection equation with a > 0, if we make the natural choice

  σ_j^n = (U_{j+1}^n − U_j^n) / h    (16.42)

and solve the advection equation exactly in Step 2, then the method reduces to the Lax-Wendroff method. This shows that it is possible to obtain second order accuracy by this approach. The oscillations which arise with Lax-Wendroff can be interpreted geometrically as being caused by a poor choice of slopes, leading to a piecewise linear reconstruction ũⁿ(x, t_n) with a much larger total variation than the given data Uⁿ. See Figure 16.3a for an example. We can rectify this by applying a slope limiter to (16.42), which reduces the value of this slope near discontinuities or extreme points, and is typically designed to ensure that

  TV(ũⁿ(·, t_n)) ≤ TV(Uⁿ).    (16.43)
The reconstruction shown in Figure 16.3b, for example, has this property. Since Steps 2 and 3 of Algorithm 16.1 are TVD, imposing (16.43) results in a method that is TVD overall, proving the following result.

Theorem 16.4. If the condition (16.43) is satisfied in Step 1 of Algorithm 16.1, then the method is TVD for scalar conservation laws.

Methods of this type were first introduced by van Leer in a series of papers [88] through [92], where he develops the MUSCL scheme (standing for Monotonicity Upstream-centered Scheme for Conservation Laws). A variety of similar methods have since been proposed, e.g., [9], [26]. The reconstruction of Step 1 can be replaced by more accurate approximations as well. One can attempt to obtain greater accuracy by using quadratic reconstructions, as in the piecewise parabolic method (PPM) of Colella and Woodward [10], or even higher order reconstructions as in the ENO (essentially nonoscillatory) methods [29], [34]. (See Chapter 17.)

Again, we will begin by considering the linear advection equation, and then generalize to nonlinear equations. For the linear equation we can perform Step 2 of Algorithm 16.1 exactly and obtain formulas that are easily reinterpreted as flux-limiter methods. This shows the close connection between the two approaches and also gives a more geometric interpretation of the TVD constraints discussed above. For the advection equation, the exact solution at t_{n+1} is simply

  ũⁿ(x, t_{n+1}) = ũⁿ(x − ak, t_n)    (16.44)

and so computing the cell average in Step 3 of Algorithm 16.1 amounts to integrating the piecewise linear function defined by (16.41) over the interval [x_{j−1/2} − ak, x_{j+1/2} − ak]. It is straightforward to calculate that (for a > 0)
  U_j^{n+1} = U_j^n − ν (U_j^n − U_{j−1}^n) − ½ ν (1 − ν) (h σ_j^n − h σ_{j−1}^n).    (16.45)

If σ_j^n = 0 this reduces to the upwind method, while for σ_j^n given by (16.42) it reduces to the Lax-Wendroff method, in the form (16.12). Note that the numerical flux for (16.45) is

  F(U; j+1/2) = a U_j + ½ a (1 − ν) h σ_j,    (16.46)

which has exactly the same form as the flux-limiter method (16.13) if we set

  σ_j = [(U_{j+1} − U_j) / h] φ_{j+1/2}.    (16.47)

In this context the flux limiter φ_{j+1/2} can be reinterpreted as a slope limiter. More generally, for a of either sign we have

  U_j^{n+1} = U_j^n − ν (U_{J_a}^n − U_{J_a−1}^n) − ½ ν [sgn(ν) − ν] (h σ_{J_a}^n − h σ_{J_a−1}^n),    (16.48)

where

  J_a = j       if a > 0,
  J_a = j + 1   if a < 0.    (16.49)
The first term here is simply the upwind flux, and again we have a direct correspondence between this formula and the flux-limiter formula (16.32).

Exercise 16.3. Verify (16.45) and (16.48).

In studying flux-limiter methods, we derived algebraic conditions that φ_{j+1/2} must satisfy to give a TVD method. Using the piecewise linear interpretation, we can derive similar conditions geometrically using the requirement (16.43). One simple choice of slopes satisfying (16.43) is the so-called minmod slope,

  σ_j = (1/h) minmod(U_{j+1} − U_j, U_j − U_{j−1}),    (16.51)

where the minmod function is defined by

  minmod(a, b) = a   if |a| < |b| and ab > 0,
  minmod(a, b) = b   if |b| < |a| and ab > 0,
  minmod(a, b) = 0   if ab ≤ 0,    (16.52)

or, compactly, minmod(a, b) = ½ [sgn(a) + sgn(b)] min(|a|, |b|).

Figure 13.3b shows the minmod slopes for one set of data. We can rewrite the minmod slope-limiter method as a flux-limiter method using (16.47) if we set

  φ(θ) = 0   if θ ≤ 0,
  φ(θ) = θ   if 0 ≤ θ ≤ 1,
  φ(θ) = 1   if θ ≥ 1,    (16.53)

i.e., φ(θ) = max(0, min(1, θ)). Recall that θ_{j+1/2} = (U_j − U_{j−1})/(U_{j+1} − U_j) for a > 0, and so (16.47) with φ_{j+1/2} = φ(θ_{j+1/2}) and φ given by (16.53) reduces to (16.15). This limiter function lies along the lower boundary of Sweby's second order TVD region of Figure 16.1b. Note that again φ(θ) = 0 for θ ≤ 0, which now corresponds to the fact that we set the slope σ_j to zero at extreme points of U, where the slopes (U_{j+1} − U_j)/h and (U_j − U_{j−1})/h have opposite signs. Geometrically, this is clearly required by (16.43), since any other choice would give a reconstruction ũⁿ(x, t_n) with total variation greater than TV(Uⁿ).
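The minmod function (16.52) and the slopes (16.51) translate directly into code; a sketch (illustrative; the names are ours, and Python stands in for the report's Fortran):

```python
def minmod(a, b):
    # Eq. (16.52): the argument of smaller magnitude when a and b
    # agree in sign, zero otherwise
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def minmod_slopes(U, h):
    # Eq. (16.51) in each interior cell; the slope vanishes at extreme
    # points, where the one-sided differences disagree in sign
    return [minmod(U[j + 1] - U[j], U[j] - U[j - 1]) / h
            for j in range(1, len(U) - 1)]
```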
Although the minmod limiter (16.51) is a simple choice that clearly satisfies (16.43), it is more restrictive than necessary, and somewhat larger slopes can often be taken without violating (16.43), with greater resolution as a result. Moreover, it is possible to violate (16.43) and still obtain a TVD method, since Step 3 of Algorithm 16.1 tends to reduce the total variation and may eliminate overshoots caused in the previous steps. A variety of other slope limiters have been developed. In particular, any of the flux limiters discussed above can be converted into slope limiters via (16.47). Conversely, a geometrically motivated slope limiter can often be converted into a flux limiter function φ(θ). (In fact, van Leer's limiter (16.28) was initially introduced as a slope limiter in [89].)

16.3.1 Linear systems
For a linear system of equations, we can diagonalize the system and apply the algorithm derived above to each decoupled scalar problem. Using the notation of Section 16.2.1, we let V_j^n = R⁻¹ U_j^n have components V_{p,j}^n, so that U_j^n = Σ_{p=1}^m V_{p,j}^n r_p. We also set

  J_p = j       if ν_p > 0,
  J_p = j + 1   if ν_p < 0,    (16.54)

generalizing J_a defined in (16.49). Then the method (16.48) for each V_p takes the form (omitting the superscript n to simplify the formulas)

  V_{p,j}^{n+1} = V_{p,j} − ν_p (V_{p,J_p} − V_{p,J_p−1}) − ½ ν_p [sgn(ν_p) − ν_p] (h σ_{p,J_p} − h σ_{p,J_p−1}),    (16.55)

where ν_p = k λ_p / h and σ_{p,j} is the slope for V_p in the j-th cell. For example we may take

  σ_{p,j} = (1/h) minmod(V_{p,j+1} − V_{p,j}, V_{p,j} − V_{p,j−1})
          = (1/h) minmod(α_{p,j+1/2}, α_{p,j−1/2}).    (16.56)
Multiplying (16.55) by r_p and summing over p, the update for the full vector of unknowns becomes

  U_j^{n+1} = U_j^n − (k/h) Σ_{p=1}^m (λ_p V_{p,J_p} r_p − λ_p V_{p,J_p−1} r_p)
              − ½ Σ_{p=1}^m ν_p [sgn(ν_p) − ν_p] (h σ_{p,J_p} − h σ_{p,J_p−1}) r_p,    (16.57)

which can be written in conservation form with a numerical flux (16.58). Recalling that the upwind flux F_L(U; j+1/2) in (16.58) can also be written as

  F_L(U; j+1/2) = Σ_{p=1}^m λ_p V_{p,J_p} r_p,    (16.59)

we see that the flux of the high resolution method (16.57) has the form

  F(U; j+1/2) = F_L(U; j+1/2) + ½ Σ_{p=1}^m λ_p [sgn(ν_p) − ν_p] h σ_{p,J_p} r_p,    (16.60)

where the occurrence of the subscript J_p in the slope σ, defined by (16.54), must be noticed. Note that this is identical with the flux (16.40) for the flux-limiter method on a linear system if we identify

  σ_{p,J_p} = (1/h) α_{p,j+1/2} φ(θ_{p,j+1/2}),    (16.61)

generalizing (16.47).

Exercise 16.4. Verify that (16.61) holds when σ_{p,j} is given by (16.58) and (16.56), θ_{p,j+1/2} is given by (16.37), and φ is the minmod limiter (16.53).
16.3.2 Nonlinear scalar equations
In attempting to apply Algorithm 16.1 to a nonlinear problem, the principal difficulty is in Step 2, since we typically cannot compute the exact solution to the nonlinear equation with piecewise linear initial data. However, there are various ways to obtain approximate solutions which are sufficiently accurate that second order accuracy can be maintained.

I will describe one such approach, based on approximating the nonlinear flux function by a linear function in the neighborhood of each cell interface and solving the resulting linear equation exactly with the piecewise linear data. This type of approximation has already been introduced in the discussion of Roe's approximate Riemann solver in Chapter 14. The use of this approximation in the context of high resolution slope-limiter methods for nonlinear scalar problems is studied in [26]. Here I will present the main idea in a simplified form, under the assumption that the data is monotone (say nonincreasing) and that f′(u) does not change sign over the range of the data (say f′(U_j^n) > 0). A similar approach can be used near extreme points of U_j^n and sonic points, but more care is required and the formulas are more complicated (see [26] for details). Moreover, we will impose the time step restriction

  (k/h) max_j |f′(U_j^n)| ≤ ½,    (16.62)

although this can also be relaxed to the usual CFL limit of 1 with some modification of the method.

With the above assumptions on the data, we can define a piecewise linear function f̃(u) by interpolation between the values (U_j^n, f(U_j^n)), as in Figure 16.4. We now define ũⁿ(x, t) by solving the conservation law

  ∂_t u + ∂_x f̃(u) = 0    (16.63)
for t_n ≤ t ≤ t_{n+1}, with the piecewise linear data (16.41). The evolution of ũⁿ(x, t) is indicated in Figure 16.5. The flux f̃ is still nonlinear, but the nonlinearity has been concentrated at the points U_j^n. Jumps form immediately at each point x_j, but because of the time step restriction (16.62), these jumps do not reach the cell boundary during the time step. Hence we can easily compute the numerical flux

  F(Uⁿ; j+1/2) = (1/k) ∫_{t_n}^{t_{n+1}} f̃(ũⁿ(x_{j+1/2}, t)) dt.    (16.64)

At each cell boundary x_{j+1/2}, the solution values lie between U_j^n and U_{j+1}^n for t_n ≤ t ≤ t_{n+1}, and hence

  f̃(ũⁿ(x_{j+1/2}, t)) = f(U_j^n) + a_{j+1/2}^n [ũⁿ(x_{j+1/2}, t) − U_j^n],    (16.65)

where

  a_{j+1/2}^n = [f(U_{j+1}^n) − f(U_j^n)] / (U_{j+1}^n − U_j^n).    (16.66)
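For a concrete scalar flux, the local speed (16.66) is just a secant slope. A sketch (illustrative; the function name and the fallback for coinciding states are our own choices, and Python stands in for the report's Fortran):

```python
def roe_speed(f, ul, ur, eps=1e-8):
    # Local linearization, Eq. (16.66): secant slope of the flux between
    # neighbouring cell values; falls back to a one-sided difference
    # quotient when the two states (nearly) coincide
    if abs(ur - ul) < eps:
        return (f(ul + eps) - f(ul)) / eps
    return (f(ur) - f(ul)) / (ur - ul)
```

For Burgers' equation, f(u) = u²/2, this gives the arithmetic mean (u_l + u_r)/2 of the two states.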
The conservation law ∂_t u + ∂_x f̃(u) = 0 reduces to the advection equation ∂_t u + a_{j+1/2}^n ∂_x u = 0 near x_{j+1/2}, and so

  ũⁿ(x_{j+1/2}, t) = ũⁿ(x_{j+1/2} − (t − t_n) a_{j+1/2}^n, t_n)
                   = U_j^n + [h/2 − (t − t_n) a_{j+1/2}^n] σ_j^n.

Substituting this into (16.64)–(16.65) gives

  F(Uⁿ; j+1/2) = (1/k) ∫_{t_n}^{t_{n+1}} { f(U_j^n) + a_{j+1/2}^n [h/2 − (t − t_n) a_{j+1/2}^n] σ_j^n } dt
               = f(U_j^n) + ½ a_{j+1/2}^n [1 − (k/h) a_{j+1/2}^n] h σ_j^n.    (16.67)

For the linear advection equation this reduces to (16.46). For σ_j^n ≡ 0 it reduces to the upwind flux f(U_j^n). With the choice of slopes (16.42), it reduces to

  F(U; j+1/2) = ½ [f(U_j^n) + f(U_{j+1}^n)] − (k/2h) [f(U_{j+1}^n) − f(U_j^n)]² / (U_{j+1}^n − U_j^n),    (16.68)

which is a form of the Lax-Wendroff method for scalar nonlinear problems. Also notice the similarity of (16.67) to the flux-limiter formula (16.14). With the correspondence (16.47), (16.67) is clearly a generalization of (16.14) to the nonlinear case. (Note that k a_{j+1/2}^n / h is precisely ν_{j+1/2}^n defined through (16.29).)

To obtain a high resolution TVD method of this form, we can again choose the slope σ_j as in the linear case, for example using the minmod slope (16.51), so that the total variation bound (16.43) is satisfied. Notice that, although we do not solve our original conservation law exactly in Step 2 of Algorithm 16.1, we do obtain ũⁿ as the exact solution to a modified conservation law, and hence ũⁿ is total variation diminishing. By using a slope limiter that enforces (16.43), we obtain an overall method for the nonlinear scalar problem that is TVD.

16.3.3 Nonlinear systems
The natural way to generalize this method to a nonlinear system of equations is to linearize the equations in a neighborhood of each cell interface x_{j+1/2} and apply the method of Section 16.3.1 to some linearized system

  ∂_t u + Â_{j+1/2} ∂_x u = 0.    (16.69)

This is what we did in the scalar case, where the linearization was given by (16.66). We have already seen how to generalize (16.66) to a system of equations in our discussion of Roe's approximate Riemann solution (Section 14.2). We take Â_{j+1/2} = Â(U_j, U_{j+1}), where Â(U_j, U_{j+1}) is some Roe matrix satisfying condition (14.19). We denote the eigenvalues and eigenvectors of Â_{j+1/2} by λ̂_{p,j+1/2} and r̂_{p,j+1/2} respectively, so that

  Â_{j+1/2} r̂_{p,j+1/2} = λ̂_{p,j+1/2} r̂_{p,j+1/2}   for p = 1, 2, ..., m.
Recall that the flux function for Godunov's method with Roe's approximate Riemann solver is given by (14.22), which we rewrite as

  F̂_L(U; j+1/2) = f(U_j) + Σ_{p=1}^m λ̂⁻_{p,j+1/2} α̂_{p,j+1/2} r̂_{p,j+1/2},    (16.70)

where λ̂⁻ = min(λ̂, 0) and α̂_{p,j+1/2} is the coefficient of r̂_{p,j+1/2} in the eigenvector expansion of U_{j+1} − U_j,

  U_{j+1} − U_j = Σ_{p=1}^m α̂_{p,j+1/2} r̂_{p,j+1/2}.    (16.71)

The flux of the high resolution method then takes the form

  F(U; j+1/2) = F̂_L(U; j+1/2) + ½ Σ_{p=1}^m λ̂_{p,j+1/2} [sgn(λ̂_{p,j+1/2}) − ν̂_{p,j+1/2}] h σ_{p,J_p},    (16.72)

where ν̂_{p,j+1/2} = k λ̂_{p,j+1/2} / h and σ_{p,j} ∈ R^m is some slope vector for the p-th family. Here the subscript J_p is defined by

  J_p = j       if λ̂_{p,j+1/2} > 0,
  J_p = j + 1   if λ̂_{p,j+1/2} < 0,

and therefore depends on the local eigenvalue λ̂_{p,j+1/2} of the linearized problem.

Note that combining (16.56) and (16.58) gives the following form for σ_{p,j} in the case of a linear system:

  σ_{p,j} = (1/h) minmod(α_{p,j+1/2}, α_{p,j−1/2}) r_p.    (16.73)

For our nonlinear method, r_p is replaced by r̂_{p,j+1/2}, which now varies with j. Moreover, it is only the vectors α̂_{p,j+1/2} r̂_{p,j+1/2} and α̂_{p,j−1/2} r̂_{p,j−1/2} that are actually computed in Roe's method, not the normalized r̂_{p,j+1/2} and the coefficient α̂_{p,j+1/2} separately, and so the natural generalization of (16.73) to the nonlinear case is given by

  σ_{p,j} = (1/h) minmod(α̂_{p,j+1/2} r̂_{p,j+1/2}, α̂_{p,j−1/2} r̂_{p,j−1/2}),    (16.74)

where the minmod function is now applied componentwise to the vector arguments. Of course the minmod function in (16.74) could be replaced by any other slope limiter, again applied componentwise. The high resolution results presented in Figure 1.4 were computed by this method with the superbee limiter.

In deriving this method we have ignored the entropy condition. Since we use Roe's approximate Riemann solution, which replaces rarefaction waves by discontinuities, we must in practice apply an entropy fix as described in Section 14.2.2. The details will not be presented here.

The flux (16.72) with slope (16.74) gives just one high resolution method for nonlinear systems of conservation laws. It is not the most sophisticated or best, but it is a reasonable method, and our development of it has illustrated many of the basic ideas used in many other methods. The reader is encouraged to explore the wide variety of methods available in the literature.
! ORDERED GRID
!
! Given the vector xx of nodal points and the vector ww of the
! cell averages of the conservative variables, the program
! calculates the numerical flux of the high resolution method
! for the unsteady Euler equations of gasdynamics for a
! polytropic ideal gas

SUBROUTINE uhr_num_flux (dt, xx, ww, FF_hr, F_SO, limiter)

   ! (argument list and array shapes inferred from the declarations)

   IMPLICIT NONE

   REAL(KIND=8),                 INTENT(IN)  :: dt
   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN)  :: ww
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: FF_hr
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: F_SO   ! (:,2)
   INTEGER,                      INTENT(IN)  :: limiter

   REAL(KIND=8), DIMENSION(2,2) :: R, L
   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,2)) :: ff
   REAL(KIND=8), DIMENSION(2) :: lambda, A_lambda_ef, lambda_l,  &
                                 lambda_r, lambda_i, wi, dv, dvl, dvr
   REAL(KIND=8) :: dx, N_l, P_l, P_r, N_r, dv_upw_p, psi, zero, half
   INTEGER :: i, j, p, Np

   zero = 0;   half = 0.5d0
   ff = flux(ww)   ! Store the nodal fluxes to avoid a double
                   ! evaluation of the flux at the grid nodes

   dx = xx(2) - xx(1)   ! uniform grid spacing (assignment reconstructed)

   DO i = 1, SIZE(FF_hr, 2);   j = i

      ! local linearization at the interface j+1/2
      ! (call statement reconstructed; routine name assumed)
      CALL linearization (ww(:,j), ww(:,j+1), lambda, L, R)

      dv = MATMUL(L, ww(:,j+1) - ww(:,j))

      ! LeVeque version of Harten and Hyman ENTROPY FIX
      ! implemented by means of Marica's symmetric expression
      ! with intermediate states and TRUE characteristic speeds

      wi = ww(:,j) + R(:,1) * dv(1)

      lambda_l = eigenvalue(ww(:,j))
      lambda_r = eigenvalue(ww(:,j+1))
      lambda_i = eigenvalue(wi)

      N_l = MIN(lambda_l(1), 0.0d0);   P_l = MAX(lambda_i(1), 0.0d0)
      P_r = MAX(lambda_r(2), 0.0d0);   N_r = MIN(lambda_i(2), 0.0d0)

      A_lambda_ef = ABS(lambda)

      IF (P_l /= N_l)  &
         A_lambda_ef(1) = ((P_l + N_l)*lambda(1) - 2*P_l*N_l)/(P_l - N_l)
      IF (P_r /= N_r)  &
         A_lambda_ef(2) = ((P_r + N_r)*lambda(2) - 2*P_r*N_r)/(P_r - N_r)

      ! The first and last interfaces are treated by extrapolating
      ! outside the computational interval
      ! A linear extrapolation retains high resolution accuracy

      IF (j == 1) THEN   ! first left interface:
         dvl = MATMUL(L, ww(:,j+1) - ww(:,j))     ! linear extrapolation
      ELSE
         dvl = MATMUL(L, ww(:,j) - ww(:,j-1))
      ENDIF

      IF (j + 1 == SIZE(xx)) THEN   ! last right interface:
         dvr = MATMUL(L, ww(:,j+1) - ww(:,j))     ! linear extrapolation
      ELSE
         dvr = MATMUL(L, ww(:,j+2) - ww(:,j+1))
      ENDIF

      ! centered contribution to the numerical flux in symmetric form

      FF_hr(:, i) = (ff(:,j) + ff(:,j+1)) / 2

      DO p = 1, 2   ! upwind variation of the
                    ! characteristic variables
         dv_upw_p = (dvl(p) + dvr(p)) / 2  &
                  + (dvl(p) - dvr(p)) * SIGN(half, lambda(p))

         psi = psi_lim (dv(p), dv_upw_p, limiter)

         ! upwind and limited antidiffusive contributions
         ! (right-hand side reconstructed from Eq. (16.40))
         FF_hr(:, i) = FF_hr(:, i)  &
                     - ( A_lambda_ef(p) * dv(p)  &
                       - (SIGN(1.0d0, lambda(p)) - lambda(p)*dt/dx)  &
                         * lambda(p) * psi ) * R(:,p) / 2
      ENDDO

   ENDDO
   ! Second Order Surface Contribution to the Numerical Flux

   Np = SIZE(xx)

   F_SO(:, 1) = - (dt/(2*dx)) * MATMUL(AA(ww(:,1)),  ff(:,2)  - ff(:,1))
   F_SO(:, 2) = - (dt/(2*dx)) * MATMUL(AA(ww(:,Np)), ff(:,Np) - ff(:,Np-1))
   ! (second boundary term reconstructed symmetrically to the first)

END SUBROUTINE uhr_num_flux
11 Conclusion
This report has presented some modern numerical techniques for solving nonlinear hyperbolic equations and systems of equations. Problems of this kind are encountered in the solution of the fluid dynamic equations expressing the conservation of mass, momentum and energy when every physical dissipation mechanism is dropped from the governing equations. Such a truncation reduces the full compressible Navier-Stokes equations to the Euler equations of gasdynamics. At the same time, the nonlinearity present in both mathematical models of compressible flows can produce a steepening of smooth initial data, which can break down and degenerate into true discontinuities of the field variables in the inviscid case. The mathematical difficulties implied by the occurrence of shock waves and/or other discontinuities in the solution require one to introduce special numerical tools for predicting transonic and supersonic flows.

We have described initially the mathematical concepts associated with the aforementioned mechanisms in connection with a very simple example of a scalar conservation law in one dimension: the traffic flow equation. After introducing the basic idea of weak solutions in the context of a single nonlinear hyperbolic equation, we have demonstrated the Rankine-Hugoniot jump condition, which embodies what a weak solution must obey in the presence of a discontinuity. This has allowed us to give a preliminary presentation of the numerical method proposed by Godunov to calculate discontinuous solutions.

The rest of the work is dedicated to the study of systems of nonlinear hyperbolic equations. We have taken a very simple and most convenient example: the gasdynamic equations in one dimension for an isothermal ideal gas. First we have described this mathematical representation of the physical model and have determined its eigenvalues and eigenvectors, to be able to characterize the nonlinear or linear nature of the considered hyperbolic system.
Subsequently we have introduced the idea of the Riemann problem and have computed some simple solutions for the two equations of an isothermal ideal gas. They can be either a propagating discontinuity, called a shock wave, or a fan of similarity solutions described as a rarefaction wave. Equipped with these simple solutions, we have formulated the Riemann problem for the considered gasdynamic system and described how it can be solved. In particular, following Landau and Lifshitz, we have shown how the kind of solution of any Riemann problem can be predicted a priori from its initial data by calculating a pair of relative velocities which represent the limiting values between solutions with left and right waves of different types.

After the exact Riemann solver for the isothermal gas model has been elaborated, the Godunov method is introduced. Then there is a complete and rather detailed analysis of how to develop a linearized version of the Riemann solver, following the idea of a conservative linearization introduced by Philip Roe. In this context, the idea of the entropy fix is also discussed, and an algorithm for obtaining an entropy fix respectful of conservation has been described in the scalar case.

The discretization of hyperbolic equations due to Lax and Wendroff is also described. This scheme has second order accuracy both in time and in space, and it can be formulated in conservation form for nonlinear equations, which is of fundamental importance for obtaining weak solutions. The method based on the Roe linearization can finally be combined with the conservative Lax-Wendroff scheme to derive a high resolution method. This rather sophisticated numerical tool has been implemented successfully, although the details of its formulation have not yet been documented in the present version of the report. The reader interested in the most recent developments concerning methods based on the Godunov method and using Riemann solvers is referred to the monographs of Toro [15] and Guinot [6].

The report also contains two appendices. In the first appendix we have described the Riemann problem for the P-system expressed in the conservative variables and in the Eulerian frame. A formulation of the problem as a system of two nonlinear equations is presented for the first time, which opens an original line of attack to demonstrate existence and uniqueness of the solution for large data. In the second appendix we have presented a general and clever procedure due to Alberto Guardone for satisfying the boundary conditions in hyperbolic systems.
References
[1] A. Bressan, Hyperbolic Systems of Conservation Laws, Oxford University Press, 2000.
[2] S. Chandrasekhar, Newton's Principia for the Common Reader, Clarendon Press, Oxford, 1995.
[3] M. Calori, A. Di Donato, D. Pavanello and A. Pirrotta, Equazioni del traffico (Traffic Flow Equations), Student Report, updated version of 2007.
[4] E. Godlewski and P.-A. Raviart, Numerical Approximation of Hyperbolic Systems of Conservation Laws, Springer-Verlag, New York, 1996.
[5] S. K. Godunov, A difference method for the numerical computation of discontinuous solutions of the equations of hydrodynamics, Mat. Sb., 47, 271-306, 1959.
[6] V. Guinot, Godunov-Type Schemes, An Introduction for Engineers, Elsevier, Amsterdam, 2003.
[7] L. D. Landau and E. M. Lifshitz, Fluid Mechanics, second ed., Pergamon Press, New York, 1987.
[8] R. J. LeVeque, Numerical Methods for Conservation Laws, Birkhäuser, Basel, 1992.
[9] R. J. LeVeque, Finite Volume Methods for Hyperbolic Problems, Cambridge University Press, 2002.
[10] R. J. LeVeque, D. Mihalas, E. Dorfi and E. Müller, Computational Methods for Astrophysical Fluid Flow, Saas-Fee Advanced Course 27, A. Gautschy and O. Steiner, Eds., Springer, 1998.
[11] M. Luskin and B. Temple, The existence of a global weak solution to the nonlinear waterhammer problem, Comm. Pure and Applied Math., XXXV, 697-735, 1982.
[12] M. Pelanti, Condizioni di entropia e positività nei solutori di Riemann approssimati (Entropy and positivity conditions in approximate Riemann solvers), Tesi di laurea, Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano, 1999.
[13] P. L. Roe, Approximate Riemann solvers, parameter vectors and difference schemes, J. Comput. Phys., 43, 357-372, 1981.
[14] L. Quartapelle, L. Castelletti, A. Guardone and G. Quaranta, Solution of the Riemann problem of classical gasdynamics, J. Comput. Phys., 190, 118-140, 2003.
[15] E. F. Toro, Riemann Solvers and Numerical Methods for Fluid Dynamics, Springer-Verlag, New York, second ed., 1999.
[16] S. Wiggins, Introduction to Applied Nonlinear Dynamical Systems and Chaos, Springer, 199?.
[17] R. Young, The p-system. I: The Riemann problem and II: The vacuum, submitted for publication, 2001.
The P-system is the system of two conservation laws

  ∂_t ρ + ∂_x m = 0,
  ∂_t m + ∂_x [m²/ρ + P(ρ)] = 0,    (A.1.1)

for the unknowns mass density ρ and momentum density m, where P(ρ) is a given pressure function. We assume that the function P(ρ) is differentiable and satisfies the following asymptotic conditions:

  lim_{ρ→0} P(ρ) = 0   and   lim_{ρ→∞} P(ρ) = ∞.    (A.1.2)

The first condition admits, in principle, the possibility of a vacuum state with null pressure, while the second rules out infinite compression. Other fundamental assumptions will be made in the following.
In particular, when P(ρ) = a²ρ the P-system above reduces to the conservation laws for an isothermal ideal gas described in section 4. The quasilinear form of the general P-system is

  ∂_t w + A(w) ∂_x w = 0,    (A.1.3)

where w = (ρ, m), with the Jacobian matrix defined by

  A(w) = [       0             1    ]
         [ −m²/ρ² + P′(ρ)    2m/ρ  ].    (A.1.4)

The eigenvalue problem for the Jacobian A(w) consists in finding the values ζ which make the determinant of the matrix A(w) − ζI vanish, namely:

  det [       −ζ               1      ]  =  0.    (A.1.5)
      [ P′(ρ) − m²/ρ²      2m/ρ − ζ  ]

[The eigenvalue is denoted by the letter ζ instead of the more common λ, the latter being reserved for later use as the similarity variable λ = x/t employed in calculating the rarefaction waves.] The characteristic equation

  ζ² − (2m/ρ) ζ + m²/ρ² − P′(ρ) = 0    (A.1.6)

has solutions

  ζ_{1,2}(w) = m/ρ ∓ √(P′(ρ)),    (A.1.7)

where the eigenvalues are taken in increasing order. Thus hyperbolicity requires the pressure function to satisfy

  P′(ρ) > 0   (hyperbolicity),    (A.1.8)

a condition which allows one to introduce the speed of sound a in the system according to the definition

  a(ρ) ≡ √(dP(ρ)/dρ).    (A.1.9)

We notice in passing that alternative representations of the P-system are possible, but all lead to one and the same eigenstructure of the physical system. For instance,
the density ρ could be replaced by the specific volume v ≡ 1/ρ, and in the new representation (v, m) the quasilinear form of the P-system would be

  ∂_t (v, m) + A(v, m) ∂_x (v, m) = 0,    (A.1.10)

with the matrix

  A(v, m) = [      0          −v²  ]
            [ m² + Q′(v)      2vm  ],    (A.1.11)

and the new constitutive relation for the pressure given by Q(v) ≡ P(ρ) = P(1/v). The characteristic equation in this representation would assume the form

  ζ² − 2vm ζ + v² [m² + Q′(v)] = 0    (A.1.12)

and the eigenvalues would be

  ζ_{1,2}(v, m) = v [m ∓ √(−Q′(v))].    (A.1.13)

Coming back to the representation (ρ, m) = w, let us determine the eigenvectors. A direct calculation gives

  r_{1,2}(w) = (1, m/ρ ∓ √(P′(ρ))).    (A.1.14)

To find the nonlinearity property of the modes we need the gradient of the eigenvalues,

  ∇ζ(w) = (∂ζ(w)/∂ρ, ∂ζ(w)/∂m).    (A.1.15)

The scalar product of the gradient of each eigenvalue by the corresponding eigenvector yields

  ∇ζ_{1,2}(w) · r_{1,2}(w) = ∓ [2P′(ρ) + ρP″(ρ)] / (2ρ√(P′(ρ))).    (A.1.17)

It is convenient to introduce the dimensionless function

  Γ(ρ) ≡ (1/a(ρ)) d[ρ a(ρ)]/dρ = 1 + ρP″(ρ)/(2P′(ρ)),

called the fundamental derivative of the considered hyperbolic system. In terms of this function it is immediate to verify that the scalar product above becomes

  ∇ζ_{1,2}(w) · r_{1,2}(w) = ∓ Γ(ρ) √(P′(ρ)) / ρ = ∓ Γ(ρ) a(ρ) / ρ.    (A.1.18)

The modes will be genuinely nonlinear provided that

  Γ(ρ) ≠ 0   ⟺   d²[P(1/v)]/dv² ≠ 0   (convexity).    (A.1.19)

In the following we will assume that this condition is satisfied and therefore limit our attention only to convex P-systems. Moreover, we assume for definiteness that the sign of the second derivative above is positive, namely that P″(ρ) > −2P′(ρ)/ρ, without any loss of generality. For completeness we write the matrices of the right eigenvectors and of the left eigenrows:

  R(w) = [      1                1       ]
         [ m/ρ − a(ρ)      m/ρ + a(ρ)  ],    (A.1.20)

  L(w) = (1/(2a(ρ))) [ a(ρ) + m/ρ     −1 ]
                     [ a(ρ) − m/ρ      1 ].    (A.1.21)

The eigenvectors have been normalized in the standard way of genuinely nonlinear waves (see below) and the eigenrows have been normalized in the usual way to ensure the inverse relationships R(w)⁻¹ = L(w) and L(w)⁻¹ = R(w).
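For the isothermal gas, P(ρ) = a²ρ, so √(P′(ρ)) = a and the matrices above become fully explicit. A sketch (illustrative; the function name is ours, and Python stands in for the report's Fortran) that also lets one check the inverse relationship L(w) = R(w)⁻¹ numerically:

```python
def eigenstructure(rho, m, a):
    # Eigenvalues (A.1.7) and matrices R (A.1.20), L (A.1.21) for the
    # isothermal gas, where the sound speed a is a constant
    u = m / rho
    zeta = (u - a, u + a)                      # increasing order
    R = [[1.0, 1.0],
         [u - a, u + a]]
    L = [[(a + u) / (2 * a), -1.0 / (2 * a)],
         [(a - u) / (2 * a),  1.0 / (2 * a)]]
    return zeta, R, L
```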
associated with the rst mode with eigenvector r1 (w) that will be written in normalized form as 1 a () norm (A.2.2) (w) = r1 () m a () where the sound speed a () = P () has been already introduced. The Riemann invariant i = i (w) is dened as the solution to the equation i i m + =0 a () a () m (A.2.3)
This is a rstorder partial differential equation, linear with variable coefcients. To determine its solution let us consider the change of the independent variable m u = m / (A.2.5)
The partial derivatives of the Riemann invariant with respect to the original variables can be expressed in terms of those of the new one:
\frac{\partial i}{\partial\rho} = \frac{\partial I}{\partial\rho} - \frac{m}{\rho^2} \frac{\partial I}{\partial u}, \qquad \frac{\partial i}{\partial m} = \frac{1}{\rho} \frac{\partial I}{\partial u}, \qquad (A.2.7)
and the substitution into the equation for i yields, after the cancellation of two terms,
\frac{\partial I}{\partial\rho} - \frac{a(\rho)}{\rho} \frac{\partial I}{\partial u} = 0. \qquad (A.2.8)
The form of this equation suggests looking for the Riemann invariant I(\rho, u) as the sum of two functions of only one variable each, as follows: I(\rho, u) = A(\rho) + B(u). Then the equation becomes
A'(\rho) - \frac{a(\rho)}{\rho}\, B'(u) = 0, \qquad (A.2.9)
where the prime denotes differentiation with respect to the independent variable of any function of a single variable. The first term of the equation is a function only of the variable \rho whereas the second is a function only of u. Thus the equation can be satisfied only provided the two terms are equal to one and the same constant, as is typical when a PDE is solved by the technique of separation of variables. As a consequence, the two functions A(\rho) and B(u) must satisfy the two independent first-order ordinary differential equations
A'(\rho) = K\, \frac{a(\rho)}{\rho} \qquad\text{and}\qquad B'(u) = K, \qquad (A.2.10)
where K is the separation constant. The integration of the two equations is immediate and gives
A(\rho) = K \int \frac{a(\rho)}{\rho}\, d\rho \qquad\text{and}\qquad B(u) = C + K u, \qquad (A.2.11)
where the integral is left in indefinite form, which implies the presence of an arbitrary additive constant in the first relation, and C is another integration constant. Therefore the solution is found to be
I(\rho, u) = K \left( \int \frac{a(\rho)}{\rho}\, d\rho + u \right), \qquad (A.2.12)
the single integration constant being absorbed in that associated with the indefinite integral. The Riemann invariant as a function of the original variables reads
i(\rho, m) = K \left( \int \frac{a(\rho)}{\rho}\, d\rho + \frac{m}{\rho} \right). \qquad (A.2.13)
The Riemann invariant associated with the second mode can be obtained in the same way.
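For the isothermal gas the indefinite integral in (A.2.13) is a logarithm, so that (taking K = 1) the invariant of the first mode is i(ρ, m) = a ln ρ + m/ρ. The short Python check below (an illustration added here, with arbitrary state values) verifies by finite differences that this function satisfies the defining equation (A.2.3).

```python
import numpy as np

# Riemann invariant of the first mode for P = a**2 * rho (K = 1):
#   i(rho, m) = a*log(rho) + m/rho
# It must satisfy  di/drho + (m/rho - a) di/dm = 0.
a = 1.0
i = lambda rho, m: a*np.log(rho) + m/rho

rho, m, h = 0.7, 0.25, 1e-6       # arbitrary state, finite-difference step
di_drho = (i(rho + h, m) - i(rho - h, m)) / (2*h)
di_dm   = (i(rho, m + h) - i(rho, m - h)) / (2*h)

residual = di_drho + (m/rho - a)*di_dm
assert abs(residual) < 1e-8
```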
The system being homogeneous, nontrivial solutions can be obtained only by going through the eigenstructure just determined. By means of the standard procedure already followed in section 4.3, the nontrivial solutions are characterized by the hybrid system
\frac{d\rho}{d\xi} = \mp \frac{\rho}{\Gamma(\rho)\sqrt{P'(\rho)}}, \qquad
\frac{dm}{d\xi} = \mp \frac{\rho}{\Gamma(\rho)\sqrt{P'(\rho)}} \left( \frac{m}{\rho} \mp \sqrt{P'(\rho)} \right), \qquad
\xi = \frac{m}{\rho} \mp \sqrt{P'(\rho)}, \qquad (A.3.2)
which consists of two differential equations and one algebraic equation, the latter establishing a relationship between the independent (similarity) variable \xi and the two unknowns. Let us now determine the rarefaction solution issuing from a given state (\hat\rho, \hat m). The algebraic relation implies that the initial conditions for this problem must be
\rho(\hat\xi) = \hat\rho \qquad\text{and}\qquad m(\hat\xi) = \hat m, \qquad (A.3.3)
where the initial value \hat\xi of the independent variable is fixed by the eigenvalue itself to be
\hat\xi = \frac{\hat m}{\hat\rho} \mp \sqrt{P'(\hat\rho)}. \qquad (A.3.4)
In this way, the hybrid differential-algebraic problem is made complete. To find the solution it is convenient to look first at the direct relation between the two unknowns \rho and m along the rarefaction wave, that is, to determine the function m = m(\rho), irrespective of the similarity variable [the function m(\rho) should not be confused with the previous m(\xi)]. The differential equation governing the unknown m(\rho) is obtained from the ratio of the two equations of the hybrid system and the initial value problem for the new unknown reads
\frac{dm}{d\rho} = \frac{m}{\rho} \mp \sqrt{P'(\rho)}, \qquad m(\hat\rho) = \hat m. \qquad (A.3.5)
The equation is a first-order linear equation with variable coefficients and with a right-hand side that is a function only of the independent variable. Let us divide the equation by \rho \ne 0:
\frac{1}{\rho}\frac{dm}{d\rho} - \frac{m}{\rho^2} = \frac{d(m/\rho)}{d\rho} = \mp \frac{\sqrt{P'(\rho)}}{\rho}. \qquad (A.3.6)
The equation for the variable m/\rho is a first-order nonhomogeneous linear equation whose homogeneous part has constant coefficients. Its solution is the sum of the general solution of the homogeneous equation — an arbitrary constant A — and a particular solution of the complete nonhomogeneous equation, namely
\frac{m(\rho)}{\rho} = A \mp \int \frac{\sqrt{P'(\rho)}}{\rho}\, d\rho, \qquad (A.3.7)
where the integration constant A is determined by imposing the initial condition. The solution satisfying the initial condition is easily found to be
m(\rho; \hat{\mathbf{w}}) = \rho \left( \frac{\hat m}{\hat\rho} \mp \int_{\hat\rho}^{\rho} \frac{\sqrt{P'(r)}}{r}\, dr \right). \qquad (A.3.8)
This rarefaction solution is valid only for \rho < \hat\rho. The complete solution of the hybrid system is obtained by plugging m(\rho; \hat{\mathbf{w}}) into the eigenvalue equation, to give:
\xi(\rho; \hat{\mathbf{w}}) = \frac{m(\rho; \hat{\mathbf{w}})}{\rho} \mp \sqrt{P'(\rho)}
= \frac{\hat m}{\hat\rho} \mp \int_{\hat\rho}^{\rho} \frac{\sqrt{P'(r)}}{r}\, dr \mp \sqrt{P'(\rho)}, \qquad \rho < \hat\rho. \qquad (A.3.9)
The range of admissible values for the similarity variable \xi is defined by the rule that \xi must increase when the rarefaction wave is connected to the left state while \xi must decrease when the rarefaction is connected to the right state. Since the waves associated with the left and right states involve the first and second eigenvalue, respectively, the range of the variable for the left wave will be \xi > \lambda_1(\rho_\ell, m_\ell) = m_\ell/\rho_\ell - \sqrt{P'(\rho_\ell)} while that for the right wave will be \xi < \lambda_2(\rho_r, m_r) = m_r/\rho_r + \sqrt{P'(\rho_r)}. At this point, the solution process is ended by inverting the function \xi = \xi(\rho; \hat{\mathbf{w}}), to give eventually the similarity solution \rho = \rho(\xi; \hat{\mathbf{w}}) and m = m(\xi; \hat{\mathbf{w}}) = m(\rho(\xi; \hat{\mathbf{w}}); \hat{\mathbf{w}}). The inversion of \xi = \xi(\rho; \hat{\mathbf{w}}) will be possible provided this function is strictly monotonic, and this condition is easily checked by computing its derivative, which is
\frac{d\xi}{d\rho} = \mp \frac{2P'(\rho) + \rho P''(\rho)}{2\rho\sqrt{P'(\rho)}}. \qquad (A.3.10)
Therefore, for convex P-systems this derivative cannot vanish: the function \xi = \xi(\rho; \hat{\mathbf{w}}) is strictly monotonic, and the solution \rho = \rho(\xi; \hat{\mathbf{w}}) can be found. This solution will be indicated by \rho = \rho^{\rm rar}_\ell(\xi; \mathbf{w}_\ell), with \xi > \lambda_1(\mathbf{w}_\ell), for the left wave, or by \rho = \rho^{\rm rar}_r(\xi; \mathbf{w}_r), with \xi < \lambda_2(\mathbf{w}_r), for the right wave. Due to the previous positivity assumption, the slope of \xi(\rho; \hat{\mathbf{w}}) is negative for the first eigenvalue and positive for the second. Therefore this function, together with its inverse \rho = \rho(\xi; \hat{\mathbf{w}}), is strictly decreasing for the first eigenvalue and increasing for the second.
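For the isothermal gas the inversion can be carried out in closed form. The Python sketch below (an added illustration with arbitrary pivotal state) builds ξ(ρ) for the first mode, checks that it is strictly decreasing, and verifies the explicit inverse.

```python
import numpy as np

# First-mode rarefaction of the isothermal gas (a = 1), pivoted at the
# state (rho_h, m_h).  Equation (A.3.9) reduces to
#   xi(rho) = u_h - a*log(rho/rho_h) - a,
# which is strictly decreasing and can be inverted in closed form.
a = 1.0
rho_h, m_h = 1.0, 0.4
u_h = m_h / rho_h

xi_of_rho = lambda rho: u_h - a*np.log(rho/rho_h) - a
rho_of_xi = lambda xi:  rho_h * np.exp((u_h - a - xi)/a)

rho = np.linspace(0.2, 1.0, 50)
xi  = xi_of_rho(rho)

assert np.all(np.diff(xi) < 0)               # strictly decreasing in rho
assert np.allclose(rho_of_xi(xi), rho)       # exact inverse (round trip)
```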
For a solution consisting of two rarefaction waves, the intermediate density \rho satisfies
\frac{m_r}{\rho_r} - \frac{m_\ell}{\rho_\ell} = \int_{\rho}^{\rho_\ell} \frac{\sqrt{P'(r)}}{r}\, dr + \int_{\rho}^{\rho_r} \frac{\sqrt{P'(r)}}{r}\, dr. \qquad (A.4.2)
The formation of vacuum corresponds to a vanishing density, namely to \rho \to 0, and this will occur provided that
\frac{m_r}{\rho_r} - \frac{m_\ell}{\rho_\ell} > \delta_{\rm vacuum}(\rho_\ell, \rho_r), \qquad (A.4.3)
where
\delta_{\rm vacuum}(\rho_\ell, \rho_r) \equiv \int_0^{\rho_\ell} \frac{\sqrt{P'(r)}}{r}\, dr + \int_0^{\rho_r} \frac{\sqrt{P'(r)}}{r}\, dr. \qquad (A.4.4)
The condition for vacuum formation is satisfied when the relative velocity \delta = u_r - u_\ell of the fluid on the right with respect to that on the left is sufficiently high. When the vacuum is formed, the edges of the two rarefaction waves travel at the velocities
u_{\rm vac,\ell} = u_\ell + \int_0^{\rho_\ell} \frac{\sqrt{P'(r)}}{r}\, dr \qquad\text{and}\qquad u_{\rm vac,r} = u_r - \int_0^{\rho_r} \frac{\sqrt{P'(r)}}{r}\, dr. \qquad (A.4.5)
It is interesting to note the peculiarity of the P-system for the isothermal ideal gas with regard to vacuum formation. For this gas P(\rho) = a^2\rho and the integrals in the relation defining the solution with two rarefaction waves give a logarithm, as follows:
\frac{m_\ell}{\rho_\ell} - \frac{m_r}{\rho_r} = a \ln \frac{\rho^2}{\rho_\ell \rho_r}
\qquad\Longrightarrow\qquad
\rho = \sqrt{\rho_\ell \rho_r}\, \exp\!\left[ \frac{1}{2a} \left( \frac{m_\ell}{\rho_\ell} - \frac{m_r}{\rho_r} \right) \right].
It follows that, irrespective of the initial data, \rho can never vanish and no vacuum region can form in the isothermal ideal gas. Physically speaking, this impossibility is explained by the continuous heating of the gas, which is maintained at a uniform temperature. The isothermal ideal gas model therefore represents a uniquely peculiar P-system. Equation (A.4.2) defines the solution of the Riemann problem when it consists of two rarefaction waves. This relation can be recast to express the relative velocity \delta = u_r - u_\ell of the two states of the Riemann problem as a function of the variable \rho and of the two initial densities. We have in fact
\delta(\rho; \rho_\ell, \rho_r) = \int_{\rho}^{\rho_\ell} \frac{\sqrt{P'(r)}}{r}\, dr + \int_{\rho}^{\rho_r} \frac{\sqrt{P'(r)}}{r}\, dr.
With two rarefaction waves \rho < \rho_\ell and also \rho < \rho_r. Therefore, when \rho = \rho_{\min} = \min(\rho_\ell, \rho_r), this density value corresponds to the limit for a solution with two rarefactions. This defines the limit value of the relative velocity for a solution to the Riemann problem with two rarefaction waves (fans), as follows:
\delta_{2f}(\rho_\ell, \rho_r) \equiv \int_{\rho_{\min}}^{\rho_{\max}} \frac{\sqrt{P'(r)}}{r}\, dr, \qquad (A.4.6)
which is a positive value, as found in section 6 in the particular case of the equations for the isothermal ideal gas. After having established the solutions for the rarefaction waves, we pass on to determine the solutions corresponding to shock waves for the general P-system.
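As a numerical illustration (added here, not in the original), the limit value (A.4.6) can be evaluated by quadrature and compared with its closed form a ln(ρ_max/ρ_min), valid for the isothermal gas P = a²ρ; the densities below are arbitrary.

```python
import numpy as np

# delta_2f = integral of sqrt(P'(r))/r over [rho_min, rho_max], cf. (A.4.6).
# For P = a**2 * r the integrand is a/r and the integral is a*log(ratio).
a = 1.0
rho_l, rho_r = 0.4, 0.9
rho_min, rho_max = min(rho_l, rho_r), max(rho_l, rho_r)

r  = np.linspace(rho_min, rho_max, 20001)
fr = a / r                                   # sqrt(P'(r))/r for P = a^2 r
delta_2f = np.sum(0.5*(fr[:-1] + fr[1:]) * np.diff(r))   # trapezoidal rule

assert delta_2f > 0
assert abs(delta_2f - a*np.log(rho_max/rho_min)) < 1e-8
# For the vacuum threshold (A.4.4) the integrand a/r is not integrable at
# r = 0: delta_vacuum is infinite for this gas, so no vacuum can form.
```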
Substituting this result into the first equation gives the solution for the speed s, in the form
s_{1,2}(\rho; \hat{\mathbf{w}}) = \frac{\hat m}{\hat\rho} \mp \sqrt{ \frac{\rho}{\hat\rho}\, \frac{P(\rho) - P(\hat\rho)}{\rho - \hat\rho} }. \qquad (A.5.4)
The range of the admissible values for the parameter \rho is determined by the entropy condition. Focussing on the first eigenvalue, the Lax version of the entropy condition for a system reads
\lambda_1(\hat{\mathbf{w}}) > s_1(\rho; \hat{\mathbf{w}}) > \lambda_1\!\left( \mathbf{w}_1^{\rm sw}(\rho; \hat{\mathbf{w}}) \right), \qquad (A.5.5)
where \mathbf{w}_1^{\rm sw}(\rho; \hat{\mathbf{w}}) = \left( \rho,\ m_1^{\rm sw}(\rho; \hat{\mathbf{w}}) \right)^T. By direct calculation, the left part of the inequality leads to the condition
\rho \left[ P(\rho) - P(\hat\rho) \right] > \hat\rho\, (\rho - \hat\rho)\, P'(\hat\rho).
Then, by virtue of the differentiability of P(\rho), it must be \rho > \hat\rho, at least in a local sense. On the other hand, the right part of the inequality, by a slightly more complicated calculation, reduces to the condition
\hat\rho \left[ P(\rho) - P(\hat\rho) \right] < \rho\, (\rho - \hat\rho)\, P'(\rho),
which is satisfied provided \rho > \hat\rho, again locally by the differentiability of P(\rho). Thus, for the first eigenvalue the entropy condition is satisfied (locally) when \rho > \rho_\ell. A similar argument shows that for the second eigenvalue the entropy condition requires \rho > \rho_r.
We now introduce a new parametrization of the shock solution, alternative to \rho, that will be particularly convenient for formulating the Riemann problem with a uniform treatment of the rarefaction and shock branches. The idea is to choose a parameter \xi that i) has the same physical dimensions as the similarity variable of the rarefaction wave, ii) matches continuously with the rarefaction solution at the pivotal state and extends the domain of the variable \xi to the half line in the direction opposite to the rarefaction wave range, and iii) has a linear relationship with \rho. Let us determine the new parametrization by imposing the three conditions above. Condition iii) means that
\rho^{\rm sw}(\xi) = a\xi + b, \qquad (A.5.6)
with \xi < \hat\xi
for the first eigenvalue and \xi > \hat\xi for the second, where the coefficients a and b are to be determined by imposing the continuity condition ii). The continuity at \xi = \hat\xi of this linear function with the density, and of its slope with the slope of the rarefaction solution, gives the two relations
a \hat\xi + b = \hat\rho, \qquad \hat\xi = \frac{\hat m}{\hat\rho} \mp \sqrt{P'(\hat\rho)}, \qquad (A.5.7)
a = \left[ \frac{d\rho^{\rm rar}(\xi; \hat{\mathbf{w}})}{d\xi} \right]_{\xi = \hat\xi} = \mp \frac{2\hat\rho \sqrt{P'(\hat\rho)}}{2P'(\hat\rho) + \hat\rho P''(\hat\rho)}. \qquad (A.5.8)
This parametrization matches continuously and uniformly the two possible waves at the pivotal state, so that it could be termed canonical. Considering the shock wave connected with the left state \hat{\mathbf{w}} = \mathbf{w}_\ell and its associated first eigenvalue, the linear relation will be written as
\rho^{\rm sw}_\ell(\xi) = a_\ell\, \xi + b_\ell, \qquad (A.5.9)
for \xi < \lambda_1(\mathbf{w}_\ell), with the coefficients being defined by the appropriate version of the preceding relations. Therefore, the density of the two possible waves, either a rarefaction or a shock, that can be connected with the left state will be given by the function
\rho_\ell(\xi) = \begin{cases} \rho^{\rm rar}_\ell(\xi) & \xi > \lambda_1(\mathbf{w}_\ell) \\ a_\ell\, \xi + b_\ell & \xi < \lambda_1(\mathbf{w}_\ell) \end{cases} \qquad (A.5.10)
Similarly, the momentum of the wave connected with the left state is expressed by the function
m_\ell(\xi) = \begin{cases} m^{\rm rar}_\ell(\xi) & \xi > \lambda_1(\mathbf{w}_\ell) \\ m^{\rm sw}_\ell(\xi) & \xi < \lambda_1(\mathbf{w}_\ell) \end{cases} \qquad (A.5.11)
where
m^{\rm sw}_\ell(\xi) \equiv m_1^{\rm sw}(a_\ell\, \xi + b_\ell;\ \mathbf{w}_\ell), \qquad (A.5.12)
with the function m_1^{\rm sw}(\rho; \hat{\mathbf{w}}) of the shock wave solution already defined. The functions for the density and momentum of the right wave are constructed in the same way. However, the variable of the canonical parametrization must be indicated by a different letter, for instance \eta, with respect to that used to parametrize the left wave (\xi). Thus, according to whether \eta < \lambda_2(\mathbf{w}_r) or \eta > \lambda_2(\mathbf{w}_r), with \lambda_2(\mathbf{w}_r) = m_r/\rho_r + \sqrt{P'(\rho_r)}, the wave will be a rarefaction or a shock. The function for the density along the right wave is expressed as follows:
\rho_r(\eta) = \begin{cases} \rho^{\rm rar}_r(\eta) & \eta < \lambda_2(\mathbf{w}_r) \\ a_r\, \eta + b_r & \eta > \lambda_2(\mathbf{w}_r) \end{cases} \qquad (A.5.13)
while the momentum is
m_r(\eta) = \begin{cases} m^{\rm rar}_r(\eta) & \eta < \lambda_2(\mathbf{w}_r) \\ m^{\rm sw}_r(\eta) & \eta > \lambda_2(\mathbf{w}_r) \end{cases} \qquad (A.5.14)
where, of course,
m^{\rm sw}_r(\eta) \equiv m_2^{\rm sw}(a_r\, \eta + b_r;\ \mathbf{w}_r). \qquad (A.5.15)
The relation (A.5.3) for the momentum along the shock-wave solution can be recast as an expression for the velocity, as follows:
u^{\rm sw}_{1,2}(\rho; \hat{\mathbf{w}}) = \hat u \mp \sqrt{ \left( \frac{1}{\hat\rho} - \frac{1}{\rho} \right) \left[ P(\rho) - P(\hat\rho) \right] },
where \hat u = \hat m/\hat\rho. If the solution of the Riemann problem consists of two shock waves, the density \rho of the intermediate state can be characterized also by the equality of the velocity along the left and right waves:
u^{\rm sw}_1(\rho; \mathbf{w}_\ell) = u^{\rm sw}_2(\rho; \mathbf{w}_r).
This equation for the two-shock solution can also be recast as follows:
u_r - u_\ell = -\sqrt{ \left( \frac{1}{\rho_\ell} - \frac{1}{\rho} \right) \left[ P(\rho) - P(\rho_\ell) \right] } - \sqrt{ \left( \frac{1}{\rho_r} - \frac{1}{\rho} \right) \left[ P(\rho) - P(\rho_r) \right] }.
When the solution consists of two shock waves, \rho > \rho_\ell and \rho > \rho_r, so that the limiting value of the density such that there are two shocks is given by \rho = \rho_{\max} = \max(\rho_\ell, \rho_r). Substituting this value in the expression above, only one term survives and we obtain
\delta_{2s}(\rho_\ell, \rho_r) \equiv -\sqrt{ \left( \frac{1}{\rho_{\min}} - \frac{1}{\rho_{\max}} \right) \left[ P(\rho_{\max}) - P(\rho_{\min}) \right] }, \qquad (A.5.16)
which is negative, as required, and which can be written more explicitly as follows:
\delta_{2s}(\rho_\ell, \rho_r) = -\left| \rho_r - \rho_\ell \right| \sqrt{ \frac{P(\rho_r) - P(\rho_\ell)}{\rho_\ell\, \rho_r\, (\rho_r - \rho_\ell)} }. \qquad (A.5.17)
Summarizing, the three limiting values of the relative velocity \delta = u_r - u_\ell are collected here, all together in increasing order:
\delta_{2s}(\rho_\ell, \rho_r) = -\left| \rho_r - \rho_\ell \right| \sqrt{ \frac{P(\rho_r) - P(\rho_\ell)}{\rho_\ell\, \rho_r\, (\rho_r - \rho_\ell)} },
\delta_{2f}(\rho_\ell, \rho_r) = \int_{\rho_{\min}}^{\rho_{\max}} \frac{\sqrt{P'(r)}}{r}\, dr,
\delta_{\rm vacuum}(\rho_\ell, \rho_r) = \int_0^{\rho_\ell} \frac{\sqrt{P'(r)}}{r}\, dr + \int_0^{\rho_r} \frac{\sqrt{P'(r)}}{r}\, dr. \qquad (A.5.18)
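The ordering of these thresholds can be checked numerically. The sketch below (an added illustration with arbitrary densities) evaluates δ₂ₛ and δ₂f for the isothermal gas, for which δ_vacuum is infinite, so only δ₂ₛ < 0 < δ₂f is verified.

```python
import numpy as np

# Thresholds of (A.5.18) for the isothermal gas P = a**2 * rho.
a = 1.0
P = lambda rho: a**2 * rho
rho_l, rho_r = 0.4, 0.9

delta_2s = -abs(rho_r - rho_l) * np.sqrt(
    (P(rho_r) - P(rho_l)) / (rho_l*rho_r*(rho_r - rho_l)))
delta_2f = a * np.log(max(rho_l, rho_r)/min(rho_l, rho_r))

assert delta_2s < 0 < delta_2f      # increasing order (delta_vacuum = +inf)
```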
which involves only ordinary derivatives. Besides this advantage of the special form of the 2×2 nonlinear system for its numerical solution, the form itself also seems very appropriate for attacking the proof of existence and uniqueness of the solution to the Riemann problem for convex P-systems under arbitrary (large) initial data.
where u_1(\xi; \mathbf{w}_\ell) = m_1(\xi; \mathbf{w}_\ell)/\rho_1(\xi; \mathbf{w}_\ell) and similarly on the right. The functions for the velocity are, for the left wave,
u_1(\xi; \mathbf{w}_\ell) = \begin{cases}
u_\ell - \displaystyle\int_{\rho_\ell}^{\rho_1(\xi; \mathbf{w}_\ell)} \frac{\sqrt{P'(r)}}{r}\, dr & \xi > \lambda_1(\mathbf{w}_\ell) \\[2ex]
u_\ell - \sqrt{ \left( \dfrac{1}{\rho_\ell} - \dfrac{1}{\rho_1(\xi; \mathbf{w}_\ell)} \right) \left[ P(\rho_1(\xi; \mathbf{w}_\ell)) - P(\rho_\ell) \right] } & \xi < \lambda_1(\mathbf{w}_\ell)
\end{cases} \qquad (A.7.2)
and for the right wave
u_2(\eta; \mathbf{w}_r) = \begin{cases}
u_r + \displaystyle\int_{\rho_r}^{\rho_2(\eta; \mathbf{w}_r)} \frac{\sqrt{P'(r)}}{r}\, dr & \eta < \lambda_2(\mathbf{w}_r) \\[2ex]
u_r + \sqrt{ \left( \dfrac{1}{\rho_r} - \dfrac{1}{\rho_2(\eta; \mathbf{w}_r)} \right) \left[ P(\rho_2(\eta; \mathbf{w}_r)) - P(\rho_r) \right] } & \eta > \lambda_2(\mathbf{w}_r)
\end{cases} \qquad (A.7.3)
System (A.7.1) can be recast in the notationally simpler form
\rho_\ell(\xi) = \rho_r(\eta), \qquad u_\ell(\xi) = u_r(\eta), \qquad (A.7.4)
having introduced the four functions \rho_\ell(\xi) \equiv \rho_1(\xi; \mathbf{w}_\ell), u_\ell(\xi) \equiv u_1(\xi; \mathbf{w}_\ell), \rho_r(\eta) \equiv \rho_2(\eta; \mathbf{w}_r) and u_r(\eta) \equiv u_2(\eta; \mathbf{w}_r). The four functions appearing in the two sides of the two equations are strictly decreasing or increasing. In fact, using the notation \nearrow and \searrow to indicate strictly monotonic increasing and decreasing functions, respectively, the density functions \rho_\ell and \rho_r have the following increasing/decreasing character, namely,
\rho_\ell \searrow \qquad\text{and}\qquad \rho_r \nearrow.
On the contrary, the velocity functions u_\ell and u_r are always increasing:
u_\ell \nearrow \qquad\text{and}\qquad u_r \nearrow.
By virtue of the strict monotonicity of the functions \rho_r(\eta) and u_r(\eta), they can be inverted to give the functions \rho_r^{-1} and u_r^{-1}, and the increasing/decreasing character of the latter is
\rho_r^{-1} \nearrow \qquad\text{and}\qquad u_r^{-1} \nearrow,
since the decreasing or increasing character of a function is invariant upon inversion. The system of the Riemann problem can therefore be written as follows:
\eta = \rho_r^{-1}(\rho_\ell(\xi)), \qquad \eta = u_r^{-1}(u_\ell(\xi)).
The two composed functions f(\xi) = \rho_r^{-1}(\rho_\ell(\xi)) and g(\xi) = u_r^{-1}(u_\ell(\xi)) have the following increasing/decreasing character:
f \searrow \qquad\text{and}\qquad g \nearrow,
since the composition of a decreasing function with an increasing one is a decreasing function and the composition of two increasing functions is increasing. Therefore the system reads
\eta = f(\xi), \qquad \eta = g(\xi). \qquad (A.7.5)
At this point, we should attempt to use the main theorem (Lemma 4.3.2) of [16, p. 444-446] to demonstrate the existence and uniqueness of the solution of the Riemann problem for the P-system. We need to estimate the Lipschitz constants \Lambda_f and \Lambda_g of the two composed functions f = \rho_r^{-1} \circ \rho_\ell and g = u_r^{-1} \circ u_\ell and verify that the contractive property \Lambda_f \Lambda_g < 1 is satisfied as a consequence of the conditions of hyperbolicity and convexity.
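The monotonicity argument can be illustrated on the isothermal gas, for which the two-rarefaction branches have closed forms: the left branch u = u_ℓ − a ln(ρ/ρ_ℓ) is strictly decreasing in ρ, the right branch u = u_r + a ln(ρ/ρ_r) is strictly increasing, so the intersection is unique and a bisection converges. The Python sketch below (an added illustration with arbitrary data, using ρ rather than ξ as the iteration variable) checks the computed root against the closed form ρ = √(ρ_ℓρ_r) e^{(u_ℓ−u_r)/(2a)}.

```python
import numpy as np

# Two-rarefaction case of the isothermal gas (a = 1): intersect the
# decreasing left branch with the increasing right branch by bisection.
a = 1.0
rho_l, u_l = 1.0, 0.0
rho_r, u_r = 0.8, 0.5            # u_r - u_l > 0: two rarefactions

F = lambda rho: (u_l - a*np.log(rho/rho_l)) - (u_r + a*np.log(rho/rho_r))

lo, hi = 1e-12, max(rho_l, rho_r)    # F(lo) > 0 > F(hi), F strictly decreasing
for _ in range(200):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)

rho_star = 0.5*(lo + hi)
assert abs(rho_star - np.sqrt(rho_l*rho_r)*np.exp((u_l - u_r)/(2*a))) < 1e-10
```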
is satisfied. This leads to a system of two equations in the two unknowns \bar\rho and \bar m, namely,
\Delta m = \Delta m, \qquad
-\frac{\bar m^2}{\bar\rho^2}\, \Delta\rho + \frac{2\bar m}{\bar\rho}\, \Delta m + P'(\bar\rho)\, \Delta\rho = \Delta\!\left( \frac{m^2}{\rho} + P \right), \qquad (A.8.2)
where \Delta(m^2/\rho + P) = \Delta(m^2/\rho) + \Delta P, with \Delta P = P(\rho_r) - P(\rho_\ell). Obviously, the first equation is identically satisfied and therefore we have a single equation in the vector unknown \bar{\mathbf{w}} with two components: thus we have a one-parameter family of solutions, which is determined as follows. First let us consider the special case \Delta\rho = 0, i.e., \rho_\ell = \rho_r = \rho. Then, the second equation in (A.8.2) simplifies to
\frac{2\bar m}{\rho}\, \Delta m = \frac{\Delta(m^2)}{\rho}, \qquad (A.8.3)
whence
\bar m = \frac{\Delta(m^2)}{2\, \Delta m} = \frac{m_\ell + m_r}{2}. \qquad (A.8.4)
Considering now the general case \Delta\rho \ne 0, the second equation in (A.8.2) gives a quadratic equation in the variable \bar m/\bar\rho:
\Delta\rho \left( \frac{\bar m}{\bar\rho} \right)^{\!2} - 2\, \Delta m\, \frac{\bar m}{\bar\rho} + \Delta\!\left( \frac{m^2}{\rho} \right) + \Delta P - P'(\bar\rho)\, \Delta\rho = 0. \qquad (A.8.5)
Notice that, due to the presence of the term P'(\bar\rho)\Delta\rho, this equation actually represents the definition of the one-parameter family of solutions, by means of an implicit function \Phi(\bar m, \bar\rho) = 0. Anyway, the use of the solution formula for the quadratic equation gives:
\frac{\bar m}{\bar\rho} = \frac{1}{\Delta\rho} \left[ \Delta m \pm \sqrt{ (\Delta m)^2 - \Delta\rho \left( \Delta\!\left( \frac{m^2}{\rho} \right) + \Delta P - P'(\bar\rho)\, \Delta\rho \right) } \right]. \qquad (A.8.6)
This is still an implicit expression for the solution family, due to the occurrence of the density \bar\rho of the unknown intermediate state in the last term under the square root. If \bar\rho is selected as the parameter of the family of solutions, (A.8.6) gives the explicit expression of the solution family \bar m = \bar m(\bar\rho). On the other hand, since the solution set is a one-parameter family of solutions, we can impose one additional condition to obtain a problem with a uniquely defined
solution. To simplify the right-hand side of (A.8.6) we can select the value \bar\rho so that the implicit term involving the pressure function is eliminated. This means that the value of \bar\rho will be taken as the solution of the supplementary equation
P'(\bar\rho)\, \Delta\rho = \Delta P. \qquad (A.8.7)
Therefore, the solution of the linearization problem is given by the system
P'(\bar\rho)\, \Delta\rho = \Delta P, \qquad
\frac{\bar m}{\bar\rho} = \frac{1}{\Delta\rho} \left[ \Delta m \pm \sqrt{ (\Delta m)^2 - \Delta\rho\, \Delta\!\left( \frac{m^2}{\rho} \right) } \right]. \qquad (A.8.8)
Since the function P(\rho) is strictly convex, the function P'(\rho) is invertible and the solution, whenever \Delta\rho \ne 0, is expressed as follows:
\bar\rho = (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right), \qquad
\bar m = (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right) \frac{1}{\Delta\rho} \left[ \Delta m \pm \sqrt{ (\Delta m)^2 - \Delta\rho\, \Delta\!\left( \frac{m^2}{\rho} \right) } \right]. \qquad (A.8.9)
Thus, for the P-system, by fixing a convenient value of the parameter \bar\rho, we have been able to obtain a unique solution for the intermediate state \bar{\mathbf{w}} in terms of the variations of the conservative variable \mathbf{w} and the variations of m^2/\rho and P. We notice in particular that, in the special case \Delta m = 0, with m = m_\ell = m_r, we obtain
\bar m = \pm\, (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right) \frac{m}{\sqrt{\rho_\ell\, \rho_r}}. \qquad (A.8.10)
Surprisingly enough, even though \Delta m = 0, the momentum \bar m of the intermediate state is in general different from m_r = m_\ell = m whenever \rho_\ell \ne \rho_r. We now have to determine the physically relevant solution between the two solutions just found. Let us first consider the discriminant in (A.8.9) and substitute in it the variations \Delta m = m_r - m_\ell and \Delta\rho = \rho_r - \rho_\ell, to give
(\Delta m)^2 - \Delta\rho\, \Delta\!\left( \frac{m^2}{\rho} \right)
= m_r^2 - 2 m_\ell m_r + m_\ell^2 - (\rho_r - \rho_\ell)\left( \frac{m_r^2}{\rho_r} - \frac{m_\ell^2}{\rho_\ell} \right)
= \frac{\rho_\ell}{\rho_r}\, m_r^2 - 2 m_\ell m_r + \frac{\rho_r}{\rho_\ell}\, m_\ell^2
= \frac{(\rho_\ell m_r - \rho_r m_\ell)^2}{\rho_\ell\, \rho_r}. \qquad (A.8.11)
The denominator is always positive whereas the numerator is only nonnegative. Therefore, the second equation in (A.8.9) has two real distinct solutions except when
\frac{m_\ell}{\rho_\ell} = \frac{m_r}{\rho_r}, \qquad (A.8.12)
in which case the root is double. This special situation corresponds to a constant velocity across the left and right states, and the solution is deduced immediately from (A.8.9), namely,
\bar\rho = (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right), \qquad
\bar m = \frac{m_\ell}{\rho_\ell}\, (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right) = \frac{m_r}{\rho_r}\, (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right). \qquad (A.8.13)
Considering now a positive discriminant, the solution of the second equation in (A.8.9) assumes the form
\frac{\bar m}{\bar\rho} = \frac{1}{\rho_r - \rho_\ell} \left[ m_r - m_\ell \mp \frac{\rho_\ell m_r - \rho_r m_\ell}{\sqrt{\rho_\ell\, \rho_r}} \right]. \qquad (A.8.14)
Let us consider first the solution obtained by taking the minus sign in the expression. We have
\frac{\bar m}{\bar\rho}
= \frac{1}{\rho_r - \rho_\ell} \left[ m_r - m_\ell - \frac{\rho_\ell m_r - \rho_r m_\ell}{\sqrt{\rho_\ell \rho_r}} \right]
= \frac{ \sqrt{\rho_\ell \rho_r}\,(m_r - m_\ell) - \rho_\ell m_r + \rho_r m_\ell }{ (\rho_r - \rho_\ell)\, \sqrt{\rho_\ell \rho_r} }
= \frac{ \left( \sqrt{\rho_r} - \sqrt{\rho_\ell} \right) \left( \sqrt{\rho_\ell}\, m_r + \sqrt{\rho_r}\, m_\ell \right) }{ (\rho_r - \rho_\ell)\, \sqrt{\rho_\ell \rho_r} }. \qquad (A.8.15)
Noticing that
\rho_r - \rho_\ell = \left( \sqrt{\rho_r} - \sqrt{\rho_\ell} \right) \left( \sqrt{\rho_r} + \sqrt{\rho_\ell} \right), \qquad (A.8.16)
the solution above reduces to
\frac{\bar m}{\bar\rho} = \frac{ \dfrac{m_\ell}{\sqrt{\rho_\ell}} + \dfrac{m_r}{\sqrt{\rho_r}} }{ \sqrt{\rho_\ell} + \sqrt{\rho_r} }. \qquad (A.8.17)
Although we are considering the case \Delta\rho \ne 0, we notice that for \Delta\rho \to 0 this solution reduces to the one (A.8.4) obtained when \Delta\rho = 0. If we now consider the plus sign in the expression above, we obtain instead
\frac{\bar m}{\bar\rho} = \frac{ \dfrac{m_r}{\sqrt{\rho_r}} - \dfrac{m_\ell}{\sqrt{\rho_\ell}} }{ \sqrt{\rho_r} - \sqrt{\rho_\ell} }. \qquad (A.8.18)
This result is rejected since it leads to an infinite value of the ratio \bar m/\bar\rho when \Delta\rho \to 0, if m_\ell and m_r are kept constant. In conclusion, the uniquely defined solution of the Roe linearization problem for the P-system with a general strictly convex P(\rho), obtained by an appropriate choice of the parameter \bar\rho, has the form
\bar\rho = (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right), \qquad
\bar m = (P')^{-1}\!\left( \frac{\Delta P}{\Delta\rho} \right) \frac{ \dfrac{m_\ell}{\sqrt{\rho_\ell}} + \dfrac{m_r}{\sqrt{\rho_r}} }{ \sqrt{\rho_\ell} + \sqrt{\rho_r} }. \qquad (A.8.19)
We notice that, when dealing with a general pressure function P(\rho), it is necessary to find the complete intermediate state, i.e., one has to determine both \bar\rho and \bar m. This is not the case, for example, for a flux function that is homogeneous of degree one, for which the original Roe method was introduced. In that case, as we will see in the next section, the Jacobian matrix no longer depends on the density, and hence the Roe matrix is completely defined by specifying the ratio \bar m/\bar\rho alone, i.e., the fluid velocity \bar u = \bar m/\bar\rho. The intermediate state found here corresponds to the averaged velocity introduced by Roe (also called the Roe-averaged or \rho-averaged velocity), namely,
\bar u = \frac{ \sqrt{\rho_\ell}\, u_\ell + \sqrt{\rho_r}\, u_r }{ \sqrt{\rho_\ell} + \sqrt{\rho_r} }.
By substituting the solution \bar{\mathbf{w}} = (\bar\rho, \bar m)^T into the Jacobian matrix A(\mathbf{w}) we obtain the Roe matrix
\bar A = A(\bar{\mathbf{w}}) = A(\bar\rho, \bar m) = \begin{pmatrix} 0 & 1 \\ P'(\bar\rho) - \dfrac{\bar m^2}{\bar\rho^2} & \dfrac{2\bar m}{\bar\rho} \end{pmatrix}. \qquad (A.8.20)
In terms of the Roe-averaged velocity, the Roe matrix can be expressed more simply as
\bar A = A(\bar\rho, \bar u) = \begin{pmatrix} 0 & 1 \\ P'(\bar\rho) - \bar u^2 & 2\bar u \end{pmatrix}. \qquad (A.8.21)
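The defining Roe property Ā Δw = Δf can be verified directly. The Python sketch below (an added illustration with arbitrary states) does so for the isothermal gas, where P′ = a² is constant, so the matrix (A.8.21) depends on the Roe velocity only.

```python
import numpy as np

# Roe property check for the isothermal gas P = a**2 * rho:
#   A(w_bar) (w_r - w_l) = f(w_r) - f(w_l)
a = 1.0
f = lambda rho, m: np.array([m, m**2/rho + a**2*rho])

rho_l, m_l = 1.0,  0.2
rho_r, m_r = 0.5, -0.1
u_l, u_r = m_l/rho_l, m_r/rho_r

# Roe-averaged velocity (A.8.17)/(A.8.19)
u_bar = (np.sqrt(rho_l)*u_l + np.sqrt(rho_r)*u_r) \
        / (np.sqrt(rho_l) + np.sqrt(rho_r))

A_bar = np.array([[0.0,              1.0],
                  [a**2 - u_bar**2,  2.0*u_bar]])   # Roe matrix (A.8.21)

dw = np.array([rho_r - rho_l, m_r - m_l])
assert np.allclose(A_bar @ dw, f(rho_r, m_r) - f(rho_l, m_l))
```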
formations between the conservative and physical variables is detailed first for the system of Euler equations of gasdynamics and then for the system of two equations governing the 1D flow of an isothermal ideal gas. The procedure for the treatment of the boundary conditions consists of four distinct steps, which are formulated for an arbitrary hyperbolic system. The effectiveness of Guardone's procedure is assessed by some numerical comparisons for a test problem consisting of the reflection of a shock wave by a plane wall. The exact analytical solution of this problem has been given in section 5.5.
The m components of the unknown \mathbf{w} are the conservative or conservation variables of the system. For the purposes of a numerical solution method, the system of conservation laws \partial_t \mathbf{w} + \partial_x \mathbf{f}(\mathbf{w}) = 0 is also written in the so-called quasi-linear form \partial_t \mathbf{w} + A(\mathbf{w})\, \partial_x \mathbf{w} = 0, where A(\mathbf{w}) = \partial\mathbf{f}(\mathbf{w})/\partial\mathbf{w} is the Jacobian matrix of the flux vector \mathbf{f}(\mathbf{w}). In the same context, one considers the eigenvalue problem associated with the matrix A(\mathbf{w}) and, under the condition of strict hyperbolicity, it is standard to define the matrix R(\mathbf{w}) of right eigenvectors and the matrix L(\mathbf{w}) of left eigenrows. It is standard to normalize the eigenrows on the basis of the right eigenvectors, so that L(\mathbf{w}) R(\mathbf{w}) = I. By means of these two matrices, one defines the characteristic variable \mathbf{v} = L(\mathbf{w})\mathbf{w}, which has m components,
\mathbf{v} = (v_1, v_2, \ldots, v_m)^T. \qquad (B.2.2)
The inverse transformation from the characteristic variables to the conservative ones is obtained by multiplying \mathbf{v} = L(\mathbf{w})\mathbf{w} by R(\mathbf{w}) and using the orthonormalization condition, to give \mathbf{w} = R(\mathbf{w})\mathbf{v}. This relation gives the inverse transformation only implicitly, since the matrix R(\mathbf{w}) involves the conservation variable \mathbf{w} one is looking for, except when the hyperbolic system is linear. Only in such a
linear case do the characteristic variables have the important property of being linear combinations of the conservative unknowns which are governed by uncoupled advection equations, each with its own (constant) propagation velocity. In view of specifying the boundary values, it is convenient to introduce a third set of variables, which are called physical or primitive variables and will be denoted by
\mathbf{p} = (p_1, p_2, \ldots, p_m)^T. \qquad (B.2.3)
For instance, for the Euler equations of gasdynamics, the three conservative variables are
\mathbf{w} = (w_1, w_2, w_3)^T = (\rho, m, E^t)^T, \qquad (B.2.4)
where \rho is the mass density, m is the momentum density and E^t is the total energy density of the gas, while the typical physical variables will be
\mathbf{p} = (p_1, p_2, p_3)^T = (\rho, u, P)^T, \qquad (B.2.5)
where u is the fluid velocity and P is the pressure. The relationship between the conservative variables and the physical ones is a vector-valued function of the type \mathbf{w} = \mathbf{w}(\mathbf{p}) with inverse \mathbf{p} = \mathbf{p}(\mathbf{w}). Always with reference to the Euler equations of gasdynamics, the three equations of the first transformation read
w_1 = \rho = p_1, \qquad w_2 = m = \rho u = p_1 p_2, \qquad (B.2.6)
w_3 = E^t = \rho e^t = \rho\left( e + \tfrac{1}{2} u^2 \right) = \rho\, e(P, \rho) + \tfrac{1}{2}\rho u^2 = p_1\, e(p_3, p_1) + \tfrac{1}{2} p_1 p_2^2. \qquad (B.2.7)
For a polytropic ideal gas e(P, \rho) = (P/\rho)/(\gamma - 1). The relations of the inverse transformation \mathbf{p} = \mathbf{p}(\mathbf{w}) will be
p_1 = \rho = w_1, \qquad p_2 = u = \frac{m}{\rho} = \frac{w_2}{w_1}, \qquad (B.2.8)
p_3 = P = P(e, \rho) = P\!\left( \frac{E^t}{\rho} - \tfrac{1}{2} u^2,\ \rho \right) = P\!\left( \frac{w_3}{w_1} - \frac{w_2^2}{2 w_1^2},\ w_1 \right). \qquad (B.2.9)
For the polytropic ideal gas P(e, \rho) = (\gamma - 1)\rho e. The other interesting example is that of an isothermal ideal gas, which can be obtained by a simple reduction of the polytropic ideal gas. The two conservation variables are
\mathbf{w} = (w_1, w_2)^T = (\rho, m)^T, \qquad (B.2.10)
while the physical variables are typically chosen as
\mathbf{p} = (p_1, p_2)^T = (\rho, u)^T. \qquad (B.2.11)
The nonlinear transformations between these two sets of variables are given by
\mathbf{w}(\mathbf{p}) = \begin{pmatrix} p_1 \\ p_1 p_2 \end{pmatrix} \qquad\text{and}\qquad \mathbf{p}(\mathbf{w}) = \begin{pmatrix} w_1 \\ w_2/w_1 \end{pmatrix}. \qquad (B.2.12)
The matrices of the right eigenvectors and of the left eigenrows for the equations of the isothermal gas are
R(\mathbf{w}) = \begin{pmatrix} 1 & 1 \\ \dfrac{m}{\rho} - a & \dfrac{m}{\rho} + a \end{pmatrix} \qquad (B.2.13)
and
L(\mathbf{w}) = \frac{1}{2a} \begin{pmatrix} \dfrac{m}{\rho} + a & -1 \\[1ex] a - \dfrac{m}{\rho} & 1 \end{pmatrix}. \qquad (B.2.14)
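The pair of transformations (B.2.6)-(B.2.9) can be exercised with a small round-trip test. The Python sketch below (an added illustration; the state values are arbitrary) uses the polytropic relations e(P, ρ) = (P/ρ)/(γ − 1) and P(e, ρ) = (γ − 1)ρe.

```python
import numpy as np

# Round trip p -> w -> p for the polytropic Euler equations, gamma = 1.4.
gamma = 1.4

def w_of_p(p):                       # (B.2.6)-(B.2.7)
    rho, u, P = p
    Et = P/(gamma - 1) + 0.5*rho*u**2
    return np.array([rho, rho*u, Et])

def p_of_w(w):                       # (B.2.8)-(B.2.9)
    rho, m, Et = w
    u = m/rho
    e = Et/rho - 0.5*u**2
    return np.array([rho, u, (gamma - 1)*rho*e])

p = np.array([1.2, 0.3, 2.5])        # arbitrary (rho, u, P)
assert np.allclose(p_of_w(w_of_p(p)), p)
```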
To refer to the available boundary data and the known boundary values of the already computed solution by means of one and the same mathematical symbol, let us define the function \bar w(t), defined only on the right boundary, as follows:
\bar w(t) = \begin{cases} w(x_{\rm right}, t) & \text{if } a(w(x_{\rm right}, t)) \ge 0 \\ w^{\rm ext}(t) & \text{if } a(w(x_{\rm right}, t)) < 0 \end{cases} \qquad (B.3.2)
where a(w) = f'(w) is the advection speed. Thus, the alternative corresponds to a right end which is an outflow or an inflow boundary. Here the function w^{\rm ext}(t) is assumed to be given and can in particular be a prescribed constant value. In a finite volume discretization of the nonlinear conservation law, the boundary value \bar w(t) is taken into account through the evaluation of the flux by means of \bar f = f(\bar w).
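The scalar rule (B.3.2) can be sketched in a few lines. The Python fragment below (an added illustration, not part of the original) uses the Burgers-type flux f(w) = w²/2, so a(w) = w; the numerical data are hypothetical.

```python
# Scalar boundary rule (B.3.2) at the right end of the interval.
f = lambda w: 0.5*w**2
a = lambda w: w                       # advection speed f'(w)

def boundary_value(w_interior, w_ext):
    # outflow (a >= 0): keep the interior trace; inflow (a < 0): impose data
    return w_interior if a(w_interior) >= 0 else w_ext

assert boundary_value( 1.0, 9.0) == 1.0    # outflow: external data ignored
assert boundary_value(-1.0, 9.0) == 9.0    # inflow: external data imposed
f_bar = f(boundary_value(-1.0, 9.0))       # boundary flux f(w_bar)
```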
the exterior of the computational domain and at the same time the proper elements of the solution already determined at the considered time. To solve this problem Alberto Guardone suggested the method described in the following. His method can be implemented in two alternative ways. The first procedure is based on considering variations of the characteristic variables while the second relies upon the evaluation of the characteristic variables themselves.

Step 0. Preliminary: eigenstructure
The preliminary computation is the determination of the eigenstructure of the local solution on the boundary. This means computing initially the eigenvalues
\lambda_\alpha(\bar{\mathbf{w}}), \qquad \alpha = 1, 2, \ldots, m, \qquad (B.4.1)
which are assumed to be set in increasing order. These values are independent of the variables chosen to formulate the eigenvalue problem, so that they can be referred to, collectively, as the vector
\boldsymbol\lambda = (\lambda_1, \lambda_2, \ldots, \lambda_m)^T. \qquad (B.4.2)
Then, to complete the solution eigenstructure on the boundary, the matrices of the right and left eigenvectors are computed,
R(\bar{\mathbf{w}}) \qquad\text{and}\qquad L(\bar{\mathbf{w}}), \qquad (B.4.3)
normalized so that L(\bar{\mathbf{w}})\, R(\bar{\mathbf{w}}) = I. We will denote the number of strictly negative eigenvalues by k, with 0 \le k \le m. This number is independent of the variables chosen to formulate the eigenvalue problem. In the first form of Guardone's procedure considered here, the value of k is taken into account only in an implicit manner, through an operator that selects the components of a vector depending on the sign of their corresponding eigenvalues, as will be shown later.

Step 1. Determination of the characteristic variation
Let us suppose that the value of all of the physical variables \mathbf{p} is known outside the right extreme of the integration interval, and let us denote the vector of these data by
\mathbf{p}^{\rm ext} = (p_1^{\rm ext}, p_2^{\rm ext}, \ldots, p_m^{\rm ext})^T. \qquad (B.4.4)
In practice only some of the components of \mathbf{p}^{\rm ext} will be needed in any specific circumstance, but the selection of the proper components that must actually be taken into account will be achieved automatically, see below.
Let us introduce the difference between the solution vector \bar{\mathbf{w}} and the value of the conservation variable \mathbf{w}(\mathbf{p}^{\rm ext}) corresponding to the external data \mathbf{p}^{\rm ext}:
\Delta\mathbf{w} \equiv \bar{\mathbf{w}} - \mathbf{w}(\mathbf{p}^{\rm ext}). \qquad (B.4.5)
The variation of the characteristic variables is defined simply by
\Delta\mathbf{v} \equiv L(\bar{\mathbf{w}})\, \Delta\mathbf{w} = L(\bar{\mathbf{w}}) \left[ \bar{\mathbf{w}} - \mathbf{w}(\mathbf{p}^{\rm ext}) \right]. \qquad (B.4.6)
Step 2. Selection of the characteristic boundary variations
Each component of the characteristic variation is now defined according to the sign of the corresponding eigenvalue. A negative sign means that the characteristic line enters the right extreme of the integration interval, implying that the boundary value of the corresponding component of the characteristic variation must be chosen from the vector \Delta\mathbf{v}. On the contrary, a positive sign means that the characteristic line is exiting from the right end of the interval, so that the boundary value of the conservative vector must be chosen from the vector \bar{\mathbf{w}} of the solution on the boundary. This corresponds to a zero value for the component of the characteristic variation. In formulas we have
\overline{\Delta v}_\alpha = \begin{cases} \Delta v_\alpha & \text{if } \lambda_\alpha < 0 \\ 0 & \text{if } \lambda_\alpha \ge 0 \end{cases} \qquad (B.4.7)
for \alpha = 1, 2, \ldots, m. The complete barred characteristic variation results in
\overline{\Delta\mathbf{v}} = \left( \overline{\Delta v}_1, \overline{\Delta v}_2, \ldots, \overline{\Delta v}_m \right)^T. \qquad (B.4.8)
We can formalize this step by introducing an operator acting on a vector \mathbf{y} which selects its components only when the corresponding eigenvalue is strictly negative, and puts zero otherwise. Explicitly, we define SN as the operator selecting the part corresponding to the strictly negative eigenvalues, as follows:
{\rm SN}\, \mathbf{y} \equiv \begin{pmatrix} y_1 \text{ if } \lambda_1 < 0, \quad 0 \text{ if } \lambda_1 \ge 0 \\ y_2 \text{ if } \lambda_2 < 0, \quad 0 \text{ if } \lambda_2 \ge 0 \\ \vdots \\ y_m \text{ if } \lambda_m < 0, \quad 0 \text{ if } \lambda_m \ge 0 \end{pmatrix} \qquad (B.4.9)
In terms of this operator, the relation defining the characteristic variation \overline{\Delta\mathbf{v}} on the right extreme of the integration interval can be written in the following form:
\overline{\Delta\mathbf{v}} = {\rm SN}\, \Delta\mathbf{v} = {\rm SN}\, L(\bar{\mathbf{w}}) \left[ \bar{\mathbf{w}} - \mathbf{w}(\mathbf{p}^{\rm ext}) \right]. \qquad (B.4.10)
But the characteristic variables are not the unknowns of the nonlinear hyperbolic problem, so we need a last step to return to the original conservative variables of the system.

Step 3. Back transformation to the conservative variables
This step involves the inverse transformation from the characteristic variables to the conservative ones. We will use the matrix R(\bar{\mathbf{w}}) to perform the back transformation of the variations, in the form
R(\bar{\mathbf{w}})\, \overline{\Delta\mathbf{v}}. \qquad (B.4.11)
The final expression for the sought barred vector of the conservation variables on the right boundary is
\bar{\mathbf{w}} = \bar{\mathbf{w}} - R(\bar{\mathbf{w}})\, \overline{\Delta\mathbf{v}}. \qquad (B.4.12)
By means of \bar{\mathbf{w}} the boundary flux \bar{\mathbf{f}} = \mathbf{f}(\bar{\mathbf{w}}) can be determined and used eventually for the time advancement of the discretized hyperbolic system. The treatment of the boundary condition at the left extreme x = x_{\rm left} is similar, and the final expression for the sought barred vector of the conservation variables on the left boundary is
\bar{\mathbf{w}} = \bar{\mathbf{w}} - R(\bar{\mathbf{w}})\, {\rm SP}\, L(\bar{\mathbf{w}}) \left[ \bar{\mathbf{w}} - \mathbf{w}(\mathbf{p}^{\rm ext}) \right], \qquad (B.4.13)
where one has introduced the operator SP that selects the components of the vector argument when the corresponding eigenvalue is strictly positive and sets zero otherwise.
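Steps 0-3 can be sketched compactly. The Python fragment below (an added illustration, not the original Fortran implementation) applies the procedure to the isothermal gas at the right boundary; for simplicity the external state is passed directly in conservative variables, and all numerical values are hypothetical.

```python
import numpy as np

a = 1.0   # isothermal sound speed

def eigenstructure(w):                      # Step 0: lambda, R, L at w
    u = w[1]/w[0]
    lam = np.array([u - a, u + a])
    R = np.array([[1.0, 1.0], [u - a, u + a]])
    L = np.array([[u + a, -1.0], [-(u - a), 1.0]]) / (2*a)
    return lam, R, L

def right_boundary_state(w_bar, w_ext):
    lam, R, L = eigenstructure(w_bar)
    dv = L @ (w_bar - w_ext)                # Step 1: characteristic variation
    dv_bar = np.where(lam < 0, dv, 0.0)     # Step 2: SN operator (B.4.9)
    return w_bar - R @ dv_bar               # Step 3: back transformation

w_ext = np.array([0.9, 0.45])

# subsonic state: lambda_1 < 0 < lambda_2, one component is imposed
w_new = right_boundary_state(np.array([1.0, 0.5]), w_ext)

# supersonic outflow: both eigenvalues positive, nothing is imposed
w_sup = np.array([1.0, 2.0])
lam, _, _ = eigenstructure(w_sup)
assert np.all(lam > 0)
assert np.allclose(right_boundary_state(w_sup, w_ext), w_sup)
```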
Figure B.3: Comparison of density and momentum of the numerical solution with the exact solution in the shock reflection problem.
C. Algorithms
C.1 Main Program
PROGRAM isot_pig_main

   USE numerical_fluxes   ! system_order
   USE flux_jacobian
   USE riemann_isot_pig_solvers
   USE riemann_isot_pig_profiles
   USE isothermal_pig
   USE plot_procedures
   REAL (KIND=8), PARAMETER :: CFL = 0.75

   REAL (KIND=8), DIMENSION(Np) :: xx, cell_size, rr_e, mm_e

   REAL (KIND=8), DIMENSION(system_order, Np) :: ww, RR

   REAL (KIND=8), DIMENSION(system_order, Ni) :: FF   ! Numerical flux
                                                      ! in 1D  FF == PHI

   REAL (KIND=8), DIMENSION(system_order) :: w_L, w_R, w_I, F_bar

   ! ww  -->  vector of the unknown variables
   ! xa  -->  left extreme of the computational interval
   ! xb  -->  right extreme of the computational interval

   ! The initial discontinuity is at the interval midpoint
   ! w_L  -->  left state
   ! w_R  -->  right state
   ! time_end  -->  final time of the solution

   REAL (KIND=8), DIMENSION(system_order, 2) :: F_SO  ! Surface Numerical Flux
                                                      ! of the Second Order term

   REAL (KIND=8) :: xa = 0, xb = 1, dx, dt, time_end, time, nu_2s, nu_2r

   INTEGER :: scheme, n, i, j, jl, jr

   CHARACTER (LEN=7), DIMENSION(4) :: &
      scheme_name  = (/'Godunov', 'Roe_lin', 'LW_cons', 'highres'/)

   CHARACTER (LEN=3), DIMENSION(4) :: &
      scheme_label = (/'God', 'Roe', 'LWc', 'uhr'/)
   ! INITIAL DATA FOR THE RIEMANN PROBLEM

   time_end = 0.15d0

   w_L(1) = 0.6d0;  w_L(2) = 0.4d0
   w_R(1) = 0.5d0;  w_R(2) = 0.2d0

   CALL exact_riemann_isot_pig_solver (w_L, w_R, w_I, nu_2s, nu_2r)
   dx = (xb - xa) / (Np - 1)   ! uniform ordered grid

   DO j = 1, Np
      xx(j) = xa + (j - 1) * dx
   ENDDO

   cell_size(1) = dx/2         ! first half-cell
   DO j = 2, Np - 1
      cell_size(j) = dx
   ENDDO
   cell_size(Np) = dx/2        ! last half-cell
   DO scheme = 1, SIZE(scheme_name)

      WRITE (*,*);  WRITE (*,*) scheme_name(scheme)

      time = 0;  n = 0

      ! initial condition of the Riemann problem

      WHERE (xx < (xa + xb)/2)
         ww(1,:) = w_L(1)
         ww(2,:) = w_L(2)
      ELSEWHERE
         ww(1,:) = w_R(1)
         ww(2,:) = w_R(2)
      END WHERE

      DO WHILE (time < time_end  .AND.  n < n_max)   ! loop on time

         dt = CFL * dx / MAXVAL(ABS(eigenvalue(ww)) + 1.0d-8)

         IF (time + dt > time_end) THEN
            dt = time_end - time
         ENDIF

         time = time + dt;  n = n + 1

         WRITE (*,*) ' time: ', time
         SELECT CASE (scheme_name(scheme))

            CASE ('Godunov');  CALL god_num_flux     (xx, ww, FF)
            CASE ('Roe_lin');  CALL Roe_num_flux     (xx, ww, FF)
            CASE ('LW_cons');  CALL LW_cons_num_flux (xx, ww, FF, F_SO)
            CASE ('highres');  CALL highres_num_flux (xx, ww, FF, F_SO, limiter)

         END SELECT
         RR = 0

         ! Imposition of the boundary conditions in weak form
         ! through the numerical flux F_bar at the two
         ! end points of the interval

         ! Left end of the interval
         CALL left_boundary  (time - dt/2, primitive(ww(:,1)),  ww(:,1),  F_bar)
         RR(:,1) = RR(:,1) + F_bar

         ! Right end of the interval
         CALL right_boundary (time - dt/2, primitive(ww(:,Np)), ww(:,Np), F_bar)
         RR(:,Np) = RR(:,Np) - F_bar
         ! Contribution of the Surface Numerical Flux of the Second Order term

         IF (scheme_name(scheme) == 'LW_cons'  .OR.  &
             scheme_name(scheme) == 'highres') THEN
            RR(:,1)  = RR(:,1)  + F_SO(:,1)
            RR(:,Np) = RR(:,Np) + F_SO(:,2)
         ENDIF
         DO i = 1, Ni;   j = i;   jl = j;   jr = j + 1
            RR(:,jl) = RR(:,jl) - FF(:,i)
            RR(:,jr) = RR(:,jr) + FF(:,i)
         ENDDO

         ww = ww + RR * SPREAD(dt/cell_size, 1, system_order)

      ENDDO   ! loop on time
      CALL exact_Riemann_isot_pig_profiles (w_L, w_I, w_R, (xa+xb)/2,  &
                                            xx, time, rr_e, mm_e)

      CALL plot_profile_name (xx, ww(1,:), rr_e, scheme_label(scheme)//"_rho")
      CALL plot_profile_name (xx, ww(2,:), mm_e, scheme_label(scheme)//"_mom")

      ! OPTIONAL
      CALL plot_profile_name (xx, ww(2,:)/ww(1,:), mm_e/rr_e,  &
                              scheme_label(scheme)//"_vel")

   ENDDO   ! loop on the schemes

END PROGRAM isot_pig_main
C.2 Modules
MODULE isothermal_pig

   ! P(rho) = a**2 rho,   with   a = SQRT(P(rho)/rho)

   IMPLICIT NONE

   REAL(KIND=8), PARAMETER :: a = 1   ! isothermal sound speed (value assumed)

   INTERFACE sound_speed
      MODULE PROCEDURE sound_speed_s, sound_speed_v
   END INTERFACE

   INTERFACE fundamental_derivative
      MODULE PROCEDURE fundamental_derivative_s, fundamental_derivative_v
   END INTERFACE

   PRIVATE :: sound_speed_s, sound_speed_v,  &
              fundamental_derivative_s, fundamental_derivative_v

CONTAINS
FUNCTION pressure_v(rr) RESULT(PP)

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: rr
   REAL(KIND=8), DIMENSION(SIZE(rr)) :: PP

   PP = a**2 * rr

END FUNCTION pressure_v

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION sound_speed_s(r) RESULT(c)

   IMPLICIT NONE
   REAL(KIND=8), INTENT(IN) :: r
   REAL(KIND=8) :: c

   c = r   ! to avoid warning since for isothermal_pig
           ! the sound speed is the constant a and does
           ! not depend on the density r
   c = a

END FUNCTION sound_speed_s
FUNCTION sound_speed_v(rr) RESULT(cc)

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: rr
   REAL(KIND=8), DIMENSION(SIZE(rr)) :: cc

   cc = rr   ! to avoid warning since for isothermal_pig
             ! the sound speed is the constant a and does
             ! not depend on the density r
   cc = a

END FUNCTION sound_speed_v
!
FUNCTION fundamental_derivative_s(r) RESULT(G)

   IMPLICIT NONE
   REAL(KIND=8), INTENT(IN) :: r
   REAL(KIND=8) :: G

   G = r   ! to avoid warning since for isothermal_pig
           ! the fundamental derivative of gasdynamics is
           ! constant and does not depend on the density r
   G = 2 * a**2

END FUNCTION fundamental_derivative_s
FUNCTION fundamental_derivative_v(rr) RESULT(GG)

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: rr
   REAL(KIND=8), DIMENSION(SIZE(rr)) :: GG

   GG = rr   ! to avoid warning since for isothermal_pig
             ! the fundamental derivative of gasdynamics is
             ! constant and does not depend on the density r
   GG = 2 * a**2

END FUNCTION fundamental_derivative_v
END MODULE isothermal_pig
MODULE flux_jacobian

   USE isothermal_pig

   IMPLICIT NONE

   PRIVATE :: flux_v, flux_a,  Jac_v, Jac_a,  &
              eigenvalue_v, eigenvalue_a,     &
              primitive_v, conservative_v,    &
              primitive_a, conservative_a

CONTAINS

!=======================================================================
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FUNCTION flux_v(w) RESULT(f)

   ! f = f(w)
   ! Flux function of the nonlinear hyperbolic system
   ! simple vector version

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: w
   REAL(KIND=8), DIMENSION(SIZE(w)) :: f
   REAL(KIND=8) :: u, P

   u = w(2)/w(1)
   P = pressure(w(1))

   f(1) = w(2)
   f(2) = w(2)*u + P

END FUNCTION flux_v
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION flux_a(ww) RESULT(ff)

   ! f = f(w)
   ! Flux function of the nonlinear hyperbolic system
   ! array version

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN) :: ww
   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,2)) :: ff
   REAL(KIND=8), DIMENSION(SIZE(ww,2)) :: uu, PP

   uu = ww(2,:)/ww(1,:)
   PP = pressure(ww(1,:))

   ff(1,:) = ww(2,:)
   ff(2,:) = ww(2,:)*uu + PP

END FUNCTION flux_a
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FUNCTION Jac_v(w) RESULT(J)

   ! Jacobian matrix of the Euler equations of gasdynamics
   ! Simple version

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: w
   REAL(KIND=8), DIMENSION(SIZE(w), SIZE(w)) :: J
   REAL(KIND=8) :: u, c2

   u  = w(2)/w(1)
   c2 = sound_speed(w(1))**2

   J(1,1) = 0;          J(1,2) = 1
   J(2,1) = c2 - u*u;   J(2,2) = 2*u

END FUNCTION Jac_v

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION Jac_a(ww) RESULT(JJ)

   ! Jacobian matrix of the Euler equations of gasdynamics
   ! Version extended to arrays

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN) :: ww
   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,1), SIZE(ww,2)) :: JJ
   REAL(KIND=8), DIMENSION(SIZE(ww,2)) :: uu, cc2

   uu  = ww(2,:)/ww(1,:)
   cc2 = sound_speed(ww(1,:))**2

   JJ(1,1,:) = 0;            JJ(1,2,:) = 1
   JJ(2,1,:) = cc2 - uu*uu;  JJ(2,2,:) = 2*uu

END FUNCTION Jac_a
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
FUNCTION eigenvalue_v(w) RESULT(lambda)

   ! Nonlinear system of Euler equations of gasdynamics
   ! Simple version

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: w
   REAL(KIND=8), DIMENSION(SIZE(w)) :: lambda
   REAL(KIND=8) :: u, c

   u = w(2)/w(1)
   c = sound_speed(w(1))

   lambda(1) = u - c
   lambda(2) = u + c

END FUNCTION eigenvalue_v

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION eigenvalue_a(ww) RESULT(lambda)

   ! Nonlinear system of Euler equations of gasdynamics
   ! Extended array version

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN) :: ww
   REAL(KIND=8), DIMENSION(SIZE(ww,1), SIZE(ww,2)) :: lambda
   REAL(KIND=8), DIMENSION(SIZE(ww,2)) :: uu, cc

   uu = ww(2,:)/ww(1,:)
   cc = sound_speed(ww(1,:))

   lambda(1,:) = uu - cc
   lambda(2,:) = uu + cc

END FUNCTION eigenvalue_a

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SUBROUTINE eigenstructure (w,  lambda, L, R)

   ! Nonlinear system of Euler equations of gasdynamics

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: w
   REAL(KIND=8), DIMENSION(:),   INTENT(OUT) :: lambda
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: L, R

   REAL(KIND=8) :: u, c, G   ! fundamental derivative

   u = w(2)/w(1)
   c = sound_speed(w(1))

   lambda(1) = u - c
   lambda(2) = u + c

   G = fundamental_derivative(w(1))

   R(1,1) = 1;      R(1,2) = 1
   R(2,1) = u - c;  R(2,2) = u + c
   R = (w(1)/(G*c)) * R

   L(1,1) = c + u;  L(1,2) = -1
   L(2,1) = c - u;  L(2,2) = 1
   L = G/(2*w(1)) * L
END SUBROUTINE eigenstructure
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION conservative_v(primitives) RESULT(w)

   ! primitive(1) = mass density
   ! primitive(2) = velocity

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: primitives
   REAL(KIND=8), DIMENSION(SIZE(primitives)) :: w

   w(1) = primitives(1)
   w(2) = primitives(1) * primitives(2)

END FUNCTION conservative_v

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION primitive_v(conservatives) RESULT(p)

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:), INTENT(IN) :: conservatives
   REAL(KIND=8), DIMENSION(SIZE(conservatives)) :: p

   p(1) = conservatives(1)
   p(2) = conservatives(2) / conservatives(1)

END FUNCTION primitive_v

!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION conservative_a(primitives) RESULT(ww)

   ! primitive(1) = mass density
   ! primitive(2) = velocity

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN) :: primitives
   REAL(KIND=8), DIMENSION(SIZE(primitives,1), SIZE(primitives,2)) :: ww

   ww(1,:) = primitives(1,:)
   ww(2,:) = primitives(1,:) * primitives(2,:)

END FUNCTION conservative_a
!+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

FUNCTION primitive_a(conservatives) RESULT(pp)

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(:,:), INTENT(IN) :: conservatives
   REAL(KIND=8), DIMENSION(SIZE(conservatives,1), SIZE(conservatives,2)) :: pp

   pp(1,:) = conservatives(1,:)
   pp(2,:) = conservatives(2,:) / conservatives(1,:)

END FUNCTION primitive_a

END MODULE flux_jacobian
MODULE Riemann_isot_pig_solvers

   USE flux_jacobian

   IMPLICIT NONE
   INTERFACE Roe_linearization
      MODULE PROCEDURE Roe_linearization_isot_pig_l_LR
      MODULE PROCEDURE Roe_linearization_isot_pig_int
   END INTERFACE Roe_linearization
   PRIVATE :: Roe_linearization_isot_pig_l_LR,   &
              Roe_linearization_isot_pig_int,    &
              exact_Riemann_isot_pig,            &
              limits_relative_velocity_isot_pig, &
              transonic_rarefaction_isot_pig,    &
              loci_isot_pig

CONTAINS

!=======================================================================

SUBROUTINE Roe_linearization_isot_pig_l_LR (wl, wr,  lambda, L, R)
   ! Solution of the linear Riemann problem for the Roe matrix
   ! w is the vector of the conservative variables:
   ! mass density and momentum density

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(2),   INTENT(IN)  :: wl, wr
   REAL(KIND=8), DIMENSION(2),   INTENT(OUT) :: lambda
   REAL(KIND=8), DIMENSION(2,2), INTENT(OUT) :: L, R

   REAL(KIND=8) :: ul, ur, sl, sr, u

   ! Determination of Roe intermediate state
   ul = wl(2)/wl(1)
   ur = wr(2)/wr(1)

   ! Roe's averaging
   sl = SQRT(wl(1));  sr = SQRT(wr(1))
   u  = (ul * sl  +  ur * sr) / (sl + sr)
   ! eigenvalues and right/left eigenvectors of Roe matrix
   lambda(1) = u - a;   lambda(2) = u + a

   R(1,1) = 1;      R(1,2) = 1
   R(2,1) = u - a;  R(2,2) = u + a

   L(1,1) = a + u;  L(1,2) = -1
   L(2,1) = a - u;  L(2,2) = 1
   L = L/(2*a)

END SUBROUTINE Roe_linearization_isot_pig_l_LR
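The defining property of the Roe average used above, namely that the Jacobian evaluated at the averaged velocity reproduces the flux difference exactly, can be cross-checked numerically. The following small Python sketch is not part of the Fortran listings; the sound speed a = 1 and the two states are arbitrary test values:

```python
import numpy as np

a = 1.0   # isothermal sound speed (arbitrary test value)

def flux(w):
    """Flux of the isothermal system: f = (m, m**2/rho + a**2 rho)."""
    rho, m = w
    return np.array([m, m**2/rho + a**2*rho])

def roe_matrix(wl, wr):
    """Jacobian evaluated at the Roe-averaged velocity u_hat."""
    sl, sr = np.sqrt(wl[0]), np.sqrt(wr[0])
    u = (wl[1]/wl[0]*sl + wr[1]/wr[0]*sr) / (sl + sr)
    return np.array([[0.0,        1.0],
                     [a**2 - u**2, 2*u]])

wl = np.array([0.6, 0.4])   # left state  (rho, m), as in the test program
wr = np.array([0.5, 0.2])   # right state

lhs = roe_matrix(wl, wr) @ (wr - wl)
rhs = flux(wr) - flux(wl)
print(np.allclose(lhs, rhs))   # Roe property: exact flux-difference splitting
```

For the isothermal p-system this identity holds exactly for any pair of states, which is why the square-root-weighted velocity average is used.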
!=======================================================================

SUBROUTINE Roe_linearization_isot_pig_int (wl, wr,  lambda, wi)

   ! Solution of the linear Riemann problem for the Roe matrix
   ! w is the vector of the conservative variables:
   ! mass density and momentum density

   IMPLICIT NONE
   REAL(KIND=8), DIMENSION(2), INTENT(IN)  :: wl, wr
   REAL(KIND=8), DIMENSION(2), INTENT(OUT) :: lambda, wi

   REAL(KIND=8), DIMENSION(2,2) :: R, L
   REAL(KIND=8), DIMENSION(2)   :: dv
   REAL(KIND=8) :: ul, ur, sl, sr, u
   ! Determination of Roe intermediate state
   ul = wl(2)/wl(1)
   ur = wr(2)/wr(1)

   ! Roe's averaging
   sl = SQRT(wl(1));  sr = SQRT(wr(1))
   u  = (ul * sl  +  ur * sr) / (sl + sr)
   ! eigenvalues and right/left eigenvectors of Roe matrix
   lambda(1) = u - a;   lambda(2) = u + a

   R(1,1) = 1;      R(1,2) = 1
   R(2,1) = u - a;  R(2,2) = u + a

   L(1,1) = a + u;  L(1,2) = -1
   L(2,1) = a - u;  L(2,2) = 1
   L = L/(2*a)

   ! intermediate state of the solution of the linear Riemann problem
   dv = MATMUL(L, wr - wl)   ! characteristic variation
   wi = wl + dv(1) * R(:,1)

   WRITE (*,*) 'dv(1) = ', dv(1)

END SUBROUTINE Roe_linearization_isot_pig_int
!=======================================================================
!   Solution of the Riemann Problem for the
!   Isothermal version of the Polytropic Ideal Gas
!
!   Iterative Solution by Newton method
!
!      phi(rho) = 0,   where   phi(rho) = m_L(rho) - m_R(rho)
!
!-----------------------------------------------------------------------
!   INPUT
!
!   wl : left state  (conservative variables)
!   wr : right state (conservative variables)
!
!   OPTIONAL
!
!   rel_err    : relative error (of pressure) to stop the iterations
!                [default = 1.0d-7]
!   iterations : maximum number of iterations
!                [default = 100]
!
!-----------------------------------------------------------------------
!   OUTPUT
!
!   wi : intermediate state
!
!   OPTIONAL
!
!   iterations : number of actually computed iterations
!
!-----------------------------------------------------------------------
!   The program also calculates the relative velocities
!   nu_2r and nu_2s that define the limits between
!   solutions containing one shock wave and one rarefaction wave
!   and those with two rarefaction waves or two shock waves,
!   respectively.
!=======================================================================

SUBROUTINE exact_Riemann_isot_pig (wl, wr,  wi,  rel_err, iterations)

   IMPLICIT NONE

   REAL(KIND=8), INTENT(IN),  DIMENSION(:) :: wl, wr
   REAL(KIND=8), INTENT(OUT), DIMENSION(:) :: wi
   REAL(KIND=8), INTENT(IN),    OPTIONAL :: rel_err
   INTEGER,      INTENT(INOUT), OPTIONAL :: iterations
REAL(KIND=8) :: rho, rhol, rhor, D_rho, nu_2s, nu_2r, m1, m2, Dm1, Dm2
   REAL(KIND=8) :: rel_err_Int = 1.0d-07   ! default
   INTEGER :: max_it_Int = 100             ! default
   INTEGER :: it
   IF (PRESENT(rel_err))    rel_err_Int = rel_err
   IF (PRESENT(iterations)) max_it_Int = iterations
   rhol = wl(1);  rhor = wr(1)
   CALL limits_relative_velocity (wl, wr,  nu_2s, nu_2r)

   ! average for the initial guess
   rho = (rhol + rhor)/2
   D_rho = rhor - rhol

   DO it = 1, max_it_Int
      CALL loci_isot_pig (1, wl, rho,  m1, Dm1)
      CALL loci_isot_pig (2, wr, rho,  m2, Dm2)

      D_rho = - (m1 - m2)/(Dm1 - Dm2)
      rho = rho + D_rho

      ! convergence check
      IF (ABS(D_rho) <= rel_err_Int * rho) THEN

         CALL loci_isot_pig (1, wl, rho,  m1)
         wi(1) = rho;  wi(2) = m1

         IF (PRESENT(iterations)) iterations = it

         WRITE (*,*) 'iterations = ', it
         RETURN

      ENDIF

      ! WRITE (*,*) 'it = ', it

   ENDDO

   PRINT*, 'exact_Riemann_isot_pig solver fails to converge'

END SUBROUTINE exact_Riemann_isot_pig
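The Newton iteration on phi(rho) = m_L(rho) - m_R(rho) described above can be paraphrased compactly outside the Fortran listings. A minimal Python sketch, assuming a sound speed a = 1 and using a finite-difference slope instead of the analytic DmDr:

```python
import math

a = 1.0   # isothermal sound speed (arbitrary test value)

def m_locus(i, w, rho):
    """Momentum on the i-locus (i = 1 left, i = 2 right) through w = (rho_0, m_0):
    Hugoniot locus for rho > rho_0 (shock), integral curve otherwise."""
    s = (-1)**i
    rho0, m0 = w
    rat = rho/rho0
    if rho > rho0:
        return m0*rat + s*a*(rho - rho0)*math.sqrt(rat)
    return m0*rat + s*a*rho*math.log(rat)

def exact_riemann(wl, wr, rel_err=1e-10, max_it=100):
    """Newton iteration phi(rho) = 0 with a numerical derivative."""
    rho = 0.5*(wl[0] + wr[0])   # average as initial guess
    for _ in range(max_it):
        h = 1e-7*rho
        phi  = m_locus(1, wl, rho)     - m_locus(2, wr, rho)
        dphi = (m_locus(1, wl, rho+h) - m_locus(2, wr, rho+h) - phi)/h
        d = -phi/dphi
        rho += d
        if abs(d) <= rel_err*rho:
            return rho, m_locus(1, wl, rho)
    raise RuntimeError("Newton iteration failed to converge")

wl, wr = (0.6, 0.4), (0.5, 0.2)   # states of the test program
rho_i, m_i = exact_riemann(wl, wr)
# at convergence the intermediate state lies on both loci
print(abs(m_locus(1, wl, rho_i) - m_locus(2, wr, rho_i)) < 1e-8)
```

The Fortran version uses the analytic slope DmDr in place of the finite difference, which gives the usual quadratic convergence of Newton's method.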
!================================================================

SUBROUTINE loci_isot_pig (i, w_, rho,  m, DmDr)

   IMPLICIT NONE

   INTEGER,                    INTENT(IN)  :: i
   REAL(KIND=8), DIMENSION(:), INTENT(IN)  :: w_
   REAL(KIND=8),               INTENT(IN)  :: rho
   REAL(KIND=8),               INTENT(OUT) :: m
   REAL(KIND=8), OPTIONAL,     INTENT(OUT) :: DmDr
   REAL(KIND=8) :: s, rho_, m_, rat, sqr, lnr

   s = (-1)**i
   rho_ = w_(1);  m_ = w_(2)
   rat = rho/rho_

   IF (rho > rho_) THEN ! Hugoniot locus

      sqr = SQRT(rat)
      m = m_ * rat  +  s * a * (rho - rho_) * sqr

      IF (PRESENT(DmDr))  DmDr = m_/rho_  +  s * a * (3*rat - 1)/(2*sqr)

   ELSE ! Integral curve

      lnr = LOG(rat)
      m = m_ * rat  +  s * a * rho * lnr

      IF (PRESENT(DmDr))  DmDr = m_/rho_  +  s * a * (lnr + 1)

   ENDIF

END SUBROUTINE loci_isot_pig
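The slope expressions DmDr can be checked against finite differences of the locus itself. A small Python sketch (outside the Fortran listings, a = 1 assumed) mirroring the two branches of loci_isot_pig; the Hugoniot-branch slope m'(rho) = m_/rho_ + s a (3 rat - 1)/(2 sqrt(rat)) follows by differentiating the locus formula:

```python
import math

a = 1.0   # isothermal sound speed (arbitrary test value)

def m_and_slope(i, w, rho):
    """Momentum m(rho) on the i-locus through w = (rho_, m_) and its slope."""
    s = (-1)**i
    rho_, m_ = w
    rat = rho/rho_
    if rho > rho_:                        # Hugoniot locus (shock branch)
        sqr = math.sqrt(rat)
        m  = m_*rat + s*a*(rho - rho_)*sqr
        dm = m_/rho_ + s*a*(3*rat - 1)/(2*sqr)
    else:                                 # integral curve (rarefaction branch)
        lnr = math.log(rat)
        m  = m_*rat + s*a*rho*lnr
        dm = m_/rho_ + s*a*(lnr + 1)
    return m, dm

w = (0.6, 0.4)
for rho in (0.45, 0.9):                   # one point on each branch
    h = 1e-6
    dm = m_and_slope(1, w, rho)[1]
    fd = (m_and_slope(1, w, rho + h)[0] - m_and_slope(1, w, rho - h)[0]) / (2*h)
    assert abs(dm - fd) < 1e-6            # analytic slope matches finite difference
```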
!================================================================

SUBROUTINE limits_relative_velocity_isot_pig (wl, wr,  nu_2s, nu_2r)

   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(:), INTENT(IN)  :: wl, wr
   REAL(KIND=8),               INTENT(OUT) :: nu_2s, nu_2r
   REAL(KIND=8) :: rhol, rhor, ul, ur, rho_min, rho_MAX

   ! limits of relative velocity for 2 rarefaction waves and 2 shocks

   rhol = wl(1);  ul = wl(2)/wl(1)
   rhor = wr(1);  ur = wr(2)/wr(1)

   rho_min = MIN(rhol, rhor)
   rho_MAX = MAX(rhol, rhor)

   nu_2s = - a * ABS(rhol - rhor) / SQRT(rhol * rhor)
   nu_2r =   a * ABS(LOG(rhol/rhor))

   WRITE (*,*)
   WRITE (*,*) 'nu_2s   = ', nu_2s
   WRITE (*,*) 'nu_2r   = ', nu_2r
   WRITE (*,*) 'ur - ul = ', ur - ul
   WRITE (*,*)

END SUBROUTINE limits_relative_velocity_isot_pig
!================================================================

SUBROUTINE transonic_rarefaction_isot_pig (i, w,  ws)

   ! Sonic values of the rarefaction wave
   ! similarity solution at xi = 0

   IMPLICIT NONE

   INTEGER,                    INTENT(IN)  :: i   ! eigenvalue
   REAL(KIND=8), DIMENSION(:), INTENT(IN)  :: w
   REAL(KIND=8), DIMENSION(:), INTENT(OUT) :: ws

   REAL(KIND=8), DIMENSION(SIZE(w)) :: lambda
   REAL(KIND=8) :: rho, E

   lambda = eigenvalue(w)
   rho = w(1)

   SELECT CASE (i)
      CASE (1)

         E = EXP(lambda(1)/a)
         ws(1) = rho * E
         ws(2) = rho * a * E        ! sonic state: u_s = a

      CASE (2)

         E = EXP(-lambda(2)/a)
         ws(1) = rho * E
         ws(2) = - rho * a * E      ! sonic state: u_s = -a

   END SELECT
END SUBROUTINE transonic_rarefaction_isot_pig
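The signs in the two cases can be verified by checking that the corresponding eigenvalue vanishes at the sonic state, which is the defining property of the similarity solution at xi = 0. A small Python sketch (outside the Fortran listings, a = 1 assumed):

```python
import math

a = 1.0   # isothermal sound speed (arbitrary test value)

def eigenvalues(w):
    rho, m = w
    u = m/rho
    return (u - a, u + a)

def sonic_state(i, w):
    """State at xi = 0 inside an i-rarefaction fan through w (i = 1 or 2)."""
    lam = eigenvalues(w)
    rho = w[0]
    if i == 1:
        E = math.exp(lam[0]/a)
        return (rho*E,  rho*a*E)    # u_s = +a
    E = math.exp(-lam[1]/a)
    return (rho*E, -rho*a*E)        # u_s = -a

w = (0.7, 0.3)   # arbitrary state
print(abs(eigenvalues(sonic_state(1, w))[0]) < 1e-12)   # lambda_1 = 0 at the sonic point
print(abs(eigenvalues(sonic_state(2, w))[1]) < 1e-12)   # lambda_2 = 0 at the sonic point
```

The density factors follow from the Riemann invariants u + a ln(rho) (1-family) and u - a ln(rho) (2-family), which are constant across the respective fans.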
END MODULE Riemann_isot_pig_solvers
FUNCTION psi_lim(a, b, limiter) RESULT(psi)
   ! Limiter function in Rebay's form, as a function
   ! of two variables: a = Du-centred, b = Du-upwind
   !
   ! limiter is an OPTIONAL parameter
   !
   ! DEFAULT       --> van Leer
   !
   ! limiter == -2 --> second-order scheme, no limiter
   ! limiter ==  0 --> no limiter, first-order upwind
   !
   ! limiter ==  1 --> van Leer
   ! limiter ==  2 --> minmod
   ! limiter ==  3 --> superbee
   ! limiter ==  4 --> Monotonized Central

   IMPLICIT NONE

   REAL(KIND=8), INTENT(IN) :: a, b
   INTEGER, OPTIONAL, INTENT(IN) :: limiter
   REAL(KIND=8) :: psi

   REAL(KIND=8), PARAMETER :: zero = 0,  half = 0.5d0

   IF (PRESENT(limiter)) THEN

      SELECT CASE(limiter)

         CASE(-2) ! second-order scheme, no limiter
            psi = b

         CASE(0) ! no limiter, first-order upwind
            psi = 0

         CASE(1) ! van Leer
            psi = (a*ABS(b) + ABS(a)*b)/(ABS(a) + ABS(b) + 1.0d-8)

         CASE(2) ! minmod
            psi = (SIGN(half,a) + SIGN(half,b)) * MIN(ABS(a), ABS(b))

         CASE(3) ! superbee
            psi = (SIGN(half,a) + SIGN(half,b))  &
                * MAX( MIN(ABS(a), 2*ABS(b)),  MIN(2*ABS(a), ABS(b)) )

         CASE(4) ! Monotonized Central
            psi = MAX( zero, MIN((a+b)/2, 2*a, 2*b) )  &
                + MIN( zero, MAX((a+b)/2, 2*a, 2*b) )
         CASE DEFAULT
            WRITE (*,*) 'Unknown limiter specified';  STOP

      END SELECT

   ELSE ! default limiter: van Leer

      psi = (a*ABS(b) + ABS(a)*b)/(ABS(a) + ABS(b) + 1.0d-8)

   ENDIF

END FUNCTION psi_lim
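The limiter formulas translate directly into other languages, which makes their properties easy to sanity-check. A small Python transcription (outside the Fortran listings) of the four limited cases, using `copysign` for Fortran's SIGN:

```python
from math import copysign

half = 0.5
eps = 1e-8   # small parameter to avoid 0/0, as in the Fortran version

def van_leer(a, b):
    return (a*abs(b) + abs(a)*b) / (abs(a) + abs(b) + eps)

def minmod(a, b):
    # Fortran: (SIGN(half,a) + SIGN(half,b)) * MIN(ABS(a), ABS(b))
    return (copysign(half, a) + copysign(half, b)) * min(abs(a), abs(b))

def superbee(a, b):
    return (copysign(half, a) + copysign(half, b)) \
         * max(min(abs(a), 2*abs(b)), min(2*abs(a), abs(b)))

def mc(a, b):
    # Monotonized Central
    return max(0.0, min((a+b)/2, 2*a, 2*b)) + min(0.0, max((a+b)/2, 2*a, 2*b))

# every limiter returns 0 when the two slope estimates have opposite sign,
# which is what enforces monotonicity near extrema
for f in (van_leer, minmod, superbee, mc):
    assert abs(f(1.0, -0.5)) < 1e-8
```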
      CASE DEFAULT
         WRITE (*,*) 'The index for the eigenvalues must be 1 for'
         WRITE (*,*) 'the first eigenvalue and 2 for the second one'
         WRITE (*,*)
         STOP

   END SELECT

END FUNCTION eigenvalue
SUBROUTINE Riemann_exact_sol_iig (u_l, u_i, u_r, x0, T, xx,  uu)

   IMPLICIT NONE

   REAL(KIND=8), DIMENSION(2),   INTENT(IN)  :: u_l, u_i, u_r
   REAL(KIND=8),                 INTENT(IN)  :: x0, T
   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: uu

   REAL(KIND=8), DIMENSION(SIZE(xx)) :: ee
   REAL(KIND=8) :: rho_l, rho_i, rho_r, vb, ve, wb, we, a,  &
                   x1, x2, x3, x4

   a = s_s_T

   rho_l = u_l(1);  rho_i = u_i(1);  rho_r = u_r(1)

   ! Evaluation of the limiting velocities of each wave under a
   ! condition on the density, by distinguishing shock from rarefaction:
   !
   ! shock case: shock speed s from the jump condition
   ! rarefaction case: the limiting velocities are the eigenvalues
   ! left wave
   IF (rho_l < rho_i) THEN ! LEFT SHOCK
      vb = (u_i(2) - u_l(2)) / (rho_i - rho_l)
      ve = vb
   ELSE ! LEFT RAREFACTION
      vb = eigenvalue(u_l, 1)
      ve = eigenvalue(u_i, 1)
   ENDIF
   ! right wave
   IF (rho_i > rho_r) THEN ! RIGHT SHOCK
      wb = (u_r(2) - u_i(2)) / (rho_r - rho_i)
      we = wb
   ELSE ! RIGHT RAREFACTION
      wb = eigenvalue(u_i, 2)
      we = eigenvalue(u_r, 2)
   ENDIF

   ! positions of the wave limits at time T
   x1 = x0 + vb*T;  x2 = x0 + ve*T
   x3 = x0 + wb*T;  x4 = x0 + we*T
   ! After the evaluation of the limiting velocities and
   ! positions, the exact solution can be calculated

   WHERE (xx <= x1) ! inside the left state
      uu(1,:) = u_l(1)
      uu(2,:) = u_l(2)
   END WHERE

   WHERE (x1 < xx  .AND.  xx <= x2) ! inside the left rarefaction
      ee = EXP(-((xx - x0)/T - vb)/a)
      uu(1,:) = u_l(1) * ee
      uu(2,:) = u_l(1) * ee * ((xx - x0)/T + a)
   END WHERE

   WHERE (x2 < xx  .AND.  xx <= x3) ! inside the intermediate state
      uu(1,:) = u_i(1)
      uu(2,:) = u_i(2)
   END WHERE

   WHERE (x3 < xx  .AND.  xx <= x4) ! inside the right rarefaction
      ee = EXP(((xx - x0)/T - we)/a)
      uu(1,:) = u_r(1) * ee
      uu(2,:) = u_r(1) * ee * ((xx - x0)/T - a)
   END WHERE

   WHERE (x4 < xx) ! inside the right state
      uu(1,:) = u_r(1)
      uu(2,:) = u_r(2)
   END WHERE

END SUBROUTINE Riemann_exact_sol_iig
SUBROUTINE exact_shock_reflection_iig (rho_i, rho_w, T, xx,  uu)

   ! Propagation of a shock wave toward a right wall and
   ! reflection of it at a time tc
   !
   ! For given density values on the left and right of the
   ! discontinuity, the speed of the incident shock is calculated.
   !
   ! The value rho_i must be greater than that of rho_w
   ! to have a right propagating shock

   IMPLICIT NONE

   REAL(KIND=8),                 INTENT(IN)  :: rho_i, rho_w, T
   REAL(KIND=8), DIMENSION(:),   INTENT(IN)  :: xx
   REAL(KIND=8), DIMENSION(:,:), INTENT(OUT) :: uu

   REAL(KIND=8) :: m_i, s_inc, s_rif, tc, xd,  &
                   L, rho_star, x_shock, a

   REAL(KIND=8), PARAMETER :: m_w = 0
   ! Evaluation of the interval length and of the initial
   ! position (xd) of the discontinuity

   a = s_s_T
   L = MAXVAL(xx) - MINVAL(xx)
   xd = MINVAL(xx) + L/2   ! the shock is at the interval midpoint
   m_i   = a * rho_i * (1 - rho_w/rho_i) * SQRT(rho_i/rho_w)
   s_inc = a * SQRT(rho_i/rho_w)

   tc = L / (2*s_inc)
   PRINT *, tc

   rho_star = rho_i**2 / rho_w
   s_rif = - a * SQRT(rho_w/rho_i)   ! the reflected shock moves leftward
   IF (T <= tc) THEN ! Before shock-wave reflection

      x_shock = xd + s_inc * T

      WHERE (xx <= x_shock) ! left state
         uu(1,:) = rho_i
         uu(2,:) = m_i
      ELSEWHERE ! right state
         uu(1,:) = rho_w
         uu(2,:) = m_w
      END WHERE

   ELSE ! After shock-wave reflection

      x_shock = MAXVAL(xx) + s_rif * (T - tc)

      WHERE (xx <= x_shock) ! left state
         uu(1,:) = rho_i
         uu(2,:) = m_i
      ELSEWHERE ! right state
         uu(1,:) = rho_star
         uu(2,:) = m_w
      END WHERE

   ENDIF
END SUBROUTINE exact_shock_reflection_iig
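The incident-shock formulas used above satisfy the Rankine-Hugoniot conditions of the isothermal system on both equations. A small Python cross-check (outside the Fortran listings; a = 1 and the densities are arbitrary test values with rho_i > rho_w):

```python
import math

a = 1.0                      # isothermal sound speed (arbitrary test value)
rho_i, rho_w = 2.0, 1.0      # rho_i > rho_w: right-propagating shock
m_w = 0.0                    # gas at rest ahead of the shock

m_i   = a * rho_i * (1 - rho_w/rho_i) * math.sqrt(rho_i/rho_w)
s_inc = a * math.sqrt(rho_i/rho_w)

# Rankine-Hugoniot on both equations:
#   s [rho] = [m]     and     s [m] = [m**2/rho + a**2 rho]
f2 = lambda rho, m: m**2/rho + a**2*rho

print(abs(s_inc*(rho_i - rho_w) - (m_i - m_w)) < 1e-12)
print(abs(s_inc*(m_i - m_w) - (f2(rho_i, m_i) - f2(rho_w, m_w))) < 1e-12)
```

Both identities hold exactly for any rho_i > rho_w > 0, which confirms that s_inc is the speed of an admissible shock separating the moving state (rho_i, m_i) from the still gas (rho_w, 0).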