Sinc-Galerkin solution of second-order hyperbolic problems in multiple space dimensions by Kelly Marie McArthur A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics Montana State University © Copyright by Kelly Marie McArthur (1987) Abstract: A fully Galerkin method in space and time is developed for the second-order hyperbolic problem in one, two, and three space dimensions. Using sinc basis functions and the sinc quadrature rule, the discrete system arising from the orthogonalization of the residual is easily assembled. (Two equivalent matrix formulations of the systems are given. One lends itself to scalar computation while the other is more natural in a vector computing environment. In fact, it is shown that passing from . one to the other is simply a notational change. In either setting the move from one to two or three space dimensions does not significantly affect the ease of implementation. Intermediate diagonalization of each matrix representing the discretization of a second partial leads to the diagonalization of the overall system. The method was tested on an extensive class of problems. Numerical results indicate the method has an exponential convergence rate for analytic and singular problems. Moreover that is independent of the spatial dimension.  SINC-GALERKIN SOLUTION OF SECOND-ORDER HYPERBOLIC PROBLEMS IN MULTIPLE SPACE DIMENSIONS by Kelly Marie McArthur A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Mathematics MONTANA STATE UNIVERSITY Bozeman, Montana May 1987 £ > 3 7 £ M l l 9 f C-Ojp,^ ii APPROVAL of a thesis submitted by Kelly Marie McArthur This thesis has been read by each member of the thesis committee and has been found to be satisfactory regarding content, English usage, format, citations, bibliographic style, and consistency, and is ready for submission to the College of Graduate Studies. Date ^-Qj m 7 Chairperson, Graduate Committee Approved for the Major Department J Z d / 9 * 7 Date HeauTyMathema^icad Sciences Department Approved for the College of Graduate Studies j r - ? A Graduate DeanDate iii STATEMENT OF PERMISSION TO USE In presenting this thesis in partial fulfillment of the requirements for a doctoral degree at Montana State University, I agree that the Library shall make it available to borrowers under rules of the Library. I further agree that copying of this thesis is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U .S . Copyright Law. Requests for extensive copying or reproduction of this thesis should be referred to University Microfilms International, 300 North Zeeb Road, Ann Arbor, Michigan 48106, to whom I have granted "the exclusive right to reproduce and distribute copies of the dissertation in and from microfilm and the right to reproduce and distribute by abstract in any format." Date iv ACKNOWLEDGMENTS The author thankfully acknowledges the superb typing of Ms. Rene1 Tritz and the invaluable computer assistance of Eric Greenwade. Thanks also go to Professor Norman Eggert for his careful reading of this work. In addition, the author recognizes Professor Frank Stenger for the revival and extension of sine function theory. His clear exposes and willingness to discuss new problems have led to yet another generation of sine function students. Finally, the author thanks Professor Kenneth L . Bowers and Professor John Lund, whose guidance is responsible for this work. They possess the rare ability to teach one to teach oneself. VTABLE OF CONTENTS Page 1. INTRODUCTION.... ................................... I 2. THE SINC-GALERKIN SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS . ....... 8 Interpolation on (-=, = )......................... 9 Quadrature on (-»,<»)............................ 18 Interpolation and Quadrature on Alternative Arcs...................... 22 Sinc-Galerkin Method............................ 26 3. THE SINC-GALERKIN METHOD FOR THE WAVE EQUATION.............................. 35 4. SOLUTION OF THE DISCRETE SINC-GALERKIN SYSTEM... 56 Classical Methods................................ . 56 Solution of the Discrete System in One Spatial Variable................ 61 Solution of the Systems in Two and Three Spatial Variables....................... 65 5. NUMERICAL EXAMPLES OF THE SINC-GALERKIN METHOD.. 71 Exponentially Damped Sine Waves. . ............. 78 Singular Problems.......... 82 Problems Dictating Noncentered Sums in All Indices.......... 86 REFERENCES CITED 91 vi LIST OF TABLES Table Page 1 . Machine Storage Information for the Matrices B ( ^ ) and B^J )........................... 58 2. Numerical Results for Example 5.1............ 79 3. Numerical Results for Example 5.2............... 80 4. Numerical Results for Example 5.3............... 81 5. Numerical Results for the Damped Sine Wave.... 81 6. Numerical Results for Example 5.4....... 82 7. Numerical Results for Example 5.5............... 84 8. . Numerical Results for Example 5.6............... 85 / 9. Numerical Results for the Singular Problems.... 85 10. Numerical Results for Example 5.7............... 86 11. Numerical Results for Example 5.8............... 88 12. Numerical Results for Example 5.9............... 89 13. Numerical Results for Problems Dictating Nbncentered Sums in All Indices. . . ............. 89 vii LIST OF FIGURES Figure Page 1. The Domain of Dependence.......................... 3 2. The Numerical Domain of Dependence for the Explicit, Centered Finite Difference Method.................................. 4 3. S (0,1) (x) = sine (x) , x 6 R ........ .............. 10 4 . The Domain Dg.................... . ................. 14 5. The Conformal Map X ........... ............... . . 23 6. The Regions Dg , Dw , and Dg ........................ 38 7. The Basis Functions Sq (X) and Sq (t ).............. 39 viii ABSTRACT A fully Galerkin method in space and time is developed for the second-order hyperbolic problem in one, two, and three space dimensions. Using sine basis functions and the sine quadrature rule, the discrete system arising from the orthogonalization of the residual is easily assembled. ^Two equivalent matrix formulations of the systems are given. One lends itself to scalar computation while the other is more natural in a vector computing environment. In fact, it is shown that passing from . one to the other is simply a notational change. In either setting the move from one to two or three space dimensions does not significantly affect the ease of implementation. Intermediate diagonalization of each matrix representing the discretization of a second partial leads to the diagonalization of the overall system. The method was tested on an extensive class of problems. Numerical results indicate the method has an exponential convergence rate for analytic and singular problems. Moreover that is independent of the spatial dimension. CHAPTER I INTRODUCTION The general second-order, linear partial differential equation (1.1) Aux?c + Buxy + Cuyy + Dux + Euy + Fu = G , where A, B, C, D, E , F, and G are functions of x and y only, is classified at the point (x,y) via the discriminate (B2 - 4AC)(x,y). That is, when (B2 - 4AC)(x,y) is positive, zero, or negative the equation at (x,y) is hyperbolic, parabolic, or elliptic, respectively. The present chapter reviews classical results for a purely hyperbolic, constant coefficient problem corresponding to A = -I, C = I, and B = D = E = F = 0 . Including boundary and initial conditions, the specific partial differential equation examined is u tt(x,t ) - uxx(x,t) = G (x,t ) (x,t) E (0,1) x (0,“ ) u(0,t) = u (I,t) = 0 t > 0 (1 .2 ) u(x,0) = f(x) 0 < x < I Ut (XzO) = g(x) 0 < x < I This problem is referred to as a one-dimensional wave equation since it models the displacement of a vibrating string with initial displacement f, initial velocity g, and subject to an external force G . 2The equation in (1.2) is representative of one of the two hyperbolic canonical forms due to the absence of the term Ux t - In contrast, the distinguishing feature of the alternative canonical form is that its only second-order term is the mixed partial Uxt. The change of variables f = x + t and 'n = x - t transforms the equation in (1.2) to (1.3) -4u?)? = G((f + i?)/2 , (f - T i ) / 2 ) (?,%) G (0,=) x (-<»,I) It is well-known that the correct change of variables is found by solving two ordinary differential equations which depend on the discriminate. For a complete discussion see Farlow [I]. By integrating (1.3), restoring the variables x and t, and applying the initial and boundary conditions, d'Alembert explicitly solved (1.2) [2]. His result of 1747 is (1.4) u(x,t ) 1 2 x+t f (x + t) + f(x — t) + J g (j-l) + 2(1 " m2)Ui,j-i " Ui,j-2 + (At)^ G (xi 't J - I ) • Here Ui ^ approximates u at (Xi,tj) = (iAx,jat) and m = At/Ax is the ratio of the stepsizes. A detailed derivation of the time-marching scheme (1.7) is included in Ames [4]. Stepping back in time determines the finite difference gridpoints which influence Ui j (see Figure 2 below). These gridpoints blanket the numerical domain of dependence outlined in Figure 2. The CFL criteria states that a necessary condition for the convergence of (1.7) (as At and Ax -» 0) is that the numerical domain of dependence must contain the analytic domain of dependence. This requires m < 1 . The computational impact of restricting m < I is briefly discussed in Chapter 4. t. -- Figure 2. The Numerical Domain of Dependence for the Explicit, Centered Finite Difference Method. 5The method of the present work satisfies the CFL condition trivially. This technique, called the Sinc- Galerkin method, builds an approximate solution for (1.2) valid on the entire domain. The approximate is a truncated generalized Fourier expansion of basis elements which are tensor products of sine functions composed with suitable conformal maps. ,The support of each.basis element is (0,1) x (0,») hence, the method may be termed a spectral method and its numerical domain of dependence is identically the domain of the partial differential equation. To ease the description of the Sinc-Galerkin method applied to (1.2), its counterpart for ordinary differential equations is derived in Chapter 2. Here the basis elements are single sine functions composed with conformal maps. The pertinent sine function properties needed to construct an approximate solution are reviewed. Further, it is shown that when 2N + I basis functions are used to define this approximate, the optimal exponential order of convergence, Ofe-K^ fN"), K > o, occurs even in the presence of singularities. Stenger [5] discusses a more general setting than that considered in Chapter 2. The chapter closes with the formulation of the discrete linear system whose solution specifies the approximate. This system is symmetric, a result that Lund [6] has shown depends on the correct choice of weighted inner product. ) i 6Chapter 3 extends the Sinc-Galerkin method to (1.2) (with f = g = 0) and then further to the analogous wave equations in two and three space dimensions. At this writing. the most common procedure for solving partial differential equations with a semi-infinite time interval is a Galerkih discretization of the spatial domain with time dependent coefficients. The result is a system of ordinary differential equations usually solved via difference techniques. Botha and Finder [3] develop Galerkin discretizations of space using finite element basis functions. In contrast, Gottlieb and Orszag [7] use globally defined spatial basis elements. They show that in space many of these spectral methods exhibit an exponential convergence rate. However, because Gottlieb and Orszag consider only finite difference techniques for the temporal domain, the time solution has finite-order accuracy. They acknowledge the incompatibility of the error statement in time versus space with the following remark: No efficient, infinite-order accurate time- differencing methods for variable coefficient problems are yet known. The current state-of-the-art of time-integration techniques for spectral methods is far from satisfactory on both theoretical and practical grounds...[7]. The point of view taken here differs from the two sources just cited by carrying the Galerkin discretization into the time domain. Chapter 5 reports numerical results which attest to the success of this notion. i Besides developing the Sinc-Galerkin method. Chapter 3 introduces notation to facilitate the description of the resulting discrete systems. These systems are posed in two algebraically equivalent matrix forms. The choice of form to use depends somewhat on available computing facilities. Chapter 4 discusses this topic along with algorithms for the solution of the linear systems in either form. The nine examples included in Chapter 5 are broken into groups of three. Each group highlights a feature of the Sinc-Galerkin method for problems in one, two, and three space dimensions. For instance, the first three examples have analytic solutions while the second three have combinations of both algebraic and logarithmic singularities. The numerical results show that the rate of convergence is not affected by this singular behavior. The last three examples show the dramatic reduction in the size of the discrete system solved when care is exercised in parameter selections. Finally, each group indicates that the asymptotic error 0(e"~^ '^ ") , K > 0, is attained independent of the dimension of the wave equation. 7 \ 8CHAPTER 2 THE SINC-GALERKIN SOLUTION OF ORDINARY DIFFERENTIAL EQUATIONS The goal of this chapter is to derive the discrete Sinc- Galerkin system necessary to build an approximate to the solution of (2 .1) Lf(x) s f" (x) + v(x)f(x) = o(x) a < x < b f (a) = f(b) = O valid on the interval (a ,b ). A symmetric matrix formulation for the system can be posed and is, in fact, easy to solve numerically. The resulting approximate solution converges to the true solution on (a,b) at the rate 0(e_l<:JTr) where K is a positive constant and 2N + I basis functions are used to build the approximate. Further, the convergence rate is maintained in the presence of singularities' (the solution has an unbounded derivative) on the boundary. To prove these statements a background in general sine function theory is necessary. In particular, the foundation for the error analysis of the Sinc-Galerkin method is the error associated with the truncated sine quadrature rule. 9Interpolation on Numerical sine function methods are rooted in E .T . Whittaker's [8] work concerning interpolation of a function at the integers. Rather than using well-known expansions based on polynomials, Whittaker sought an expansion whose properties are far more distinguished when applied in the proper setting. The foundation of the series is the sine function (2 .2 ) sine(x) sin(Tcx) I t X x E R shown in Figure 3 below. The resulting formal cardinal series for a function f is (2.3) ^ f(k)sine(x - k) k=-«> To generalize (2.2) and (2.3) to handle interpolation on any evenly spaced grid define for h > 0 (2.4) S(k,h)(x) sinc^- and denote the Whittaker cardinal function by (2.5) C (f ,h ) (x) E Y f (kh)S(k,h)(x) k=-™ whenever this series converges. In engineering literature (2.5) is often called a band-limited series. 10 Figure 3: S(0,I)(x) = sine(x) , x E R. With regard to using (2.5) as an approximation tool, two important classes of functions must be identified. The first is the very restricted class for which (2.5) is exact. The second is the class of functions f for which the difference between f and C (f ,h) is small. J.M. Whittaker [9] and McNamee, et.al. [10] accomplish this identification task by displaying a natural link between the Whittaker series and aspects of Fourier series and integrals. The Fourier transform for a function g is (2 .6 ) A fundamental result of Fourier analysis is that if g E L 2(R) then g E L 2(R) and g is recovered from g by the Fourier inversion integral (2.7) g( t) ii = — f g(x)e ixtdx 2ir J For a select set of functions, the Paley-Wiener Theorem shows that the inverse transform has compact support. Their result is Theorem (2.8): If g G L2 (R), entire, and there exist positive constants A and C so that |g(w)| < Cexp{A|w|} where w G (5> then I A a (2.9) g(w) = 2jt J g(x)e ixwdx . -A Showing that the sine function satisfies the hypotheses of Theorem (2.8) with A = it and C = I is straightforward. An elementary calculation gives (2.10) sinc(w) = J e"ixwdx X (_KzK)e™ixwdx ; - i t - 00 hence, an immediate consequence of Theorem (2.8) is CO (2.11) SincT(X) = I sinc( t)eixtdt = X (-*,*) (x) To accommodate the translated sine function appearing in (2.4), a change of variables in (2.10) gives (2.12) S (k ,h )(x ) J sine e ixtdth 12 _ J1 eixS ^ (-ir/h, K/h) (x) for fixed real S . The support of the characteristic function in (2.12) prompts the definition of a class of functions, called the Paley-Wiener class, which is naturally associated with the Whittaker series. Definition (2.13): Let B (h) be the set of functions g such that g 6 L2 (R), g is entire, and |g(w)| < Cexp{tc |w|/h) where w 6 0. Before the Whittaker function can be discussed, one result is vital. I zt-w\ Theorem (2.14): If g E B(h) then g(w) = - J g (t ) sine ) d t . ' - C D ' A proof of Theorem (2.14) is found in [10]. For completeness the converse of Theorem (2.14) is Theorem (2.15): If g E L2 (R) then k(w) I 00 zt-w\- r g(t)sine(-^-Idt is in B(h). Proof: Using Parseval1s Theorem with the inner product = J f (t ) g (t ) dt establishes the growth estimate for k. Application of Morera1s Theorem proves entirety and the Cauchy-Schwarz inequality yields k E L2 (R). 13 The significance of all the proceeding work is in the subsequent elegant theorem. Theorem (2.16): If g G B(h) then g(w) “ J ak S(k,h)(w) k=-® where ak = h J g(t)sinc^— ---- jdt = g(kh)t - kh Proof: The Paley-Wiener Theorem, the identity .ikhx (2.17) e-ixw _ ^ sin( ? ) Z (-D k w+kh k=-« and the uniform convergence theorem justify the ensuing steps: I g(w) ir/h J g(x)e ixwdx -rc/h sin (?)/ jr/h g(x) -rc/h ,k,h (-l)*"sin(Kw/h) jr Z w + kh k=-® I k=-o I (-1) k e ikhx w+kh I g (x)eikhxdx h - (-D sin( rcw/h) re L w + kh k=-® ^ g(kh) sinc^W ^ . k=-® 14 Hence for f £ B(h), f(w) = C(f,h)(w) for all w £ p. Note that this is a stronger result than originally sought. The initial quest was for a class of functions such that the Whittaker series was exact on R. An even stronger statement is derived from Theorem (2.16) and the identity (2.18) I J S(k,h)(t)S(A,h)(t)dt = I, if A = k 0, if A ^ k that is, Theorem (2.19): The set | S (k ,h) | is a complete orthonormal set in B(h) [10]. Unfortunately the set B(h) is extremely restrictive and some relaxation is necessary if (2.5) is used as a practical interpolatory tool. McNamee, et. al . [10] identifies a set, here called BP(Dg ), where C(f,h) is not exact but its approximation to f is very good. In particular, the domain of analyticity for BP(Dg) is Dg . Definition (2.20): Dg = (z: z = x + iy, |y| < d £ R, d > 0} Figure 4: The Domain Dg 15 The class BP(Dg) Is specified by Definition (2.21). Definition (2.21): Bp (Dg) is the set of functions f such that (2.22) f is analytic in Dg , d (2.23) J I f (t+iy) I dy = 0(|t|a ) as t -> ±® where 0 < a < I, -d and for p = I or. 2 Np (f ,Do) = Iim . 00 . „ r ■ I f(t + iy)Ipdt y-»d J “ 00 . (2.24) iy)Ipdt 1/p" < OS . The exact form of the error is given by Theorem (2.25). Theorem (2.25): If f E Bp (Dg ) .then £ (f)(x) = f (x) - C (f ,h ) (x) where _______ f(t-id)_________ (t-x-id) sin[ it (t-id) /h]e(f)(x) san( jrx/n; J (2.26) f(t+id)_________ (t-x+id) sin[ jt (t+id) /h] Moreover (2.27) if f E B1 (Ds ) '5 B(Ds ) then N1 (f,Dg ) N(f,Ds ) e (^) H00 - 2,red sinh( icd/h) 2rcd sinh( Tcd/h) 16 while if f G B2 (Dg ) then N 2 (f ,Dg ) (2.28) Il e (f) Ilro < ------=---------- 2VE3 sinh(Kd/h) and (2.29) llc(f) H2 N 2 (f,Dg) sinh(Kd/h) For a proof see Stenger [11]. Worth noting, is that the error statement of Theorem (2.25) is valid only on the real line. Theorem (2.16), recall, applies to the complex plane. However, the original goal was approximating on the real line and Theorem (2.25) certainly satisfies that. Of far greater interest is the order statement derived from (2.27) , (2.28), and (2.29). As h -> 0 sinh(jcd/h) -» ® ; hence, IIs(T)IIo3 2 0 independent of whether f E B 1(Dg) or B2 (Dg ). Moreover, the rate of convergence is governed by sinh (Kd/h) ; i.e., l/sinh( rcd/h) = 0( exp (-Kd/h) ) as h 0. Although the exponential convergence rate is attractive, to be of practical importance it must be maintained when (2.5) is truncated. Denote the truncated Whittaker series for a function f by ^ /x - kh\ (2.30) CM>N(f,h)(x) = Y f (kh)sinc^— --- J k=-M where it is assumed C (f ,h ) (x) converges. Theorem (2.31): If f G BP(Dg ) for p = I or 2, d > 0 and there exist positive constants a and (3 such that 17 (2.32) |f(x)I < L e«x S-I3x then choosing (2.33) and (2.34) N = a P M + I h = (K d / (aM))% if x 6 (-=,0) if x E [0,® ) gives (2.35) Hf - CM/N(f ,h)||oo < C M* e “ (JtdaM)^ where C is a constant dependent on f . Proof: From Theorem (2.25) there exists a constant L 1 such that I f (x) - C ( f , h) (x) I < L1S-icdZ*1 for all x E R. Using the triangle inequality and (2.32), |f(x) - CMfN(f,h) (x) I < Ll6-jcdZh + J |f (-kh) I + J I f (kh) I k=M+l k=N+l J S“akh + £ B-I3kh< L1B-jcdZh + L < L1B-jcdZh + L Now if N and h are defined by (2.33) and (2.34), then k=M+l e-aMh- e- ----- + — k=N+l PNh K(X) - CM ,,(f,h)(x)| S Lie- ( ^ M ) X + + e— {xdaM)X = C M7* e_-(JtdaM)^ The choice of N and h is dictated by balancing asymptotic errors. The truncation errors for the lower and upper sums 18 are of orders 0 ( a n d Ofe-^ * 1), respectively. Equating these order statements gives N = aM/|i. Using (2.33) is a convenience to guarantee N is an integer. The choice for h is deduced by equating the truncation error order 0(e-aMh) and the order of the interpolation error 0 ( ) . Hence, truncating sums for computational feasibility need not be at the expense of the exponential convergence rate. This rate carries over during the development of the sine quadrature rule. Quadrature on (-=,«) The interpolation results just reviewed are the groundwork for the derivation of the sine quadrature rule on the real line. As with interpolation, the quadrature formula involves infinite sums; hence, a primary task is deducing the error caused by truncation. This leads to a numerically practical means of estimating an integral over the real line. The use of conformal maps generalizes the sine quadrature to alternative curves. A few preliminaries are necessary before stating the quadrature theorem analogous to Theorem (2.25). Lemma (2.36) Lemma (2.37) /x - kh\ sincf— ---- Idx = f .iax - z, 2ici I , h > 0, k 6 Z . e iaz0, Im(Z0) > 0 0 , Im(Z0) < 0x 19 Lemmas (2.36) and (2.37) are proved using standard contour integration. Lemma (2.38): If f G L1 (R) then J C(f,h)(x)dx = h £ f(kh) k=-= where h > 0 and k E Z. Proof: By the integral test £ |f(kh)| converges. k=-® / x — kh\ /x — Since If(kh)sinc^— ---j( < |f(kh)| , J f (kh)sincf— - k=-<» converges absolutely and, therefore, by the Weierstrass M-test, uniformly. Thus - kh ! C (f,h)(x)dx f I -Co k = - f(kh)sinc(^ ^ ^dx ^ f(kh) J sinc(iL_— — )dx Ic--OO —0 = h 2 f(kh) k=-oo These three lemmas along with. Theorem (2.25) provide the background to prdve the following. 20 Theorem (2.39): If f G B(Dg ) s B1 (Dg ) then (2.40) J f (x ) dx = h £ f (kh) + r?(f) — CO — — OT where (2.41) f?(f) I c(f)(x )dx and (2.42) |i?(fH S e-jcd/h N (ffDs) 2sinh( Jtd/h) Here s(f)(x) and N(f,Dg ) are defined in (2.26) and (2.24) (with p = I), respectively. Proof f s(f)(x)dx I f (x )dx - J C (f ,h )(x )dx J f (x )dx - h £ f(kh) — CO Ic=-CO From the left hand side, using (2.26) oo 17(f) s J E(f)(x)dX J sin(rex/h) 2rci J f (t-id)dt(t-id-x) sin( it (t-id) /h) J f (t+id)dt(t+id-x) sin( jc (t+id) /h) 21 e-n:d/h 00 f (t+id)e:L,Tt/n _ f (t-id)e J sin( Jt (t+id) /h) sln( jc (t-id)/h) irct/h -I irt/h 21 where Fubini1 s Theorem and (2.37) with a = ic/h and Z0 = t ± id give the last equality. Bounding the last expression leads to (2.42). The result of Theorem (2.39) is that sine quadrature is the familiar trapezoidal rule on the real line. However, the rate of convergence for (2.40) is 0 (e-2,rd/h ) rather than the usual 0 (h2 ) that is associated with the trapezoidal rule when f" is bounded. Sine function properties lead to the restriction f G B(Dg) which, in. turn, accounts for the vastly accelerated convergence rate. As with interpolation, numerical practicality calls for truncating the sine quadrature, nee' trapezoidal, series. Define the truncated trapezoidal series N (2.43) TM N (f,h) 5 h £ f(kh) , h > 0 . k=-M Analogous to Theorem (2.31).is Theorem (2.44). Theorem (2.44): If f G B(Dg ), d > 0 and f satisfies (2.32) for some positive constants a and |3, then choosing N and h as in (2.33) and (2.34) gives (2.45) 22 < K e_2ird/h + - e-aMh + - e“pNh 1 a . p . < c e-(2judaM)^ where K1, K, and C are constants. The proof follows in a like fashion to that of Theorem (2.31); for greater detail see Stenger [12], [13] and Lund [6]. Once again selections for N and h are motivated by asymptotically balancing errors. Finally, note the increased rate of convergence for quadrature, 0 (e- ( 2)tdaM) ^ versus interpolation, 0(e_ ^ daM^). Interpolation and Quadrature on Alternative Arcs The preceeding results hold for x G To be useful in the setting of building a Sinc-Galerkin approximate for the solution of a differential equation, the results must be generalized to alternative intervals. Stenger [5] shows that conformal maps provide the means of extension. His discussion is briefly outlined here with the goal in mind the statement of the quadrature rule. Let D be a simply connected domain and Dg be as in Definition (2.20). Given a,b G 3D such that a ^ b, there is a conformal map X : D Dg (see Figure, 5) satisfying (2.46) V(R) = X -1(R) = Y and Figure 5: The Conformal Map X. With respect to D, the following definition is similar to (2 .21 ) . Definition (2.48): Let B(D) be the class of functions such that f is holomorphic on D; (2.49) J I f (w ) dw I = 0(|t|a ) as t -> <*> V (t+L) where a G [0,1) and L = (iy: |y| < d) ; and (2.50) N( f , D) E Iim inf f |f(w)dw| < =° CCDC-»9D ^ where C is a simple closed curve in D. Rather than developing interpolation and quadrature rules for D from scratch, two theorems provide a natural link to the past work. 24 Theorem (2.51): If V is a conformal map of Dg onto the simply connected domain D and if f G B(D)z then F G B(Dg ) where (2.52) F(w) 5 f (Y{w)) Y-'(w) . Also Theorem (2.53): If X: D -> Dg is a conformal, one-to-one and onto mapping then V = X-1 is conformal. Stenger [13] remarks on Theorem (2.51) while Theorem (2.53) is a standard complex analysis result. Finally a revised version of the exponential decay property (2.32) is needed. Definition (2.54): Let f G B(D), X be a conformal map satisfying the conditions of Theorem (2.53), and -c G Y = V(R) = X-1 (R) • Then f/X' is said to decay exponentially with respect to X if there exist positive constants K, a and P such that exp (-a |X(T:)|) , T G Yl exp(-pI X (t ) I) , T G y r where Y l = {z: z E f ( (-==,0) ) } (2.56) Yr = {z: Z G V([0,»))}. Theorem (2.51) and Definition (2.54) suggest that the conformal map is incorporated into the previous theorems and (2.55) f (X) < K X ' (T) 25 definitions quite easily. The interpolation arid quadrature rules that follow support this notion. Theorem (2.57) : Let f G B(D) and X : D Dg satisfy the hypotheses of Theorem (2.53). Further, suppose f/X1 decays exponentially with respect to X. If z E y = Y(R) = X -1(R) and zk = Y (kh ), k E Z, h > 0, then selecting N and h as in (2.33) and (2.34), respectively, gives where K is a constant independent of z. Note that to interpolate f rather than f/X1, simply let F = fX 1 and substitute into (2.58) assuming F satisfies the hypotheses of Theorem (2.57). Equation (2.58) shows that the rate of convergence for sine interpolation on a curve y remains 0 (e~ (^dxxM) ^ ) Similarly, the rate of convergence for the sine quadrature rule is unchanged. Theorem (2.59): (a) I f f E B(D) and X , Y , z, and Zjc are as in Theorem. N f (Zlc) X (z )-kh h (2.58) < K e~(%daM)% (2.57), then for h sufficiently small 26 (2.60) I » f (zk )f (z)dz - h ^ - k=-= '(Zk ) N(f'D) ^-2?rd/h 1-e" B K2 e-2xd/h (b) Further, if f/X1 decays exponentially with respect to x then (2.61) J f (z ) dz - h I k=-M f(=k) X 1(Zk) < K 9 e-2*d/h + K e-«Mh 2 a + E e“PNh P (c) Finally, if N and h satisfy (2.33) and (2.34), respectively, then for some constant 0 depending on f (2.62) K0 e“2icd/h + - e_aMh + - e”PNh < C e™ < 2jrd(xM)?i . 2 . a . P Steriger [12] provides proofs of Theorems (2.57) and (2.59) for centered sums, that is, M = N . Lund [6] expands on these to handle M # N. Both of their proofs illustrate the rationale for the use of conformal maps; namely, under suitable assumptions the maps are a means of transferring a standard set of results from Dg to various domains D. Most significant, is that conformality preserves the error. Sinc-Galerkin Method \ \The previous discussion is the background necessary to\ approach the numerical solution of differential^equations 27 via the Sinc-Galerkin method. For the purposes of this work, it is sufficient to examine the method applied to a specific class of differential equations. Hence, consider the second-order, self-adjoint boundary value problem (2.63) Lf(x) s f"(x) + v(x)f(x) = o (x) , a < x < b f (a) = f(b) = 0 Let D be a simply connected domain with (2.64) and < in *< (U A X A O' 1C . Il O n O (2.65) {(a,0), (b,0)} C SD . With regard to D , the Sinc-Galerkin method to approximate the solution of (2.63) is summarized as follows. Define the set of basis functions (Sj. ) by (2.66) Si (X) s S (i , h) o x(x) , h > 0 where X : D Dg is described in (2.46), (2.47), and Theorem (2.53). L by See Figure 5. Next define the approximate solution (2.67) A N fm (x) s £ fiSi (x) , m = M + N + I . i=-M To determine the unknown coefficients, f^, orthogonalize the residual with respect to the basis elements; that is, (2.68) 0 = (Lfm - o, Sp ) =.(fm ", Sp ) + (vfm - a , Sp ) , -M < p < N . 28 Here, the inner product is b (2.69) (u,v) = J u(x)v(x)w(x)dx a where w is a weight function yet to be specified and the quadrature rule used to evaluate the inner product is (2.61). The resulting discrete linear system is solved for fi , -M < i < N. A general review of Galerkin methods is found in Botha and Finder [3]. Their discussion includes several criteria by which to judge the quality of a method. These include: (i) the ease of constructing the basis elements; (ii) the ease of evaluating the inner product; (iii) the ease of solving the system; and (iv) the accuracy of the method. With regard to the Sinc-Galerkin method, the first is demonstrated by (2.66) while the second is shown to depend on the sine basis elements and sine quadrature rule. For the third, it is shown that an adroit choice of weight function leads to a system which is easily solved. Lastly, the accuracy follows directly from Theorem (2.59). Stenger [5] thoroughly discusses W = 1/X1 and Lund [6] introduces w = 1/(X')%. Stenger's choice of weight function handles regular singular problems but yields a nonsymmetric linear system. Lund's weight function results* in a symmetric linear system while possibly restricting the class of functions to which the method applies. The development of the discrete system for a general weight motivates the 29 choice w = I/(X1 as well as addresses the questions raised in the previous paragraph. Continuing with (2.68), for -M < p < N integrate the inner product (f",S ) by parts twice to get b O = J f"(x)Sp(x)w(x)dx a + 1 [V (K )f (x) - O ( X ) ]Sp (X)W(X)dx + J f (x){[Sp (X)W(X)]" + v (X)Sp (X)W(X)}dx (2.70) J o (x)Sp (x)w(x)dx b r d2 BT + f f (x)M S(p,h) o x(x) (X'(x))^w(x) -S (p,h) o X(X) (X"(x)w(x) + 2X'(x)w'(x)) + S(p,h) o X(x)(w"(x) + v(x)w(x)) I "(x)S (p,h) e X (X )W (X )dx where BT is the boundary term (2.71) BT = { f 1(x)Sp (x)w(x) - f (x)[Sp (x)w(x) ] 1 \ b a • For now, BT is assumed to vanish. The exact assumptions 30 governing BT = 0 are discussed in Chapter 3 when specific conformal maps are used. To apply the sine quadrature (2.61) to (2.70) several suppositions are necessary. First, f [SpW]" satisfies the hypotheses of Theorem (2.59). Second, fvSpw and aS p W are in B(D). It is unnecessary for fvSpw / X 1 and oS w/X' to decay exponentially with respect to X. This P is because for g 6 B(D), the quadrature rule (2.60) for g integrated against the sine yields point evaluation. That is, (2.72) J g(x)Sp(x)dx = g(ph) + 0 (e-n:d//h) . a To proceed, the following three identities are useful. I , p = i (2.73) S (p ,h) o x(x) = 6 (0 ) X = X , 0 , p 7* i (2.74) hi— S (p, h) o X (x) = 6 (I) x=x. 0 (-l)l-P . i-P , P t4 ! and LdX' S (p ,h) o X(x) X = X . (2.75) 6(2) Pi -jc2/3 -2(-l)i-p (i-P)' , P = i , P T5 i Applying the quadrature rule results to (2.70) yields 31 N I i=-M f ( X i ) ~2 &p V X ' (x I^w (x I)Lh (2.76) I M x / X u(Xi ) + h 6Pil ( F H I T w(Xi) + 2w m x Ii + . vlxIlwfxI1 p i X X 1 ( X i ) X 1 ( X i ) - ? 6(0) 0txIlwtV . + o,e-(KdnM,^, I p i X 1 ( X i ) i=-M An equivalent matrix formulation is (2.77) — I (2) D(X'w) + - I (1) D "I'" + 2w'"J h2 h L x ' J + D w " + VW X ' where (2.78) ? - ^-M+1' ^N-I' ^ ' (2.79) D(g) GT(X-M) g(x-M+l) G(Xw) mxm 32 I (2) 6(2) Pi (2.80) 2 72 2 -JC 2 3 2(-I)m (BI-I) 2 2 (-D m-1 (m-2)2 2(-I)m 2(-I)m 1 (m-1)2 (m-2)2 and 1(11 ■ t t ¥ ) i. 0 -1 1 2 (2.81) I 0 - 1 mxm ( - D m'1 m-1 ( - D m"2 m-2 (_l)m-l (_1)m-2 . . . - ---- - - - ---- -— 0 _ m-1 m—2 For an arbitrary weight function it is unclear how to solve (2.77). In particular, the skew-symmetric nature of I(1) causes difficulties. It can be argued that as h -» 0 I(2 ) is the dominant matrix in the system. This is somewhat satisfying in that l(2) is a symmetric, negative definite 33 Toeplitz matrix whose spectra lies in the interval (-K2 ,0) Lund's [6] notion is to consider X " (2.82) — w + 2w' = 0 X' or equivalently X" —2w' (2-83) r " " V ” * Integrating both sides of (2.83) yields w = 1/(X')% and the system now simplifies to — I (2) D ( (X ' )^) +, D (X' )* (X')%/ X ' (2.84) = D[o/(X')3/2]f . Multiplying through by D(X') and factoring gives (2.85) A t = D (o/(X1)%)I where (2.86) A = D (X ') — l(2 ) + D I ( X ' )•' + (X')*7(X')3/2 D(X') and (2.87) f = D(1/(X' . The matrix A is a real symmetric, negative definite matrix; hence, there exists an orthogonal matrix Q such that (2.88) A = QAQT of the eigenvalues of A.where A is the diagonal matrix Solving for t , (2.89) t = QA-1QtD (o/(X')^)? 34 and, in turn, "t is recovered via (2.90) ? = D((X')%)# . In review, a symmetric coefficient matrix for the self- adjoint problem (2.1) is attained by selecting w = I/(X1)^. The class of singular problems that w = 1/(X')% handles appears to be somewhat more restricted than if w = 1/X1 [6]. However, this statement is dependent on the method of proof in (6]. When w = 1/(X')% is known to apply to a singular problem, the rate of convergence remains 0 (exp(-K'fH)) where 2N + I basis functions are used. 35 CHAPTER 3 THE SINC-GALERKIN METHOD FOR THE WAVE EQUATION Various numerical methods for solving general second- order hyperbolic problems can be found in the literature [3]. Usually a scheme is first developed for the classic vibrating string problem Lu(x,t) E utt(x,t) - ux x (x,t) = f (x ,t) (3.1) (x , t) 6 (0,1) x (0, <») u(o , t ) = u ( i , t ) = o t e to,a.) u(x,0) = Ut (x,0) = 0 x E [0,1] . Once this model problem is investigated, several generalizing routes are possible. Among them are adaptations to handle nonconstant coefficients, nonlinearities, or higher space dimensions. With regard to the Sinc-Galerkin method, the present work examines higher space dimensions. In this chapter the discrete linear Sinc- Galerkin system for (3.1) is derived in some detail and from it a natural extension to the linear systems for two and three space dimensions is exhibited. The actual solution of the systems is postponed until Chapter 4. A common procedure for solving (3.1) is a Galerkin discretization of the spatial domain with time dependent coefficients. This gives a system of ordinary differential 36 equations typically solved by difference techniques (O.D.E. solvers) [14]. Drawbacks of this method include the necessity to artificially truncate the time domain and the fact that the numerical solution is valid only on a finite time grid. In contrast, the technique of this work implements a Galerkin scheme in time as well as space. The success of the method is largely due to the choice of basis elements; these basis elements are tensor products of sine functions composed with suitable conformal maps for the intervals (0,1) and (0,~ ). The conformal map for (0,«) is the means for building an approximate solution to (3.1) valid on the infinite time domain. The structure of the discrete Sinc-Galerkin system is computationally efficient for at least two reasons. As a result of the identities (2.73) - (2.75) and the sine quadrature rule (2.61), the coefficients of the unknowns and the constant terms are easily evaluated. Note, that this property is independent of the weight function. A property which is intimately connected to the selection of the weight is the symmetry of the system. In the present chapter it is shown that a generalization of w = I/(X1)^ leads to a symmetric linear system. Two matrix formulations for the discrete system are given. The more convenient formulation to implement is dependent on the computational environment, i.e. the available computer hardware. 37 To determine the Sinc-Galerkin approximate for the solution of (3.1) the fundamental steps used in Chapter 2 remain unchanged. First, select a set of basis functions, then using these basis functions define an approximate solution. The unknown coefficients appearing in the approximate are determined by orthogonalizing the residual with respect to the basis elements. The orthogonalization is defined with a weight function and evaluated by means of the quadrature rule (2.61). To construct basis functions on the region ((x ,t): 0 < x < I, 0 < t < “ }, basis functions on the intervals (0,1) and (0,=>) are built and then their tensor products are formed. So to begin, define the conformal maps (3.2) 0(z ) = A n 1 - z and (3.3) T(w) =An(W) The map 0 carries the eye-shaped region (3.4) D- = Z = X + iy: arg I — z < d < jr/2 onto the infinite strip Dg given by Definition (2.20). Similarly, the map T carries the infinite wedge (3.5) Dw = (w = t + is: Iarg(w)| < d < rc/2} onto Dg . These regions are depicted in Figure 6. 38 Figure 6: The Regions Dfi, Dw , and Dg . The compositions (3.6) S 1(X) = S(i,hx ) o 0(x ) and (3.7) Sj(t) = S(j,ht ) o T(t) define the basis elements for (0,1) and (0,°°), respectively (see Figure 7 below). The "mesh sizes" hx and h t represent the mesh sizes in Dg for the uniform grids (Ichx ), -a> < k < C0, and (pht) , -o» < p < <=. The sine grid- points xk 6 (0,1) in Dg and tp G (0,») in Dw are the inverse images of the equispaced grids; that is, khx khx (3.8) Xjc = 0 1 (khx ) = e /(I + e ) and 39 (3.9) tp = T“1(pht) Pht S(0,1) o T(t)S(O1I) o o+ and (3.19) Iim u(t)/(Jt « An(t )) = 0 . f+00 With regard to the last equality in (3.15), the nodes tp are defined by (3.9) and the identity (3.20) (T'(t))™3/2 [(T'(t)) %]" = -1/4 is utilized. The integrals Iu and Ir are explicitly defined in [15] and represent the exact error terms in (2.60). From Theorem (2.59) , I11 vanishes with order 0(e-,cd/h) ifu (3.21) u(z) [S(Pzht ) o T(z)(T'(z))~"%] E B(Dw ) while the same statement holds for Ir if (3.22) r(t)(S(p,ht ) o T(t)(T'(t))-%) E B(Dw ) . 42 to truncate the infinite sum note that u(t)(S(Pzht) o t(t)(t'(t))“^>" behaves like u(t)(?•(t))3^2 near t = 0 and t = ®. Hence, condition (2.55) simplifies to (3.23) |u(t)(?■(t))%| < K tY , t 6 (0,1) t“6 , t G [I,*) for positive constants K, v, and 6. Since (3.23) implies (3.18) and (3.19) the only additional assumption necessary for (3.16) to vanish is (3.17). Applying the truncated quadrature rule (2.61) to (3.15) yields 0 = h t. I J=-Mt ultJlIr1= 6PJ1 !-4J S V ' jtJ1!* (3.24) r(t ) - ht --------— — + 0 {exp (-JtdZht) ) (t'(tp ))3/2 + 0(exp(-YMtht)) + 0(exp(-6Nth t)) If ht and Nt are chosen by (3.25) ht = (JtdZ(YMt ) )% and (3.26) I then the errors are asymptotically balanced and the final linear system is (3.27) I ,(0) 4 6Pd (T'(tj))% u(tj) - r(tp )Z(T'-(tp ) ) 3^2 = 0(exp{-(JtdYMt) ^ )) . Therefore, if 43 a N t (3.28) Um^ (t) s £ UjS(Jzht ) o T(t) , m t = M t + N t + I J=-Mt is an assumed approximate solution of (3.13) then since ' - A um t (tp) = Up, the discrete Sinc-Galerkin system for (3.13) is defined by the left-hand side of , (3.27) With u (t .)■ J replaced by Uj. When p = -Mt,.. .,Nt the matrix formulation for (3.27) follows from (2.84) and reads (3.29) A tD((T')%)u = D((T')“3/2 r)3? where and the remaining terms are defined by (2.78) and (2.79). At is a real symmetric, negative definite matrix. To maintain these properties in the discrete system (3.29) the change of variables . (3.31) v = D((T')-%)u leads to (3.32) A t v = D((T')~% r)f where the matrix i (3.33) A t = D(T')AtD(T' ) . The solution u is found via the procedure outlined in (2.88) through (2.90). Before considering (3.14) note that the selection h t in (3.25) asymptotically balances the error terms exp(-jcd/ht ) and exp(-vMtht ) while the selection N t in (3.26) balances 44 the asymptotic errors exp(-6Ntht ) and exp(-yM^h^). The error term exp(-6N^h^) arises as a consequence of the inequality (3.23) which assumes u(t) decays algebraically at infinity. For many problems the solution decays exponentially as (3.34) |u(t)(?'(t))%| < K e"6t , t G [I,-) . In this case, Lund [6] shows that the' choice (3.35) —— An — M^ht. h t 6 t r + I significantly reduces the size of the discrete systems solved with no loss of accuracy. The preceeding discussion applied to (3.14) follows a parallel development. The map T is replaced by the map 0 of (3.2) (since 0 is compatible with the interval (0,1)) and hx is substituted for ht . Again orthogonalizing the residual and integrating by parts twice renders an equation similar to (3.15) with a boundary term on (0,1) analogous to (3.16). To guarantee the boundary term vanishes it is assumed that (3.36, “ 5+ U,(X,'E Iim u ' (x)Vl-x x-»lAn(x) x_rj- An(l-x) Example 5.4 in Chapter 5 illustrates a case where u' is unbounded as x -» O+ yet (3.36) is satisfied. Further discussion is included in that example. The exponential decay condition (2.55) simplifies to xa , x E (0,%) (I - x)P , x G [%,!)(3.37) Iu(x )(0'(x))%| < L 45 for positive constants L , a, and |3 where f and X are replaced by u(0')3'/2 and 0, respectively. Hence the selections (3.38) hx = (rcd/(aMx ) and (3.39) Nx . = I result in the balanced asymptotic error rate 0 (exp (-(JtdaMx ) ^ ) ) . Therefore, if a Mt (3.40) Umx = £ UiS(IfIix ) o 0 (x) , mx = Mx + Nx + I i=-Mx is an assumed approximate solution of (3.14) then the discrete Galerkin system for the (U1) is given by (3.41) Axw = D((0')“* s)t ■ where the matrix (3.42) Ax = D(0')AxD(0') and Ax is the same as At in (3.30) with h t replaced by hx - As with A. > A is symmetric and negative definite. Further, t x when M = N , D(01) is centrosymmetric. Thus since Ax isx x Toeplitz, Ax is centrosymmetric. Finally, the vector w is related to the vector of unknowns u by (3.43) w = D( (0 ' )_J*)u . The separate one-dimensional problems serve to identify the matrices Ax and At as well as disclosing the parameter selections. Returning to the two-dimensional hyperbolic 46 problem (3.1) and its approximate solution (3.10), the unknown coefficients (u Jj) are found by orthogonalizing the residual. One possible formulation for the resulting discrete Sinc-Galerkin system is (3.44) AxV - VAt = G ,where the matrices At and Ax are identified in (3.33) and (3.42). The mx x mt matrices V and G are defined by (3.45) V = D((0')"%)UD((T')-%) and (3.46) G = D((*')-%)FD((T')-%) where U and F are the Mtx x mt matrices which consist of the coefficients (u^j) for (3.10) and f(x^,tj), respectively. The form of (3.44) has two easily discerned motivations. One is that representing the unknowns as a matrix is more naturally suited to the orthogonalization procedure and subsequent use of the sine quadrature rule. The second motivation is that transforming the unknown matrix U by (3.45) introduces the symmetric coefficient matrices Ax and At . As a direct consequence of this symmetry, the system (3.44) is easy to solve numerically. Chapter, 4 discusses the solution technique at some length. A second alternative for posing the discrete system occurs when the unknowns are arrayed as a vector. In some sense, representing (uJj) as a vector versus a matrix is purely a notational matter; however, the computational aspects of storing coefficient matrices and numerically 47 solving the discrete system can vary greatly depending on whether {u^.} is regarded as a vector or matrix. Two background terms provide the notational machinery to rewrite the discrete system posed as in (3.44) as a system defined with (u^j} as a vector. Definition (3.47); Let A be an m x n matrix and B be a p x q matrix. The Kronecker or tenspr product of A and B is mp x nq matrix I aiiB a I2B . . . . a lnB A ® B H a21B ^ 2 2® • * • • a2nB _amlB am2B • • • • amnB The second term, concatenation, loosely refers to representing a finite ordered array as a vector. Eventually concatenation is defined for an array with n-subscripts. For now, a precise definition when there are one or two subscripts is sufficient. If b = (bj), I < i < m, then the concatenation of b is denoted (3.48) co(b) s Eb1, b 2, ..., bm]T Hence, for a column vector c, co(c) = c while for a row vector r, co(r) = rT . When B = (bjj), I < i < m and I < j < n, then co(B) is the mn x I vector 48 (3.49) CO(B) = COfbil) cO(b I2) co which may be regarded as stacking the columns of a matrix. Davis [16] includes a more thorough but slightly different discussion of concatenation and the Kronecker product. In particular, Davis defines concatenation as stacking the rows of a matrix rather than its columns. Besides the notational apparatus just introduced, the following theorems ease the transition to a system whose unknowns are represented as a vector. Theorem (3.50): If A and B are matrices of identical dimension, a and (3 are scalars, then (3.51) CO(aA + PB) = aco(A) + Pco(B) Theorem (3.52): Let A, X, and B be matrices whose dimensions are compatible with the product AXB. Then (3.53) CO(AXB) = (BT ® A)co(X) Proof: Beginning with AXB, its ij-th element is m n (AXB)ij = £ ajk £ X jca bAj k=l A=I 49 n m = I bAj I aIk xkA ' A=I k=l The last line is precisely , ( B ^ A)co(X). To apply Theorems (3.50) and (3.52), a more convenient form of (3.44) is (354) AxVIm t - I mxVAt = G where Ig (q = mt ,mx ) is the q x q identity matrix. Concatenating (3.54) and using the symmetry of At , the discrete system admits the equivalent representation (3.55) (Imt 0 Ax - At @ Imx)co(V) = co(G) which is referred to as the Kronecker sum form. The coefficient matrix (3.56) Bi=) = Imt » Ax - At ® Imx - (Blj) , -Mt < l,j < Mt has an easily discerned block form. The square blocks have dimension Hix x Hix and are given by (3.57) Bi j = S ( O ) A x - ( A t)l j Imx , -Mt < l.j < Nt where (At )^j is the ij-th element of the matrix A t. vectors co(V) and co(G) are related to U and F by co(V) = co(D((*')-%)UD((T')-%)) (3.58) = (D((T')-%) 0 D((0')-%}co(U) and (3.59) co(G) = (b((T')-%) 0 D ((01) CO(F) From (3.58), it is evident that (3.55) The is the matrix 50 formulation which arises via a natural ordering of the sine gridpoints from left to right, bottom to top where (Xi,tj) follows ( , tp ) if tj > tp or if tj = tp and Xi > xk . The previous discussion may be generalized to the second order hyperbolic problems in two and three space dimensions. Explicitly, the problems considered are L u (x ,y ,t ) s u tt - v2u = f(x,y,t) (x,y, t) 6 (0,1)2x (0, oo) (3.60) u| = 0 t 6 [ 0,00) '3(0,I)2 uj = ut j = 0 (x,y) G [0,1]2 It=0 It=0 and Lu(x,y,z,t) s u tt - 92u = f (x,y,z,t) (x,y,z,t) E (0, l)3x(0,oo) (3.61) u| = 0 t E [0,=») lS(OfI)3 ul = u t = 0 (x,y,z) E [0,1]3 OH t=0 where COCMIlCi—iO1CC(0r-tO respectively, are the open and closed n-dimensional unit cubes. The spatial variables are all on the same interval as a matter of ease rather than necessity. In particular, this simplifies the construction of basis elements on each spatial interval because the map 0 (3.2) is used repetitively. Hence the Sinc-Galerkin approximate solutions to (3.60) and (3.61), respectively, are 51 (3.62) A Nx N Nt umx ,m y ,mt Cx 'Y '*) = J J J u ijAsijA<*'Y-t I I=-Mx J=-My A=-Mt and (3.63) N N N Nt I I I I uI j M s IjkAlx zy,z, t) I=-Mx J=-My k=-Mz A=-Mt where (3.64) SiJA(x ^Yzt ) = S1 (X)Sj- (y)S^(t) , (3.65) Si JkA (x ZYzz Zt ) = S i(x)SJ.(y)Sk (z)S^(t) , (3.66) mq = Mq + Nq + 1 q = x,yrz,t Z and Sp (P = i/J/k) and S^ are given by (3.6) and (3.7). As usual, the unknown coefficients (U^j- or (u IjkA) are found by orthogonalizing the residual using the weight function (3.67) w(x,y,t ) = 1/(0'(x)0'(y)T'(t))^ for the three-dimensional problem and (3.68) w(x,y,z,t) = l/(0'(x)0'(y)0'(z)T'(t))% for the four-dimensional problem. Analogous to the two-dimensional case, the parameter selections are deduced from separate one-dimensional problems. The form of the one-dimensional differential equations in y and z are assumed to be like that of problem (3.14). The choices for the y parameters are - , - ‘ / (3.69) hy = (TCdZ(SMy ) ) H and 52 (3.70) + I based upon the assumption v y e (o,%) (3.71) |u(y)(0'(y))%| < L (1-y)" y e [%,i) where L , $ , and n are positive constants. With respect to the z parameters, an assumption like (3.71) with f replaced by u and n replaced by v gives (3.72) hz and (red/(pMz j)% (3.73) + I To list the linear systems for additional notational devices are necessary. I < i < m and I < j < n, U is represented as a matrix. To and For U = (u^j), view u(3) E (Uij^ ) , I < i < m, I < j < n, and I < A < p, as a matrix define mat(U(3 )) = mat((Ujj&)) Cco(Uijl) , C O ( U ij2), - c°i (3.74) Conveniently, concatenating the array 3 ^ is equivalent to concatenating the matrix; i .e ., co(U (3 )) (3.75) c°((UijA)) CO ( U iJi) CO(U1J2 ) .coluijp>J co(mat(U1 )) 53 A recursive definition suffices to generalize concatenation for the n-subscripted array U = (u.- 4 ) , 1 I 1 Z- • • -1H I < ij < my, I < j < n, as follows: co(U(n)) co^ uI1I2 ...in )) (3.76) co(uili2:--i(n-l,l) c0^ i 1I2 -■■i(n-1)2) C0 U^ ili2 •••i(n-I)imn ^ In turn, using (3.76) a recursive formula for the matrix representation is easily given by mat(Uin)) (3*77) s [C° (Uili2--- M n - I ) ^ ' CO(Uili2--- M n - D Z ^ ......... CO(Uili2---i (n-l)n*n)] Hence, going from mat(U^n )) to co(U^n )^ involves unraveling the last index of U^n ). These devices provide a convenient means to compactly write the systems for unknown coefficients when the spatial dimension is greater than or equal to two. In two spatial variables, the system analogous to (3.44) is (3.78) {Imy®Ax+Ay®Imx}mat(v(3 )) - mat(V<3))At = mat(G(3 ^) 54 where for -Mjf < i < Nx , -My < j < Ny and -Mt < A 5 N^ mat(V^3 )) = mat((v--o)) (3.79) (3) . = DyXmat(u(3 ))D((T') *) , (3.80) Dyx = D( (01(Y))~^) ® D( (01(x))~% ) and Ay is the same as Ax in (3.42) with hx replaced by hy . The matrix mat(G (3 ^) is defined just like mat(V (3^) with U(3 > replaced by F = (f (x^,y ^,t^)) where the point (XizYj,tA ) is a sine gridpoint. Adding a third variable yields IImzmy ® »x + Imz ® 4V * A * ® Imymx>”atlv<4)) - mat(v<4 >)At = mat(G^4 I) where <3 -82) Ipq = Ip * Iq ' and for -Mx < i < Nx , -My < j < Ny , -Mz < k < Nz and -Mt < A < Nt mat(v(4)) = mat((viikA)) (3.83) . = Dzyx mat(u(4))D((T') %) , (3.84) D = D((0'(z))"%) ® D((0'(y) )~%) ® D((0'(x))"%) . The matrix Az is Ax with hx replaced by hz , and (3.85) mat(0(4)) = Dzyx mat(F)D((?')'%) where F = ( f (Xj. ,yj , Zfe, t'A ) ) . Note that in (3.78) and (3.81) the coefficient matrices which multiply mat( ^), j = 3,4, on the left represent the discretization of the Laplacian operators in (3.60) and (3.61), respectively. Alternative definitions of mat(v(n )) can break up the discretized Laplacian in n-dimensions. 55 The Kronecker sum form for the discrete system in two spatial variables is derived by concatenating (3.78). That iS' c o (g (3 )) = co(mat(G^3 ))) = c o [(Imy ® Ax + A y ® Imx>mat)Imt - ^ y m xmatIvl3jIAt] =' 1Bt ® I1my ® Ax + Ay ® Imx}co(mat(vl3))) - AtT ® Imymx co(mat(v(3;))) - [ 1B t ® I1By ® Ax * Ay ® Inlx) - At ® 1Bymx ] c°lv(3J) Similarly, the vectorized version of (3.81) is ^1Hit ® ( ^mzMy ® aX + -Fmz ® Ay ® 1Hix + Az ® -^mymx^ (3.87) - - A t ® ! m z m y m x j ^ t V ^ h = co(G(4 )) . By definition of concatenation, the grid is swept in a natural ordering: first across the x-domain, then the y, followed by the z (for (3.87)), and lastly through the t-domain. A closer examination of this structure as well as the solution is developed in the next chapter. 56 CHAPTER 4 SOLUTION OF THE DISCRETE SINC-GALERKIN SYSTEM Classical Methods As mentioned in Chapters I and 3, typical Galerkin schemes for time dependent partial differential equations use a trial function which is a truncated expansion of basis functions defined over the spatial domain and having time dependent coefficients. For instance, the form of a trial function for (3.1) is . . (4.1) u(x,t) = £ ci (t)0^(x) i=l where t0*)2=1 is complete in some function space containing the true solution u(x,•). Orthogonalizing the residual with respect to 9 •, I < j < N, leads to a system of ordinary differential equations in time. The literature abounds with algorithms to solve this type of system on a discrete, truncated time grid [14], [17]. In contrast, the Sinc-Galerkin method defines the basis functions on the entire space-time domain. The coefficients of the trial function are now constants, hence a system of ordinary differential equations is exchanged for a system of linear equations. Linear systems also arise when classic 57 schemes such as finite differences or finite elements are applied to (3.1) on a truncated time domain. Several well- established routines exist for the numerical solution of these systems. In light of developed techniques such as finite differences or more common Galerkin methods, a numerically feasible scheme which solves the discrete systems (derived in Chapter 3) is essential if the Sinc- Galerkin method is to represent a viable alternative for approximating solutions of (3.1), (3.60), and (3.61). Conventional algorithms typically solve linear systems written in the form Bx = b where x is a vector of unknowns and B is called the coefficient matrix. In the present setting the discrete Sinc-Galerkin systems (3.55); (3.86), and (3.87) have the coefficient matrices (4.2) B<2> - Imt ® Ax - At ® Iax , (4.3) BO) = Imt ® (Imy ® Ax 4- Ay ® Imx) - At ® Imymx , and ^ (4.4) B<4 > = Imt ® (Imziny ® Ax 4 Imz ® Ay ® Imx + Az ® 1Iiiymx * ~ At ® 1HizMiyinx respectively. Table I lists the dimensions of B ^ *, j = 2,3,4, and suggests a first consideration for a feasible scheme — machine storage. Methods which require full - storage mode are often impractical in the setting of systems arising from the numerical solution of partial differential equations. This is certainly true here. For example, if a Table I. Machine Storage Information for the Matrices and Equation (3.55) . (3.86) (3.87) j 2 3 4 Dimension of ^ mm. x m m m m m. x m m m m m m m x m m m mX t X t x y t x y t x y z t x y z t Nonzero elements m in (m + ni - I) X t'' x t ' m m m fm + m + m - 2) x y t\ x y z > m m m m (m + m + x y z tv x y m + m - 3) of B(j) z t ' Nonzero elements m m^m m m m m m m m m m O f Bj of Ax and At ' respectively. If the change of variables (4.7) Z = QtVP 62 is made in (3.44) then the equation takes the form (4.8) Ax Z - ZAt = H where (4.9) H = QtGP . The solution of (4.8), in component form, is given by -Mx < i < Hx (4.10) [ (Xx ) i - (Xt )1- ] _• = hjj , X 1 X J J J -Mt < j < Nt where h^j is the ij-th element of H . Once Z is determined from (4.10) the solution V of (3.44) is recovered via V = QZPt and, in turn, using (3.45) gives U. If (Xx )i = (Xt )j for some indices i and j , the equation in (4.10) is inconsistent. In the array of examples listed in Chapter 5 this matching of eigenvalues never occurs. The author believes the following: Conjecture (4.11): a (Ax ) D o ( A t) = 0 where a(A) denotes the spectrum of A. This is connected, so the author believes, to the different nature of the spectrum of Lu = -u" on compact versus semi­ infinite domains (see deBoor and Swartz [18]). While none of this has been proven, continuing analytic and numerical work supports the validity of the conjecture. Verifying the consistency of the system posed as in (3.44) is equivalent to showing the matrix is nonsingular which, in turn, implies the existence of a unique solution for the linear equations (3.55). To 63 establish the connection a property of tensor products that Davis [16] states is useful; that is, (4.12) . (A ® B)(C © D) = (AC) ® (BD) assuming each product is defined. Using (4.12) and transforming variables in (3.55) via (4.13) co(Z) = (PT ® Qt )co(V) yields (4.14) {Imt ® Ax - A t 0. InixJco(Z) = co(H) where (4.15) co(H) = (PT ® QT )c o (G) . Equation (4.14) shows that the eigenvalues of B^2) are given by all possible combinations of (Xx ) ^ - (Xt )j-, -Mx < i < Nx , -Mt < j < Nt . Equally significant is the fact that diagonalization of B ^ ) j.s accomplished by two intermediate diagonalizations of much smaller matrices. As a result, b (2) need never be stored. Indeed, if mx - m t, the largest array with respect to storage has mxmt elements. This product represents the number of unknown coefficients in the trial function (3.10) (see Table I). . Finally, note that under the transformations (4.7) and (4.13) the algorithms solving (3.44) and (3.55), respectively, can be machine coded identically. An alternative method of solution for (3.55) is described below. Consider the transformation co(ZD ) e CO(Q7V) = (Imt® QT )co(V) . (4.16) 64 ' Rewriting (3.55) in terms of co(ZD ) and multiplying through by Ijn^ ® Qt yields (2 ) (4.17) B£, CO(Zd ) .= CO(Hd ) where (4.18) Bll2 I = Imt • Ax - A t • Imjt and (4.19) . Co (Hd) = c o (QtG) . ( 2 ) The matrix Bd is a block matrix all of whose blocks are diagonal matrices. Explicitly, if the blocks are called BD(ij) then they are given by (4-20) 6D(Ij) “ 6Ijl aX - <*t>ij 1Hlx -Mt 5 i-l 5 Nt where (At )iJ denotes the ij-th element of A t, This improved structure has come at the minor expense of only one diagonalization of the mx x mx symmetric, negative definite matrix Ax . Moreover, machine storage for B ^ ) is considerably less than that needed, to save the nonzero elements of B^2* (see Table I). Block elimination techniques work extremely well on the system (4.17) as all matrix inversions and multiplications are performed on diagonal matrices. After the elimination procedure the solution is recovered from co(V) = (Imt ® QJc o (Zd ) = co(QZd ) from which U is found via U = D((*')%)V D((T')*). Although block elimination techniques for B^2) may be performed on scalar machines, the structure of B^2 ) is 65 inherently suited to vectorized computation. A block Gauss- Jordan elimination routine solving (4.17) has been written in explicit vector FORTRAN and implemented on the CYBER 205 computer. Its performance was consistent with the results of the algorithm which diagonalizes both Ax and A ^ . Solution of the Systems in Two and Three Space Variables The algorithms detailed in the previous section admit ready extensions for application to the discrete Sinc- Galerkin systems arising from (3.60) and (3.61). One extension is actually an. assortment of methods utilizing block techniques. The second is based on diagonalizing A„, Ay , At , and Az (if it occurs in the system). The discussion begins with the second method. A„ and A_ are symmetric hence there exist orthogonal Y z matrices R and S such that (4.21) R7AyR = Ay and (4.22) where A StAzS = Az and Az are diagonal matrices containing the eigenvalues of Ay and Az , respectively. Equations (3.86) and (3.87) may be transformed using changes of variables analogous to (4.13) as follows. Let = (z^j^) and Z(4) = -Mt < A < Nt , be defined by (ZijkA)' -Mx 5 1 5 Nx' -My 5 J 5 Ny , -Mz < k < Nz , (4.23) CO(Z 1 ) (P7 ® R7 ® Q7 )co(y(3 )) 66 and (4.24) co(z(4)) = (PT ® ST ® R t ® QT )co(V(4V) where Q and P are given by (4.5) and (4.6), respectively. Rewriting (3.86) in terms of yields [1I t ® ® aX + Ay * Imx) - A t ® ImymJcolZ <3 >) (4.25) = (PT ® Rt ® QT)co(G(3)) while (3.87) in terms of 4 ^ becomes [1Jiit ® ^ m zmy® Ax + Imz ® Ay ® Imx + Az ® Imymx ^ <4 -26) - At ® = (PT ® ST ® Rt ® Q^)co(G(4?) . If the arrays = (hi;jA ) and H^4 ^ - (HiJkjl) , -Mx < i < Nx , -My < j < Ny , -Mz < k < Nz , -Mt < A < Nt , are given by (4.27) co(H(3 )) = (PT ® Rt ® QT )co(G(3)) and (4.28) CO(H ^ 4 ^ ) = (P? ® ST ® RT ® QT )co(G (4 ^ ) then the solutions of (4.25) and (4.26), respectively, are (4.29) C(Xx )i + (Xy)j - (Xt )A ]zijA = h ijA and (4.30) C(Xx )i + (Xy )j + (Xz )k - (Xt)A ]zijkA = hijkA . As in (4.10), the possibility exists that (4.29) or (4.30) is inconsistent. Again this never happens for the examples discussed in Chapter 5. The author believes in the validity 67 of the analogue of Conjecture (4.11) for (4.29) and (4.30). That Is, based on the array of examples tested, if the spatial domain is compact while the temporal is semi­ infinite then, with respect to the sine discretization the spectrum of the discretized Laplacian is disjoint from the spectrum of A t - Assuming the conjecture is valid, the unknown coefficients U ^ in the expansion (3.62) are recovered from (4.29) via (4.23) and (3.79). Similarly, (4.30) may be transformed using (4.24) followed by (3.83) t:o give appearing in the trial function (3.63). One item remains to completely specify the algorithm. When solving the system as above the matrix-vector products which occur are highly structured. Taking advantage of that structure allows the products to be determined while retaining use of three and four subscripts. For instance, suppose X A (Xijk), l < i < m , I < j < n, I < k < p; (Sij), I < i ,j < m; B = (by^), I < i,j < n; and C = (Ci j ), I < i,j 5 p. The product (C ® B ® A)co(X) is defined by [(C ® B ® A)co (X)]ijk (4.31) P I t=l I ckt I b j s S=I air xrst r=l where [ ]ijk indicates the ijk-th element. Similarly, if A, B , and C are as before; X = (xijkA), I < i < m , . I < j < n , I < k < p, I < A < q ; and D = Cdij), I < i,j < q; then (D' ® C ® B ® A) co (X) is given elementwise via 68 [(D ® C O B ® A) CO (X) 12 q p n m (4.32) ^ ^Av ^ ckt ^ bJs ^ aIr xrstv v=l t=l s=l r=l Hence, the code needed to evaluate the vector c o (h (3 )) or co(H^4 )^ is merely a set of nested loops. Likewise, recovering the trial function coefficients from 3 ) and z(4) xs equally simple. Note that the diagonalization of Ax , Ay , At , and Az (when it occurs) is the definitive characteristic of the algorithm just described. Indeed, when these diagonalizations are carried out, the numerical solution of the linear systems posed as in (3.86) and (3.87) is indistinguishable from the numerical solution of (3.78) and (3.81), respectively. Further, the chief attributes of this method for the numerical solution of the discrete systems are a direct consequence of the diagonalizations. The first, implementation ease, is established by the preceeding discussion. The second is the minimal machine storage needed. The maximum array size required to solve (3.86) is mxmym t while for (3.87) it is mxmymzm t (see Table I). These values represent the number of unknown coefficients occurring in the trial function and, as such, also give the minimum array size needed. Hence, with regard to machine storage, it is not possible to do better than this method. The remaining class of algorithms is only briefly mentioned as it has yet to be implemented. The 69 distinguishing trait of the class is that at least one of the matrices Ax , Ay , A t, or A z (if it occurs) remains undiagonalized. Here At will not be diagonalized. As an example, the change of variables (4.33) CO(Z^3j) E (Ifflt © R t ® QT )c o (V(3)) transforms (3.86) to (4.34) b £3 > CO (Zj(3 I) = (Imt © RT © QT)c o (g (3>) where (4.35) b £3> = Imt ® (Imy ® Ax + Ay ® Imx) - A t ® Imymx . Bjl3 ) is a block matrix all of whose blocks are diagonal matrices. Therefore, one appropriate choice of solution is a block Gauss-Jordan elimination routine. However, a complication arises due to the added space dimension. That is, either the block routine must be written to handle unknowns with three subscripts or the array of unknowns must be adapted to two subscripts. In the latter case, the block routines used to solve (4.17) could be applied although, they may not take advantage of all vectorization possible. The dilemma is compounded for the system (4.36) B^4 ) co(Z^4 )) = (Imt ® ST 0 RT © QT )CO (G < 4 ) ) where (4.37) B(4 ) D - I mt ® ( 1HizItiy ® Ax + 1IItz ® Ay ® 1Hix + Az ® 1 JIlymx ) “ At ® 1 UlzIIlymx and 70 (4.38) CO(ZjJ4 J) = (Imt 0 ST ® RT ® QT )c o (V(4) ) Here the array of unknowns has four subscripts. To further cloud the picture, the manner in which the transformed coefficient matrices are stored may depend on the transformation. In particular, if only Ax is diagonalized the resulting transformations of B (3 ) and B^4 J require more storage in terms of nonzero elements than the matrices B^3 J and B1J4 J (see Table I). Diagonalizing the matrices Ax , Ay , A^, and possibly A z appears the easier route in the setting of problems (3.1), (3.60), and (3.61) but the author believes that block techniques are more widely applicable. For instance, the inclusion in the partial differential equation of lower order terms and/or nonconstant coefficients may discourage diagonalization of the entire coefficient matrix. 71 CHAPTER 5 NUMERICAL EXAMPLES OF THE SINC-GALERKIN METHOD The space-time Sinc-Galerkin method for (3.1), (3.60), and (3.61) was tested on a large class of problems whose known solutions exhibit differing behaviors. Choosing problems with known solutions allows for a more complete error evaluation. With regard to error, considerable numerical success is evident for all problems; hence, the nine examples reported herein are selected to highlight characteristics of the scheme. In addition the sample of nine breaks into three groups of three where within a group a given characteristic is illustrated for problems in one, two, and three space dimensions. Examples 5.1 - 5.3 and 5.7 - 5.9 display a first trait; that is, optimal parameter selections for an expected convergence rate result in a discrete system of minimum size for the Sinc-Galerkin method. Moreover, symmetry of the discrete system is maintained for any choice of parameters (see Examples 5.7 - 5.9). A characteristic common to each of the nine examples is that the numerical solution (3.10), (3.62), or (3.63) (depending on space dimension) is global. In particular, by employing the conformal map ?(t) = An(t) the approximate solution is valid on the infinite time 72 interval. Another significant property is exhibited by Examples 5.4 - 5.6 whose solutions are singular on their respective boundaries. With respect to implementation (that is, parameter selections), the Sinc-Galerkin method for these singular problems proceeds in the same fashion as for the analytic examples. Referring to the order statements following (5.7), the rate of convergence of the method is governed by the Lip a class to which the solution belongs;— not the degree of singularity of the solution. Perhaps the most distinguished feature of the Sinc- Galerkin method, indeed of spectral methods in general, is the potential exponential convergence rate [7]. For problem (2.1) this convergence is established analytically. Specifically, a direct consequence of Theorem (2.59) is that if 2N + I basis functions are used to construct an approximate solution of (2.1) then the order 0(exp(-K>fW) ) , K > 0, holds uniformly for both analytic and singular problems. With respect to problems (3.1), (3.60), and (3.61) and their respective approximates (3.10), (3.62), and (3.63), it is easy to establish the exponential convergence rate on the sine grid. . An analytic extension to uniform exponential convergence, in multiple dimensions does not currently exist. Arguments based on analytic function theory (see Chapter 2 for the development in one variable) lead directly to problems in functions of several complex variables. Alternatively, arguments based on the . 73 development as in Stenger [5] are complicated by the form of Green's functions in higher dimensions. The author believes that the exponential convergence rate of the Sinc-Galerkin method (indicated in the discussion following (5.12)) does hold uniformly in higher dimensions. Unfortunately, the development of analytic tools to verify this convergence has not kept pace with the development and numerical testing of the methods. Since block techniques for the problems in two and three space dimensions have yet to be implemented, the results reported are for the systems solved via all possible diagonalizations. When feasible the code was run on a Honeywell Level 66 computer using a double, precision ANS FORTRAN-77 compiler. For large systems arising from the discretization of (3.60) or (3.61) the array of unknowns exceeds the maximum array size allowed on the scalar machine mentioned. Hence, these systems were solved with the same code run on a CRAY XMP/48 using a single precision CFT.141 compiler. Single precision on the CRAY XMP/48 is equivalent to double precision on most scalar machines. Indeed, the CRAY XMP/48 reproduced the error results of smaller double precision Honeywell runs. With regard to problems in one space dimension, the maximum absolute error between the numerical approximation, u iA, and the true solution, u(xir tA ), at the sine gridpoints was determined and reported as 74 (5.1) IIE(1 M h ) N - = max lu-o - u (X1, to) I u . IA 1 where h is the stepsize associated with x. Similarly, for the problems in two and three space dimensions, the maximum errors are (5.2) IIE(2) (h)ll = max IU11O - u(xl,y1,to)| u IjA J J ' and (5.3) IIE(3 Mh)II = max | U 1J1ca - u (X1 , yj , zfe, tA ) | u ijkA respectively, where uIjA an< ^ u IjkA are numerical approximates of the true values n(x1 ,yj,tA ) and u (Xf,Yj/zk /tA)• The notation .ddd-v represents .ddd x 10-v. To implement the method in one space dimension, assume the solution u(x,t) of (3.1) satisfies (5.4) * j u (x ,t) j < Mx"+% (I-X)P+^ t y + * e"6t for some positive a, |$, v, and 6. As a consequence of (5.4) there exist constants K and L such that (5.5) Iu(x ,t)(T'(t))%| < K tY , t E (0,1) e-8t , t E [l,co) and (5.6) |u(x,t)(0'(x))%| < L X a , x E (0,%) (l-x)P , x E [%,!) hold uniformly for (x,t) E (0,1) x (0,»). These conditions taken in light of the one-dimensional problems (3.13) and (3.14), respectively, motivate the parameter selections for the approximate 75 (5.7) u A 'm^ = M fc + Nt + I . Recall that the asymptotic errors for the one-dimensional problem (3.13) are 0(exp(-rcd/ht) ) , 0(exp(-YMth t.) ) , and 0(exp(-6Ntht )) (see (3.24)). These depend on the growth bound (3.23) which is analogous to (5.5). Similarly the asymptotic errors associated with the spatial problem (3.14) are 0 (exp(-ird/hx ) ) , 0(exp(-aMxhx ) ) , and 0 (exp(-|iNxhx ) . Once Mx is chosen, balancing the asymptotic errors for the one­ dimensional problems with respect to 0(exp(-aMxhx )) determines the following stepsizes and summation limits: (5.8) h x = tc/ ( 2 a M x )^ where the angle d in Figure 6 is taken to be n / 2 , Assuming the solution.of (3.1) decays exponentially in time, (5.9) Nx = ^ Mx + I (5.10) ht = hx and (5.11) Mt = ^ Mx + I a smaller value of N t, (5.12) Nt = 76 may be chosen (see Lund [6]). Note that when — M„ or — Mp x Y x is an integer these are the values used for Nx and , respectively. The additional +.1 appearing in (5.9) , (5.11), and (5.12) guarantees that all the appropriate errors are at least 0(exp(-aMxhx )). Parameter selections for the approximate in two or three space dimensions are deduced via the same notion of balancing asymptotic errors. Hence, if the solution u(x,y,t) of (3.60) satisfies (5.13) Iu(x ,y , t ) I < Kxa+^ (I-X)P+^ y^+^ (l-y)f?+^ tY+^ e-61: then the additional stepsize by and summation limits My and Ny for the approximate (3.62) are given by (5.14) hy = hx (5.15) My a 7 *Mx + I and (5.16) Ny = ^V+ 1 Similarly for (3.61), assuming the solution u (x ,y ,z ,t ) satisfies (5.17) |u(x,y,z,t)I < Kw(x,y,t)zUt% (i-z)v+^ where (5.18) w(x,y, t ) = x a+^ (I-X)P+^ y^+^ (1-y) rt^ tY+^ e~6t , then the remaining parameters for the Sinc-Galerkin approximate (3.63) are I77 (5.19) hz = hx (5.20) Mz = and (5.21) Nz = a Mx + I V For all examples in one or two space dimensions a sequence of runs with Mx ='4, 8, 16, and 32 is reported. In three space dimensions a run corresponding to Mx = 32 is not currently possible, even on the CRAY XMP/48, due to the considerable machine storage needed for the array of unknowns. With respect to all nine examples, whenever an error result is reported from a CRAY XMP/48 run a * appears to the left. This * also indicates that the run was too large to perform on the Honeywell machine available to the author. In all cases reported, M^ > N t (see (5.12)), which yields a much smaller discrete system than the choice of Nt given by (3.26). The choice Mt = Nt or that given by (3.26) results in larger matrices with ho corresponding increase in accuracy. Worth noting is that as a consequence of the map T(t) = An(t), the sine gridpoints in time become quite large. Recalling (3.9), t ^ s e ^ ^ t _ In @11 Qf th^ following results the selection Mx = 16, for example, leads to a choice of Nt that yields t ^ = e3 ^*7854 ^ =10.55. A 78 large number of iterations is typically required to approximate the solution for a t of similar magnitude when using time-marching schemes. Hence, in comparison, the Sinc-Galerkin method requires much smaller systems to attain the same accuracy as finite differences (see the discussion following Examples 5.7 - 5.9). The parameter a = % is common to all nine examples hence the stepsize h = hx = it / (2aMx ) ^ and the asymptotic error 0(exp(-aMxhx )) are the same throughout this chapter. As a result; these values are not included in every table. Computationally the asymptotic rate (which for d = jr/2 is 0(exp[-aMxhx ]) = 0(exp[-%/(2(xMx )%]) is consistently attained at the sine gridpoints. Exponentially Damped Sine Waves Example 5.1: Damped sine wave in one-dimension. . Lv (x ,t ) 2 v tt(x,t) - vxx (x,t) = {Jt2 + (2 - 4t + (I + jc2)t2)e-t}sin(rcx) v(0,t) = v(l,t) = 0 v(x, 0) = Sin(Ttx) , v t (x, 0) = 0 This problem is transformed to the form (3.1) via u(x,t) = v(x,t) - sin(Ttx) , which yields Lu(x,t) = (2 - 4t + (I + Tt2 Jt2 Je-tSin(Itx) u(0,t) = u(l,t) = 0 u(x,0) = u t (x,0) = 0 The analytic solution is u(x,t) = t2e-tsin (itx) . The 79 parameters (see (5. 4)) are a = P = 1/2, Y = 3/2, and 6 = 1 . Table 2 displays the maximum absolute error at the sine gridpoints for the sequence X= I l 8, 16, and 3 2. Additionally a column with asymptotic error is given. ,Since a = |$, Mx = Njf. In this case the sum on i in (5.7) is referred to as a centered sum. The sum on A in (5.7) is a noncentered sum, as commonly happens when using (5.12). Table 2. Numerical Results for Example 5. I . MX NX M t N t h HE^1) (h) I Asymptotic Error 4 4 2 I 1.57080 .378-1 .432-1 8 8 3 2 I .11072 .253-1 .118-1 16 16 6 3 .78540 .905-3 .188-2 32 32 11 4 .55536 .320-3 .138-3 Example 5.2: Damped sine wave in two-dimensions. L u (x ,y ,t ) = (2 - 4t + (I + 2rc2)t2)e-tsin(rcx)sin(rcy) u I = 0 *3(0,I)2 u| = u t = 0 , It=O t=0 where, recall, (0,1)n refers to the open n-dimensional unit cube. The analytic solution is u(x,y, t ) = t2e-tsin( jcx) sin( rcy) , hence the parameters are Q = P = ? = ?? = 1/2, v = 3/2, and 6 = 1. Table 3 lists IIE^ 2 ) (h) II, the maximum absolute error at the sine 80 gridpoints. Note that the result corresponding.to M„ = 32 was obtained on the CRAY XMP/48 as indicated by the *. Table 3. Numerical Results for Example 5.2 MX N X MY NY M t N t IIE^2) (h) Il 4 4 4 4 . 2 I .813-1 8 8 8 8 3 2 .760-1 16 16 16 : 16 6 3 .387-3 32 32 32 32 11 4 *.451-3 Example 5.3: Damped sine wave in three- dimensions. Lu(x ,y ,z ,t) = (2 - 4t + (I + 3n 2)t 2) e-tsin( rcx) sin( jry) sin( jcz ; = 0 3 (0 ,1)^ = u, = 0 \ t=0 't=0 The true solution u(x,y;z,t) = t2e-tsin( jtx) sin (iry) sin( Jtz) is the three-dimensional analogue of the previous two solutions. The results for this problem are given in Table 4. 81 mX nX My Hy Mz V Mt Mt HEj13 1 (H)II 4 4 4 4 4 4 2 I . 325-0 8 8 8 8 8 8 3 2 * .123-1 16 16 16 16 16 16 6 3 * .451-3 Table 4. Numerical Results for Example 5.3. With respect to Examples 5.1, 5.2, and 5.3, the maximum absolute value of the true solution is the same. Table 5 indicates that there is no consistent difference in the error results for the various space dimensions. Table 5. Numerical Results for the Damped Sine Wave. Mx h IIE^ lj (h)ll IIE^) (h)|| IIE^ 3* (h)|| Asymptotic Error 4 1.57080 .378-1 .813-1 .325-0 .432-1 8 I.11072 .253-1 .760-1 *.123-1 .118-1 16 .78540 .905-3 .387-3 *.451-3 .188-2 32 .55536 .320-3 *.451-3 wmm — mtm — ■ .138-3 82 Singular Problems Example 5.4: A singular problem in one space dimension. Lu(x ,t) (3 - 12t + 4t2 ) x An(x) ITt u(0,t) = u(l,t) = 0 u(x,0) = U t (XfO) = 0 The true solution for this problem is u(x,t ) = t3/2e-t x An(x). The solution is algebraically singular at t = 0 and logarithmically singular at x = 0. Although ux is unbounded at x = 0 , condition (3.36) is satisfied. Further, despite the different character of the singularities the method requires no modification with regard to implementation. For a = p = % and Y = 6 = 1 , the stepsize is again h = hx = ht = k / (Mx )^ and the asymptotic rate shown in Table 6 is achieved, despite the presence of singularities. Table 6. Numerical Results for Example 5.4. MX NX M t N t IIEy ) (h) Il Asymptotic Error 4 4 . 2 I .321-2 .432-1 8 8 4 2 .405-2 .118-1 16 16 8 3 .852-3 .188-2 32 32 16 4 .767-4 .138-3 83 In fact, comparing Tables 6 and 2, this problem has slightly smaller errors with a minor increase in the number of gridpoints near t = 0 due to the algebraic singularity. The logarithmic singularity affects neither the performance of the method nor the system size. Example 5.5: A singular problem in two space dimensions. Lu(x ,y ,t) (3 - 12t + 4t2 ) x An(x)y An(y) - 4t2 ' ^ An(y) + — An(x) e 4>TF u I = 0 '3(0,I)2 u I = u t I = 0 It=0 't=0 The true solution for this problem, u (x ,t ) = t3/2e-t x An(x)y An(y), exhibits the same algebraic singularity at t = 0 as the previous example. In addition, the solution is singular on the spatial boundary. If anything, the difficulties appear more severe here than in ‘ Example 5.4; however, the errors shown in Table 7 are slightly better than those, in Table 6. . The parameters used are ot = fi = C = f? = % and y = 6 = 1 . 84 Table 7. Numerical Results for Example 5.5. MX Nx mY ”y Mt Nt . . '• IIE^ 12 > (h)|| 4 4 4 4 2 I .221-2 8 . 8 8 8 4 2 .370-2 16 16 16 16 8 3 .235-3 32 32 32 32 16 4 *.530-4 Example 5.6: A singular problem in three space dimensions. Lu (x , y , z , t ) = (3 - 12t + 4t2) x An(x)y An(y)z An (z ) - 4t2 ^ An(y)An(z) + ^ An(x)An(z) + — An(x)An(y) z uj = 0 '3(0,I)3 U j = U t I = 0 *t=0 't=0 The true solution is u(x,y,z, t) = t3/,2e-t x An(x)y An(y)z An(z) and, with the addition of u = v = ?£, the parameters are identical to those in Example 5.5. Table 8 lists the results for the present example while Table 9 includes the errors for each of the singular problems. e U t 85/ Table 8. Numerical Results for Example 5.6. MX Nx mY NY Mz Nz Mt Nt IIEjl3) (h)|| 4 4 4 4 4 4 2 I .354-2 8 8 8 8 8 8 4 2 *.286-3 16 16 16 16 16 16 8 3 *.405-4 Table 9. Numerical Results for the Singular Problems. MX h I l ) (h)ll IIEjl2 ) (h) I IIEjl3) (h) I AsymptoticError 4 1.57080 .321-2 .221-2 .354-2 .432-1 8 I .11072 .405-2 .370-2 *.286-3 .118-1 16 .78540 .852-3 .235-3 *.405-4 .188-2 32 .55536 .767-4 *.530-4 — .138-3 For runs corresponding to Mx = 8 and 16 the error associated with Example 5.6 is almost a full decimal place better than the singular examples in one and two space dimensions. Moreover, this improvement occurs despite the greater singularity of the problem. . No such analogue exists when using finite differences. 86 Problems Dictating Noncentered Sums in All Indices Example 5.7: Noncentered sums in one space dimension. Lu(x,t) = {(2 - 4t + t2 ) x (I - x ) 3 - 6t2 (l - x)(2x - I)}e_t u(0,t) = u(l,t) = O U(XfO) = u t(x,0) = 0 The analytic solution of this problem is u(x,t) = t2e-tx(l - x)3 . The parameter selections a = 1/2, P = 5/2, Y = 3/2, and 6 = 1 dictate noncentered sums in both space and time (Mx f Nx and M t ^ Nt ) a^s shown with the errors in Table 10. Again h s hx = ht = %/(Mx )%. Table 10. Numerical Results for Example 5.7. MX NX Mt Nt IIE^l) (h)|| Asymptotic Error 4 I 2 I .237-2 .432-1 8 2 3 2 .208-2 .118-1 16 4 6 . 3 .707-4 .188-2 32 7 11 4 .237-4 .138-3 Even though there are three-fourths fewer gridpoints in the region % < x < I , the same asymptotic rate exp(-Mxhx/2) is predicted. In comparison to Examples 5.1 and 5.4 the results are better even with the smaller system size. This 87 is because the convergence rate of the Sinc-Galerkin method is governed by the asymptotic behavior of the solution. Example 5.8: Noncentered sums in two space dimensions. Lu(x,y,t) = 10{(2 - 4t + t2) x (I - x)3 y 2(l - y) - 2t2 [3(I - x)(2x - l)y2(l - y ) + x(I - x)3 (l - 3y)]}e-t u I = o '3(0,I)2 = U t I = 0 t=0 U = O The solution of this problem, u(x,y,t) = 10t 2e~tx(i - x)3 y 2(l - y ) , yields the parameter selections, a = 1/2, |3 = 5/2, £ = 3/2, v = 1/2, y = 3/2, and 6 = 1 . As a result of these parameter selections the sums appearing in the approximate (3.62) are all noncentered. A more complete discussion of how small this system is in comparison to a finite difference solution follows the next example. Note that the multiplicative factor of 10 appearing on the right - hand side of the problem was chosen so that the solution was of the same order of magnitude as the previous example. Table 11 gives the results for the current example. 88 Table 11. Numerical Results for Example 5.8. MX Nx “y V Mt Nt IIE^2 ) (h) Il 4 I 2 4 2 I .805-2 8 2 3 8 3 2 .489-2 16 4 6 16 6 3 .867-4 32 7 11 . 32 11 4 *.682-3 . Example 5.9: Noncentered sums in three space dimensions. L u (x ,y ,z ,t ) = 100{(2 - 4t + t2)x(I - x)3Y 2(I - y)z3(I - z)2 - 2t2 [3(I - x)(2x - I)y 2(I - y)z3(I - z)2 + x(I - x)3 (I - 3y)Z3 (I - z)2 + x(I - x)3y2 (I - y ) (IOz3 - 12z2 + 3z)]}e~^ u| = 0 I 3(0,1)3 u| = u t | = 0 . It=O *t=0 The analytic solution of this problem is u(x,y,z, t) = IOOt2S." tXt I - x)3 y 2 (I - y)z3 ( I z) 2 . Analogous to the factor of 10 in Example 5.8, the factor of 100 is a magnitude adjustment. The parameters a = 1/2, P = 5/2, f = 3/2, 7) = 1/2 , u - 5/2, v = 3/2, Y = 3/2, and 6 = 1 yield the summation parameters appearing in Table 12. The composite error results for Examples 5.7 - 5.9 are given in Table 13. 89 Table 12. Numerical Results for Example 5.9. ■ MX Nx My NY . Mz Nz Mt Nt HEjl3> (h) Il 4 I 2 4 I 2 2 I .395-2 8 2 3 8 2 3 3 2 *.389-3 16 4 6 16 4 6 6 3 *.313-4 Table 13. Numerical Results for Problems Dictating Noncentered Sums in All Indices. MX h I l (h) Il IlEu2 ^(%) Il Il E^S ) (h) Il Asymptotic Error 4 1.57080 .237-2 .805-2 .395-2 .432-1 8 1.11072 .208-2 .489-2 *.389-3 .118-1 16 .78540 .707-4 .867-4 *.313-4 .188-2 32 .55536 .237-4 *.682-3 _ _ _ _ _ .138-3 The last three examples illustrate how the parameter selections described by (5.8) - (5.12), (5.14) - (5.16), and (5.19) - (5.21) minimize the expenditure of computational effort for the Sinc-Galerkin method. This also increases the method's competitiveness with respect to alternatives like finite differences. As discussed in Chapter 4, one measure of competitiveness is the number of nonzero elements in the coefficient matrices. For instance, the asymptotic I 90 error corresponding to runs using Mjj = 16 is .188-2. Maintaining this error when using finite differences requires a stepsize hFD = .04 and iteration of the time domain to at least t = 6.3. This translates to 25 gridpoints on any of the spatial intervals and 158 steps in time. Solving the finite difference scheme as one system yields a matrix with 674,200 nonzero elements for Example 5.8 and 21,595,000 for Example 5.9. Their corresponding Sinc-Galerkin matrices have 251,160 and 3,294,060 nonzero entries. This evidence supports Stenger's [13] statement that the computational efficiency of the Sinc-Galerkin method becomes more apparent in higher dimensions. Further, the author notes that the computational savings exhibited herein are, in large part, due to including the time domain in the Galerkin procedure. REFERENCES CITED Farlow, S.J. Partial Differential Equations for ^ Scientists and Engineers, John Wiley and Sons, New York, 1982. Weinberger, H .F . A First Course in Partial Differential Equations, John Wilqy and Sons, New York, 1965. Botha, J .F . and Finder, G .F . Fundamental Concepts in the Numerical Solution of Differential Equations, John Wiley and Sons, 1983. Ames, W.F. Numerical Methods for Partial Differential Equations, 2nd ed., Academic Press, New York, 1977. Stenger, F . "A Sinc-Galerkin Method of Solution of Boundary Value Problems." Mathematics of Computation 33 (January 1979): 85-109. Lund, J. "Symmetrization of the Sinc-Galerkin Method for Boundary Value Problems." Mathematics of Computation 47 (October 1986): 571-588. Gottlieb, D . and Orszag, S.A. Numerical Analysis of Spectral Methods: Theory and Application, SIAM, Philadelphia, 1977. Whittaker, E .T . "On the Functions Which Are Represented by the Expansions of the Interpolation Theory." Proc. Roy. Soc. Edinburgh 35 (1915): 181-194. Whittaker, J.M. Interpolatory Function Theory, Cambridge University Press, London, 1935. McNamee, J., Stenger, F., and Whitney, J.L. "Whittaker's Cardinal Function in Retrospect." Mathematics of Computation 25 (January 1971): 141-154. Stenger, F . "The Approximate Solution of Convolution- Type Integral Equations." SIAM J . Math. Anal. 4 (August 1973): 536-555. 92 REFERENCES CITED— Continued 12. Stenger, F . "Integration Formulas via the Trapezoidal Rule." J . Inst. Maths. Applies. 12 (1973): 103-114. 13. Stenger, F. "Numerical Methods Based on Whittaker Cardinal, or Sine Functions." SIAM Review 23 (April 1981) : 165-223. 14. Gear, W .C . Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, New Jersey, 1971. 15. McArthur, K.M., Bowers, K.L., and Lund, J. "Numerical Implementation of the Sinc-Galerkin Method for Second- Order Hyperbolic Equations." To appear in Numerical Methods for Partial Differential Equations, (1987). 16. Davis, P.J. Circulant Matrices, John Wiley and Sons, New York, 1979. 17. Isaacson, E . and Keller, H .B . Analysis of Numerical Methods, John Wiley and Sons, New York, 1966. 18. deBoor, C. and Swartz, B . "Collocation Approximation to Eigenvalues of an Ordinary Differential Equation: Numerical Illustrations." Mathematics of Computation 36 (January 1981): 1-19. ■. ' MONTANA STATE UNIVERSITY LIBRARIES 762 1002 490 5