Gauss–Legendre method

In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation methods based on the points of Gauss–Legendre quadrature. The Gauss–Legendre method based on s points has order 2s.[1]

All Gauss–Legendre methods are A-stable.[2]

The Gauss–Legendre method of order two is the implicit midpoint rule. Its Butcher tableau is:

$$
\begin{array}{c|c}
\tfrac{1}{2} & \tfrac{1}{2} \\
\hline
 & 1
\end{array}
$$
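
For illustration (a standard calculation, not part of the cited source), applying the implicit midpoint rule to the linear test equation $\dot{x} = \lambda x$ gives

$$k_1 = \lambda\left(x_n + \tfrac{h}{2}k_1\right) \quad\Rightarrow\quad x_{n+1} = x_n + h k_1 = \frac{1 + z/2}{1 - z/2}\,x_n, \qquad z = h\lambda,$$

and $\left|\tfrac{1+z/2}{1-z/2}\right| \le 1$ exactly when $\operatorname{Re} z \le 0$, so the whole left half-plane is stable, consistent with A-stability.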

The Gauss–Legendre method of order four has Butcher tableau:

$$
\begin{array}{c|cc}
\tfrac{1}{2}-\tfrac{1}{6}\sqrt{3} & \tfrac{1}{4} & \tfrac{1}{4}-\tfrac{1}{6}\sqrt{3} \\
\tfrac{1}{2}+\tfrac{1}{6}\sqrt{3} & \tfrac{1}{4}+\tfrac{1}{6}\sqrt{3} & \tfrac{1}{4} \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}
$$
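
As a quick consistency check (our arithmetic, following the standard Runge–Kutta order conditions), the coefficients satisfy the row-sum condition $\sum_j a_{ij} = c_i$ and the first two order conditions:

$$\sum_i b_i = \tfrac{1}{2} + \tfrac{1}{2} = 1, \qquad \sum_i b_i c_i = \tfrac{1}{2}\left(\tfrac{1}{2}-\tfrac{1}{6}\sqrt{3}\right) + \tfrac{1}{2}\left(\tfrac{1}{2}+\tfrac{1}{6}\sqrt{3}\right) = \tfrac{1}{2}.$$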

The Gauss–Legendre method of order six has Butcher tableau:

$$
\begin{array}{c|ccc}
\tfrac{1}{2}-\tfrac{1}{10}\sqrt{15} & \tfrac{5}{36} & \tfrac{2}{9}-\tfrac{1}{15}\sqrt{15} & \tfrac{5}{36}-\tfrac{1}{30}\sqrt{15} \\
\tfrac{1}{2} & \tfrac{5}{36}+\tfrac{1}{24}\sqrt{15} & \tfrac{2}{9} & \tfrac{5}{36}-\tfrac{1}{24}\sqrt{15} \\
\tfrac{1}{2}+\tfrac{1}{10}\sqrt{15} & \tfrac{5}{36}+\tfrac{1}{30}\sqrt{15} & \tfrac{2}{9}+\tfrac{1}{15}\sqrt{15} & \tfrac{5}{36} \\
\hline
 & \tfrac{5}{18} & \tfrac{4}{9} & \tfrac{5}{18}
\end{array}
$$

The computational cost of higher-order Gauss–Legendre methods is usually excessive, so they are rarely used.[3]

Intuition

Gauss–Legendre Runge–Kutta (GLRK) methods solve an ordinary differential equation $\dot{x} = f(x)$ with initial condition $x(0) = x_0$. The distinguishing feature of GLRK is the estimation of $x(h) - x_0 = \int_0^h f(x(t))\,dt$ by Gaussian quadrature:

$$x(h) = x(0) + \frac{h}{2}\sum_{i=1}^{\ell} w_i k_i + O(h^{2\ell}),$$

where $k_i = f(x(hc_i))$ are the sampled velocities, $w_i$ are the quadrature weights, $c_i = \tfrac{1}{2}(1 + r_i)$ are the abscissas, and $r_i$ are the roots, $P_\ell(r_i) = 0$, of the Legendre polynomial of degree $\ell$. A further approximation is needed, because $k_i$ depends on the exact solution $x(hc_i)$, which is unknown. To maintain a truncation error of order $O(h^{2\ell})$, the stage values $k_i$ only need to be accurate to order $O(h^{2\ell-1})$. The implicit Runge–Kutta definition $k_i = f\!\left(x_0 + h\sum_j a_{ij}k_j\right)$ accomplishes this; it is an implicit constraint that must be solved by a root-finding algorithm such as Newton's method. The values of the Runge–Kutta parameters $a_{ij}$ can be determined from a Taylor series expansion in $h$.
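
As a short worked example (our derivation, but standard), take $\ell = 2$: the Legendre polynomial $P_2(r) = \tfrac{1}{2}(3r^2 - 1)$ has roots $r_{1,2} = \mp 1/\sqrt{3}$ with Gauss weights $w_{1,2} = 1$, so

$$c_{1,2} = \frac{1}{2}\left(1 \mp \frac{1}{\sqrt{3}}\right) = \frac{1}{2} \mp \frac{\sqrt{3}}{6}, \qquad x(h) \approx x_0 + \frac{h}{2}(k_1 + k_2),$$

which reproduces the abscissas $c_i$ and weights $b_i = \tfrac{1}{2}w_i = \tfrac{1}{2}$ of the order-four Butcher tableau above.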

Practical example

The Gauss–Legendre methods are implicit, so in general they cannot be applied exactly. Instead one makes an educated guess of the stage values $k_i$ and then uses Newton's method to converge arbitrarily close to the true solution. Below is a MATLAB program that implements the Gauss–Legendre method of order four and uses it to integrate the Lorenz system.

% starting point near the Lorenz attractor
x = [ 10.5440; 4.1124; 35.8233];

dt = 0.01;                 % step size
N  = 10000;                % number of steps
x_series = zeros(3, N+1);  % preallocate the trajectory
x_series(:,1) = x;
for i = 1:N
  x = gauss_step(x, @lorenz_dynamics, dt, 1e-7, 1, 100);
  x_series(:,i+1) = x;
end

plot3( x_series(1,:), x_series(2,:), x_series(3,:) );
set(gca,'xtick',[],'ytick',[],'ztick',[]);
title('Lorenz Attractor');

function [td, j] = lorenz_dynamics(state)
  % return a time derivative and a Jacobian of that time derivative
  x = state(1);
  y = state(2);
  z = state(3);

  sigma = 10;
  beta  = 8/3;
  rho   = 28;

  td = [sigma*(y-x); x*(rho-z)-y; x*y-beta*z];

  j = [-sigma, sigma, 0;
        rho-z, -1, -x;
        y, x, -beta];
end

function x_next = gauss_step( x, dynamics, dt, threshold, damping, max_iterations )
  [d,~] = size(x);
  sq3 = sqrt(3);
  if damping > 1 || damping <= 0
    error('damping should be between 0 and 1.')
  end

  % Use explicit Euler steps as initial guesses
  [k,~] = dynamics(x);
  x1_guess = x + (1/2-sq3/6)*dt*k;
  x2_guess = x + (1/2+sq3/6)*dt*k;
  [k1,~] = dynamics(x1_guess);
  [k2,~] = dynamics(x2_guess);

  a11 = 1/4;
  a12 = 1/4 - sq3/6;
  a21 = 1/4 + sq3/6;
  a22 = 1/4;

  % Residual of the implicit stage equations; named 'residual' so it
  % does not shadow MATLAB's built-in error function used below
  residual = @(k1, k2) [k1 - dynamics(x+(a11*k1+a12*k2)*dt); k2 - dynamics(x+(a21*k1+a22*k2)*dt)];
  er = residual(k1, k2);
  iteration=1;
  while (norm(er) > threshold && iteration < max_iterations)
    fprintf('Newton iteration %d: error is %f.\n', iteration, norm(er) );
    iteration = iteration + 1;
   
    [~, j1] = dynamics(x+(a11*k1+a12*k2)*dt);
    [~, j2] = dynamics(x+(a21*k1+a22*k2)*dt);

    % Jacobian of the residual with respect to the stacked stages [k1; k2]
    j = [eye(d) - dt*a11*j1, -dt*a12*j1;
         -dt*a21*j2, eye(d) - dt*a22*j2];

    % Damped Newton update for the stage values
    k_next = [k1;k2] - damping * linsolve(j, er);
    k1 = k_next(1:d);
    k2 = k_next(d+(1:d));
   
    er = residual(k1, k2);
  end
  if norm(er) > threshold
    error('Newton did not converge by %d iterations.', max_iterations);
  end
  x_next = x + dt / 2 * (k1 + k2);
end

This algorithm is surprisingly cheap. The error in $k_i$ can fall below $10^{-12}$ in as few as two Newton steps. The only extra work compared to explicit Runge–Kutta methods is the computation of the Jacobian.
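If an analytic Jacobian is tedious to derive, a finite-difference approximation can be substituted for the second output of the dynamics function. The helper below is an illustrative sketch (the name fd_jacobian and the step size are our choices, not part of the source); it costs d extra evaluations of the dynamics per call.

function j = fd_jacobian(f, x)
  % Forward-difference approximation of the Jacobian of f at x.
  d  = numel(x);
  f0 = f(x);                 % baseline time derivative
  j  = zeros(d, d);
  h_fd = 1e-7;               % difference step; a common heuristic choice
  for col = 1:d
    dx = zeros(d, 1);
    dx(col) = h_fd;
    j(:, col) = (f(x + dx) - f0) / h_fd;
  end
end

For the Lorenz system above, fd_jacobian(@(s) lorenz_dynamics(s), x) should agree with the analytic Jacobian to within the truncation error of the forward difference.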

[Figure: An integrated orbit near the Lorenz attractor.]

Time-symmetric variants

At the cost of an additional implicit relation, these methods can be adapted to have time-reversal symmetry. In these variants, the averaged position $(x_f + x_i)/2$ is used in computing $k_i$, instead of the initial position $x_i$ used in standard Runge–Kutta methods. The method of order two is again the implicit midpoint rule:

$$k_1 = f\left(\frac{x_f + x_i}{2}\right)$$
$$x_f = x_i + h k_1$$
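
A minimal MATLAB sketch of one such step, solving the implicit relation by fixed-point iteration (the function name, tolerance, and iteration cap are illustrative choices, not from the source):

function x_f = symmetric_midpoint_step(f, x_i, h)
  % Solve x_f = x_i + h*f((x_f + x_i)/2) by fixed-point iteration.
  x_f = x_i + h * f(x_i);              % explicit Euler initial guess
  for iter = 1:50
    x_new = x_i + h * f((x_f + x_i) / 2);
    if norm(x_new - x_f) < 1e-12
      x_f = x_new;
      return
    end
    x_f = x_new;
  end
  % if the tolerance is not met, the last iterate is returned
end

Fixed-point iteration converges for sufficiently small $h$; for stiff problems, Newton's method as in gauss_step above is more robust.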

The method of order 4 with 2 stages is as follows.

$$k_1 = f\left(\frac{x_f + x_i}{2} - \frac{\sqrt{3}}{6}\,h k_2\right)$$
$$k_2 = f\left(\frac{x_f + x_i}{2} + \frac{\sqrt{3}}{6}\,h k_1\right)$$
$$x_f = x_i + \frac{h}{2}\,(k_1 + k_2)$$

The method of order 6 with 3 stages is as follows.

$$k_1 = f\left(\frac{x_f + x_i}{2} - \frac{\sqrt{15}}{15}\,h k_2 - \frac{\sqrt{15}}{30}\,h k_3\right)$$
$$k_2 = f\left(\frac{x_f + x_i}{2} + \frac{\sqrt{15}}{24}\,h k_1 - \frac{\sqrt{15}}{24}\,h k_3\right)$$
$$k_3 = f\left(\frac{x_f + x_i}{2} + \frac{\sqrt{15}}{30}\,h k_1 + \frac{\sqrt{15}}{15}\,h k_2\right)$$
$$x_f = x_i + \frac{h}{18}\,(5k_1 + 8k_2 + 5k_3)$$

Notes

  1. ^ Iserles 1996, p. 47
  2. ^ Iserles 1996, p. 63
  3. ^ Iserles 1996, p. 47

References

  • Iserles, Arieh (1996), A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, ISBN 978-0-521-55655-2.