

# NAG Toolbox Chapter Introduction: D01 — Quadrature

## Scope of the Chapter

This chapter provides functions for the numerical evaluation of definite integrals in one or more dimensions and for evaluating weights and abscissae of integration rules.

## Background to the Problems

The functions in this chapter are designed to estimate:
(a) the value of a one-dimensional definite integral of the form
$$\int_a^b f(x)\,dx$$  (1)
where $f(x)$ is defined by you, either at a set of points $(x_i, f(x_i))$, for $i = 1,2,\dots,n$, where $a = x_1 < x_2 < \cdots < x_n = b$, or in the form of a function; and the limits of integration $a,b$ may be finite or infinite.
Some methods are specially designed for integrands of the form
$$f(x) = w(x)g(x)$$  (2)
which contain a factor $w(x)$, called the weight function, of a specific form. These methods take full account of any peculiar behaviour attributable to the $w(x)$ factor.
(b) the values of the one-dimensional indefinite integrals arising from (1) where the ranges of integration are interior to the interval $[a,b]$.
(c) the value of a multidimensional definite integral of the form
$$\int_{R_n} f(x_1,x_2,\dots,x_n)\,dx_n \cdots dx_2\,dx_1$$  (3)
where $f(x_1,x_2,\dots,x_n)$ is a function defined by you and $R_n$ is some region of $n$-dimensional space.
The simplest form of $R_n$ is the $n$-rectangle defined by
$$a_i \le x_i \le b_i, \quad i = 1,2,\dots,n$$  (4)
where $a_i$ and $b_i$ are constants. When $a_i$ and $b_i$ are functions of $x_j$ ($j < i$), the region can easily be transformed to the rectangular form (see page 266 of Davis and Rabinowitz (1975)). Some of the methods described incorporate the transformation procedure.
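The substitution behind this transformation can be sketched in a few lines. The helper name below is hypothetical (this is not one of the chapter's routines): variable inner limits $a_2(x) \le y \le b_2(x)$ are mapped to the fixed range $0 \le t \le 1$, with the Jacobian of the substitution absorbed into the transformed integrand.

```python
def fix_inner_limits(f, a2, b2):
    """Illustrative sketch (hypothetical helper, not a NAG routine):
    map variable inner limits a2(x) <= y <= b2(x) onto 0 <= t <= 1
    via y = a2(x) + t*(b2(x) - a2(x)).  The Jacobian of the
    substitution is b2(x) - a2(x), so integrating g over the
    rectangle equals integrating f over the original region."""
    def g(x, t):
        width = b2(x) - a2(x)
        return f(x, a2(x) + t * width) * width
    return g

# Example: the triangle 0 <= y <= x, 0 <= x <= 1 becomes the unit square;
# for f = 1 the transformed integrand is g(x, t) = x.
g = fix_inner_limits(lambda x, y: 1.0, lambda x: 0.0, lambda x: x)
```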

### One-dimensional Integrals

To estimate the value of a one-dimensional integral, a quadrature rule uses an approximation in the form of a weighted sum of integrand values, i.e.,
$$\int_a^b f(x)\,dx \simeq \sum_{i=1}^N w_i f(x_i).$$  (5)
The points $x_i$ within the interval $[a,b]$ are known as the abscissae, and the $w_i$ are known as the weights.
More generally, if the integrand has the form (2), the corresponding formula is
$$\int_a^b w(x)g(x)\,dx \simeq \sum_{i=1}^N w_i g(x_i).$$  (6)
If the integrand is known only at a fixed set of points, these points must be used as the abscissae, and the weighted sum is calculated using finite difference methods. However, if the functional form of the integrand is known, so that its value at any abscissa is easily obtained, then a wide variety of quadrature rules are available, each characterised by its choice of abscissae and the corresponding weights.
The appropriate rule to use will depend on the interval $[a,b]$ – whether finite or otherwise – and on the form of any $w(x)$ factor in the integrand. A suitable value of $N$ depends on the general behaviour of $f(x)$; or of $g(x)$, if there is a $w(x)$ factor present.
Among possible rules, we mention particularly the Gaussian formulae, which employ a distribution of abscissae which is optimal for $f(x)$ or $g(x)$ of polynomial form.
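As a concrete instance of the weighted sum (5), the following standalone sketch (not one of the chapter's functions) applies the classical 3-point Gauss–Legendre rule, which is exact for polynomials of degree up to 5:

```python
import math

def gauss_legendre_3(f, a, b):
    """Approximate the integral of f over [a, b] with the 3-point
    Gauss-Legendre rule: a weighted sum sum_i w_i * f(x_i) as in (5)."""
    # Nodes and weights on the reference interval [-1, 1].
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    # Map the nodes to [a, b]; the change of variable contributes `half`.
    return half * sum(w * f(mid + half * t) for w, t in zip(weights, nodes))

# Exact for degree <= 5, e.g. the integral of x^5 over [0, 1] is 1/6.
approx = gauss_legendre_3(lambda x: x**5, 0.0, 1.0)
```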
The choice of basic rules constitutes one of the principles on which methods for one-dimensional integrals may be classified. The other major basis of classification is the implementation strategy, of which some types are now presented.
(a) Single rule evaluation procedures
A fixed number of abscissae, $N$, is used. This number and the particular rule chosen uniquely determine the weights and abscissae. No estimate is made of the accuracy of the result.
(b) Automatic procedures
The number of abscissae, $N$, within $[a,b]$ is gradually increased until consistency is achieved to within a level of accuracy (absolute or relative) you requested. There are essentially two ways of doing this; hybrid forms of these two methods are also possible:
(i) whole interval procedures (non-adaptive)
A series of rules using increasing values of $N$ are successively applied over the whole interval $[a,b]$. It is clearly more economical if abscissae already used for a lower value of $N$ can be used again as part of a higher-order formula. This principle is known as optimal extension. There is no overlap between the abscissae used in Gaussian formulae of different orders. However, the Kronrod formulae are designed to give an optimal $(2N+1)$-point formula by adding $(N+1)$ points to an $N$-point Gauss formula. Further extensions have been developed by Patterson.
(ii) adaptive procedures
The interval $[a,b]$ is repeatedly divided into a number of sub-intervals, and integration rules are applied separately to each sub-interval. Typically, the subdivision process will be carried further in the neighbourhood of a sharp peak in the integrand than where the curve is smooth. Thus, the distribution of abscissae is adapted to the shape of the integrand. Subdivision raises the problem of what constitutes an acceptable accuracy in each sub-interval. The usual global acceptability criterion demands that the sum of the absolute values of the error estimates in the sub-intervals should meet the conditions required of the error over the whole interval. Automatic extrapolation over several levels of subdivision may eliminate the effects of some types of singularities.
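The adaptive idea in (ii) can be sketched with a textbook adaptive Simpson scheme (this is only an illustration, not the algorithm used by the chapter's functions). Note how the tolerance is halved at each subdivision so that the sub-interval error estimates sum to the global requirement:

```python
def adaptive_simpson(f, a, b, tol):
    """Minimal adaptive-bisection sketch: Simpson's rule on each
    sub-interval, subdividing wherever the local error estimate
    exceeds the share of `tol` allotted to that sub-interval."""
    def simpson(fa, fm, fb, h):
        return h * (fa + 4.0 * fm + fb) / 6.0

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, m - a)
        right = simpson(fm, frm, fb, b - m)
        # Richardson-style error estimate for Simpson's rule.
        if abs(left + right - whole) <= 15.0 * tol:
            return left + right + (left + right - whole) / 15.0
        # Split the tolerance so that local errors sum to the global one.
        return (recurse(a, m, fa, flm, fm, left, tol / 2.0)
                + recurse(m, b, fm, frm, fb, right, tol / 2.0))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, b - a), tol)
```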
An ideal general-purpose method would be an automatic method which could be used for a wide variety of integrands, was efficient (i.e., required the use of as few abscissae as possible), and was reliable (i.e., always gave results to within the requested accuracy). Complete reliability is unobtainable, and generally higher reliability is obtained at the expense of efficiency, and vice versa. It must therefore be emphasized that the automatic functions in this chapter cannot be assumed to be 100% reliable. In general, however, the reliability is very high.

### Multidimensional Integrals

A distinction must be made between cases of moderately low dimensionality (say, up to 4 or 5 dimensions), and those of higher dimensionality. Where the number of dimensions is limited, a one-dimensional method may be applied to each dimension, according to some suitable strategy, and high accuracy may be obtainable (using product rules). However, the number of integrand evaluations rises very rapidly with the number of dimensions, so that the accuracy obtainable with an acceptable amount of computational labour is limited; for example a product of 3-point rules in 20 dimensions would require more than $10^9$ integrand evaluations. Special techniques such as the Monte–Carlo methods can be used to deal with high dimensions.
(a) Products of one-dimensional rules
Using a two-dimensional integral as an example, we have
$$\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^N w_i \left[\int_{a_2}^{b_2} f(x_i,y)\,dy\right]$$  (7)
$$\int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx \simeq \sum_{i=1}^N \sum_{j=1}^N w_i v_j f(x_i,y_j)$$  (8)
where $(w_i,x_i)$ and $(v_j,y_j)$ are the weights and abscissae of the rules used in the respective dimensions.
A different one-dimensional rule may be used for each dimension, as appropriate to the range and any weight function present, and a different strategy may be used, as appropriate to the integrand behaviour as a function of each independent variable.
For a rule-evaluation strategy in all dimensions, the formula (8) is applied in a straightforward manner. For automatic strategies (i.e., attempting to attain a requested accuracy), there is a problem in deciding what accuracy must be requested in the inner integral(s). Reference to formula (7) shows that the presence of a limited but random error in the $y$-integration for different values of $x_i$ can produce a ‘jagged’ function of $x$, which may be difficult to integrate to the desired accuracy; for this reason products of automatic one-dimensional functions should be used with caution (see Lyness (1983)).
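The tensor-product form (8) can be sketched directly by nesting two copies of a one-dimensional rule (an illustrative sketch, not a NAG routine); here a 3-point Gauss–Legendre rule is used in each dimension:

```python
import math

def product_gauss_2d(f, a1, b1, a2, b2):
    """Product rule (8): the same 3-point Gauss-Legendre nodes and
    weights applied in each dimension, giving N*N = 9 evaluations."""
    nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
    weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]
    cx, hx = 0.5 * (a1 + b1), 0.5 * (b1 - a1)
    cy, hy = 0.5 * (a2 + b2), 0.5 * (b2 - a2)
    total = 0.0
    for wi, ti in zip(weights, nodes):
        for vj, tj in zip(weights, nodes):
            # w_i * v_j * f(x_i, y_j), exactly the double sum in (8).
            total += wi * vj * f(cx + hx * ti, cy + hy * tj)
    return hx * hy * total
```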
(b) Monte–Carlo methods
These are based on estimating the mean value of the integrand sampled at points chosen from an appropriate statistical distribution function. Usually a variance reducing procedure is incorporated to combat the fundamentally slow rate of convergence of the rudimentary form of the technique. These methods can be effective by comparison with alternative methods when the integrand contains singularities or is erratic in some way, but they are of quite limited accuracy.
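The basic idea can be sketched as follows (a crude Monte–Carlo estimator, without the variance-reduction machinery the text mentions): the integral equals the volume of the region times the mean of the integrand at uniformly sampled points, with a statistical standard error for the estimate.

```python
import math
import random

def monte_carlo(f, lows, highs, n, seed=0):
    """Crude Monte-Carlo sketch over the hyper-rectangle
    lows[i] <= x_i <= highs[i]: returns (estimate, standard_error)."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    vol = 1.0
    for lo, hi in zip(lows, highs):
        vol *= hi - lo
    total = total_sq = 0.0
    for _ in range(n):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lows, highs)]
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n
    var = max(total_sq / n - mean * mean, 0.0)
    # Error decays only like 1/sqrt(n): the "fundamentally slow" rate.
    return vol * mean, vol * math.sqrt(var / n)

# Example in 5 dimensions: integral of x1+...+x5 over the unit cube is 2.5.
estimate, std_err = monte_carlo(lambda x: sum(x), [0.0] * 5, [1.0] * 5, 20000)
```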
(c) Number theoretic methods
These are based on the work of Korobov and Conroy and operate by exploiting implicitly the properties of the Fourier expansion of the integrand. Special rules, constructed from so-called optimal coefficients, give a particularly uniform distribution of the points throughout $n$-dimensional space and from their number theoretic properties minimize the error on a prescribed class of integrals. The method can be combined with the Monte–Carlo procedure.
(d) Sag–Szekeres method
By transformation this method seeks to induce properties into the integrand which make it accurately integrable by the trapezoidal rule. The transformation also allows effective control over the number of integrand evaluations.
(e) Automatic adaptive procedures
An automatic adaptive strategy in several dimensions normally involves division of the region into subregions, concentrating the divisions in those parts of the region where the integrand is worst behaved. It is difficult to arrange with any generality for variable limits in the inner integral(s). For this reason, some methods use a region where all the limits are constants; this is called a hyper-rectangle. Integrals over regions defined by variable or infinite limits may be handled by transformation to a hyper-rectangle. Integrals over regions so irregular that such a transformation is not feasible may be handled by surrounding the region by an appropriate hyper-rectangle and defining the integrand to be zero outside the desired region. Such a technique should always be followed by a Monte–Carlo method for integration.
The method used locally in each subregion produced by the adaptive subdivision process is usually one of three types: Monte–Carlo, number theoretic or deterministic. Deterministic methods are usually the most rapidly convergent but are often expensive to use for high dimensionality and not as robust as the other techniques.

## Recommendations on Choice and Use of Available Functions

This section is divided into five subsections. The first subsection illustrates the difference between direct and reverse communication functions. The second subsection highlights the different levels of vectorization provided by different interfaces.
Sections [One-dimensional Integrals over a Finite Interval], [One-dimensional Integrals over a Semi-infinite or Infinite Interval] and [Multidimensional Integrals] consider in turn functions for: one-dimensional integrals over a finite interval, and over a semi-infinite or an infinite interval; and multidimensional integrals. Within each sub-section, functions are classified by the type of method, which ranges from simple rule evaluation to automatic adaptive algorithms. The recommendations apply particularly when the primary objective is simply to compute the value of one or more integrals, and in these cases the automatic adaptive functions are generally the most convenient and reliable, although also the most expensive in computing time.
Note however that in some circumstances it may be counter-productive to use an automatic function. If the results of the quadrature are to be used in turn as input to a further computation (e.g., an ‘outer’ quadrature or an optimization problem), then this further computation may be adversely affected by the ‘jagged performance profile’ of an automatic function; a simple rule-evaluation function may provide much better overall performance. For further guidance, the article by Lyness (1983) is recommended.

### Direct and Reverse Communication

Functions in this chapter which evaluate an integral value may be classified as either direct communication or reverse communication.
(a) Direct communication
Direct communication functions require a user-supplied (sub)routine to be provided as an actual argument to the NAG Library function. These functions are usually more straightforward to use than a reverse communication equivalent, although they require the user-supplied (sub)routine to have a specific interface.
(b) Reverse communication
Instead of calling a user-supplied (sub)routine to evaluate a function, reverse communication functions return repeatedly, requesting different operations to be performed by the calling program. Reverse communication functions will typically be more complicated to use than direct communication equivalents. However, they provide great flexibility for the evaluation of the integrands. In particular, as the function evaluations are performed by the calling program, any information required for their evaluation that is not generated by the library function is immediately available. Currently in this chapter the only function explicitly using reverse communication is nag_quad_1d_gen_vec_multi_rcomm (d01ra).
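The control flow of reverse communication can be illustrated with a Python generator (this mimics the pattern only; it is not the d01ra interface): the integrator yields each abscissa back to the caller, and the caller sends in the integrand value it computed itself.

```python
def rc_trapezoid(a, b, n):
    """Reverse-communication sketch of a composite trapezoidal rule:
    rather than calling a user routine, the integrator *yields* each
    abscissa and receives the corresponding integrand value via send()."""
    h = (b - a) / n
    total = 0.0
    for i in range(n + 1):
        fx = yield a + i * h                 # hand the abscissa out
        total += fx * (0.5 if i in (0, n) else 1.0)
    return total * h

def drive(f, a, b, n):
    """Driver loop: the calling program performs every evaluation,
    so any extra data it holds is immediately available to f."""
    gen = rc_trapezoid(a, b, n)
    x = next(gen)                            # first requested abscissa
    try:
        while True:
            x = gen.send(f(x))               # supply the value, get next x
    except StopIteration as done:
        return done.value
```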

### Choice of Interface

This section concerns the design of the interface for the provision of abscissae, and the subsequent collection of calculated information, typically integrand evaluations. Vectorized interfaces typically allow for more efficient operation.
(a) Single abscissa interfaces
The algorithm will provide a single abscissa at which information is required. These are typically the simplest to use, although they may be significantly less efficient than a vectorized equivalent. Most of the algorithms in this chapter are of this type. Examples include nag_quad_1d_fin_bad (d01aj) and nag_quad_md_gauss (d01fb).
(b) Vectorized abscissae interfaces
The algorithm will return a set of abscissae, at all of which information is required. While these are more complicated to use, they are typically more efficient than a non-vectorized equivalent. They reduce the overhead of function calls, allow the avoidance of repetition of computations common to each of the integrand evaluations, and offer greater scope for vectorization and parallelization of your code. Examples include nag_quad_1d_fin_gonnet_vec (d01rg), nag_quad_1d_gauss_vec (d01ua), and the functions nag_quad_1d_fin_bad_vec (d01at) and nag_quad_1d_fin_osc_vec (d01au), which are vectorized equivalents of nag_quad_1d_fin_bad (d01aj) and nag_quad_1d_fin_osc (d01ak).
(c) Multiple integral interfaces
These are functions which allow for multiple integrals to be estimated simultaneously. As with (b) above, these are more complicated to use than single integral functions; however, they can provide higher efficiency, particularly if several integrals require the same subcalculations at the same abscissae. They are most efficient if integrals which are supplied together are expected to have similar behaviour over the domain, particularly when the algorithm is adaptive. Examples include nag_quad_md_adapt_multi (d01ea) and nag_quad_1d_gen_vec_multi_rcomm (d01ra).

### One-dimensional Integrals over a Finite Interval

(a) Integrand defined at a set of points
If $f(x)$ is defined numerically at four or more points, then the Gill–Miller finite difference method (nag_quad_1d_data (d01ga)) should be used. The interval of integration is taken to coincide with the range of $x$ values of the points supplied. It is in the nature of this problem that any function may be unreliable. In order to check results independently and to provide an alternative technique, you may fit the integrand by a Chebyshev series using nag_fit_1dcheb_arb (e02ad) and then use nag_fit_1dcheb_integ (e02aj) to evaluate its integral (which need not be restricted to the range of the integration points, as is the case for nag_quad_1d_data (d01ga)). A further alternative is to fit a cubic spline to the data using nag_fit_1dspline_knots (e02ba) and then to evaluate its integral using nag_fit_1dspline_integ (e02bd).
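As a quick independent cross-check of any data-based result, a plain trapezoidal sum over the supplied points (deliberately not the Gill–Miller method) can be computed in one line:

```python
def trapezoid_data(xs, ys):
    """Plain trapezoidal rule over the supplied points (x_i, f(x_i)),
    useful only as a rough independent check of a data-based integral;
    xs must be strictly increasing with matching ys."""
    return sum(0.5 * (ys[i] + ys[i + 1]) * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))
```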
(b) Integrand defined as a function
If the functional form of $f(x)$ is known, then one of the following approaches should be taken. They are arranged in order from most specific to most general, hence the first applicable procedure in the list will be the most efficient. However, if you do not wish to make any assumptions about the integrand, the most reliable functions to use will be nag_quad_1d_fin_bad_vec (d01at) (or nag_quad_1d_fin_bad (d01aj)), nag_quad_1d_fin_osc_vec (d01au) (or nag_quad_1d_fin_osc (d01ak)), nag_quad_1d_fin_sing (d01al), nag_quad_1d_fin_gonnet_vec (d01rg) or nag_quad_1d_gen_vec_multi_rcomm (d01ra), although these will in general be less efficient for simple integrals.
(i) Rule-evaluation functions
If $f(x)$ is known to be sufficiently well behaved (more precisely, can be closely approximated by a polynomial of moderate degree), a Gaussian function with a suitable number of abscissae may be used.
nag_quad_1d_gauss_wgen (d01bc) or nag_quad_1d_gauss_wres (d01tb) with nag_quad_md_gauss (d01fb) may be used if it is required to examine the weights and abscissae.
nag_quad_1d_gauss_wres (d01tb) is faster and more accurate, whereas nag_quad_1d_gauss_wgen (d01bc) is more general. nag_quad_1d_gauss_vec (d01ua) uses the same quadrature rules as nag_quad_1d_gauss_wres (d01tb), and may be used if you do not explicitly require the weights and abscissae.
If $f(x)$ is well behaved, apart from a weight function of the form
$$\left|x - \frac{a+b}{2}\right|^c \quad\text{or}\quad (b-x)^c (x-a)^d,$$
nag_quad_1d_gauss_wgen (d01bc) with nag_quad_md_gauss (d01fb) may be used.
(ii) Automatic whole-interval functions
If $f(x)$ is reasonably smooth, and the required accuracy is not too high, the automatic whole-interval functions nag_quad_1d_indef (d01ar) or nag_quad_1d_fin_smooth (d01bd) may be used. nag_quad_1d_indef (d01ar) incorporates high-order extensions of the Kronrod rule and is the only function which can also be used for indefinite integration.
(iii) Automatic adaptive functions
Firstly, several functions are available for integrands of the form $w(x)g(x)$, where $g(x)$ is a ‘smooth’ function (i.e., has no singularities, sharp peaks or violent oscillations in the interval of integration) and $w(x)$ is a weight function of one of the following forms.
1. if $w(x) = (b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^k(\log(x-a))^l$, where $k,l = 0$ or $1$, $\alpha,\beta > -1$: use nag_quad_1d_fin_wsing (d01ap);
2. if $w(x) = \frac{1}{x-c}$: use nag_quad_1d_fin_wcauchy (d01aq) (this integral is called the Hilbert transform of $g$);
3. if $w(x) = \cos(\omega x)$ or $\sin(\omega x)$: use nag_quad_1d_fin_wtrig (d01an) (this function can also handle certain types of singularities in $g(x)$).
Secondly, there are multiple routines for general $f(x)$, using different strategies.
nag_quad_1d_fin_bad_vec (d01at) (and nag_quad_1d_fin_bad (d01aj)), and nag_quad_1d_fin_osc_vec (d01au) (and nag_quad_1d_fin_osc (d01ak)) use the strategy of Piessens et al. (1983), using repeated bisection of the interval and, in the first case, the $\epsilon$-algorithm (Wynn (1956)) to improve the integral estimate. This can cope with singularities away from the end points, provided singular points do not occur as abscissae. nag_quad_1d_fin_osc_vec (d01au) tends to perform better than nag_quad_1d_fin_bad_vec (d01at) on more oscillatory integrals.
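The $\epsilon$-algorithm referred to above can be sketched in a few lines (an illustrative textbook version, not the implementation used by these routines). It accelerates a sequence of estimates $s_0, s_1, \dots$ by building the epsilon table column by column; the even-order columns converge faster than the original sequence.

```python
def wynn_epsilon(s):
    """Sketch of Wynn's (1956) epsilon-algorithm.  Builds the table
    eps_{k+1}^{(n)} = eps_{k-1}^{(n+1)} + 1/(eps_k^{(n+1)} - eps_k^{(n)})
    and returns the top entry of the highest even-order column.
    Assumes no two adjacent entries of a column are equal."""
    prev = [0.0] * (len(s) + 1)   # the eps_{-1} column (all zeros)
    curr = list(s)                # the eps_0 column (the raw estimates)
    for _ in range(len(s) - 1):
        nxt = [prev[n + 1] + 1.0 / (curr[n + 1] - curr[n])
               for n in range(len(curr) - 1)]
        prev, curr = curr, nxt
    # Odd-order columns are only intermediate quantities.
    return curr[0] if (len(s) - 1) % 2 == 0 else prev[0]

# Partial sums 1, 1.5, 1.75 of the geometric series converge to 2;
# the algorithm recovers the limit exactly from three terms.
accelerated = wynn_epsilon([1.0, 1.5, 1.75])   # = 2.0
```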
nag_quad_1d_fin_sing (d01al) uses the same subdivision strategy as nag_quad_1d_fin_bad_vec (d01at) over a set of initial interval segments determined by supplied breakpoints. It is hence suitable for integrals with discontinuities (including switches in definition) or sharp peaks occurring at known points. Such integrals may also be approximated using other functions which do not allow breakpoints, although such integrals should then be evaluated over each of the sub-intervals separately.
nag_quad_1d_gen_vec_multi_rcomm (d01ra) again uses the strategy of Piessens et al. (1983), and provides the functionality of nag_quad_1d_fin_sing (d01al), nag_quad_1d_fin_bad_vec (d01at) and nag_quad_1d_fin_osc_vec (d01au) in a reverse communication framework. It also supports multiple integrals and uses a vectorized interface for the abscissae. Hence it is likely to be more efficient if several similar integrals are required to be evaluated over the same domain. Furthermore, its behaviour can be tailored through the use of optional parameters.
nag_quad_1d_fin_well (d01ah) uses the strategy of Patterson (1968) and the $\epsilon$-algorithm to evaluate the integral adaptively. It tends to be more efficient than the bisection-based algorithms, although these tend to be more robust when singularities occur away from the end points.
nag_quad_1d_fin_gonnet_vec (d01rg) uses another adaptive scheme, due to Gonnet (2010). This attempts to match the quadrature rule to the underlying integrand as well as subdividing the domain. Furthermore, it can explicitly deal with singular points at abscissae, should NaNs or $\infty$ be returned by the user-supplied (sub)routine, provided the generation of these does not cause the program to halt.

### One-dimensional Integrals over a Semi-infinite or Infinite Interval

(a) Integrand defined at a set of points
If $f(x)$ is defined numerically at four or more points, and the portion of the integral lying outside the range of the points supplied may be neglected, then the Gill–Miller finite difference method, nag_quad_1d_data (d01ga), should be used.
(b) Integrand defined as a function
(i) Rule evaluation functions
If $f(x)$ behaves approximately like a polynomial in $x$, apart from a weight function of the form:
1. $e^{-\beta x}$, $\beta > 0$ (semi-infinite interval, lower limit finite); or
2. $e^{-\beta x}$, $\beta < 0$ (semi-infinite interval, upper limit finite); or
3. $e^{-\beta (x-\alpha)^2}$, $\beta > 0$ (infinite interval),
or if $f(x)$ behaves approximately like a polynomial in $(x+b)^{-1}$ (semi-infinite range), then the Gaussian functions may be used.
nag_quad_1d_gauss_vec (d01ua) may be used if it is not required to examine the weights and abscissae.
nag_quad_1d_gauss_wgen (d01bc) or nag_quad_1d_gauss_wres (d01tb) with nag_quad_md_gauss (d01fb) may be used if it is required to examine the weights and abscissae.
nag_quad_1d_gauss_wres (d01tb) is faster and more accurate, whereas nag_quad_1d_gauss_wgen (d01bc) is more general.
(ii) Automatic adaptive functions
nag_quad_1d_inf (d01am) may be used, except for integrands which decay slowly towards an infinite end point, and oscillate in sign over the entire range. For this class, it may be possible to calculate the integral by integrating between the zeros and invoking some extrapolation process (see nag_sum_accelerate (c06ba)).
nag_quad_1d_inf_wtrig (d01as) may be used for integrals involving weight functions of the form $\cos(\omega x)$ and $\sin(\omega x)$ over a semi-infinite interval (lower limit finite).
The following alternative procedures are mentioned for completeness, though their use will rarely be necessary.
1. If the integrand decays rapidly towards an infinite end point, a finite cut-off may be chosen, and the finite range methods applied.
2. If the only irregularities occur in the finite part (apart from a singularity at the finite limit, with which nag_quad_1d_inf (d01am) can cope), the range may be divided, with nag_quad_1d_inf (d01am) used on the infinite part.
3. A transformation to finite range may be employed, e.g.,
$$x = \frac{1-t}{t} \quad\text{or}\quad x = -\log_e t$$
will transform $(0,\infty)$ to $(1,0)$, while for infinite ranges we have
$$\int_{-\infty}^{\infty} f(x)\,dx = \int_0^{\infty} \left(f(x) + f(-x)\right)dx.$$
If the integrand behaves badly on $(-\infty,0)$ and well on $(0,\infty)$, or vice versa, it is better to compute it as $\int_{-\infty}^0 f(x)\,dx + \int_0^\infty f(x)\,dx$. This saves computing unnecessary function values in the semi-infinite range where the function is well behaved.
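The first substitution above, $x = (1-t)/t$, can be sketched as follows (an illustrative midpoint-rule version, not a library routine): the semi-infinite integral becomes an integral of $f((1-t)/t)/t^2$ over $(0,1]$, and midpoints avoid evaluating at the singular end point $t = 0$.

```python
import math

def integral_0_inf(f, n=2000):
    """Transform int_0^inf f(x) dx via x = (1 - t)/t, dx = -dt/t^2,
    giving int_0^1 f((1-t)/t) / t^2 dt, then apply the composite
    midpoint rule (midpoints never touch the end point t = 0)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += f((1.0 - t) / t) / (t * t)
    return total * h

# Example: the integral of e^(-x) over (0, inf) is 1.
approx = integral_0_inf(lambda x: math.exp(-x))
```

Note that the transformed integrand must itself decay fast enough as $t \to 0$ (i.e., $f$ must decay at infinity) for this simple treatment to work well.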

### Multidimensional Integrals

A number of techniques are available in this area and the choice depends to a large extent on the dimension and the required accuracy. It can be advantageous to use more than one technique as a confirmation of accuracy, particularly for high-dimensional integrations. Two of the functions incorporate a transformation procedure, using a user-supplied function parameter region, which allows general product regions to be easily dealt with in terms of conversion to the standard $n$-cube region.
(a) Products of one-dimensional rules (suitable for up to about 5$5$ dimensions)
If $f(x_1,x_2,\dots,x_n)$ is known to be a sufficiently well behaved function of each variable $x_i$, apart possibly from weight functions of the types provided, a product of Gaussian rules may be used. These are provided by nag_quad_1d_gauss_wgen (d01bc) or nag_quad_1d_gauss_wres (d01tb) with nag_quad_md_gauss (d01fb). Rules for finite, semi-infinite and infinite ranges are included.
For two-dimensional integrals only, unless the integrand is very badly behaved, the automatic whole-interval product procedure of nag_quad_2d_fin (d01da) may be used. The limits of the inner integral may be user-specified functions of the outer variable. Infinite limits may be handled by transformation (see Section [One-dimensional Integrals over a Semi-infinite or Infinite Interval]); end point singularities introduced by transformation should not be troublesome, as the integrand value will not be required on the boundary of the region.
If none of these functions proves suitable and convenient, the one-dimensional functions may be used recursively. For example, the two-dimensional integral
$$I = \int_{a_1}^{b_1} \int_{a_2}^{b_2} f(x,y)\,dy\,dx$$
may be expressed as
$$I = \int_{a_1}^{b_1} F(x)\,dx, \quad\text{where}\quad F(x) = \int_{a_2}^{b_2} f(x,y)\,dy.$$
The user-supplied code to evaluate $F(x)$ will call the integration function for the $y$-integration, which will call more user-supplied code for $f(x,y)$ as a function of $y$ ($x$ being effectively a constant).
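The recursive pattern just described can be sketched with a stand-in one-dimensional integrator (composite Simpson here, purely for illustration; the library functions would play this role in practice):

```python
def simpson(f, a, b, n=200):
    """Composite Simpson rule (n must be even) -- a stand-in for a
    library one-dimensional integrator in this sketch."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

def double_integral(f, a1, b1, a2, b2):
    """I = int_{a1}^{b1} F(x) dx with F(x) = int_{a2}^{b2} f(x, y) dy:
    the outer integrand F itself calls the one-dimensional rule,
    exactly the recursive usage described above."""
    return simpson(lambda x: simpson(lambda y: f(x, y), a2, b2), a1, b1)
```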
From Mark 24 onwards, all direct communication functions are defined as recursive. As such, you may use any function, including the same function, for each dimension. Note however that in previous releases direct communication functions were not defined as recursive, and thus a different library integration function must be used for each dimension if you are using an older product. Apart from this restriction, the following combinations were not permitted: nag_quad_1d_fin_bad (d01aj) and nag_quad_1d_fin_sing (d01al), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_fin_wsing (d01ap), nag_quad_1d_fin_wsing (d01ap) and nag_quad_1d_fin_wcauchy (d01aq), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_fin_wcauchy (d01aq), nag_quad_1d_fin_wtrig (d01an) and nag_quad_1d_inf_wtrig (d01as), nag_quad_1d_inf (d01am) and nag_quad_1d_inf_wtrig (d01as), nag_quad_1d_fin_bad_vec (d01at) and nag_quad_1d_fin_osc_vec (d01au). Otherwise the full range of one-dimensional functions is available, for finite/infinite intervals, constant/variable limits, rule evaluation/automatic strategies, etc.
The reverse communication function nag_quad_1d_gen_vec_multi_rcomm (d01ra) may be used by itself in a pseudo-recursive manner, in that it may be called to evaluate an inner integral for the integrand value of an outer integral also being calculated by nag_quad_1d_gen_vec_multi_rcomm (d01ra).
(b) Sag–Szekeres method
Two functions are based on this method.
nag_quad_md_sphere (d01fd) is particularly suitable for integrals of very large dimension, although the accuracy is generally not high. It allows integration over either the general product region (with built-in transformation to the $n$-cube) or the $n$-sphere. Although no error estimate is provided, two adjustable parameters may be varied for checking purposes or may be used to tune the algorithm to particular integrals.
nag_quad_md_sphere_bad (d01ja) is also based on the Sag–Szekeres method and integrates over the $n$-sphere. It uses improved transformations which may be varied according to the behaviour of the integrand. Although it can yield very accurate results, it can only practically be employed for dimensions not exceeding 4.
(c) Number Theoretic method
Two functions are based on this method.
nag_quad_md_numth (d01gc) carries out multiple integration using the Korobov–Conroy method over a product region with built-in transformation to the $n$-cube. A stochastic modification of this method is incorporated, hybridising the technique with the Monte–Carlo procedure. An error estimate is provided in terms of the statistical standard error. The function includes a number of optimal coefficient rules for up to 20 dimensions; others can be computed using nag_quad_md_numth_coeff_prime (d01gy) and nag_quad_md_numth_coeff_2prime (d01gz). Like the Sag–Szekeres method, it is suitable for large dimensional integrals although the accuracy is not high.
nag_quad_md_numth_vec (d01gd) uses the same method as nag_quad_md_numth (d01gc), but has a vectorized interface which can result in faster execution, especially on vector-processing machines. You are required to provide two functions, the first to return an array of values of the integrand at each of an array of points, and the second to evaluate the limits of integration at each of an array of points. This reduces the overhead of function calls, avoids repetitions of computations common to each of the evaluations of the integral and limits of integration, and offers greater scope for vectorization of your code.
(d) A combinatorial extrapolation method
nag_quad_md_simplex (d01pa) computes a sequence of approximations and an error estimate to the integral of a function over a multidimensional simplex using a combinatorial method with extrapolation.
(e) Automatic functions (nag_quad_md_mcarlo (d01gb) and nag_quad_md_adapt (d01fc))
Both functions are for integrals of the form
$$\int_{a_1}^{b_1} \int_{a_2}^{b_2} \cdots \int_{a_n}^{b_n} f(x_1,x_2,\dots,x_n)\,dx_n\,dx_{n-1}\cdots dx_1.$$
nag_quad_md_mcarlo (d01gb) is an adaptive Monte–Carlo function. This function is usually slow and not recommended for high-accuracy work. It is a robust function that can often be used for low-accuracy results with highly irregular integrands or when $n$ is large.
nag_quad_md_adapt (d01fc) is an adaptive deterministic function. Convergence is fast for well behaved integrands. Highly accurate results can often be obtained for $n$ between 2 and 5, using significantly fewer integrand evaluations than would be required by nag_quad_md_mcarlo (d01gb). The function will usually work when the integrand is mildly singular, and for $n \le 10$ it should be used before nag_quad_md_mcarlo (d01gb). If it is known in advance that the integrand is highly irregular, it is best to compare results from at least two different functions.
There are many problems for which one or both of the functions will require large amounts of computing time to obtain even moderately accurate results. The amount of computing time is controlled by the number of integrand evaluations you have allowed, and you should set this parameter carefully, with reference to the time available and the accuracy desired.
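The evaluation-budget trade-off above is easiest to see in the plain Monte Carlo mechanism, where the standard error shrinks only like $1/\sqrt{N}$ in the number $N$ of integrand evaluations. The following Python sketch is a non-adaptive illustration of that mechanism only; d01gb adds adaptivity on top of it.

```python
import math
import random

def monte_carlo_integral(f, bounds, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the
    hyper-rectangle bounds = [(a1, b1), ..., (an, bn)], returning the
    estimate together with its statistical standard error."""
    rng = random.Random(seed)
    volume = 1.0
    for a, b in bounds:
        volume *= b - a
    total = total_sq = 0.0
    for _ in range(n_samples):
        x = [a + (b - a) * rng.random() for a, b in bounds]
        fx = f(x)
        total += fx
        total_sq += fx * fx
    mean = total / n_samples
    # standard error of the mean shrinks only like 1/sqrt(n_samples)
    var_of_mean = (total_sq / n_samples - mean * mean) / (n_samples - 1)
    return volume * mean, volume * math.sqrt(max(var_of_mean, 0.0))

# Integral of x + y over the unit square is exactly 1.
est, se = monte_carlo_integral(lambda x: x[0] + x[1], [(0.0, 1.0), (0.0, 1.0)], 20000)
```

Halving the standard error requires four times as many evaluations, which is why the evaluation limit must be set with the available time and desired accuracy in mind.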
nag_quad_md_adapt_multi (d01ea) extends the technique of nag_quad_md_adapt (d01fc) to integrate more than one integrand adaptively, that is, to calculate the set of integrals
$\int_{a_1}^{b_1}\int_{a_2}^{b_2}\cdots\int_{a_n}^{b_n}(f_1,f_2,\dots,f_m)\,dx_n\,dx_{n-1}\cdots dx_1$
for a set of similar integrands f1,f2,,fm${f}_{1},{f}_{2},\dots ,{f}_{m}$ where fi = fi(x1,x2,,xn)${f}_{i}={f}_{i}\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)$.
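The saving from treating a family of similar integrands together can be sketched as follows: all integrands are evaluated at one shared set of points, so the cost of generating points (and any computation common to the family) is paid once rather than $m$ times. This Python sketch uses plain sampling purely for illustration; it is not the d01ea algorithm, which shares its adaptive subdivision across the integrands instead.

```python
import random

def integrate_family(fs, bounds, n_samples, seed=0):
    """Estimate the integrals of several similar integrands over the same
    hyper-rectangle using one shared stream of sample points, so the cost
    of generating points (and any work common to the family) is paid once."""
    rng = random.Random(seed)
    volume = 1.0
    for a, b in bounds:
        volume *= b - a
    sums = [0.0] * len(fs)
    for _ in range(n_samples):
        x = [a + (b - a) * rng.random() for a, b in bounds]  # one point, reused by all integrands
        for i, f in enumerate(fs):
            sums[i] += f(x)
    return [volume * s / n_samples for s in sums]

# Exact values over the unit square: 1 and 1/2.
results = integrate_family([lambda x: 1.0, lambda x: x[0]],
                           [(0.0, 1.0), (0.0, 1.0)], 20000)
```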

## Decision Trees

### Tree 1: One-dimensional integrals over a finite interval

1. Is the functional form of the integrand known?
   - **No:** nag_quad_1d_data (d01ga).
   - **Yes:** go to 2.
2. Is indefinite integration required?
   - **Yes:** nag_quad_1d_indef (d01ar).
   - **No:** go to 3.
3. Do you require reverse communication?
   - **Yes:** nag_quad_1d_gen_vec_multi_rcomm (d01ra).
   - **No:** go to 4.
4. Are you concerned with efficiency for simple integrals?
   - **Yes:** go to 5.
   - **No:** nag_quad_1d_fin_well (d01ah), nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_bad_vec (d01at), nag_quad_1d_gen_vec_multi_rcomm (d01ra) or nag_quad_1d_fin_gonnet_vec (d01rg).
5. Is the integrand smooth (polynomial-like) apart from weight function $|x-(a+b)/2|^c$ or $(b-x)^c(x-a)^d$?
   - **Yes:** nag_quad_1d_indef (d01ar), nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_gauss_wres (d01tb) or nag_quad_1d_gauss_wgen (d01bc) and nag_quad_md_gauss (d01fb), or nag_quad_md_numth (d01gc).
   - **No:** go to 6.
6. Is the integrand reasonably smooth and the required accuracy not too great?
   - **Yes:** nag_quad_1d_fin_smooth (d01bd) and nag_quad_1d_gauss_vec (d01ua).
   - **No:** go to 7.
7. Are multiple integrands to be integrated simultaneously?
   - **Yes:** nag_quad_1d_gen_vec_multi_rcomm (d01ra).
   - **No:** go to 8.
8. Has the integrand discontinuities, sharp peaks or singularities at known points other than the end points?
   - **Yes:** split the range and begin again; or use nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_sing (d01al) or nag_quad_1d_fin_gonnet_vec (d01rg).
   - **No:** go to 9.
9. Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $(b-x)^{\alpha}(x-a)^{\beta}(\log(b-x))^k(\log(x-a))^l$?
   - **Yes:** nag_quad_1d_fin_wsing (d01ap).
   - **No:** go to 10.
10. Is the integrand free of singularities, sharp peaks and violent oscillations apart from weight function $(x-c)^{-1}$?
    - **Yes:** nag_quad_1d_fin_wcauchy (d01aq).
    - **No:** go to 11.
11. Is the integrand free of violent oscillations apart from weight function $\cos(\omega x)$ or $\sin(\omega x)$?
    - **Yes:** nag_quad_1d_fin_wtrig (d01an).
    - **No:** go to 12.
12. Is the integrand free of singularities?
    - **Yes:** nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_osc (d01ak) or nag_quad_1d_fin_osc_vec (d01au).
    - **No:** go to 13.
13. Is the integrand free of discontinuities and of singularities except possibly at the end points?
    - **Yes:** nag_quad_1d_fin_well (d01ah).
    - **No:** nag_quad_1d_fin_bad (d01aj), nag_quad_1d_fin_bad_vec (d01at), nag_quad_1d_gen_vec_multi_rcomm (d01ra) or nag_quad_1d_fin_gonnet_vec (d01rg).
Note: nag_quad_1d_fin_bad_vec (d01at), nag_quad_1d_fin_osc_vec (d01au), nag_quad_1d_gen_vec_multi_rcomm (d01ra) and nag_quad_1d_fin_gonnet_vec (d01rg) are likely to be more efficient, due to their vectorized interfaces, than nag_quad_1d_fin_bad (d01aj) and nag_quad_1d_fin_osc (d01ak), which use a more conventional user interface, consistent with other functions in the chapter.

### Tree 2: One-dimensional integrals over a semi-infinite or infinite interval

1. Is the functional form of the integrand known?
   - **No:** nag_quad_1d_data (d01ga) (integrates over the range of the points supplied).
   - **Yes:** go to 2.
2. Are you concerned with efficiency for simple integrands?
   - **No:** nag_quad_1d_inf (d01am).
   - **Yes:** go to 3.
3. Is the integrand smooth (polynomial-like) with no exceptions?
   - **Yes:** nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_fin_smooth (d01bd) or nag_quad_1d_indef (d01ar) with transformation; see Section [One-dimensional Integrals over a Semi-infinite or Infinite Interval] (b)(ii).
   - **No:** go to 4.
4. Is the integrand smooth (polynomial-like) apart from weight function $e^{-\beta x}$ (semi-infinite range) or $e^{-\beta(x-a)^2}$ (infinite range), or is the integrand polynomial-like in $1/(x+b)$ (semi-infinite range)?
   - **Yes:** nag_quad_1d_gauss_vec (d01ua), nag_quad_1d_gauss_wres (d01tb) and nag_quad_md_gauss (d01fb), or nag_quad_1d_gauss_wgen (d01bc) and nag_quad_md_gauss (d01fb).
   - **No:** go to 5.
5. Has the integrand discontinuities, sharp peaks or singularities at known points other than a finite limit?
   - **Yes:** split the range; begin again using the finite-interval or infinite-interval tree.
   - **No:** go to 6.
6. Does the integrand oscillate over the entire range?
   - **No:** nag_quad_1d_inf (d01am).
   - **Yes:** go to 7.
7. Does the integrand decay rapidly towards an infinite limit?
   - **Yes:** use nag_quad_1d_inf (d01am); or set a cutoff and use the finite-interval tree.
   - **No:** go to 8.
8. Is the integrand free of violent oscillations apart from weight function $\cos(\omega x)$ or $\sin(\omega x)$ (semi-infinite range)?
   - **Yes:** nag_quad_1d_inf_wtrig (d01as).
   - **No:** use finite-range integration between the zeros and extrapolate (see nag_sum_accelerate (c06ba)).

### Tree 3: Multidimensional integrals

1. Is the dimension $2$ and the region a product region?
   - **Yes:** nag_quad_2d_fin (d01da).
   - **No:** go to 2.
2. Is the dimension $\le 4$?
   - **Yes:** go to 3.
   - **No:** go to 8.
3. Is the region an $n$-sphere?
   - **Yes:** nag_quad_md_gauss (d01fb) with user transformation or nag_quad_md_sphere_bad (d01ja).
   - **No:** go to 4.
4. Is the region a simplex?
   - **Yes:** nag_quad_md_gauss (d01fb) with user transformation or nag_quad_md_simplex (d01pa).
   - **No:** go to 5.
5. Is the integrand smooth (polynomial-like) in each dimension apart from a weight function?
   - **Yes:** nag_quad_1d_gauss_wres (d01tb) and nag_quad_md_gauss (d01fb), or nag_quad_1d_gauss_wgen (d01bc) and nag_quad_md_gauss (d01fb).
   - **No:** go to 6.
6. Is the integrand free of extremely bad behaviour?
   - **Yes:** nag_quad_md_adapt (d01fc), nag_quad_md_sphere (d01fd) or nag_quad_md_numth (d01gc).
   - **No:** go to 7.
7. Is the bad behaviour on the boundary?
   - **Yes:** nag_quad_md_adapt (d01fc) or nag_quad_md_sphere (d01fd).
   - **No:** compare results from at least two of nag_quad_md_adapt (d01fc), nag_quad_md_sphere (d01fd), nag_quad_md_mcarlo (d01gb) and nag_quad_md_numth (d01gc), and one-dimensional recursive application.
8. Is the region an $n$-sphere?
   - **Yes:** nag_quad_md_sphere (d01fd).
   - **No:** go to 9.
9. Is the region a simplex?
   - **Yes:** nag_quad_md_simplex (d01pa).
   - **No:** go to 10.
10. Is high accuracy required?
    - **Yes:** nag_quad_md_sphere (d01fd) with parameter tuning.
    - **No:** go to 11.
11. Is the dimension high?
    - **Yes:** nag_quad_md_sphere (d01fd), nag_quad_md_numth (d01gc) or nag_quad_md_numth_vec (d01gd).
    - **No:** nag_quad_md_adapt (d01fc).
Note: in the case where there are many integrals to be evaluated nag_quad_md_adapt_multi (d01ea) should be preferred to nag_quad_md_adapt (d01fc).
nag_quad_md_numth_vec (d01gd) is likely to be more efficient than nag_quad_md_numth (d01gc), which uses a more conventional user-interface, consistent with other functions in the chapter.

## Functionality Index

- Korobov optimal coefficients for use in nag_quad_md_numth (d01gc) and nag_quad_md_numth_vec (d01gd):
  - when number of points is a product of 2 primes: nag_quad_md_numth_coeff_2prime (d01gz)
  - when number of points is prime: nag_quad_md_numth_coeff_prime (d01gy)
- Multidimensional quadrature,
  - over a general product region,
    - Korobov–Conroy number-theoretic method: nag_quad_md_numth (d01gc)
    - Sag–Szekeres method (also over n-sphere): nag_quad_md_sphere (d01fd)
    - variant of nag_quad_md_numth (d01gc) especially efficient on vector machines,
      - number-theoretic method: nag_quad_md_numth_vec (d01gd)
  - over a hyper-rectangle,
    - Monte Carlo method: nag_quad_md_mcarlo (d01gb)
  - over an n-simplex: nag_quad_md_simplex (d01pa)
  - over an n-sphere (n ≤ 4),
    - allowing for badly behaved integrands: nag_quad_md_sphere_bad (d01ja)
- One-dimensional quadrature,
  - adaptive integration of a function over a finite interval,
    - strategy due to Gonnet,
      - suitable for badly behaved integrands,
        - vectorized interface: nag_quad_1d_fin_gonnet_vec (d01rg)
    - strategy due to Patterson,
      - suitable for well-behaved integrands, except possibly at end-points: nag_quad_1d_fin_well (d01ah)
    - strategy due to Piessens and de Doncker,
      - allowing for singularities at user-specified break-points: nag_quad_1d_fin_sing (d01al)
      - suitable for badly behaved integrands,
        - single abscissa interface: nag_quad_1d_fin_bad (d01aj)
        - vectorized interface: nag_quad_1d_fin_bad_vec (d01at)
      - suitable for highly oscillatory integrals,
        - single abscissa interface: nag_quad_1d_fin_osc (d01ak)
        - vectorized interface: nag_quad_1d_fin_osc_vec (d01au)
      - weight function 1/(x − c), Cauchy principal value (Hilbert transform): nag_quad_1d_fin_wcauchy (d01aq)
      - weight function cos(ωx) or sin(ωx): nag_quad_1d_fin_wtrig (d01an)
      - weight function with end-point singularities of algebraico-logarithmic type: nag_quad_1d_fin_wsing (d01ap)
  - adaptive integration of a function over an infinite interval or semi-infinite interval,
    - no weight function: nag_quad_1d_inf (d01am)
    - weight function cos(ωx) or sin(ωx): nag_quad_1d_inf_wtrig (d01as)
  - integration of a function defined by data values only,
    - Gill–Miller method: nag_quad_1d_data (d01ga)
  - non-adaptive integration over a finite, semi-infinite or infinite interval,
    - using pre-computed weights and abscissae: nag_quad_1d_gauss_vec (d01ua)
  - non-adaptive integration over a finite interval: nag_quad_1d_fin_smooth (d01bd)
  - non-adaptive integration over a finite interval,
    - with provision for indefinite integrals also: nag_quad_1d_indef (d01ar)
  - reverse communication,
    - adaptive integration over a finite interval,
      - multiple integrands,
        - efficient on vector machines: nag_quad_1d_gen_vec_multi_rcomm (d01ra)
- Service functions,
  - array size query for nag_quad_1d_gen_vec_multi_rcomm (d01ra): nag_quad_1d_gen_vec_multi_dimreq (d01rc)
  - general option getting: nag_quad_opt_get (d01zl)
  - general option setting and initialization: nag_quad_opt_set (d01zk)
- Two-dimensional quadrature over a finite region: nag_quad_2d_fin (d01da)
- Weights and abscissae for Gaussian quadrature rules,
  - more general choice of rule,
    - calculating the weights and abscissae: nag_quad_1d_gauss_wgen (d01bc)
  - restricted choice of rule,
    - using pre-computed weights and abscissae: nag_quad_1d_gauss_wres (d01tb)

## References

Davis P J and Rabinowitz P (1975) Methods of Numerical Integration Academic Press
Gonnet P (2010) Increasing the reliability of adaptive quadrature using explicit interpolants ACM Trans. Math. Software 37 26
Lyness J N (1983) When not to use an automatic quadrature routine SIAM Rev. 25 63–87
Patterson T N L (1968) The optimum addition of points to quadrature formulae Math. Comput. 22 847–856
Piessens R, de Doncker–Kapenga E, Überhuber C and Kahaner D (1983) QUADPACK, A Subroutine Package for Automatic Integration Springer–Verlag
Sobol I M (1974) The Monte Carlo Method The University of Chicago Press
Stroud A H (1971) Approximate Calculation of Multiple Integrals Prentice–Hall
Wynn P (1956) On a device for computing the em(Sn)${e}_{m}\left({S}_{n}\right)$ transformation Math. Tables Aids Comput. 10 91–96


© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013