

NAG Toolbox: nag_interp_1d_cheb (e01ae)

Purpose

nag_interp_1d_cheb (e01ae) constructs the Chebyshev series representation of a polynomial interpolant to a set of data which may contain derivative values.

Syntax

[a, wrk, iwrk, ifail] = e01ae(xmin, xmax, x, y, ip, lwrk, liwrk, 'm', m, 'itmin', itmin, 'itmax', itmax)
[a, wrk, iwrk, ifail] = nag_interp_1d_cheb(xmin, xmax, x, y, ip, lwrk, liwrk, 'm', m, 'itmin', itmin, 'itmax', itmax)

Description

Let $m$ distinct values $x_i$ of an independent variable $x$ be given, with $x_{\min} \le x_i \le x_{\max}$, for $i = 1, 2, \dots, m$. For each value $x_i$, suppose that the value $y_i$ of the dependent variable $y$, together with the first $p_i$ derivatives of $y$ with respect to $x$, is given. Each $p_i$ must therefore be a non-negative integer, with the total number of interpolating conditions, $n$, equal to $m + \sum_{i=1}^{m} p_i$.
nag_interp_1d_cheb (e01ae) calculates the unique polynomial $q(x)$ of degree $n-1$ (or less) such that $q^{(k)}(x_i) = y_i^{(k)}$, for $i = 1, 2, \dots, m$ and $k = 0, 1, \dots, p_i$. Here $q^{(0)}(x_i)$ means $q(x_i)$. This polynomial is represented in Chebyshev series form in the normalized variable $\bar{x}$, as follows:
$$q(x) = \tfrac{1}{2} a_0 T_0(\bar{x}) + a_1 T_1(\bar{x}) + \cdots + a_{n-1} T_{n-1}(\bar{x}),$$
where
$$\bar{x} = \frac{2x - x_{\min} - x_{\max}}{x_{\max} - x_{\min}},$$
so that $-1 \le \bar{x} \le 1$ for $x$ in the interval $x_{\min}$ to $x_{\max}$, and where $T_i(\bar{x})$ is the Chebyshev polynomial of the first kind of degree $i$ with argument $\bar{x}$.
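As an illustration of this representation (a Python sketch of the mathematics only, not the NAG routine), the series can be evaluated with the halved leading coefficient after mapping $x$ onto $\bar{x}$:

```python
def cheb_eval(a, x, xmin, xmax):
    # Map x in [xmin, xmax] onto the normalized variable xbar in [-1, 1].
    xbar = (2.0 * x - xmin - xmax) / (xmax - xmin)
    total = 0.5 * a[0]            # NAG convention: the T0 coefficient is halved
    t_prev, t_curr = 1.0, xbar    # T0(xbar), T1(xbar)
    if len(a) > 1:
        total += a[1] * t_curr
    for k in range(2, len(a)):
        # Three-term recurrence: T_k = 2*xbar*T_{k-1} - T_{k-2}
        t_prev, t_curr = t_curr, 2.0 * xbar * t_curr - t_prev
        total += a[k] * t_curr
    return total

# a = [2, 0, 0] represents the constant polynomial 1 on [2, 6]
const_val = cheb_eval([2.0, 0.0, 0.0], 3.7, 2.0, 6.0)
```

Within MATLAB itself, the documented evaluator for the returned series is nag_fit_1dcheb_eval2 (e02ak).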
(The polynomial interpolant can subsequently be evaluated for any value of $x$ in the given range by using nag_fit_1dcheb_eval2 (e02ak). Chebyshev series representations of the derivative(s) and integral(s) of $q(x)$ may be obtained by (repeated) use of nag_fit_1dcheb_deriv (e02ah) and nag_fit_1dcheb_integ (e02aj).)
The method used consists first of constructing a divided-difference table from the normalized $\bar{x}$ values and the given values of $y$ and its derivatives with respect to $\bar{x}$. The Newton form of $q(x)$ is then obtained from this table, as described in Huddleston (1974) and Krogh (1970), with the modification described in Section [Divided-difference Strategy]. The Newton form of the polynomial is then converted to Chebyshev series form as described in Section [Conversion to Chebyshev Form].
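To make the first stage concrete, here is a hedged Python sketch of a divided-difference table and Newton-form evaluation for the simple case of distinct points with no derivative data (the routine's table additionally handles repeated arguments for the derivative conditions):

```python
def newton_coeffs(xs, ys):
    # In-place divided-difference table: after column j, c[i] holds the
    # divided difference f[x_{i-j}, ..., x_i].
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    # Newton form: q(t) = c[0] + c[1](t-x0) + c[2](t-x0)(t-x1) + ...
    return c

def newton_eval(c, xs, t):
    # Horner-style evaluation of the Newton form.
    val = c[-1]
    for k in range(len(c) - 2, -1, -1):
        val = val * (t - xs[k]) + c[k]
    return val

xs = [2.0, 4.0, 5.0, 6.0]   # hypothetical data, no derivative conditions
ys = [1.0, 2.0, 1.0, -2.0]
c = newton_coeffs(xs, ys)
```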
Since the errors incurred by these stages can be considerable, a form of iterative refinement is used to improve the solution. This refinement is particularly useful when derivatives of rather high order are given in the data. In reasonable examples, the refinement will usually terminate with a certain accuracy criterion satisfied by the polynomial (see Section [Accuracy]). In more difficult examples, the criterion may not be satisfied and refinement will continue until the maximum number of iterations (as specified by the input parameter itmax) is reached.
In extreme examples, the iterative process may diverge (even though the accuracy criterion is satisfied): if a certain divergence criterion is satisfied, the process terminates at once. In all cases the function returns the ‘best’ polynomial achieved before termination. For the definition of ‘best’ and details of iterative refinement and termination criteria, see Section [Iterative Refinement].

References

Huddleston R E (1974) CDC 6600 routines for the interpolation of data and of data with derivatives SLL-74-0214 Sandia Laboratories (Reprint)
Krogh F T (1970) Efficient algorithms for polynomial interpolation and numerical differentiation Math. Comput. 24 185–190

Parameters

Compulsory Input Parameters

1:     xmin – double scalar
2:     xmax – double scalar
The lower and upper end points, respectively, of the interval $[x_{\min}, x_{\max}]$. If they are not determined by your problem, it is recommended that they be set respectively to the smallest and largest values among the $x_i$.
Constraint: xmin < xmax.
3:     x(m) – double array
m, the dimension of the array, must satisfy the constraint m ≥ 1.
The values of $x_i$, for $i = 1, 2, \dots, m$. The x(i) need not be ordered.
Constraint: xmin ≤ x(i) ≤ xmax, and the x(i) must be distinct.
4:     y(n) – double array
n, the dimension of the array, must satisfy the constraint n = m + ip(1) + ip(2) + ⋯ + ip(m).
The given values of the dependent variable, and derivatives, as follows:
The first $p_1 + 1$ elements contain $y_1, y_1^{(1)}, \dots, y_1^{(p_1)}$ in that order.
The next $p_2 + 1$ elements contain $y_2, y_2^{(1)}, \dots, y_2^{(p_2)}$ in that order.
⋮
The last $p_m + 1$ elements contain $y_m, y_m^{(1)}, \dots, y_m^{(p_m)}$ in that order.
5:     ip(m) – int64 / int32 / nag_int array
m, the dimension of the array, must satisfy the constraint m ≥ 1.
$p_i$, the order of the highest-order derivative whose value is given at $x_i$, for $i = 1, 2, \dots, m$. If only the value of $y$ is given for some $x_i$, then the corresponding ip(i) must be zero.
Constraint: ip(i) ≥ 0, for i = 1, 2, …, m.
6:     lwrk – int64 / int32 / nag_int scalar
The dimension of the array wrk as declared in the (sub)program from which nag_interp_1d_cheb (e01ae) is called.
Constraint: lwrk ≥ 7 × n + 5 × ipmax + m + 7, where ipmax is the largest element of ip(i), for i = 1, 2, …, m.
7:     liwrk – int64 / int32 / nag_int scalar
The dimension of the array iwrk as declared in the (sub)program from which nag_interp_1d_cheb (e01ae) is called.
Constraint: liwrk ≥ 2 × m + 2.
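The array sizes above follow mechanically from ip. As an illustration only (plain Python, not the MATLAB call shown in Section [Example]), the following sketch packs y in the documented order and computes the minimum workspace sizes:

```python
def pack_y(values, derivs):
    # Pack y as documented: for each point, the function value followed by
    # its derivative values y_i^(1), ..., y_i^(p_i).
    y = []
    for v, d in zip(values, derivs):
        y.append(v)
        y.extend(d)
    return y

def workspace_sizes(ip):
    # n = m + sum(ip); lwrk >= 7n + 5*ipmax + m + 7; liwrk >= 2m + 2
    m = len(ip)
    n = m + sum(ip)
    return n, 7 * n + 5 * max(ip) + m + 7, 2 * m + 2

# Data from Section [Example]: ip = [0, 1, 0, 2]
y = pack_y([1.0, 2.0, 1.0, 2.0], [[], [-1.0], [], [4.0, -2.0]])
n, lwrk_min, liwrk_min = workspace_sizes([0, 1, 0, 2])
```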

Optional Input Parameters

1:     m – int64 / int32 / nag_int scalar
Default: The dimension of the arrays x, ip. (An error is raised if these dimensions are not equal.)
$m$, the number of given values of the independent variable $x$.
Constraint: m ≥ 1.
2:     itmin – int64 / int32 / nag_int scalar
3:     itmax – int64 / int32 / nag_int scalar
Respectively the minimum and maximum number of iterations to be performed by the function (for full details see Section [Iterative Refinement]). Setting itmin and/or itmax negative or zero invokes default value(s) of 2 and/or 10, respectively.
The default values will be satisfactory for most problems, but occasionally significant improvement will result from using higher values.
Default: 0


Output Parameters

1:     a(n) – double array
n = m + ip(1) + ip(2) + ⋯ + ip(m).
a(i+1) contains the coefficient $a_i$ in the Chebyshev series representation of $q(x)$, for $i = 0, 1, \dots, n-1$.
2:     wrk(lwrk) – double array
3:     iwrk(liwrk) – int64 / int32 / nag_int array
4:     ifail – int64 / int32 / nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).
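To check that returned coefficients reproduce the data, the series can be evaluated directly (here a Python/numpy sketch, halving the leading coefficient by hand; within MATLAB, e02ak is the documented evaluator). The coefficients below are the rounded values printed in Section [Example]:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# Rounded coefficients from Section [Example], on [xmin, xmax] = [2, 6]
a = np.array([9.1250, -4.5781, 0.4609, 2.8516, -2.8125, 2.2266, -0.7109])
xmin, xmax = 2.0, 6.0

def q(x):
    xbar = (2.0 * x - xmin - xmax) / (xmax - xmin)
    c = a.copy()
    c[0] *= 0.5            # NAG stores a0 with an implicit factor of 1/2
    return chebval(xbar, c)

q2, q4 = q(2.0), q(4.0)    # the data values were y(2) = 1 and y(4) = 2
```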

Error Indicators and Warnings

Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

ifail = 1
On entry, m < 1, or n ≠ m + ip(1) + ip(2) + ⋯ + ip(m), or lwrk < 7 × n + 5 × ipmax + m + 7 (see lwrk for the definition of ipmax), or liwrk < 2 × m + 2.
ifail = 2
On entry, ip(i) < 0 for some i.
ifail = 3
On entry, xmin ≥ xmax, or x(i) < xmin or x(i) > xmax for some i, or x(i) = x(j) for some i ≠ j.
W ifail = 4
Not all the performance indices are less than eight times the machine precision, although itmax iterations have been performed. Parameters a, wrk and iwrk relate to the best polynomial determined. A more accurate solution may possibly be obtained by increasing itmax and recalling the function. See also Sections [Accuracy], [Iterative Refinement] and [Workspace Information].
ifail = 5
The computation has been terminated because the iterative process appears to be diverging. (Parameters a, wrk and iwrk relate to the best polynomial determined.) Thus the problem specified by your data is probably too ill-conditioned for the solution to be satisfactory. This may result from some of the x(i) being very close together, or from the number of interpolating conditions, n, being large. If in such cases the conditions do not involve derivatives, you are likely to obtain a much more satisfactory solution to your problem either by cubic spline interpolation (see nag_interp_1d_spline (e01ba)) or by curve-fitting with a polynomial or spline in which the number of coefficients is less than n, preferably much less if n is large (see Chapter E02). But see Sections [Accuracy], [Iterative Refinement] and [Workspace Information].

Accuracy

A complete error analysis is not currently available, but the method gives good results for reasonable problems.
It is important to realise that for some sets of data, the polynomial interpolation problem is ill-conditioned. That is, a small perturbation in the data may induce large changes in the polynomial, even in exact arithmetic. Though by no means the worst example, interpolation by a single polynomial to a large number of function values given at points equally spaced across the range is notoriously ill-conditioned and the polynomial interpolating such a dataset is prone to exhibit enormous oscillations between the data points, especially near the ends of the range. These will be reflected in the Chebyshev coefficients being large compared with the given function values. A more familiar example of ill-conditioning occurs in the solution of certain systems of linear algebraic equations, in which a small change in the elements of the matrix and/or in the components of the right-hand side vector induces a relatively large change in the solution vector. The best that can be achieved in these cases is to make the residual vector small in some sense. If this is possible, the computed solution is exact for a slightly perturbed set of data. Similar considerations apply to the interpolation problem.
The residuals $y_i^{(k)} - q^{(k)}(x_i)$ are available for inspection (see Section [Workspace Information]). To assess whether these are reasonable, however, it is necessary to relate them to the largest function and derivative values taken by $q(x)$ over the interval $[x_{\min}, x_{\max}]$. The following performance indices aim to do this. Let the $k$th derivative of $q$ with respect to the normalized variable $\bar{x}$ be given by the Chebyshev series
$$\tfrac{1}{2} a_0^k T_0(\bar{x}) + a_1^k T_1(\bar{x}) + \cdots + a_{n-1-k}^k T_{n-1-k}(\bar{x}).$$
Let $A_k$ denote the sum of the moduli of these coefficients (this is an upper bound on the $k$th derivative in the interval and is taken as a measure of the maximum size of this derivative), and define
$$S_k = \max_{i \le k} A_i.$$
Then if the root-mean-square value of the residuals of $q^{(k)}$, scaled so as to relate to the normalized variable $\bar{x}$, is denoted by $r_k$, the performance indices are defined by
$$P_k = r_k / S_k, \quad \text{for } k = 0, 1, \dots, \max_i(p_i).$$
It is expected that, in reasonable cases, they will all be less than (say) 8 times the machine precision (this is the accuracy criterion mentioned in Section [Description]), and in many cases will be of the order of machine precision or less.
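These definitions translate directly into code. A hedged Python sketch with made-up inputs (coeff_by_deriv[k] standing for the Chebyshev coefficients of the $k$th derivative series, rms_residuals for the $r_k$):

```python
import numpy as np

def performance_indices(coeff_by_deriv, rms_residuals):
    # A_k: sum of moduli of the k-th derivative's Chebyshev coefficients
    A = [float(np.sum(np.abs(c))) for c in coeff_by_deriv]
    # S_k = max over i <= k of A_i (a running maximum)
    S = np.maximum.accumulate(A)
    # P_k = r_k / S_k
    return np.asarray(rms_residuals) / S

P = performance_indices([[4.0, 1.0], [2.0]], [1e-15, 1e-15])
ok = bool(np.all(P < 8 * np.finfo(float).eps))  # the routine's acceptance test
```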

Timing

Computation time is approximately proportional to $\mathit{it} \times n^3$, where $\mathit{it}$ is the number of iterations actually used. (See Section [Workspace Information].)

Divided-difference Strategy

In constructing each new coefficient in the Newton form of the polynomial, a new $x_i$ must be brought into the computation. The $x_i$ chosen is that which yields the smallest new coefficient. This strategy increases the stability of the divided-difference technique, sometimes quite markedly, by reducing errors due to cancellation.
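For distinct points without derivative data, the ordering idea can be sketched as follows (hypothetical Python; the routine's actual starting rule and derivative handling are more involved):

```python
def ordered_newton_coeffs(xs, ys):
    pts = list(zip(xs, ys))
    chosen = [pts.pop(0)]           # starting point (the actual rule may differ)
    coeffs = [chosen[0][1]]
    while pts:
        best = None
        for idx, (x, y) in enumerate(pts):
            # Newton coefficient that (x, y) would contribute if added next
            val = y
            for (xc, _), c in zip(chosen, coeffs):
                val = (val - c) / (x - xc)
            if best is None or abs(val) < abs(best[1]):
                best = (idx, val)   # keep the smallest new coefficient
        idx, val = best
        chosen.append(pts.pop(idx))
        coeffs.append(val)
    return [x for x, _ in chosen], coeffs

def newton_eval(c, nodes, t):
    val = c[-1]
    for k in range(len(c) - 2, -1, -1):
        val = val * (t - nodes[k]) + c[k]
    return val

nodes, coeffs = ordered_newton_coeffs([2.0, 4.0, 5.0, 6.0], [1.0, 2.0, 1.0, -2.0])
```

Whatever order the points are taken in, the resulting Newton form still interpolates all of them; the ordering only affects the rounding behaviour of the construction.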

Conversion to Chebyshev Form

Conversion from the Newton form to Chebyshev series form is effected by evaluating the former at the $n$ values of $\bar{x}$ at which $T_{n-1}(\bar{x})$ takes the value $\pm 1$, and then interpolating these $n$ function values by a call of nag_fit_1dcheb_glp (e02af), which provides the Chebyshev series representation of the polynomial with very small additional relative error.
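The conversion can be mimicked in Python: sample the Newton form at the $n$ extrema of $T_{n-1}$ on $[-1, 1]$ and interpolate. Here numpy's chebfit stands in for e02af (an assumption for illustration, not the routine's actual code path):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebfit

def newton_to_cheb(c, nodes):
    # Sample the degree n-1 Newton-form polynomial at the n extrema of T_{n-1},
    # i.e., xbar_j = cos(pi*j/(n-1)), then fit a degree n-1 Chebyshev series.
    n = len(c)
    xb = np.cos(np.pi * np.arange(n) / (n - 1))
    vals = np.empty(n)
    for j, t in enumerate(xb):
        v = c[-1]
        for k in range(n - 2, -1, -1):
            v = v * (t - nodes[k]) + c[k]
        vals[j] = v
    return chebfit(xb, vals, n - 1)

# Newton form of 2t^2 - 1 (= T2) on nodes [-1, 0]: q(t) = 1 - 2(t+1) + 2(t+1)t
cc = newton_to_cheb([1.0, -2.0, 2.0], [-1.0, 0.0])
```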

Iterative Refinement

The iterative refinement process is performed as follows.
Firstly, an initial approximation, $q_1(x)$ say, is found by the technique described in Section [Description]. The $r$th step of the refinement process then consists of evaluating the residuals of the $r$th approximation $q_r(x)$, and constructing an interpolant, $dq_r(x)$, to these residuals. The next approximation $q_{r+1}(x)$ to the interpolating polynomial is then obtained as
$$q_{r+1}(x) = q_r(x) + dq_r(x).$$
This completes the description of the $r$th step.
The iterative process is terminated according to the following criteria. When a polynomial is found whose performance indices (as defined in Section [Accuracy]) are all less than 8 times the machine precision, the process terminates after itmin further iterations (or after a total of itmax iterations if that occurs earlier). This will occur in most reasonable problems. The extra iterations are to allow for the possibility of further improvement. If no such polynomial is found, the process terminates after a total of itmax iterations. Both these criteria are overridden, however, in two special cases. Firstly, if for some value of $r$ the sum of the moduli of the Chebyshev coefficients of $dq_r(x)$ is greater than that of $q_r(x)$, it is concluded that the process is diverging and the process is terminated at once ($q_{r+1}(x)$ is not computed).
Secondly, if at any stage, the performance indices are all computed as zero, again the process is terminated at once.
As the iterations proceed, a record is kept of the best polynomial. Subsequently, at the end of each iteration, the new polynomial replaces the current best polynomial if it satisfies two conditions (otherwise the best polynomial remains unchanged). The first condition is that at least one of its root-mean-square residual values, $r_k$ (see Section [Accuracy]), is smaller than the corresponding value for the current best polynomial. The second condition takes two different forms according to whether or not the performance indices (see Section [Accuracy]) of the current best polynomial are all less than 8 times the machine precision. If they are, then the largest performance index of the new polynomial is required to be less than that of the current best polynomial. If they are not, the number of indices which are less than 8 times the machine precision must not be smaller than for the current best polynomial. When the iterative process is terminated, the polynomial then recorded as best is returned to you as $q(x)$.
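The refinement loop can be sketched generically. Below is a hypothetical Python analogue in which the "inaccurate construction" step is a single-precision Vandermonde solve for monomial coefficients; each iteration interpolates the residuals and adds the correction, exactly as in the displayed update (e01ae applies the same idea to its own Newton/Chebyshev construction):

```python
import numpy as np

def refine_interpolant(xs, ys, iterations=3):
    V = np.vander(np.asarray(xs), increasing=True)  # monomial basis, illustration only

    def inexact_solve(rhs):
        # Deliberately low-accuracy construction step (single precision).
        return np.linalg.solve(V.astype(np.float32),
                               np.asarray(rhs, dtype=np.float32)).astype(float)

    coeffs = inexact_solve(ys)                       # q_1
    for _ in range(iterations):
        residuals = ys - V @ coeffs                  # y_i - q_r(x_i), in double
        coeffs = coeffs + inexact_solve(residuals)   # q_{r+1} = q_r + dq_r
    return coeffs, V

ys = np.array([1.0, 2.0, 1.0, -2.0])
coeffs, V = refine_interpolant(np.array([2.0, 4.0, 5.0, 6.0]), ys)
residual_norm = float(np.max(np.abs(ys - V @ coeffs)))
```

A production version would also include the divergence test described above (stop if the correction's coefficients outweigh the current polynomial's).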

Workspace Information

On successful exit, and also if ifail = 4 or 5 on exit, the following information is contained in the workspace arrays wrk and iwrk:
wrk(k+1), for $k = 0, 1, \dots, \mathit{ipmax}$ where $\mathit{ipmax} = \max_i p_i$, contains the ratio of $P_k$, the performance index relating to the $k$th derivative of the $q(x)$ finally provided, to 8 times the machine precision.
wrk(ipmax+1+j), for $j = 1, 2, \dots, n$, contains the $j$th residual, i.e., the value of $y_i^{(k)} - q^{(k)}(x_i)$, where $i$ and $k$ are the appropriate values corresponding to the $j$th element in the array y (see the description of y in Section [Parameters]).
iwrk(1) contains the number of iterations actually performed in deriving $q(x)$.
If, on exit, ifail = 4 or 5, the $q(x)$ finally provided may still be adequate for your requirements. To assess this you should examine the residuals contained in wrk(ipmax+1+j), for $j = 1, 2, \dots, n$, to see whether they are acceptably small.
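The layout just described is easy to slice out programmatically. A hypothetical Python helper, applied to the first few wrk values listed in Section [Example] (ip = [0, 1, 0, 2], so ipmax = 2 and n = 7):

```python
def unpack_workspace(wrk, iwrk, ip):
    m = len(ip)
    n = m + sum(ip)
    ipmax = max(ip)
    ratios = wrk[:ipmax + 1]                  # P_k / (8 * machine precision)
    residuals = wrk[ipmax + 1:ipmax + 1 + n]  # y_i^(k) - q^(k)(x_i), in y order
    return ratios, residuals, iwrk[0]         # iwrk(1): iteration count

# Leading wrk values from Section [Example]
wrk_head = [0.0440, 0.0, 0.0309, 0.0, 0.0, 0.0, -0.0, 0.0, 0.0, -0.0]
ratios, residuals, iters = unpack_workspace(wrk_head, [2], [0, 1, 0, 2])
```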

Example

```function nag_interp_1d_cheb_example
% Interpolation data: function values at x = 2, 4, 5, 6, plus the first
% derivative at x = 4 and the first two derivatives at x = 6.
xmin = 2;
xmax = 6;
x = [2; 4; 5; 6];
% y is packed as described in Section [Parameters]:
% y(2) = 1;  y(4) = 2, y'(4) = -1;  y(5) = 1;  y(6) = 2, y'(6) = 4, y''(6) = -2.
y = [1; 2; -1; 1; 2; 4; -2];
ip = [int64(0); 1; 0; 2];   % highest derivative order given at each x(i)
lwrk = int64(77);           % >= 7*n + 5*ipmax + m + 7 = 70, with n = 7
liwrk = int64(10);          % >= 2*m + 2 = 10
[a, wrk, iwrk, ifail] = nag_interp_1d_cheb(xmin, xmax, x, y, ip, lwrk, liwrk)
```
```

a =

9.1250
-4.5781
0.4609
2.8516
-2.8125
2.2266
-0.7109

wrk =

0.0440
0
0.0309
0.0000
0
0
-0.0000
0
0
-0.0000
0
0
0
-1.0000
0
0.5000
1.0000
9.1250
-4.5781
0.4609
2.8516
-2.8125
2.2266
-0.7109
0.0315
0.0137
0.0927
0.0000
0
0
-0.0000
0
0
-0.0000
0
0
0
0
-0.0000
0.0000
0.0000
0
-0.0000
-0.0000
0
-0.0000
-0.0000
0.0000
-82.4687
83.9062
-67.8750
44.5312
-21.3281
0
0
0
-0.0000
0.0000
-0.0000
-0.0000
0.0000
-0.0000
0.0000
0
0.0000
0
0.0000
0.0000
0
0.0000
0
0
0
0
0
0
0

iwrk =

2
0
2
4
3
1
1
2
4
5

ifail =

0

```
The equivalent short-name call produces identical results:
```
[a, wrk, iwrk, ifail] = e01ae(xmin, xmax, x, y, ip, lwrk, liwrk)
```