NAG C Library Manual

# NAG Library Function Document: nag_opt_lsq_deriv (e04gbc)

## 1  Purpose

nag_opt_lsq_deriv (e04gbc) is a comprehensive algorithm for finding an unconstrained minimum of a sum of squares of $m$ nonlinear functions in $n$ variables $\left(m\ge n\right)$. First derivatives are required.
nag_opt_lsq_deriv (e04gbc) is intended for objective functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

## 2  Specification

 #include <nag.h>
 #include <nage04.h>
 void nag_opt_lsq_deriv (Integer m, Integer n,
  void (*lsqfun)(Integer m, Integer n, const double x[], double fvec[], double fjac[], Integer tdfjac, Nag_Comm *comm),
 double x[], double *fsumsq, double fvec[], double fjac[], Integer tdfjac, Nag_E04_Opt *options, Nag_Comm *comm, NagError *fail)

## 3  Description

nag_opt_lsq_deriv (e04gbc) is applicable to problems of the form:
 $\underset{x}{\mathrm{Minimize}}\;F\left(x\right)=\sum_{i=1}^{m}{f}_{i}{\left(x\right)}^{2}$
where $x={\left({x}_{1},{x}_{2},\dots ,{x}_{n}\right)}^{\mathrm{T}}$ and $m\ge n$. (The functions ${f}_{i}\left(x\right)$ are often referred to as ‘residuals’.) You must supply a function to calculate the values of the ${f}_{i}\left(x\right)$ and their first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$.
From a starting point ${x}^{\left(1\right)}$ nag_opt_lsq_deriv (e04gbc) generates a sequence of points ${x}^{\left(2\right)},{x}^{\left(3\right)},\dots ,$ which is intended to converge to a local minimum of $F\left(x\right)$. The sequence of points is given by
 ${x}^{\left(k+1\right)}={x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}$
where the vector ${p}^{\left(k\right)}$ is a direction of search, and ${\alpha }^{\left(k\right)}$ is chosen such that $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ is approximately a minimum with respect to ${\alpha }^{\left(k\right)}$.
The vector ${p}^{\left(k\right)}$ used depends upon the reduction in the sum of squares obtained during the last iteration. If the sum of squares was sufficiently reduced, then ${p}^{\left(k\right)}$ is the Gauss–Newton direction; otherwise the second derivatives of the ${f}_{i}\left(x\right)$ are taken into account using a quasi-Newton updating scheme.
The method is designed to ensure that steady progress is made whatever the starting point, and to have the rapid ultimate convergence of Newton's method.
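The quantities the iteration works with can be made concrete with a short plain-C fragment (an illustration only, not part of the NAG interface). It assembles the sum of squares $F\left(x\right)$ and its gradient $g=2{J}^{\mathrm{T}}f$ from a residual vector and a Jacobian stored by rows, in the layout that the user-supplied function lsqfun uses; the names `sum_of_squares` and `gradient` are illustrative.

```c
#include <stddef.h>

/* Sum of squares F(x) = sum_i f_i(x)^2, given the residual vector. */
static double sum_of_squares(const double fvec[], size_t m)
{
    double F = 0.0;
    for (size_t i = 0; i < m; i++)
        F += fvec[i] * fvec[i];
    return F;
}

/* Gradient of F: g_j = 2 * sum_i f_i * (df_i/dx_j), with the Jacobian
 * stored by rows, i.e., df_i/dx_j is held in fjac[i*tdfjac + j]. */
static void gradient(const double fvec[], const double fjac[],
                     size_t m, size_t n, size_t tdfjac, double g[])
{
    for (size_t j = 0; j < n; j++) {
        g[j] = 0.0;
        for (size_t i = 0; i < m; i++)
            g[j] += 2.0 * fvec[i] * fjac[i * tdfjac + j];
    }
}
```

The Gauss–Newton direction mentioned above is then the least-squares solution of $Jp=-f$ built from these same two arrays.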

## 4  References

Gill P E and Murray W (1978) Algorithms for the solution of the nonlinear least-squares problem SIAM J. Numer. Anal. 15 977–992

## 5  Arguments

1: **m** – Integer *Input*
On entry: $m$, the number of residuals, ${f}_{i}\left(x\right)$.
2: **n** – Integer *Input*
On entry: $n$, the number of variables, ${x}_{j}$.
Constraint: $1\le {\mathbf{n}}\le {\mathbf{m}}$.
3: **lsqfun** – function, supplied by the user *External Function*
lsqfun must calculate the vector of values ${f}_{i}\left(x\right)$ and their first derivatives $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ at any point $x$. (However, if you do not wish to calculate the residuals at a particular $x$, there is the option of setting an argument to cause nag_opt_lsq_deriv (e04gbc) to terminate immediately.)
The specification of lsqfun is:
 void lsqfun (Integer m, Integer n, const double x[], double fvec[], double fjac[], Integer tdfjac, Nag_Comm *comm)
1: **m** – Integer *Input*
2: **n** – Integer *Input*
On entry: the numbers $m$ and $n$ of residuals and variables, respectively.
3: **x[n]** – const double *Input*
On entry: the point $x$ at which the values of the ${f}_{i}$ and the $\frac{\partial {f}_{i}}{\partial {x}_{j}}$ are required.
4: **fvec[m]** – double *Output*
On exit: unless $\mathbf{comm}\mathbf{\to }\mathbf{flag}=1$ on entry, or $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ is reset to a negative number, ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ must contain the value of ${f}_{\mathit{i}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$.
5: **fjac[${\mathbf{m}}×{\mathbf{tdfjac}}$]** – double *Output*
On exit: unless $\mathbf{comm}\mathbf{\to }\mathbf{flag}=0$ on entry, or $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ is reset to a negative number, ${\mathbf{fjac}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdfjac}}+\mathit{j}-1\right]$ must contain the value of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the point $x$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
6: **tdfjac** – Integer *Input*
On entry: the stride separating matrix column elements in the array fjac.
7: **comm** – Nag_Comm *
Pointer to structure of type Nag_Comm; the following members are relevant to lsqfun.
**flag** – Integer *Input/Output*
On entry: $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ contains 0, 1 or 2. The value 0 indicates that only the residuals need to be evaluated, the value 1 indicates that only the Jacobian matrix needs to be evaluated, and the value 2 indicates that both the residuals and the Jacobian matrix must be calculated. (If the default value of the optional argument ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ is used (i.e., ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_Deriv}$), then lsqfun will always be called with $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ set to 2.)
On exit: if lsqfun resets $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to some negative number then nag_opt_lsq_deriv (e04gbc) will terminate immediately with the error indicator NE_USER_STOP. If fail is supplied to nag_opt_lsq_deriv (e04gbc), ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be set to the user's setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
**first** – Nag_Boolean *Input*
On entry: will be set to Nag_TRUE on the first call to lsqfun and Nag_FALSE for all subsequent calls.
**nf** – Integer *Input*
On entry: the number of calls made to lsqfun including the current one.
**user** – double *
**iuser** – Integer *
**p** – Pointer
The type Pointer is void * with a C compiler that defines void *, and char * otherwise. Before calling nag_opt_lsq_deriv (e04gbc) these pointers may be allocated memory and initialized with various quantities for use by lsqfun when called from nag_opt_lsq_deriv (e04gbc).
Note: lsqfun should be tested separately before being used in conjunction with nag_opt_lsq_deriv (e04gbc). Function nag_opt_lsq_check_deriv (e04yac) may be used to check the derivatives.
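As an illustration of the conventions above, the following self-contained C fragment computes residuals and a row-stored Jacobian for the model fitted in Section 9, ${f}_{i}\left(x\right)={y}_{i}-\left({x}_{1}+{t}_{1i}/\left({x}_{2}{t}_{2i}+{x}_{3}{t}_{3i}\right)\right)$. It is a sketch only: plain C types stand in for NAG's Integer and Nag_Comm, the name `lsqfun_sketch` is illustrative, and a real lsqfun must match the prototype given above and honour $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.

```c
#include <stddef.h>

/* Residuals and Jacobian for the Section 9 model,
 *   f_i(x) = y_i - (x_1 + t1_i / (x_2*t2_i + x_3*t3_i)),
 * with the Jacobian stored by rows: df_i/dx_j goes in fjac[i*tdfjac + j]. */
static void lsqfun_sketch(size_t m, const double y[], const double t[][3],
                          const double x[], double fvec[],
                          double fjac[], size_t tdfjac)
{
    for (size_t i = 0; i < m; i++) {
        double d = x[1] * t[i][1] + x[2] * t[i][2]; /* denominator */
        fvec[i] = y[i] - (x[0] + t[i][0] / d);
        fjac[i * tdfjac + 0] = -1.0;                      /* df_i/dx_1 */
        fjac[i * tdfjac + 1] = t[i][0] * t[i][1] / (d * d); /* df_i/dx_2 */
        fjac[i * tdfjac + 2] = t[i][0] * t[i][2] / (d * d); /* df_i/dx_3 */
    }
}
```

Checking such a routine with nag_opt_lsq_check_deriv (e04yac), as the Note recommends, guards against exactly the kind of sign or indexing slip this layout invites.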
4: **x[n]** – double *Input/Output*
On entry: ${\mathbf{x}}\left[\mathit{j}-1\right]$ must be set to a guess at the $\mathit{j}$th component of the position of the minimum, for $\mathit{j}=1,2,\dots ,n$.
On exit: the final point ${x}^{*}$. On successful exit, ${\mathbf{x}}\left[j-1\right]$ is the $j$th component of the estimated position of the minimum.
5: **fsumsq** – double * *Output*
On exit: the value of $F\left(x\right)$, the sum of squares of the residuals ${f}_{i}\left(x\right)$, at the final point given in x.
6: **fvec[m]** – double *Output*
On exit: ${\mathbf{fvec}}\left[\mathit{i}-1\right]$ is the value of the residual ${f}_{\mathit{i}}\left(x\right)$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$.
7: **fjac[${\mathbf{m}}×{\mathbf{tdfjac}}$]** – double *Output*
On exit: ${\mathbf{fjac}}\left[\left(\mathit{i}-1\right)×{\mathbf{tdfjac}}+\mathit{j}-1\right]$ contains the value of the first derivative $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$ at the final point given in x, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$.
8: **tdfjac** – Integer *Input*
On entry: the stride separating matrix column elements in the array fjac.
Constraint: ${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
9: **options** – Nag_E04_Opt * *Input/Output*
On entry/exit: a pointer to a structure of type Nag_E04_Opt whose members are optional arguments for nag_opt_lsq_deriv (e04gbc). These structure members offer the means of adjusting some of the argument values of the algorithm and on output will supply further details of the results. A description of the members of options is given in Section 10.2.
If any of these optional arguments are required then the structure options should be declared and initialized by a call to nag_opt_init (e04xxc) and supplied as an argument to nag_opt_lsq_deriv (e04gbc). However, if the optional arguments are not required the NAG defined null pointer, E04_DEFAULT, can be used in the function call.
10: **comm** – Nag_Comm * *Input/Output*
Note: comm is a NAG defined type (see Section 3.2.1.1 in the Essential Introduction).
On entry/exit: structure containing pointers for communication to the user-supplied function; see the above description of lsqfun for details. If you do not need to make use of this communication feature the null pointer NAGCOMM_NULL may be used in the call to nag_opt_lsq_deriv (e04gbc); comm will then be declared internally for use in calls to the user-supplied function.
11: **fail** – NagError * *Input/Output*
The NAG error argument (see Section 3.6 in the Essential Introduction).

### 5.1  Description of Printed Output

Intermediate and final results are printed out by default. The level of printed output can be controlled with the option ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 10.2). The default, ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$, provides a single line of output at each iteration and the final result. The line of results printed at each iteration gives:
- **Itn** – the current iteration number $k$.
- **Nfun** – the cumulative number of calls to lsqfun.
- **Objective** – the current value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
- **Norm g** – the Euclidean norm of the gradient of $F\left({x}^{\left(k\right)}\right)$.
- **Norm x** – the Euclidean norm of ${x}^{\left(k\right)}$.
- **Norm(x(k-1)-x(k))** – the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
- **Step** – the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
The printout of the final result consists of:
- **x** – the final point ${x}^{*}$.
- **g** – the gradient of $F$ at the final point.
- **Residuals** – the values of the residuals ${f}_{i}$ at the final point.
- **Sum of squares** – the value of $F\left({x}^{*}\right)$, the sum of squares of the residuals at the final point.

## 6  Error Indicators and Warnings

If any of NE_USER_STOP, NE_2_INT_ARG_LT, NE_DERIV_ERRORS, NE_OPT_NOT_INIT, NE_BAD_PARAM, NE_2_REAL_ARG_LT, NE_INVALID_INT_RANGE_1, NE_INVALID_REAL_RANGE_EF, NE_INVALID_REAL_RANGE_FF or NE_ALLOC_FAIL occurs, no values will have been assigned to fsumsq, or to the elements of fvec, fjac, ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ or ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$.
The exits NW_TOO_MANY_ITER, NW_COND_MIN, and NE_SVD_FAIL may also be caused by mistakes in lsqfun, by the formulation of the problem or by an awkward function. If there are no such mistakes it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.
NE_2_INT_ARG_LT
On entry, ${\mathbf{m}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{m}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}\ge {\mathbf{n}}$.
On entry, ${\mathbf{tdfjac}}=〈\mathit{\text{value}}〉$ while ${\mathbf{n}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{tdfjac}}\ge {\mathbf{n}}$.
NE_2_REAL_ARG_LT
On entry, ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}=〈\mathit{\text{value}}〉$ while ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
NE_ALLOC_FAIL
Dynamic memory allocation failed.
NE_BAD_PARAM
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ had an illegal value.
On entry, argument ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ had an illegal value.
NE_DERIV_ERRORS
Large errors were found in the derivatives of the objective function.
You should check carefully the derivation and programming of expressions for the $\frac{\partial {f}_{i}}{\partial {x}_{j}}$, because it is very unlikely that lsqfun is calculating them correctly.
NE_INT_ARG_LT
On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$.
Constraint: ${\mathbf{n}}\ge 1$.
NE_INVALID_INT_RANGE_1
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ not valid. Correct range is ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
NE_INVALID_REAL_RANGE_EF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ not valid. Correct range is $〈\mathit{\text{value}}〉$ $\le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
NE_INVALID_REAL_RANGE_FF
Value $〈\mathit{\text{value}}〉$ given to ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ not valid. Correct range is $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
NE_NOT_APPEND_FILE
Cannot open file $〈\mathit{string}〉$ for appending.
NE_NOT_CLOSE_FILE
Cannot close file $〈\mathit{string}〉$.
NE_OPT_NOT_INIT
Options structure not initialized.
NE_SVD_FAIL
The computation of the singular value decomposition of the Jacobian matrix has failed to converge in a reasonable number of sub-iterations.
It may be worth applying nag_opt_lsq_deriv (e04gbc) again starting with an initial approximation which is not too close to the point at which the failure occurred.
NE_USER_STOP
User requested termination, user flag value $\text{}=〈\mathit{\text{value}}〉$.
This exit occurs if you set $\mathbf{comm}\mathbf{\to }\mathbf{flag}$ to a negative value in lsqfun. If fail is supplied the value of ${\mathbf{fail}}\mathbf{.}\mathbf{errnum}$ will be the same as your setting of $\mathbf{comm}\mathbf{\to }\mathbf{flag}$.
NE_WRITE_ERROR
Error occurred when writing to file $〈\mathit{string}〉$.
NW_COND_MIN
The conditions for a minimum have not all been satisfied, but a lower point could not be found.
This could be because ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ has been set so small that rounding errors in the evaluation of the residuals make attainment of the convergence conditions impossible.
NW_TOO_MANY_ITER
The maximum number of iterations, $〈\mathit{\text{value}}〉$, has been performed.
If steady reductions in the sum of squares, $F\left(x\right)$, were monitored up to the point where this exit occurred, then the exit probably occurred simply because ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}$ was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that $F\left(x\right)$ has no minimum.

## 7  Accuracy

If the problem is reasonably well scaled and a successful exit is made, then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in the components of $x$ and between $t-1$ (if $F\left(x\right)$ is of order 1 at the minimum) and $2t-2$ (if $F\left(x\right)$ is close to zero at the minimum) decimals accuracy in $F\left(x\right)$.
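For IEEE double precision arithmetic this estimate can be made concrete (a sketch, not NAG code; `expected_decimals_in_x` is an illustrative name): the machine has $t\approx 15.7$ decimals, so roughly $t/2-1\approx 7$ accurate decimals can be expected in the components of $x$.

```c
#include <float.h>
#include <math.h>

/* "Mantissa of t decimals" for IEEE double: t = -log10(DBL_EPSILON) ~ 15.7,
 * so about t/2 - 1 ~ 7 accurate decimals in x after a successful exit. */
static double expected_decimals_in_x(void)
{
    double t = -log10(DBL_EPSILON);
    return t / 2.0 - 1.0;
}
```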
A successful exit (${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$) is made from nag_opt_lsq_deriv (e04gbc) when (B1, B2 and B3) or B4 or B5 hold, where
 $\begin{array}{ll}B1&\equiv \left\|{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right\|<\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\epsilon \right)\times \left(1.0+\left\|{x}^{\left(k\right)}\right\|\right)\\ B2&\equiv \left|{F}^{\left(k\right)}-{F}^{\left(k-1\right)}\right|<{\left({\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}+\epsilon \right)}^{2}\times \left(1.0+{F}^{\left(k\right)}\right)\\ B3&\equiv \left\|{g}^{\left(k\right)}\right\|<{\epsilon }^{1/3}\times \left(1.0+{F}^{\left(k\right)}\right)\\ B4&\equiv {F}^{\left(k\right)}<{\epsilon }^{2}\\ B5&\equiv \left\|{g}^{\left(k\right)}\right\|<{\left(\epsilon \times {F}^{\left(k\right)}\right)}^{1/2}\end{array}$
and where $‖\text{.}‖$, $\epsilon$ and the optional argument ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$ are as defined in Section 10.2, while ${F}^{\left(k\right)}$ and ${g}^{\left(k\right)}$ are the values of $F\left(x\right)$ and its vector of first derivatives at ${x}^{\left(k\right)}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}=\mathrm{NE_NOERROR}$ then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of ${x}_{\mathrm{true}}$, the position of the minimum to the accuracy specified by ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
If ${\mathbf{fail}}\mathbf{.}\mathbf{code}={\mathbf{NW_COND_MIN}}$, then ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but to verify this you should make the following checks. If
- (a) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate, and
- (b) $g{\left({x}_{\mathrm{sol}}\right)}^{\mathrm{T}}g\left({x}_{\mathrm{sol}}\right)<10\epsilon$,
where $\mathrm{T}$ denotes transpose, then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the minimum. When (b) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$.
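Check (b) is cheap to perform once the gradient at ${x}_{\mathrm{sol}}$ is available; a minimal C sketch (`gradient_small_enough` is an illustrative name, not part of the library):

```c
#include <float.h>
#include <stddef.h>

/* Check (b): g(x_sol)^T g(x_sol) < 10*eps, with DBL_EPSILON for the
 * machine precision. A true result supports x_sol being close to x_true. */
static int gradient_small_enough(const double g[], size_t n)
{
    double gtg = 0.0;
    for (size_t j = 0; j < n; j++)
        gtg += g[j] * g[j];
    return gtg < 10.0 * DBL_EPSILON;
}
```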
Further suggestions about confirmation of a computed solution are given in the e04 Chapter Introduction.

## 8  Further Comments

The number of iterations required depends on the number of variables, the number of residuals, the behaviour of $F\left(x\right)$, the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed per iteration of nag_opt_lsq_deriv (e04gbc) varies, but for $m\gg n$ is approximately $n×{m}^{2}+O\left({n}^{3}\right)$. In addition, each iteration makes at least one call of lsqfun. So, unless the residuals can be evaluated very quickly, the run time will be dominated by the time spent in lsqfun.
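For example, fitting $m=500$ residuals in $n=5$ variables costs roughly $5×{500}^{2}=1\,250\,000$ multiplications per iteration from the leading term alone. The following helper (illustrative only, not a NAG routine) evaluates that estimate:

```c
/* Leading-term multiplication count per iteration for m >> n:
 * n*m^2, ignoring the O(n^3) contribution. */
static double cost_estimate(double m, double n)
{
    return n * m * m;
}
```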
Ideally, the problem should be scaled so that, at the solution, $F\left(x\right)$ and the corresponding values of the ${x}_{j}$ are each in the range $\left(-1,+1\right)$, and so that at points one unit away from the solution, $F\left(x\right)$ differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix of $F\left(x\right)$ at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_lsq_deriv (e04gbc) will take less computer time.
When the sum of squares represents the goodness-of-fit of a nonlinear model to observed data, elements of the variance-covariance matrix of the estimated regression coefficients can be computed by a subsequent call to nag_opt_lsq_covariance (e04ycc), using information returned in the arrays ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. See nag_opt_lsq_covariance (e04ycc) for further details.

## 9  Example

This example finds the least squares estimates of ${x}_{1}$, ${x}_{2}$ and ${x}_{3}$ in the model
 $y={x}_{1}+\frac{{t}_{1}}{{x}_{2}{t}_{2}+{x}_{3}{t}_{3}}$
using the 15 sets of data given in the following table.
| $y$ | ${t}_{1}$ | ${t}_{2}$ | ${t}_{3}$ |
|------|------|------|------|
| 0.14 | 1.0 | 15.0 | 1.0 |
| 0.18 | 2.0 | 14.0 | 2.0 |
| 0.22 | 3.0 | 13.0 | 3.0 |
| 0.25 | 4.0 | 12.0 | 4.0 |
| 0.29 | 5.0 | 11.0 | 5.0 |
| 0.32 | 6.0 | 10.0 | 6.0 |
| 0.35 | 7.0 | 9.0 | 7.0 |
| 0.39 | 8.0 | 8.0 | 8.0 |
| 0.37 | 9.0 | 7.0 | 7.0 |
| 0.58 | 10.0 | 6.0 | 6.0 |
| 0.73 | 11.0 | 5.0 | 5.0 |
| 0.96 | 12.0 | 4.0 | 4.0 |
| 1.34 | 13.0 | 3.0 | 3.0 |
| 2.10 | 14.0 | 2.0 | 2.0 |
| 4.39 | 15.0 | 1.0 | 1.0 |
The program uses (0.5, 1.0, 1.5) as the initial guess at the position of the minimum.
The program shows the use of certain optional arguments, with some option values being assigned directly within the program text and by reading values from a data file. The options structure is declared and initialized by nag_opt_init (e04xxc). A value is then assigned directly to ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}$ and three further options are read from the data file by use of nag_opt_read (e04xyc). The memory freeing function nag_opt_free (e04xzc) is used to free the memory assigned to the pointers in the options structure. You must not use the standard C function free() for this purpose.

### 9.1  Program Text

Program Text (e04gbce.c)

### 9.2  Program Data

Program Data (e04gbce.d)

Program Options (e04gbce.opt)

### 9.3  Program Results

Program Results (e04gbce.r)

## 10  Optional Arguments

A number of optional input and output arguments to nag_opt_lsq_deriv (e04gbc) are available through the structure argument options, type Nag_E04_Opt. An argument may be selected by assigning an appropriate value to the relevant structure member; those arguments not selected will be assigned default values. If no use is to be made of any of the optional arguments you should use the NAG defined null pointer, E04_DEFAULT, in place of options when calling nag_opt_lsq_deriv (e04gbc); the default settings will then be used for all arguments.
Before assigning values to options directly the structure must be initialized by a call to the function nag_opt_init (e04xxc). Values may then be assigned to the structure members in the normal C manner.
Optional argument settings may also be read from a text file using the function nag_opt_read (e04xyc) in which case initialization of the options structure will be performed automatically if not already done. Any subsequent direct assignment to the options structure must not be preceded by initialization.
If assignment of functions and memory to pointers in the options structure is required, this must be done directly in the calling program. They cannot be assigned using nag_opt_read (e04xyc).

### 10.1  Optional Argument Checklist and Default Values

For easy reference, the following list shows the members of options which are valid for nag_opt_lsq_deriv (e04gbc) together with their default values where relevant. The number $\epsilon$ is a generic notation for machine precision (see nag_machine_precision (X02AJC)).
| Type | Member | Default |
|------|--------|---------|
| Boolean | list | Nag_TRUE |
| Nag_PrintType | print_level | Nag_Soln_Iter |
| char | outfile[80] | stdout |
| void | (*print_fun)() | NULL |
| Boolean | deriv_check | Nag_TRUE |
| Integer | max_iter | $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5{\mathbf{n}}\right)$ |
| double | optim_tol | $\sqrt{\epsilon }$ |
| Nag_LinFun | minlin | Nag_Lin_Deriv |
| double | linesearch_tol | 0.9 (0.0 if ${\mathbf{n}}=1$) |
| double | step_max | 100000.0 |
| double * | s | size ${\mathbf{n}}$ |
| double * | v | size ${\mathbf{n}}×{\mathbf{n}}$ |
| Integer | tdv | ${\mathbf{n}}$ |
| Integer | grade | |
| Integer | iter | |
| Integer | nf | |

### 10.2  Description of the Optional Arguments

 list – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ the argument settings in the call to nag_opt_lsq_deriv (e04gbc) will be printed.
 print_level – Nag_PrintType Default $\text{}=\mathrm{Nag_Soln_Iter}$
On entry: the level of results printout produced by nag_opt_lsq_deriv (e04gbc). The following values are available:
- **Nag_NoPrint** – No output.
- **Nag_Soln** – The final solution.
- **Nag_Iter** – One line of output for each iteration.
- **Nag_Soln_Iter** – The final solution and one line of output for each iteration.
- **Nag_Soln_Iter_Full** – The final solution and detailed printout at each iteration.
Details of each level of results printout are described in Section 10.3.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$, $\mathrm{Nag_Soln}$, $\mathrm{Nag_Iter}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$.
 outfile – const char[80] Default $\text{}=\mathtt{stdout}$
On entry: the name of the file to which results should be printed. If ${\mathbf{options}}\mathbf{.}{\mathbf{outfile}}\left[0\right]=\text{' \0 '}$ then the stdout stream is used.
 print_fun – pointer to function Default $\text{}=\text{}$ NULL
On entry: printing function defined by you; the prototype of ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ is
```c
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
```
See Section 10.3.1 for further details.
 deriv_check – Nag_Boolean Default $\text{}=\mathrm{Nag_TRUE}$
On entry: if ${\mathbf{options}}\mathbf{.}{\mathbf{deriv_check}}=\mathrm{Nag_TRUE}$ a check of the derivatives defined by lsqfun will be made at the starting point x. The derivative check is carried out by a call to nag_opt_lsq_check_deriv (e04yac). A starting point of $x=0$ or $x=1$ should be avoided if this test is to be meaningful, but if either of these starting points is necessary then nag_opt_lsq_check_deriv (e04yac) should be used to check lsqfun at a different point prior to calling nag_opt_lsq_deriv (e04gbc).
 max_iter – Integer Default $\text{}=\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(50,5{\mathbf{n}}\right)$
On entry: the limit on the number of iterations allowed before termination.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{max_iter}}\ge 0$.
 optim_tol – double Default $\text{}=\sqrt{\epsilon }$
On entry: the accuracy in $x$ to which the solution is required. If ${x}_{\mathrm{true}}$ is the true value of $x$ at the minimum, then ${x}_{\mathrm{sol}}$, the estimated position prior to a normal exit, is such that
 $\left\|{x}_{\mathrm{sol}}-{x}_{\mathrm{true}}\right\|<{\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}\times \left(1.0+\left\|{x}_{\mathrm{true}}\right\|\right),$
where $‖y‖=\sqrt{{\sum }_{j=1}^{n}{y}_{j}^{2}}$. For example, if the elements of ${x}_{\mathrm{sol}}$ are not much larger than 1.0 in modulus and if ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=1.0×{10}^{-5}$, then ${x}_{\mathrm{sol}}$ is usually accurate to about five decimal places. (For further details see Section 7.) If $F\left(x\right)$ and the variables are scaled roughly as described in Section 8 and $\epsilon$ is the machine precision, then a setting of order ${\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}=\sqrt{\epsilon }$ will usually be appropriate.
Constraint: $10\epsilon \le {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}<1.0$.
 minlin – Nag_LinFun Default $\text{}=\mathrm{Nag_Lin_Deriv}$
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ specifies whether the linear minimizations (i.e., minimizations of $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to ${\alpha }^{\left(k\right)}$) are to be performed by a function which just requires the evaluation of the ${f}_{i}\left(x\right)$, Nag_Lin_NoDeriv, or by a function which also requires the first derivatives of the ${f}_{i}\left(x\right)$, Nag_Lin_Deriv.
It will often be possible to evaluate the first derivatives of the residuals in about the same amount of computer time that is required for the evaluation of the residuals themselves – if this is so then nag_opt_lsq_deriv (e04gbc) should be called with ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}$ set to Nag_Lin_Deriv. However, if the evaluation of the derivatives takes more than about four times as long as the evaluation of the residuals, then a setting of Nag_Lin_NoDeriv will usually be preferable. If in doubt, use the default setting Nag_Lin_Deriv as it is slightly more robust.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_Deriv}$ or $\mathrm{Nag_Lin_NoDeriv}$.
 linesearch_tol – double Default $\text{}=0.9$. (If ${\mathbf{n}}=1$, default $\text{}=0.0$)
If ${\mathbf{options}}\mathbf{.}{\mathbf{minlin}}=\mathrm{Nag_Lin_NoDeriv}$ then the default value of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ will be changed from 0.9 to 0.5 if ${\mathbf{n}}>1$.
On entry: ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ specifies how accurately the linear minimizations are to be performed.
Every iteration of nag_opt_lsq_deriv (e04gbc) involves a linear minimization, i.e., minimization of $F\left({x}^{\left(k\right)}+{\alpha }^{\left(k\right)}{p}^{\left(k\right)}\right)$ with respect to ${\alpha }^{\left(k\right)}$. The minimum with respect to ${\alpha }^{\left(k\right)}$ will be located more accurately for small values of ${\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}$ (say 0.01) than for large values (say 0.9). Although accurate linear minimizations will generally reduce the number of iterations performed by nag_opt_lsq_deriv (e04gbc), they will increase the number of calls of lsqfun made each iteration. On balance it is usually more efficient to perform a low accuracy minimization.
Constraint: $0.0\le {\mathbf{options}}\mathbf{.}{\mathbf{linesearch_tol}}<1.0$.
 step_max – double Default $\text{}=100000.0$
On entry: an estimate of the Euclidean distance between the solution and the starting point supplied. (For maximum efficiency, a slight overestimate is preferable.) nag_opt_lsq_deriv (e04gbc) will ensure that, for each iteration,
 $\sum_{j=1}^{n}{\left({x}_{j}^{\left(k\right)}-{x}_{j}^{\left(k-1\right)}\right)}^{2}\le {\left({\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\right)}^{2}$
where $k$ is the iteration number. Thus, if the problem has more than one solution, nag_opt_lsq_deriv (e04gbc) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence ${x}^{\left(k\right)}$ entering a region where the problem is ill-behaved and can help avoid overflow in the evaluation of $F\left(x\right)$. However, an underestimate of ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}$ can lead to inefficiency.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}\ge {\mathbf{options}}\mathbf{.}{\mathbf{optim_tol}}$.
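The restriction enforced through ${\mathbf{options}}\mathbf{.}{\mathbf{step_max}}$ amounts to the following test on successive iterates (a plain C sketch; `step_within_bound` is an illustrative name, not a library routine):

```c
#include <stddef.h>

/* Iterate-to-iterate step restriction:
 * sum_j (x_j^(k) - x_j^(k-1))^2 <= step_max^2. */
static int step_within_bound(const double xk[], const double xkm1[],
                             size_t n, double step_max)
{
    double s = 0.0;
    for (size_t j = 0; j < n; j++) {
        double d = xk[j] - xkm1[j];
        s += d * d;
    }
    return s <= step_max * step_max;
}
```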
 s – double * Default memory $\text{}={\mathbf{n}}$
On entry: n values of memory will be automatically allocated by nag_opt_lsq_deriv (e04gbc) and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$. However, you may supply memory from the calling program.
On exit: the singular values of the Jacobian matrix at the final point. Thus ${\mathbf{options}}\mathbf{.}{\mathbf{s}}$ may be useful as information about the structure of your problem.
 v – double * Default memory $\text{}={\mathbf{n}}×{\mathbf{n}}$
On entry: ${\mathbf{n}}×{\mathbf{n}}$ values of memory will be automatically allocated by nag_opt_lsq_deriv (e04gbc) and this is the recommended method of use of ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. However, you may supply memory from the calling program.
On exit: the matrix $V$ associated with the singular value decomposition
 $J=US{V}^{\mathrm{T}}$
of the Jacobian matrix at the final point, stored by rows. This matrix may be useful for statistical purposes, since it is the matrix of orthonormalized eigenvectors of ${J}^{\mathrm{T}}J$.
 tdv – Integer Default $\text{}={\mathbf{n}}$
On entry: if memory is supplied then ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}$ must contain the last dimension of the array assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$ as declared in the function from which nag_opt_lsq_deriv (e04gbc) is called.
On exit: the trailing dimension used by ${\mathbf{options}}\mathbf{.}{\mathbf{v}}$. If the NAG default memory allocation has been used this value will be n.
Constraint: ${\mathbf{options}}\mathbf{.}{\mathbf{tdv}}\ge {\mathbf{n}}$.
 grade – Integer
On exit: the grade of the Jacobian at the final point. nag_opt_lsq_deriv (e04gbc) estimates the dimension of the subspace for which the Jacobian matrix can be used as a valid approximation to the curvature (see Gill and Murray (1978)); this estimate is called the grade.
 iter – Integer
On exit: the number of iterations which have been performed in nag_opt_lsq_deriv (e04gbc).
 nf – Integer
On exit: the number of times the residuals have been evaluated (i.e., the number of calls of lsqfun).

### 10.3  Description of Printed Output

The level of printed output can be controlled with the structure members ${\mathbf{options}}\mathbf{.}{\mathbf{list}}$ and ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ (see Section 10.2). If ${\mathbf{options}}\mathbf{.}{\mathbf{list}}=\mathrm{Nag_TRUE}$ then the argument values to nag_opt_lsq_deriv (e04gbc) are listed, whereas the printout of results is governed by the value of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$. The default of ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter}$ provides a single line of output at each iteration and the final result. This section describes all of the possible levels of results printout available from nag_opt_lsq_deriv (e04gbc).
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Iter}$ or $\mathrm{Nag_Soln_Iter}$ a single line of output is produced on completion of each iteration, this gives the following values:
- **Itn** – the current iteration number $k$.
- **Nfun** – the cumulative number of calls to lsqfun.
- **Objective** – the value of the objective function, $F\left({x}^{\left(k\right)}\right)$.
- **Norm g** – the Euclidean norm of the gradient of $F\left({x}^{\left(k\right)}\right)$.
- **Norm x** – the Euclidean norm of ${x}^{\left(k\right)}$.
- **Norm(x(k-1)-x(k))** – the Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
- **Step** – the step ${\alpha }^{\left(k\right)}$ taken along the computed search direction ${p}^{\left(k\right)}$.
When ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln_Iter_Full}$ more detailed results are given at each iteration. Additional values output are:
 Grade – the grade of the Jacobian matrix (see the description of ${\mathbf{options}}\mathbf{.}{\mathbf{grade}}$, Section 8).
 x – the current point ${x}^{\left(k\right)}$.
 g – the current gradient of $F\left({x}^{\left(k\right)}\right)$.
 Singular values – the singular values of the current approximation to the Jacobian matrix.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_Soln}$, $\mathrm{Nag_Soln_Iter}$ or $\mathrm{Nag_Soln_Iter_Full}$ the final result consists of:
 x – the final point ${x}^{*}$.
 g – the gradient of $F$ at the final point.
 Residuals – the values of the residuals ${f}_{i}$ at the final point.
 Sum of squares – the value of $F\left({x}^{*}\right)$, the sum of squares of the residuals at the final point.
If ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}=\mathrm{Nag_NoPrint}$ then printout will be suppressed; you can print the final solution when nag_opt_lsq_deriv (e04gbc) returns to the calling program.

#### 10.3.1  Output of results via a user-defined printing function

You may also specify your own print function for output of iteration results and the final solution by use of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ function pointer, which has prototype
```
void (*print_fun)(const Nag_Search_State *st, Nag_Comm *comm);
```
The rest of this section can be skipped if the default printing facilities provide the required functionality.
When a user-defined function is assigned to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$ this will be called in preference to the internal print function of nag_opt_lsq_deriv (e04gbc). Calls to the user-defined function are again controlled by means of the ${\mathbf{options}}\mathbf{.}{\mathbf{print_level}}$ member. Information is provided through st and comm, the two structure arguments to ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The structure member $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}$ is relevant in this context. If $\mathbf{comm}\mathbf{\to }\mathbf{it_prt}=\mathrm{Nag_TRUE}$ then the results from the last iteration of nag_opt_lsq_deriv (e04gbc) are in the following members of st:
 m – Integer
The number of residuals.
 n – Integer
The number of variables.
 x – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the current point ${x}^{\left(k\right)}$.
 fvec – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{m}$ memory locations holding the values of the residuals ${f}_{i}$ at the current point ${x}^{\left(k\right)}$.
 fjac – double *
Points to $\mathbf{st}\mathbf{\to }\mathbf{m}×\mathbf{st}\mathbf{\to }\mathbf{tdfjac}$ memory locations. $\mathbf{st}\mathbf{\to }\mathbf{fjac}\left[\left(\mathit{i}-1\right)×\mathbf{st}\mathbf{\to }\mathbf{tdfjac}+\left(\mathit{j}-1\right)\right]$ contains the value of $\frac{\partial {f}_{\mathit{i}}}{\partial {x}_{\mathit{j}}}$, for $\mathit{i}=1,2,\dots ,m$ and $\mathit{j}=1,2,\dots ,n$ at the current point ${x}^{\left(k\right)}$.
 tdfjac – Integer
The trailing dimension for $\mathbf{st}\mathbf{\to }\mathbf{fjac}$.
 step – double
The step ${\alpha }^{\left(k\right)}$ taken along the search direction ${p}^{\left(k\right)}$.
 xk_norm – double
The Euclidean norm of ${x}^{\left(k-1\right)}-{x}^{\left(k\right)}$.
 g – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the gradient of $F$ at the current point ${x}^{\left(k\right)}$.
 grade – Integer
The grade of the Jacobian matrix.
 s – double *
Points to the $\mathbf{st}\mathbf{\to }\mathbf{n}$ memory locations holding the singular values of the current Jacobian.
 iter – Integer
The number of iterations, $k$, performed by nag_opt_lsq_deriv (e04gbc).
 nf – Integer
The cumulative number of calls made to lsqfun.
The relevant members of the structure comm are:
 it_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the result of the current iteration.
 sol_prt – Nag_Boolean
Will be Nag_TRUE when the print function is called with the final result.
 user – double *
 iuser – Integer *
 p – Pointer
Pointers for communication of user information. If used, they must be allocated memory either before entry to nag_opt_lsq_deriv (e04gbc) or during a call to lsqfun or ${\mathbf{options}}\mathbf{.}{\mathbf{print_fun}}$. The type Pointer is void * with a C compiler that defines void *, and char * otherwise.