

NAG Toolbox: nag_opt_bounds_mod_deriv_comp (e04kd)

Purpose

nag_opt_bounds_mod_deriv_comp (e04kd) is a comprehensive modified Newton algorithm for finding:
  • an unconstrained minimum of a function of several variables;
  • a minimum of a function of several variables subject to fixed upper and lower bounds on the variables.
First derivatives are required. The function is intended for functions which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

Syntax

[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = e04kd(funct, monit, eta, ibound, bl, bu, x, lh, iw, w, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'xtol', xtol, 'delta', delta, 'stepmx', stepmx)
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = nag_opt_bounds_mod_deriv_comp(funct, monit, eta, ibound, bl, bu, x, lh, iw, w, 'n', n, 'iprint', iprint, 'maxcal', maxcal, 'xtol', xtol, 'delta', delta, 'stepmx', stepmx)
Note: the interface to this routine has changed since earlier releases of the toolbox:
Mark 22: liw, lw have been removed from the interface.

Description

nag_opt_bounds_mod_deriv_comp (e04kd) is applicable to problems of the form:
Minimize F(x1, x2, …, xn)  subject to  lj ≤ xj ≤ uj,  j = 1, 2, …, n.
Special provision is made for unconstrained minimization (i.e., problems which actually have no bounds on the xj), problems which have only non-negativity bounds, and problems in which l1 = l2 = … = ln and u1 = u2 = … = un. It is possible to specify that a particular xj should be held constant. You must supply a starting point, and a funct to calculate the value of F(x) and its first derivatives ∂F/∂xj at any point x.
A typical iteration starts at the current point x where nz (say) variables are free from their bounds. The vector gz, whose elements are the derivatives of F(x) with respect to the free variables, is known. The matrix of second derivatives with respect to the free variables, H, is estimated by finite differences. (Note that gz and H are both of dimension nz.) The equations
(H + E) pz = -gz
are solved to give a search direction pz. (The matrix E is chosen so that H + E is positive definite.)
pz is then expanded to an n-vector p by the insertion of appropriate zero elements; α is found such that F(x + αp) is approximately a minimum (subject to the fixed bounds) with respect to α; and x is replaced by x + αp. (If a saddle point is found, a special search is carried out so as to move away from the saddle point.) If any variable actually reaches a bound, it is fixed and nz is reduced for the next iteration.
There are two sets of convergence criteria, a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., nz is increased). Otherwise minimization continues in the current subspace until the stronger convergence criteria are satisfied. If at this point there are no negative or near-zero Lagrange multiplier estimates, the process is terminated.
If you specify that the problem is unconstrained, nag_opt_bounds_mod_deriv_comp (e04kd) sets the lj to -10^6 and the uj to 10^6. Thus, provided that the problem has been sensibly scaled, no bounds will be encountered during the minimization process and nag_opt_bounds_mod_deriv_comp (e04kd) will act as an unconstrained minimization algorithm.
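As an illustration of the step computation just described, the following sketch approximates H by forward differences of a user-supplied gradient and solves (H + E)p = -g for a search direction. It is not the library's implementation: the function name newton_step, the use of chol and the simple diagonal shift tau*I standing in for E are all illustrative assumptions.
function p = newton_step(gradfun, x, delta)
  % Illustrative sketch only: approximate the Hessian of F at x by forward
  % differences of the gradient (columns spaced delta apart), then solve
  % (H + E)p = -g with E = tau*I increased until H + E is positive definite
  % (a simplified stand-in for the modified factorization used by e04kd).
  n = numel(x);
  g = gradfun(x);
  H = zeros(n);
  for j = 1:n
    e = zeros(n, 1);
    e(j) = delta;
    H(:, j) = (gradfun(x + e) - g) / delta;    % finite-difference column of H
  end
  H = (H + H')/2;                              % symmetrize
  tau = 0;
  while true
    [R, flag] = chol(H + tau*eye(n));          % flag > 0 means not positive definite
    if flag == 0
      break
    end
    tau = max(2*tau, sqrt(eps)*norm(H, 1));    % increase the diagonal shift
  end
  p = -(R \ (R' \ g));                         % search direction from (H + E)p = -g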

References

Gill P E and Murray W (1973) Safeguarded steplength algorithms for optimization using descent methods NPL Report NAC 37 National Physical Laboratory
Gill P E and Murray W (1974) Newton-type methods for unconstrained and linearly constrained optimization Math. Programming 7 311–350
Gill P E and Murray W (1976) Minimization subject to bounds on the variables NPL Report NAC 72 National Physical Laboratory

Parameters

Compulsory Input Parameters

1:     funct – function handle or string containing name of m-file
funct must evaluate the function F(x) and its first derivatives ∂F/∂xj at a specified point. (However, if you do not wish to calculate F or its first derivatives at a particular x, there is the option of setting a parameter to cause nag_opt_bounds_mod_deriv_comp (e04kd) to terminate immediately.)
[iflag, fc, gc, iw, w] = funct(iflag, n, xc, iw, w)

Input Parameters

1:     iflag – int64 scalar
Will have been set to 1 or 2. The value 1 indicates that only the first derivatives of F need be supplied, and the value 2 indicates that both F itself and its first derivatives must be calculated.
2:     n – int64 scalar
The number n of variables.
3:     xc(n) – double array
The point x at which the ∂F/∂xj, or F and the ∂F/∂xj, are required.
4:     iw(liw) – int64 array
5:     w(lw) – double array
funct is called with the same parameters iw, liw, w, lw as for nag_opt_bounds_mod_deriv_comp (e04kd). They are present so that, when other library functions require the solution of a minimization subproblem, constants needed for the function evaluation can be passed through iw and w. Similarly, you could use elements 3, 4, …, liw of iw and elements from max(8, 7×n + n×(n-1)/2) + 1 onwards of w for passing quantities to funct from the function which calls nag_opt_bounds_mod_deriv_comp (e04kd). However, because of the danger of mistakes in partitioning, it is recommended that you should pass information to funct via global variables and not use iw or w at all. In any case you must not change the first 2 elements of iw or the first max(8, 7×n + n×(n-1)/2) elements of w.

Output Parameters

1:     iflag – int64 scalar
If it is not possible to evaluate F or its first derivatives at the point given in xc (or if it is wished to stop the calculations for any other reason) you should reset iflag to a negative number and return control to nag_opt_bounds_mod_deriv_comp (e04kd). nag_opt_bounds_mod_deriv_comp (e04kd) will then terminate immediately, with ifail set to your setting of iflag.
2:     fc – double scalar
Unless iflag = 1 on entry or iflag is reset, funct must set fc to the value of the objective function F at the current point x.
3:     gc(n) – double array
Unless funct resets iflag, it must set gc(j) to the value of the first derivative ∂F/∂xj at the point x, for j = 1, 2, …, n.
4:     iw(liw) – int64 array
5:     w(lw) – double array
Note:  funct should be tested separately before being used in conjunction with nag_opt_bounds_mod_deriv_comp (e04kd).
2:     monit – function handle or string containing name of m-file
If iprint ≥ 0, you must supply monit which is suitable for monitoring the minimization process. monit must not change the values of any of its parameters.
If iprint < 0, a monit with the correct parameter list must still be supplied, although it will not be called.
[iw, w] = monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf, iw, w)

Input Parameters

1:     n – int64 scalar
The number n of variables.
2:     xc(n) – double array
The coordinates of the current point x.
3:     fc – double scalar
The value of F(x) at the current point x.
4:     gc(n) – double array
The value of ∂F/∂xj at the current point x, for j = 1, 2, …, n.
5:     istate(n) – int64 array
Information about which variables are currently fixed on their bounds and which are free.
If istate(j) is negative, xj is currently:
fixed on its upper bound if istate(j) = -1
fixed on its lower bound if istate(j) = -2
effectively a constant (i.e., lj = uj) if istate(j) = -3
If istate(j) is positive, its value gives the position of xj in the sequence of free variables.
6:     gpjnrm – double scalar
The Euclidean norm of the current projected gradient vector gz.
7:     cond – double scalar
The ratio of the largest to the smallest elements of the diagonal factor D of the approximated projected Hessian matrix. This quantity is usually a good estimate of the condition number of the projected Hessian matrix. (If no variables are currently free, cond is set to zero.)
8:     posdef – logical scalar
Specifies true or false according to whether or not the approximation to the second derivative matrix for the current subspace, H, is positive definite.
9:     niter – int64 scalar
The number of iterations (as outlined in Section [Description]) which have been performed by nag_opt_bounds_mod_deriv_comp (e04kd) so far.
10:   nf – int64 scalar
The number of evaluations of F(x) so far, i.e., the number of calls of funct with iflag set to 2. Each such call of funct also calculates the first derivatives of F. (In addition to these calls monitored by nf, funct is called with iflag set to 1 not more than n times per iteration.)
11:   iw(liw) – int64 array
12:   w(lw) – double array
As in funct, these parameters correspond to the parameters iw, liw, w, lw of nag_opt_bounds_mod_deriv_comp (e04kd). They are included in monit's parameter list primarily for when nag_opt_bounds_mod_deriv_comp (e04kd) is called by other library functions.

Output Parameters

1:     iw(liw) – int64 array
2:     w(lw) – double array
You should normally print fc, gpjnrm and cond to be able to compare the quantities mentioned in Section [Accuracy]. It is usually helpful to examine xc, posdef and nf too.
3:     eta – double scalar
Every iteration of nag_opt_bounds_mod_deriv_comp (e04kd) involves a linear minimization (i.e., minimization of F(x + αp) with respect to α). eta specifies how accurately these linear minimizations are to be performed. The minimum with respect to α will be located more accurately for small values of eta (say, 0.01) than for large values (say, 0.9).
Although accurate linear minimizations will generally reduce the number of iterations (and hence the number of calls of funct to estimate the second derivatives), they will tend to increase the number of calls of funct needed for each linear minimization. On balance, it is usually more efficient to perform a low accuracy linear minimization when n is small and a high accuracy minimization when n is large.
Suggested value:
  • eta = 0.5 if 1 < n < 10;
  • eta = 0.1 if 10 ≤ n ≤ 20;
  • eta = 0.01 if n > 20.
If n = 1, eta should be set to 0.0 (also when the problem is effectively one-dimensional even though n > 1; i.e., if for all except one of the variables the lower and upper bounds are equal).
Constraint: 0.0 ≤ eta < 1.0.
4:     ibound – int64 scalar
Indicates whether the problem is unconstrained or bounded. If there are bounds on the variables, ibound can be used to indicate whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
ibound = 0
If the variables are bounded and you are supplying all the lj and uj individually.
ibound = 1
If the problem is unconstrained.
ibound = 2
If the variables are bounded, but all the bounds are of the form 0 ≤ xj.
ibound = 3
If all the variables are bounded, and l1 = l2 = … = ln and u1 = u2 = … = un.
ibound = 4
If the problem is unconstrained. (The ibound = 4 option is provided for consistency with other functions. In nag_opt_bounds_mod_deriv_comp (e04kd) it produces the same effect as ibound = 1.)
Constraint: 0 ≤ ibound ≤ 4.
5:     bl(n) – double array
n, the dimension of the array, must satisfy the constraint n ≥ 1.
The fixed lower bounds lj.
If ibound is set to 0, you must set bl(j) to lj, for j = 1, 2, …, n. (If a lower bound is not specified for any xj, the corresponding bl(j) should be set to a large negative number, e.g., -10^6.)
If ibound is set to 3, you must set bl(1) to l1; nag_opt_bounds_mod_deriv_comp (e04kd) will then set the remaining elements of bl equal to bl(1).
If ibound is set to 1, 2 or 4, bl will be initialized by nag_opt_bounds_mod_deriv_comp (e04kd).
6:     bu(n) – double array
n, the dimension of the array, must satisfy the constraint n ≥ 1.
The fixed upper bounds uj.
If ibound is set to 0, you must set bu(j) to uj, for j = 1, 2, …, n. (If an upper bound is not specified for any variable, the corresponding bu(j) should be set to a large positive number, e.g., 10^6.)
If ibound is set to 3, you must set bu(1) to u1; nag_opt_bounds_mod_deriv_comp (e04kd) will then set the remaining elements of bu equal to bu(1).
If ibound is set to 1, 2 or 4, bu will be initialized by nag_opt_bounds_mod_deriv_comp (e04kd).
7:     x(n) – double array
n, the dimension of the array, must satisfy the constraint n ≥ 1.
x(j) must be set to a guess at the jth component of the position of the minimum, for j = 1, 2, …, n.
8:     lh – int64 scalar
The dimension of the array hesl as declared in the (sub)program from which nag_opt_bounds_mod_deriv_comp (e04kd) is called.
Constraint: lh ≥ max(n×(n-1)/2, 1).
9:     iw(liw) – int64 array
liw, the dimension of the array, must satisfy the constraint liw ≥ 2.
Constraint: liw ≥ 2.
10:   w(lw) – double array
lw, the dimension of the array, must satisfy the constraint lw ≥ max(7×n + n×(n-1)/2, 8).
Constraint: lw ≥ max(7×n + n×(n-1)/2, 8).
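The workspace lengths implied by these constraints follow directly from n. The lines below are an illustrative sketch only; with n = 4 they reproduce the sizes used in the example later in this document (lh = 6, lw = 34).
n = 4;                                 % number of variables
lh = int64(max(n*(n-1)/2, 1));         % length of hesl: max(6, 1) = 6
liw = 2;                               % minimum length of iw
lw = max(7*n + n*(n-1)/2, 8);          % minimum length of w: max(34, 8) = 34
iw = zeros(liw, 1, 'int64');
w = zeros(lw, 1);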

Optional Input Parameters

1:     n – int64 scalar
Default: the dimension of the arrays bl, bu, x. (An error is raised if these dimensions are not equal.)
The number n of independent variables.
Constraint: n ≥ 1.
2:     iprint – int64 scalar
The frequency with which monit is to be called.
iprint > 0
monit is called once every iprint iterations and just before exit from nag_opt_bounds_mod_deriv_comp (e04kd).
iprint = 0
monit is just called at the final point.
iprint < 0
monit is not called at all.
iprint should normally be set to a small positive number.
Default: 1
3:     maxcal – int64 scalar
The maximum permitted number of evaluations of F(x), i.e., the maximum permitted number of calls of funct with iflag set to 2. It should be borne in mind that, in addition to the calls of funct which are limited directly by maxcal, there will be calls of funct (with iflag set to 1) to evaluate only first derivatives.
Default: 50×n
Constraint: maxcal ≥ 1.
4:     xtol – double scalar
The accuracy in x to which the solution is required.
If xtrue is the true value of x at the minimum, then xsol, the estimated position before a normal exit, is such that ‖xsol - xtrue‖ < xtol × (1.0 + ‖xtrue‖), where ‖y‖ = sqrt(y1^2 + y2^2 + … + yn^2). For example, if the elements of xsol are not much larger than 1.0 in modulus, and if xtol is set to 10^-5, then xsol is usually accurate to about five decimal places. (For further details see Section [Accuracy].)
If the problem is scaled as described in Section [Scaling] and ε is the machine precision, then sqrt(ε) is probably the smallest reasonable choice for xtol. This is because, normally, to machine accuracy, F(x + sqrt(ε)ej) = F(x) for any j, where ej is the jth column of the identity matrix. If you set xtol to 0.0 (or any positive value less than ε), nag_opt_bounds_mod_deriv_comp (e04kd) will use 10.0 × sqrt(ε) instead of xtol.
Default: 0.0
Constraint: xtol ≥ 0.0.
5:     delta – double scalar
The differencing interval to be used for approximating the second derivatives of F(x). Thus, for the finite difference approximations, the first derivatives of F(x) are evaluated at points which are delta apart. If ε is the machine precision, then sqrt(ε) will usually be a suitable setting for delta. If you set delta to 0.0 (or to any positive value less than ε), nag_opt_bounds_mod_deriv_comp (e04kd) will automatically use sqrt(ε) as the differencing interval.
Default: 0.0
Constraint: delta ≥ 0.0.
6:     stepmx – double scalar
An estimate of the Euclidean distance between the solution and the starting point supplied by you. (For maximum efficiency a slight overestimate is preferable.)
nag_opt_bounds_mod_deriv_comp (e04kd) will ensure that, for each iteration,
‖x(k) - x(k-1)‖ ≤ stepmx,
where k is the iteration number. Thus, if the problem has more than one solution, nag_opt_bounds_mod_deriv_comp (e04kd) is most likely to find the one nearest to the starting point. On difficult problems, a realistic choice can prevent the sequence of x(k) entering a region where the problem is ill-behaved and can also help to avoid possible overflow in the evaluation of F(x). However, an underestimate of stepmx can lead to inefficiency.
Default: 100000.0
Constraint: stepmx ≥ xtol.
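The optional parameters are appended to the call as name-value pairs, as shown in the Syntax section. The call below is an illustrative sketch only; the particular values are assumptions chosen to satisfy the constraints above (e.g., stepmx ≥ xtol).
% Illustrative only: monitor every 5 iterations, allow up to 200 function
% evaluations, and use sqrt(eps) for both xtol and delta.
[bl, bu, x, hesl, hesd, istate, f, g, iw, w, ifail] = ...
    e04kd(@funct, @monit, eta, ibound, bl, bu, x, lh, iw, w, ...
          'iprint', int64(5), 'maxcal', int64(200), ...
          'xtol', sqrt(eps), 'delta', sqrt(eps), 'stepmx', 10.0);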

Input Parameters Omitted from the MATLAB Interface

liw lw

Output Parameters

1:     bl(n) – double array
The lower bounds actually used by nag_opt_bounds_mod_deriv_comp (e04kd), e.g., if ibound = 2, bl(1) = bl(2) = … = bl(n) = 0.0.
2:     bu(n) – double array
The upper bounds actually used by nag_opt_bounds_mod_deriv_comp (e04kd), e.g., if ibound = 2, bu(1) = bu(2) = … = bu(n) = 10^6.
3:     x(n) – double array
The final point x(k). Thus, if ifail = 0 on exit, x(j) is the jth component of the estimated position of the minimum.
4:     hesl(lh) – double array
During the determination of a direction pz (see Section [Description]), H + E is decomposed into the product LDL^T, where L is a unit lower triangular matrix and D is a diagonal matrix. (The matrices H, E, L and D are all of dimension nz, where nz is the number of variables free from their bounds. H consists of those rows and columns of the full estimated second derivative matrix which relate to free variables. E is chosen so that H + E is positive definite.)
hesl and hesd are used to store the factors L and D. The elements of the strict lower triangle of L are stored row by row in the first nz(nz-1)/2 positions of hesl. The diagonal elements of D are stored in the first nz positions of hesd. In the last factorization before a normal exit, the matrix E will be zero, so that hesl and hesd will contain, on exit, the factors of the final estimated second derivative matrix H. The elements of hesd are useful for deciding whether to accept the results produced by nag_opt_bounds_mod_deriv_comp (e04kd) (see Section [Accuracy]).
5:     hesd(n) – double array
During the determination of a direction pz (see Section [Description]), H + E is decomposed into the product LDL^T, where L is a unit lower triangular matrix and D is a diagonal matrix. (The matrices H, E, L and D are all of dimension nz, where nz is the number of variables free from their bounds. H consists of those rows and columns of the full estimated second derivative matrix which relate to free variables. E is chosen so that H + E is positive definite.)
hesl and hesd are used to store the factors L and D. The elements of the strict lower triangle of L are stored row by row in the first nz(nz-1)/2 positions of hesl. The diagonal elements of D are stored in the first nz positions of hesd. In the last factorization before a normal exit, the matrix E will be zero, so that hesl and hesd will contain, on exit, the factors of the final estimated second derivative matrix H. The elements of hesd are useful for deciding whether to accept the results produced by nag_opt_bounds_mod_deriv_comp (e04kd) (see Section [Accuracy]). (A short sketch showing how L and D can be unpacked from hesl and hesd is given after this parameter list.)
6:     istate(n) – int64 array
Information about which variables are currently on their bounds and which are free. If istate(j) is:
  • equal to -1, xj is fixed on its upper bound;
  • equal to -2, xj is fixed on its lower bound;
  • equal to -3, xj is effectively a constant (i.e., lj = uj);
  • positive, istate(j) gives the position of xj in the sequence of free variables.
7:     f – double scalar
The function value at the final point given in x.
8:     g(n) – double array
The first derivative vector corresponding to the final point given in x. The components of g corresponding to free variables should normally be close to zero.
9:     iw(liw) – int64 array
liw ≥ 2.
Communication array, used to store information between calls to nag_opt_bounds_mod_deriv_comp (e04kd).
10:   w(lw) – double array
lw ≥ max(7×n + n×(n-1)/2, 8).
Communication array, used to store information between calls to nag_opt_bounds_mod_deriv_comp (e04kd).
11:   ifail – int64 scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).
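As noted under hesl and hesd above, the factors L and D can be unpacked from the returned arrays. The sketch below is illustrative only; it assumes istate, hesl and hesd are the outputs of a completed call and rebuilds the final estimated projected Hessian (for which E = 0 after a normal exit).
% Illustrative unpacking of the LDL^T factors returned in hesl and hesd.
nz = nnz(istate > 0);                  % number of free variables
L = eye(nz);
k = 0;
for i = 2:nz                           % strict lower triangle of L, stored row by row
  L(i, 1:i-1) = hesl(k+1:k+i-1)';
  k = k + i - 1;
end
D = diag(hesd(1:nz));
H = L*D*L';                            % final estimated projected Hessian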

Error Indicators and Warnings

Note: nag_opt_bounds_mod_deriv_comp (e04kd) may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the function:

Cases prefixed with W are classified as warnings and do not generate an error of type NAG:error_n. See nag_issue_warnings.

W ifail < 0
A negative value of ifail indicates an exit from nag_opt_bounds_mod_deriv_comp (e04kd) because you have set iflag negative in funct. The value of ifail will be the same as your setting of iflag.
  ifail = 1
On entry, n < 1,
or maxcal < 1,
or eta < 0.0,
or eta ≥ 1.0,
or xtol < 0.0,
or delta < 0.0,
or stepmx < xtol,
or ibound < 0,
or ibound > 4,
or bl(j) > bu(j) for some j if ibound = 0,
or bl(1) > bu(1) if ibound = 3,
or lh < max(1, n×(n-1)/2),
or liw < 2,
or lw < max(8, 7×n + n×(n-1)/2).
(Note that if you have set xtol or delta to 0.0, nag_opt_bounds_mod_deriv_comp (e04kd) uses the default values and continues without failing.) When this exit occurs, no values will have been assigned to f or to the elements of hesl, hesd or g.
  ifail = 2
There have been maxcal function evaluations. If steady reductions in F(x) were monitored up to the point where this exit occurred, then the exit probably occurred simply because maxcal was set too small, so the calculations should be restarted from the final point held in x. This exit may also indicate that F(x) has no minimum.
W ifail = 3
The conditions for a minimum have not all been met, but a lower point could not be found.
Provided that, on exit, the first derivatives of F(x) with respect to the free variables are sufficiently small, and that the estimated condition number of the second derivative matrix is not too large, this error exit may simply mean that, although it has not been possible to satisfy the specified requirements, the algorithm has in fact found the minimum as far as the accuracy of the machine permits. Such a situation can arise, for instance, if xtol has been set so small that rounding errors in the evaluation of F(x) or its derivatives make it impossible to satisfy the convergence conditions.
If the estimated condition number of the second derivative matrix at the final point is large, it could be that the final point is a minimum, but that the smallest eigenvalue of the Hessian matrix is so close to zero that it is not possible to recognize the point as a minimum.
  ifail = 4
Not used. (This is done to make the significance of ifail = 5 similar for nag_opt_bounds_mod_deriv_comp (e04kd) and nag_opt_bounds_mod_deriv2_comp (e04lb).)
W ifail = 5
All the Lagrange multiplier estimates which are not indisputably positive lie relatively close to zero, but it is impossible either to continue minimizing on the current subspace or to find a feasible lower point by releasing and perturbing any of the fixed variables. You should investigate as for ifail = 3.
The values ifail = 2, 3 or 5 may also be caused by mistakes in funct, by the formulation of the problem or by an awkward function. If there are no such mistakes, it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure.

Accuracy

A successful exit (ifail = 0) is made from nag_opt_bounds_mod_deriv_comp (e04kd) when H(k) is positive definite and when (B1, B2 and B3) or B4 hold, where
B1: ‖α(k) × p(k)‖ < (xtol + sqrt(ε)) × (1.0 + ‖x(k)‖)
B2: |F(k) - F(k-1)| < (xtol^2 + ε) × (1.0 + |F(k)|)
B3: ‖gz(k)‖ < (ε^(1/3) + xtol) × (1.0 + |F(k)|)
B4: ‖gz(k)‖ < 0.01 × sqrt(ε).
(Quantities with superscript k are the values at the kth iteration of the quantities mentioned in Section [Description], ε is the machine precision and ‖.‖ denotes the Euclidean norm.)
If ifail = 0, then the vector in x on exit, xsol, is almost certainly an estimate of the position of the minimum, xtrue, to the accuracy specified by xtol.
If ifail = 3 or 5, xsol may still be a good estimate of xtrue, but the following checks should be made. Let the largest of the first nz elements of hesd be hesd(b), let the smallest be hesd(s), and define k = hesd(b)/hesd(s). The scalar k is usually a good estimate of the condition number of the projected Hessian matrix at xsol. If
(i) the sequence {F(x(k))} converges to F(xsol) at a superlinear or fast linear rate,
(ii) ‖gz(xsol)‖^2 < 10.0 × ε, and
(iii) k < 1.0/‖gz(xsol)‖,
then it is almost certain that xsol is a close approximation to the position of a minimum. When (ii) is true, then usually F(xsol) is a close approximation to F(xtrue). The quantities needed for these checks are all available via monit; in particular the value of cond in the last call of monit before exit gives k.
Further suggestions about confirmation of a computed solution are given in the E04 Chapter Introduction.
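Checks (ii) and (iii) can be applied directly to the arrays returned by the function. The sketch below is illustrative only, assuming istate, g and hesd are the outputs of a completed call that exited with ifail = 3 or 5.
% Illustrative evaluation of checks (ii) and (iii).
free = istate > 0;                      % variables free from their bounds
nz = nnz(free);
gz = g(free);                           % projected gradient at the final point
k = max(hesd(1:nz)) / min(hesd(1:nz));  % estimate of the condition number of the projected Hessian
checkII = norm(gz)^2 < 10.0*eps;        % condition (ii)
checkIII = k < 1.0/norm(gz);            % condition (iii)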

Further Comments

Timing

The number of iterations required depends on the number of variables, the behaviour of F(x), the accuracy demanded and the distance of the starting point from the solution. The number of multiplications performed in an iteration of nag_opt_bounds_mod_deriv_comp (e04kd) is nz^3/6 + O(nz^2). In addition, each iteration makes nz calls of funct (with iflag set to 1) in approximating the projected Hessian matrix, and at least one other call of funct (with iflag set to 2). So, unless F(x) and its first derivatives can be evaluated very quickly, the run time will be dominated by the time spent in funct.

Scaling

Ideally, the problem should be scaled so that, at the solution, F(x) and the corresponding values of the xj are each in the range (-1, +1), and so that at points one unit away from the solution F(x) differs from its value at the solution by approximately one unit. This will usually imply that the Hessian matrix at the solution is well-conditioned. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_mod_deriv_comp (e04kd) will take less computer time.

Unconstrained Minimization

If a problem is genuinely unconstrained and has been scaled sensibly, the following points apply:
(a) nz will always be n,
(b) hesl and hesd will be factors of the full estimated second derivative matrix with elements stored in the natural order,
(c) the elements of g should all be close to zero at the final point,
(d) the values of the istate(j) given by monit and on exit from nag_opt_bounds_mod_deriv_comp (e04kd) are unlikely to be of interest (unless they are negative, which would indicate that the modulus of one of the xj has reached 10^6 for some reason),
(e) monit's parameter gpjnrm simply gives the norm of the first derivative vector.
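The following is a minimal sketch of such an unconstrained call, using the two-variable Rosenbrock function as the objective; the objective, starting point and dummy monit are illustrative assumptions, not part of the library documentation.
function unconstrained_sketch
  n = 2;
  eta = 0.5;                            % suggested value for 1 < n < 10
  ibound = int64(1);                    % unconstrained: bl and bu are initialized by e04kd
  bl = zeros(n, 1);
  bu = zeros(n, 1);
  x = [-1.2; 1];                        % starting point
  lh = int64(max(n*(n-1)/2, 1));
  iw = zeros(2, 1, 'int64');
  w = zeros(max(7*n + n*(n-1)/2, 8), 1);
  [~, ~, xOut, ~, ~, ~, f, g, ~, ~, ifail] = ...
      e04kd(@rosen, @dummy_monit, eta, ibound, bl, bu, x, lh, iw, w, ...
            'iprint', int64(-1));       % monit is never called when iprint < 0
  fprintf('f = %g at x = (%g, %g), ifail = %d\n', f, xOut(1), xOut(2), ifail);

function [iflag, fc, gc] = rosen(iflag, n, xc)
  % Rosenbrock function and its first derivatives
  fc = 0;
  if (iflag ~= 1)
    fc = 100*(xc(2) - xc(1)^2)^2 + (1 - xc(1))^2;
  end
  gc = [-400*xc(1)*(xc(2) - xc(1)^2) - 2*(1 - xc(1));
         200*(xc(2) - xc(1)^2)];

function [] = dummy_monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf)
  % never called because iprint < 0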

Example

function nag_opt_bounds_mod_deriv_comp_example
eta = 0.5;
ibound = int64(0);
bl = [1;
     -2;
     -1000000;
     1];
bu = [3;
     0;
     1000000;
     3];
x = [3;
     -1;
     0;
     1];
lh = int64(6);
iw = [int64(0);0];
w = zeros(34,1);
[blOut, buOut, xOut, hesl, hesd, istate, f, g, iwOut, wOut, ifail] = ...
    nag_opt_bounds_mod_deriv_comp(@funct, @monit, eta, ibound, bl, bu, x, lh, iw, w)

function [iflag, fc, gc] = funct(iflag, n, xc)
  % Objective F(x) = (x1+10*x2)^2 + 5*(x3-x4)^2 + (x2-2*x3)^4 + 10*(x1-x4)^4
  % and its first derivatives. The function value is only computed when
  % iflag ~= 1; the gradient is always returned.
  gc = zeros(n, 1);
  fc = 0;

  if (iflag ~= 1)
    fc = (xc(1)+10*xc(2))^2 + 5*(xc(3)-xc(4))^2 + (xc(2)-2*xc(3))^4 + ...
         10*(xc(1)-xc(4))^4;
  end
  gc(1) = 2*(xc(1)+10*xc(2)) + 40*(xc(1)-xc(4))^3;
  gc(2) = 20*(xc(1)+10*xc(2)) + 4*(xc(2)-2*xc(3))^3;
  gc(3) = 10*(xc(3)-xc(4)) - 8*(xc(2)-2*xc(3))^3;
  gc(4) = 10*(xc(4)-xc(3)) - 40*(xc(1)-xc(4))^3;



function [] = monit(n, xc, fc, gc, istate, gpjnrm, cond, posdef, niter, nf)

  fprintf('\n Itn     Fn evals              Fn value            Norm of proj gradient\n');
  fprintf(' %3d      %5d    %20.4f      %20.4f\n', niter, nf, fc, gpjnrm);
  fprintf('\n J           X(J)                G(J)         Status\n');
  for j = 1:double(n)
    isj = istate(j);
    if (isj > 0)
      fprintf('%2d %16.4f%20.4f   %s\n', j, xc(j), gc(j), '    Free');
    elseif (isj == -1)
      fprintf('%2d %16.4f%20.4f   %s\n', j, xc(j), gc(j), '    Upper Bound');
    elseif (isj == -2)
      fprintf('%2d %16.4f%20.4f   %s\n', j, xc(j), gc(j), '    Lower Bound');
    elseif (isj == -3)
      fprintf('%2d %16.4f%20.4f   %s\n', j, xc(j), gc(j), '    Constant');
    end
  end
  if (cond ~= 0.0d0)
    if (cond > 1.0d6)
      fprintf('\nEstimated condition number of projected Hessian is more than 1.0e+6\n');
    else
      fprintf('\nEstimated condition number of projected Hessian = %10.2f\n', cond);
    end
    if ( not(posdef) )
%     The following statement is included so that this MONIT
%     can be used in conjunction with either of the functions
%     nag_opt_bounds_mod_deriv_comp or nag_opt_bounds_mod_deriv2_comp
      fprintf('\nProjected Hessian matrix is not positive definite\n');
    end
  end
 

 Itn     Fn evals              Fn value            Norm of proj gradient
   0          1                215.0000                  144.0139

 J           X(J)                G(J)         Status
 1           3.0000            306.0000       Upper Bound
 2          -1.0000           -144.0000       Free
 3           0.0000             -2.0000       Free
 4           1.0000           -310.0000       Lower Bound

Estimated condition number of projected Hessian =       3.83

 Itn     Fn evals              Fn value            Norm of proj gradient
   1          2                163.0642                  320.3345

 J           X(J)                G(J)         Status
 1           3.0000            320.3345       Free
 2          -0.2833             -0.0351       Free
 3           0.3311              0.0703       Free
 4           1.0000           -313.3106       Lower Bound

Estimated condition number of projected Hessian =       9.46

 Itn     Fn evals              Fn value            Norm of proj gradient
   2          3                 34.3864                   94.9936

 J           X(J)                G(J)         Status
 1           2.3327             94.9936       Free
 2          -0.2172             -0.0026       Free
 3           0.3565              0.0052       Free
 4           1.0000            -88.2372       Lower Bound

Estimated condition number of projected Hessian =       4.35

 Itn     Fn evals              Fn value            Norm of proj gradient
   3          4                  8.8929                   28.2250

 J           X(J)                G(J)         Status
 1           1.8870             28.2250       Free
 2          -0.1731             -0.0009       Free
 3           0.3742              0.0017       Free
 4           1.0000            -21.6542       Lower Bound

Estimated condition number of projected Hessian =       4.23

 Itn     Fn evals              Fn value            Norm of proj gradient
   4          5                  3.8068                    8.4415

 J           X(J)                G(J)         Status
 1           1.5881              8.4415       Free
 2          -0.1435             -0.0004       Free
 3           0.3861              0.0008       Free
 4           1.0000             -1.9952       Lower Bound

Estimated condition number of projected Hessian =       4.62

 Itn     Fn evals              Fn value            Norm of proj gradient
   5          6                  2.7680                    2.5810

 J           X(J)                G(J)         Status
 1           1.3847              2.5810       Free
 2          -0.1233             -0.0002       Free
 3           0.3941              0.0004       Free
 4           1.0000              3.7808       Lower Bound

Estimated condition number of projected Hessian =       9.60

 Itn     Fn evals              Fn value            Norm of proj gradient
   6          7                  2.5381                    0.8503

 J           X(J)                G(J)         Status
 1           1.2396              0.8503       Free
 2          -0.1090             -0.0001       Free
 3           0.3998              0.0002       Free
 4           1.0000              5.4513       Lower Bound

Estimated condition number of projected Hessian =      18.55

 Itn     Fn evals              Fn value            Norm of proj gradient
   7          8                  2.4702                    0.3609

 J           X(J)                G(J)         Status
 1           1.1165              0.3609       Free
 2          -0.0968             -0.0001       Free
 3           0.4047              0.0001       Free
 4           1.0000              5.8896       Lower Bound

Estimated condition number of projected Hessian =      27.45

 Itn     Fn evals              Fn value            Norm of proj gradient
   8          9                  2.4338                    0.0002

 J           X(J)                G(J)         Status
 1           1.0000              0.2953       Lower Bound
 2          -0.0852             -0.0001       Free
 3           0.4093              0.0002       Free
 4           1.0000              5.9069       Lower Bound

Estimated condition number of projected Hessian =       4.43

 Itn     Fn evals              Fn value            Norm of proj gradient
   9         10                  2.4338                    0.0000

 J           X(J)                G(J)         Status
 1           1.0000              0.2953       Lower Bound
 2          -0.0852             -0.0000       Free
 3           0.4093              0.0000       Free
 4           1.0000              5.9070       Lower Bound

Estimated condition number of projected Hessian =       4.43

 Itn     Fn evals              Fn value            Norm of proj gradient
  10         11                  2.4338                    0.0000

 J           X(J)                G(J)         Status
 1           1.0000              0.2953       Lower Bound
 2          -0.0852             -0.0000       Free
 3           0.4093              0.0000       Free
 4           1.0000              5.9070       Lower Bound

Estimated condition number of projected Hessian =       4.43
Warning: nag_opt_bounds_mod_deriv_comp (e04kd) returned a warning indicator (3) 

blOut =

           1
          -2
    -1000000
           1


buOut =

           3
           0
     1000000
           3


xOut =

    1.0000
   -0.0852
    0.4093
    1.0000


hesl =

   -0.0935
         0
   -0.1978
         0
         0
         0


hesd =

  209.8031
   47.3803
   45.5183
         0


istate =

                   -2
                    1
                    2
                   -2


f =

    2.4338


g =

    0.2953
   -0.0000
    0.0000
    5.9070


iwOut =

                   -1
                    3


wOut =

    1.0000
   -0.0852
    0.4093
    1.0000
    0.2953
    0.0000
   -0.0000
    5.9070
    0.2953
   -0.0000
    0.0000
    5.9070
         0
    0.0000
   -0.0000
         0
   -0.0000
    0.0000
    0.0001
         0
   -0.0000
    0.0000
   -0.0086
         0
         0
  209.8031
   49.2125
         0
   20.0000
         0
  -19.6062
         0
         0
  -10.0000


ifail =

                    3




© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013