
nag_opt_bounds_quasi_func_easy (e04jy) is an easy-to-use quasi-Newton algorithm for finding a minimum of a function $F({x}_{1},{x}_{2},\dots ,{x}_{n})$, subject to fixed upper and lower bounds on the independent variables ${x}_{1},{x}_{2},\dots ,{x}_{n}$, using function values only.

It is intended for functions which are continuous and which have continuous first and second derivatives (although it will usually work even if the derivatives have occasional discontinuities).

nag_opt_bounds_quasi_func_easy (e04jy) is applicable to problems of the form:

$$\mathrm{Minimize}\ F({x}_{1},{x}_{2},\dots ,{x}_{n})\quad \text{subject to}\quad {l}_{j}\le {x}_{j}\le {u}_{j},\quad j=1,2,\dots ,n$$

when derivatives of $F\left(x\right)$ are unavailable.

Special provision is made for problems which actually have no bounds on the ${x}_{j}$, problems which have only non-negativity bounds, and problems in which ${l}_{1}={l}_{2}=\cdots ={l}_{n}$ and ${u}_{1}={u}_{2}=\cdots ={u}_{n}$. You must supply a function to calculate the value of $F\left(x\right)$ at any point $x$.

From a starting point you supply, a sequence of feasible points is generated, on the basis of estimates of the gradient and the curvature of $F\left(x\right)$, which is intended to converge to a local minimum of the constrained function. An attempt is made to verify that the final point is a minimum.

A typical iteration starts at the current point $x$ where ${n}_{z}$ (say) variables are free from both their bounds. The projected gradient vector ${g}_{z}$, whose elements are finite difference approximations to the derivatives of $F\left(x\right)$ with respect to the free variables, is known. A unit lower triangular matrix $L$ and a diagonal matrix $D$ (both of dimension ${n}_{z}$), such that $LD{L}^{\mathrm{T}}$ is a positive definite approximation of the matrix of second derivatives with respect to the free variables (i.e., the projected Hessian), are also held. The equations

$$LD{L}^{\mathrm{T}}{p}_{z}=-{g}_{z}$$

are solved to give a search direction ${p}_{z}$, which is expanded to an $n$-vector $p$ by an insertion of appropriate zero elements. Then $\alpha$ is found such that $F(x+\alpha p)$ is approximately a minimum (subject to the fixed bounds) with respect to $\alpha$; $x$ is replaced by $x+\alpha p$, and the matrices $L$ and $D$ are updated so as to be consistent with the change produced in the estimated gradient by the step $\alpha p$. If any variable actually reaches a bound during the search along $p$, it is fixed and ${n}_{z}$ is reduced for the next iteration. Most iterations calculate ${g}_{z}$ using forward differences, but central differences are used when they seem necessary.
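Because $L$ is unit lower triangular and $D$ diagonal, the system $LD{L}^{\mathrm{T}}{p}_{z}=-{g}_{z}$ splits into a forward substitution, a diagonal scaling and a back substitution. The following pure-Python sketch illustrates that linear-algebra step only; it is not NAG's implementation, and the helper name `solve_ldlt` is invented here:

```python
def solve_ldlt(L, d, g):
    """Solve L*D*L^T p = -g, where L is unit lower triangular
    (list of rows) and d holds the diagonal of D."""
    n = len(g)
    # Forward substitution: L y = -g
    y = [0.0] * n
    for i in range(n):
        y[i] = -g[i] - sum(L[i][j] * y[j] for j in range(i))
    # Diagonal scaling: D z = y
    z = [y[i] / d[i] for i in range(n)]
    # Back substitution: L^T p = z
    p = [0.0] * n
    for i in reversed(range(n)):
        p[i] = z[i] - sum(L[j][i] * p[j] for j in range(i + 1, n))
    return p
```

Holding the factors $L$ and $D$ rather than the projected Hessian itself is what makes the positive definite update after each step cheap.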

There are two sets of convergence criteria – a weaker and a stronger. Whenever the weaker criteria are satisfied, the Lagrange multipliers are estimated for all the active constraints. If any Lagrange multiplier estimate is significantly negative, then one of the variables associated with a negative Lagrange multiplier estimate is released from its bound and the next search direction is computed in the extended subspace (i.e., ${n}_{z}$ is increased). Otherwise minimization continues in the current subspace provided that this is practicable. When it is not, or when the stronger convergence criteria are already satisfied, then, if one or more Lagrange multiplier estimates are close to zero, a slight perturbation is made in the values of the corresponding variables in turn until a lower function value is obtained. The normal algorithm is then resumed from the perturbed point.

If a saddle point is suspected, a local search is carried out with a view to moving away from the saddle point. A local search is also performed when a point is found which is thought to be a constrained minimum.

Gill P E and Murray W (1976) Minimization subject to bounds on the variables *NPL Report NAC 72* National Physical Laboratory

- 1: ibound – int64/int32/nag_int scalar
- Indicates whether the facility for dealing with bounds of special forms is to be used. It must be set to one of the following values:
- ${\mathbf{ibound}}=0$
- If you are supplying all the ${l}_{j}$ and ${u}_{j}$ individually.
- ${\mathbf{ibound}}=1$
- If there are no bounds on any ${x}_{j}$.
- ${\mathbf{ibound}}=2$
- If all the bounds are of the form $0\le {x}_{j}$.
- ${\mathbf{ibound}}=3$
- If ${l}_{1}={l}_{2}=\dots ={l}_{n}$ and ${u}_{1}={u}_{2}=\dots ={u}_{n}$.

- 2: funct1 – function handle or string containing name of m-file
- You must supply funct1 to calculate the value of the function $F\left(x\right)$ at any point $x$. It should be tested separately before being used with nag_opt_bounds_quasi_func_easy (e04jy) (see the E04 Chapter Introduction).

  [fc, user] = funct1(n, xc, user)

**Input Parameters**
- 1: n – int64/int32/nag_int scalar
- The number $n$ of variables.
- 2: xc(n) – double array
- The point $x$ at which the function value is required.
- 3: user – Any MATLAB object
- funct1 is called from nag_opt_bounds_quasi_func_easy (e04jy) with the object supplied to nag_opt_bounds_quasi_func_easy (e04jy).

**Output Parameters**
- 1: fc – double scalar
- The value of the function $F$ at the current point $x$.
- 2: user – Any MATLAB object

- 3: bl(n) – double array
- The lower bounds ${l}_{j}$. If ibound is set to $0$, you must set ${\mathbf{bl}}\left(j\right)$ to ${l}_{j}$, for $j=1,2,\dots ,n$. (If a lower bound is not specified for a particular ${x}_{j}$, the corresponding ${\mathbf{bl}}\left(j\right)$ should be set to $-{10}^{6}$.)
- 4: bu(n) – double array
- The upper bounds ${u}_{j}$. If ibound is set to $0$, you must set ${\mathbf{bu}}\left(j\right)$ to ${u}_{j}$, for $j=1,2,\dots ,n$. (If an upper bound is not specified for a particular ${x}_{j}$, the corresponding ${\mathbf{bu}}\left(j\right)$ should be set to ${10}^{6}$.)
- 5: x(n) – double array
- ${\mathbf{x}}\left(j\right)$ must be set to an estimate of the $j$th component of the position of the minimum, for $j=1,2,\dots ,n$.
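The four ibound conventions can be made concrete with a small sketch (a hypothetical helper in Python, not part of the toolbox); the value ${10}^{6}$ mirrors the document's stand-in for an absent bound:

```python
def expand_bounds(ibound, n, bl=None, bu=None):
    """Illustrative expansion of the four ibound conventions into
    full lower/upper bound vectors (not NAG's internal code)."""
    BIG = 1.0e6  # the document's stand-in for "unbounded"
    if ibound == 0:   # all l_j and u_j supplied individually
        return list(bl), list(bu)
    if ibound == 1:   # no bounds on any x_j
        return [-BIG] * n, [BIG] * n
    if ibound == 2:   # non-negativity bounds only: 0 <= x_j
        return [0.0] * n, [BIG] * n
    if ibound == 3:   # uniform bounds: bl(1), bu(1) apply to every x_j
        return [bl[0]] * n, [bu[0]] * n
    raise ValueError("ibound must be 0, 1, 2 or 3")
```

For example, `expand_bounds(3, 2, [1.5], [2.5])` applies the single pair of bounds to both variables.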

- 1: n – int64/int32/nag_int scalar
- *Default*: the dimension of the arrays bl, bu, x. (An error is raised if these dimensions are not equal.) The number $n$ of independent variables.
- 2: liw – int64/int32/nag_int scalar
- The dimension of the array iw as declared in the (sub)program from which nag_opt_bounds_quasi_func_easy (e04jy) is called.
- 3: lw – int64/int32/nag_int scalar
- *Default*: $\mathrm{max}\phantom{\rule{0.125em}{0ex}}({\mathbf{n}}\times ({\mathbf{n}}-1)/2+12\times {\mathbf{n}},13)$. The dimension of the array w as declared in the (sub)program from which nag_opt_bounds_quasi_func_easy (e04jy) is called.
- 4: user – Any MATLAB object
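The workspace sizes implied by the default for lw and by the entry check ${\mathbf{liw}}\ge {\mathbf{n}}+2$ (see [Error Indicators and Warnings]) can be computed directly. A sketch, with an invented helper name:

```python
def workspace_lengths(n):
    """Minimum liw and lw implied by the entry checks:
    liw >= n + 2 and lw >= max(n*(n-1)/2 + 12*n, 13)."""
    liw = n + 2
    lw = max(n * (n - 1) // 2 + 12 * n, 13)
    return liw, lw
```

For the four-variable example later in this document this gives liw = 6 and lw = 54.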


- 1: bl(n) – double array
- The lower bounds actually used by nag_opt_bounds_quasi_func_easy (e04jy).
- 2: bu(n) – double array
- The upper bounds actually used by nag_opt_bounds_quasi_func_easy (e04jy).
- 3: x(n) – double array
- The final point $x$. If ${\mathbf{ifail}}={\mathbf{0}}$ on exit, ${\mathbf{x}}\left(j\right)$ is the $j$th component of the estimated position of the minimum.
- 4: f – double scalar
- The function value at the final point in x.
- 5: iw(liw) – int64/int32/nag_int array
- If ${\mathbf{ifail}}={\mathbf{0}}$, ${\mathbf{3}}$ or ${\mathbf{5}}$, the first n elements of iw contain information about which variables are currently on their bounds and which are free. Specifically, if ${x}_{i}$ is:
  - fixed on its upper bound, ${\mathbf{iw}}\left(i\right)$ is $-1$;
  - fixed on its lower bound, ${\mathbf{iw}}\left(i\right)$ is $-2$;
  - effectively a constant (i.e., ${l}_{j}={u}_{j}$), ${\mathbf{iw}}\left(i\right)$ is $-3$;
  - free, ${\mathbf{iw}}\left(i\right)$ gives its position in the sequence of free variables.
- 6: w(lw) – double array
- If ${\mathbf{ifail}}={\mathbf{0}}$, ${\mathbf{3}}$ or ${\mathbf{5}}$, ${\mathbf{w}}\left(i\right)$ contains a finite difference approximation to the $i$th element of the projected gradient vector ${g}_{z}$, for $i=1,2,\dots ,{\mathbf{n}}$. In addition, ${\mathbf{w}}\left({\mathbf{n}}+1\right)$ contains an estimate of the condition number of the projected Hessian matrix (i.e., $k$). The rest of the array is used as workspace.
- 7: user – Any MATLAB object
- 8: ifail – int64/int32/nag_int scalar
- ${\mathbf{ifail}}={\mathbf{0}}$ unless the function detects an error (see [Error Indicators and Warnings]).
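A small, hypothetical decoder for the exit values of iw described above (illustrative only, not a toolbox function):

```python
def describe_variable(iw_i):
    """Interpret iw(i) on exit when ifail = 0, 3 or 5."""
    if iw_i == -1:
        return "fixed on its upper bound"
    if iw_i == -2:
        return "fixed on its lower bound"
    if iw_i == -3:
        return "effectively a constant (l_j = u_j)"
    return "free (position %d among the free variables)" % iw_i
```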

Errors or warnings detected by the function:

Cases prefixed with `W` are classified as warnings and
do not generate an error of type NAG:error_*n*. See nag_issue_warnings.

ifail = 1

- On entry, ${\mathbf{n}}<1$, or ${\mathbf{ibound}}<0$, or ${\mathbf{ibound}}>3$, or ${\mathbf{ibound}}=0$ and ${\mathbf{bl}}\left(j\right)>{\mathbf{bu}}\left(j\right)$ for some $j$, or ${\mathbf{ibound}}=3$ and ${\mathbf{bl}}\left(1\right)>{\mathbf{bu}}\left(1\right)$, or ${\mathbf{liw}}<{\mathbf{n}}+2$, or ${\mathbf{lw}}<\mathrm{max}\phantom{\rule{0.125em}{0ex}}(13,12\times {\mathbf{n}}+{\mathbf{n}}\times ({\mathbf{n}}-1)/2)$.

ifail = 2

- There have been $400\times n$ function evaluations, yet the algorithm does not seem to be converging. The calculations can be restarted from the final point held in x. The error may also indicate that $F\left(x\right)$ has no minimum.

`W` ifail = 3

- The conditions for a minimum have not all been met but a lower point could not be found and the algorithm has failed.

ifail = 4

- An overflow has occurred during the computation. This is an unlikely failure, but if it occurs you should restart at the latest point given in x.

`W` ifail = 5
`W` ifail = 6
`W` ifail = 7
`W` ifail = 8

- There is some doubt about whether the point $x$ found by nag_opt_bounds_quasi_func_easy (e04jy) is a minimum. The degree of confidence in the result decreases as ifail increases. Thus, when ${\mathbf{ifail}}={\mathbf{5}}$ it is probable that the final $x$ gives a good estimate of the position of a minimum, but when ${\mathbf{ifail}}={\mathbf{8}}$ it is very unlikely that the function has found a minimum.

ifail = 9

- In the search for a minimum, the modulus of one of the variables has become very large ($\sim {10}^{6}$). This indicates that there is a mistake in funct1, that your problem has no finite solution, or that the problem needs rescaling (see Section [Further Comments]).

ifail = 10

- The computed set of forward-difference intervals (stored in ${\mathbf{w}}\left(9\times {\mathbf{n}}+1\right),{\mathbf{w}}\left(9\times {\mathbf{n}}+2\right),\dots ,{\mathbf{w}}\left(10\times {\mathbf{n}}\right)$) is such that ${\mathbf{x}}\left(i\right)+{\mathbf{w}}\left(9\times {\mathbf{n}}+i\right)\le {\mathbf{x}}\left(i\right)$ for some $i$. This is an unlikely failure, but if it occurs you should attempt to select another starting point.
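This failure reflects a basic floating-point effect: if a difference interval is smaller than the spacing of representable numbers near ${\mathbf{x}}\left(i\right)$, adding it leaves the value unchanged. A minimal Python illustration (this is the underlying effect only, not NAG's interval formula):

```python
import sys

eps = sys.float_info.epsilon   # machine precision (~2.2e-16 for double)
x = 1.0e8
h_too_small = eps * 0.1        # far below the float spacing near x
assert x + h_too_small == x    # forward-difference point coincides with x

h_safe = (eps ** 0.5) * abs(x)  # common rule of thumb: ~sqrt(eps)*|x|
assert x + h_safe > x
```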

If you are dissatisfied with the result (e.g., because ${\mathbf{ifail}}={\mathbf{5}}$, ${\mathbf{6}}$, ${\mathbf{7}}$ or ${\mathbf{8}}$), it is worth restarting the calculations from a different starting point (not the point at which the failure occurred) in order to avoid the region which caused the failure. If persistent trouble occurs and the gradient can be calculated, it may be advisable to change to a function which uses gradients (see the E04 Chapter Introduction).

A successful exit (${\mathbf{ifail}}={\mathbf{0}}$) is made from nag_opt_bounds_quasi_func_easy (e04jy) when ($\mathrm{B1}$, $\mathrm{B2}$ and $\mathrm{B3}$) or $\mathrm{B4}$ hold, and the local search confirms a minimum, where

- $\mathrm{B1}\equiv {\alpha}^{\left(k\right)}\times \Vert {p}^{\left(k\right)}\Vert <({x}_{\mathit{tol}}+\sqrt{\epsilon})\times (1.0+\Vert {x}^{\left(k\right)}\Vert )$
- $\mathrm{B2}\equiv |{F}^{\left(k\right)}-{F}^{(k-1)}|<({x}_{\mathit{tol}}^{2}+\epsilon )\times (1.0+|{F}^{\left(k\right)}|)$
- $\mathrm{B3}\equiv \Vert {g}_{z}^{\left(k\right)}\Vert <({\epsilon}^{1/3}+{x}_{\mathit{tol}})\times (1.0+|{F}^{\left(k\right)}|)$
- $\mathrm{B4}\equiv \Vert {g}_{z}^{\left(k\right)}\Vert <0.01\times \sqrt{\epsilon}$

(Here ${x}_{\mathit{tol}}=100\sqrt{\epsilon}$, $\epsilon$ is the machine precision and $\Vert .\Vert$ denotes the Euclidean norm. The vector ${g}_{z}$ is returned in the array w.)
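The criteria transcribe directly into code. The sketch below is illustrative only (a hypothetical helper; the caller supplies the scalar norms), showing how $\mathrm{B1}$–$\mathrm{B4}$ combine:

```python
import math

def successful_exit(eps, alpha_k, norm_p, norm_x, f_k, f_km1, norm_gz):
    """Combine the convergence tests: (B1 and B2 and B3) or B4."""
    xtol = 100.0 * math.sqrt(eps)
    b1 = alpha_k * norm_p < (xtol + math.sqrt(eps)) * (1.0 + norm_x)
    b2 = abs(f_k - f_km1) < (xtol ** 2 + eps) * (1.0 + abs(f_k))
    b3 = norm_gz < (eps ** (1.0 / 3.0) + xtol) * (1.0 + abs(f_k))
    b4 = norm_gz < 0.01 * math.sqrt(eps)
    return (b1 and b2 and b3) or b4
```

Note that a tiny projected gradient ($\mathrm{B4}$) alone is sufficient, even when the step and function-change tests fail.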

If ${\mathbf{ifail}}={\mathbf{0}}$, then the vector in x on exit, ${x}_{\mathrm{sol}}$, is almost certainly an estimate of the position of the minimum, ${x}_{\mathrm{true}}$, to the accuracy specified by ${x}_{\mathit{tol}}$.

If ${\mathbf{ifail}}={\mathbf{3}}$ or ${\mathbf{5}}$, ${x}_{\mathrm{sol}}$ may still be a good estimate of ${x}_{\mathrm{true}}$, but the following checks should be made. Let $k$ denote an estimate of the condition number of the projected Hessian matrix at ${x}_{\mathrm{sol}}$. (The value of $k$ is returned in ${\mathbf{w}}\left({\mathbf{n}}+1\right)$.) If

(i) the sequence $\left\{F\left({x}^{\left(k\right)}\right)\right\}$ converges to $F\left({x}_{\mathrm{sol}}\right)$ at a superlinear or a fast linear rate,

(ii) ${\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert}^{2}<10.0\times \epsilon$, and

(iii) $k<1.0/\Vert {g}_{z}\left({x}_{\mathrm{sol}}\right)\Vert$,

then it is almost certain that ${x}_{\mathrm{sol}}$ is a close approximation to the position of a minimum. When (ii) is true, then usually $F\left({x}_{\mathrm{sol}}\right)$ is a close approximation to $F\left({x}_{\mathrm{true}}\right)$.
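Checks (ii) and (iii) are mechanical and can be sketched as follows (check (i), the observed convergence rate, needs the iteration history and is not modelled here; the helper name is invented):

```python
def solution_checks(eps, norm_gz_sol, cond_k):
    """Checks (ii) and (iii) for an ifail = 3 or 5 exit:
    (ii)  ||g_z(x_sol)||^2 < 10.0 * eps
    (iii) cond_k < 1.0 / ||g_z(x_sol)||"""
    check_ii = norm_gz_sol ** 2 < 10.0 * eps
    check_iii = cond_k < 1.0 / norm_gz_sol
    return check_ii, check_iii
```

Here `norm_gz_sol` would be computed from the first n elements of w, and `cond_k` taken from ${\mathbf{w}}\left({\mathbf{n}}+1\right)$.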

When a successful exit is made then, for a computer with a mantissa of $t$ decimals, one would expect to get about $t/2-1$ decimals accuracy in $x$ and about $t-1$ decimals accuracy in $F$, provided the problem is reasonably well scaled.

The number of iterations required depends on the number of variables, the behaviour of $F\left(x\right)$ and the distance of the starting point from the solution. The number of operations performed in an iteration of nag_opt_bounds_quasi_func_easy (e04jy) is roughly proportional to ${n}^{2}$. In addition, each iteration makes at least $m+1$ calls of funct1, where $m$ is the number of variables not fixed on bounds. So, unless $F\left(x\right)$ can be evaluated very quickly, the run time will be dominated by the time spent in funct1.

Ideally the problem should be scaled so that at the solution the value of $F\left(x\right)$ and the corresponding values of ${x}_{1},{x}_{2},\dots ,{x}_{n}$ are each in the range $(-1,+1)$, and so that at points a unit distance away from the solution, $F$ is approximately a unit value greater than at the minimum. It is unlikely that you will be able to follow these recommendations very closely, but it is worth trying (by guesswork), as sensible scaling will reduce the difficulty of the minimization problem, so that nag_opt_bounds_quasi_func_easy (e04jy) will take less computer time.

Open in the MATLAB editor: nag_opt_bounds_quasi_func_easy_example

```
function nag_opt_bounds_quasi_func_easy_example
  ibound = int64(0);
  bl = [1; -2; -1000000; 1];
  bu = [3; 0; 1000000; 3];
  x = [3; -1; 0; 1];
  [blOut, buOut, xOut, f, iw, w, user, ifail] = ...
      nag_opt_bounds_quasi_func_easy(ibound, @funct1, bl, bu, x)

function [fc, user] = funct1(n, xc, user)
  fc = (xc(1)+10*xc(2))^2 + 5*(xc(3)-xc(4))^2 + ...
       (xc(2)-2*xc(3))^4 + 10*(xc(1)-xc(4))^4;
```

```
Warning: nag_opt_bounds_quasi_func_easy (e04jy) returned a warning indicator (5)
blOut =
1
-2
-1000000
1
buOut =
3
0
1000000
3
xOut =
1.0000
-0.0852
0.4093
1.0000
f =
2.4338
iw =
-2
1
2
-2
2
-2
w =
0
-0.0000
-0.0000
0
1.0000
1.5832
4.3435
0
0
0
0
0
1.0000
-0.0852
0.4093
1.0000
1.0000
-0.0852
0.4093
1.0000
0
-0.0015
-0.0050
0
-0.0000
-0.0000
0.0622
0
-0.0000
-0.0000
0.3462
0
-0.0000
-0.0000
0.3462
0
0.0000
0.0000
0.0000
0.0000
0
0
0
0
0
0
1.0000
1.0000
0
0
0.2953
-0.0000
-0.0000
5.9070
user =
0
ifail =
5
```
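As an independent sanity check of the example above (a sketch only: it assumes SciPy is available and uses its bound-constrained L-BFGS-B solver, a different quasi-Newton method from e04jy's), minimizing the same objective under the same bounds should reach a comparable function value:

```python
# Cross-check with a different bound-constrained quasi-Newton method.
# This confirms the example's objective and bounds are consistent; it
# does not reproduce e04jy's algorithm or its exact iterates.
from scipy.optimize import minimize

def funct1(x):
    return ((x[0] + 10.0 * x[1]) ** 2 + 5.0 * (x[2] - x[3]) ** 2
            + (x[1] - 2.0 * x[2]) ** 4 + 10.0 * (x[0] - x[3]) ** 4)

bounds = [(1.0, 3.0), (-2.0, 0.0), (-1.0e6, 1.0e6), (1.0, 3.0)]
res = minimize(funct1, [3.0, -1.0, 0.0, 1.0], method="L-BFGS-B",
               bounds=bounds)
print(res.fun)  # expected near the f = 2.4338 reported above
```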

Open in the MATLAB editor: e04jy_example

```
function e04jy_example
  ibound = int64(0);
  bl = [1; -2; -1000000; 1];
  bu = [3; 0; 1000000; 3];
  x = [3; -1; 0; 1];
  [blOut, buOut, xOut, f, iw, w, user, ifail] = ...
      e04jy(ibound, @funct1, bl, bu, x)

function [fc, user] = funct1(n, xc, user)
  fc = (xc(1)+10*xc(2))^2 + 5*(xc(3)-xc(4))^2 + ...
       (xc(2)-2*xc(3))^4 + 10*(xc(1)-xc(4))^4;
```

```
Warning: nag_opt_bounds_quasi_func_easy (e04jy) returned a warning indicator (5)
blOut =
1
-2
-1000000
1
buOut =
3
0
1000000
3
xOut =
1.0000
-0.0852
0.4093
1.0000
f =
2.4338
iw =
-2
1
2
-2
2
-2
w =
0
-0.0000
-0.0000
0
1.0000
1.5832
4.3435
0
0
0
0
0
1.0000
-0.0852
0.4093
1.0000
1.0000
-0.0852
0.4093
1.0000
0
-0.0015
-0.0050
0
-0.0000
-0.0000
0.0622
0
-0.0000
-0.0000
0.3462
0
-0.0000
-0.0000
0.3462
0
0.0000
0.0000
0.0000
0.0000
0
0
0
0
0
0
1.0000
1.0000
0
0
0.2953
-0.0000
-0.0000
5.9070
user =
0
ifail =
5
```

© The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013