e04nf solves general quadratic programming problems. It is not intended for large sparse problems.

Syntax

C#
public static void e04nf(
	int n,
	int nclin,
	double[,] a,
	double[] bl,
	double[] bu,
	double[] cvec,
	double[,] h,
	E04.E04NF_QPHESS qphess,
	int[] istate,
	double[] x,
	out int iter,
	out double obj,
	double[] ax,
	double[] clamda,
	E04.e04nfOptions options,
	out int ifail
)
Visual Basic
Public Shared Sub e04nf ( _
	n As Integer, _
	nclin As Integer, _
	a As Double(,), _
	bl As Double(), _
	bu As Double(), _
	cvec As Double(), _
	h As Double(,), _
	qphess As E04.E04NF_QPHESS, _
	istate As Integer(), _
	x As Double(), _
	<OutAttribute> ByRef iter As Integer, _
	<OutAttribute> ByRef obj As Double, _
	ax As Double(), _
	clamda As Double(), _
	options As E04.e04nfOptions, _
	<OutAttribute> ByRef ifail As Integer _
)
Visual C++
public:
static void e04nf(
	int n, 
	int nclin, 
	array<double,2>^ a, 
	array<double>^ bl, 
	array<double>^ bu, 
	array<double>^ cvec, 
	array<double,2>^ h, 
	E04::E04NF_QPHESS^ qphess, 
	array<int>^ istate, 
	array<double>^ x, 
	[OutAttribute] int% iter, 
	[OutAttribute] double% obj, 
	array<double>^ ax, 
	array<double>^ clamda, 
	E04::e04nfOptions^ options, 
	[OutAttribute] int% ifail
)
F#
static member e04nf : 
        n : int * 
        nclin : int * 
        a : float[,] * 
        bl : float[] * 
        bu : float[] * 
        cvec : float[] * 
        h : float[,] * 
        qphess : E04.E04NF_QPHESS * 
        istate : int[] * 
        x : float[] * 
        iter : int byref * 
        obj : float byref * 
        ax : float[] * 
        clamda : float[] * 
        options : E04.e04nfOptions * 
        ifail : int byref -> unit 

Parameters

n
Type: System.Int32
On entry: n, the number of variables.
Constraint: n>0.
nclin
Type: System.Int32
On entry: mL, the number of general linear constraints.
Constraint: nclin ≥ 0.
a
Type: System.Double[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint: dim1 ≥ max(1,nclin)
Note: the second dimension of the array a must be at least n if nclin>0 and at least 1 if nclin=0.
On entry: the ith row of a must contain the coefficients of the ith general linear constraint, for i=1,2,…,mL.
If nclin=0, a is not referenced.
bl
Type: System.Double[]
An array of size [n+nclin]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints in the following order. The first n elements of each array must contain the bounds on the variables, and the next mL elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., lj = -∞), set bl[j-1] ≤ -bigbnd, and to specify a nonexistent upper bound (i.e., uj = +∞), set bu[j-1] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j-1] = bu[j-1] = β, say, where |β| < bigbnd.
Constraints:
  • bl[j-1] ≤ bu[j-1], for j=1,2,…,n+nclin;
  • if bl[j-1] = bu[j-1] = β, |β| < bigbnd.
bu
Type: System.Double[]
An array of size [n+nclin]
On entry: bl must contain the lower bounds and bu the upper bounds, for all the constraints in the following order. The first n elements of each array must contain the bounds on the variables, and the next mL elements the bounds for the general linear constraints (if any). To specify a nonexistent lower bound (i.e., lj = -∞), set bl[j-1] ≤ -bigbnd, and to specify a nonexistent upper bound (i.e., uj = +∞), set bu[j-1] ≥ bigbnd; the default value of bigbnd is 10^20, but this may be changed by the optional parameter Infinite Bound Size. To specify the jth constraint as an equality, set bl[j-1] = bu[j-1] = β, say, where |β| < bigbnd.
Constraints:
  • bl[j-1] ≤ bu[j-1], for j=1,2,…,n+nclin;
  • if bl[j-1] = bu[j-1] = β, |β| < bigbnd.
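For illustration, the following C# sketch (hypothetical dimensions and bound values, with bigbnd at its default of 1.0e20) shows one way to assemble bl and bu for a problem with a free variable, a one-sided bound and a general constraint held as an equality:

// Illustrative sketch only: n, nclin and the numerical values are hypothetical.
int n = 3, nclin = 1;
double bigbnd = 1.0e20;              // default Infinite Bound Size

double[] bl = new double[n + nclin];
double[] bu = new double[n + nclin];

// Variable bounds: 0 <= x1 <= 5; x2 free; x3 >= -1 with no upper bound.
bl[0] = 0.0;     bu[0] = 5.0;
bl[1] = -bigbnd; bu[1] = bigbnd;
bl[2] = -1.0;    bu[2] = bigbnd;

// The single general linear constraint is held as an equality: a1^T x = 2.
bl[n] = 2.0;     bu[n] = 2.0;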
cvec
Type: System.Double[]
An array of size [dim1]
Note: the dimension of the array cvec must be at least n if the problem is of type LP, QP2 (the default) or QP4, and at least 1 otherwise.
On entry: the coefficients of the explicit linear term of the objective function when the problem is of type LP, QP2 (the default) and QP4.
If the problem is of type FP, QP1, or QP3, cvec is not referenced.
h
Type: System.Double[,]
An array of size [dim1, dim2]
Note: dim1 must satisfy the constraint:
  • if the problem is of type QP1, QP2 (the default), QP3 or QP4, dim1 ≥ n or at least the value of the optional parameter Hessian Rows;
  • if the problem is of type FP or LP, dim1 ≥ 1;
  • if iwsav[43]=1 or 2, dim1 ≥ 1;
  • otherwise dim1 ≥ n.
Note: the second dimension of the array h must be at least n if it is to be used to store H explicitly, and at least 1 otherwise.
On entry: may be used to store the quadratic term H of the QP objective function if desired. In some cases, you need not use h to store H explicitly (see the specification of method qphess). The elements of h are referenced only by method qphess. The number of rows of H is denoted by m, whose default value is n. (The optional parameter Hessian Rows may be used to specify a value of m < n.)
If the default version of qphess is used and the problem is of type QP1 or QP2 (the default), the first m rows and columns of h must contain the leading m by m rows and columns of the symmetric Hessian matrix H. Only the diagonal and upper triangular elements of the leading m rows and columns of h are referenced. The remaining elements need not be assigned.
If the default version of qphess is used and the problem is of type QP3 or QP4, the first m rows of h must contain an m by n upper trapezoidal factor of the symmetric Hessian matrix H^T H. The factor need not be of full rank, i.e., some of the diagonal elements may be zero. However, as a general rule, the larger the dimension of the leading nonsingular sub-matrix of h, the fewer iterations will be required. Elements outside the upper trapezoidal part of the first m rows of h need not be assigned.
In other situations, it may be desirable to compute Hx or H^T Hx without accessing h – for example, if H or H^T H is sparse or has special structure. The parameter h may then refer to any convenient array.
If the problem is of type FP or LP, h is not referenced.
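For the default problem type QP2 with the default qphess, for example, only the diagonal and upper triangle of the leading m rows and columns of h need be assigned. A minimal C# sketch with a hypothetical 3 by 3 Hessian (m = n = 3):

// Hypothetical symmetric Hessian; only the diagonal and upper triangle are
// referenced by the default qphess for problem types QP1 and QP2.
int n = 3;
double[,] h = new double[n, n];
h[0, 0] = 4.0; h[0, 1] = 1.0; h[0, 2] = 0.5;
               h[1, 1] = 3.0; h[1, 2] = 1.0;
                              h[2, 2] = 2.0;
// The strictly lower triangular elements need not be assigned.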
qphess
Type: NagLibrary.E04.E04NF_QPHESS
In general, you need not provide a version of qphess, because a ‘default’ method with name E04NFU/E54NFU is included in the Library. However, the algorithm of e04nf requires only the product of H or HTH and a vector x; and in some cases you may obtain increased efficiency by providing a version of qphess that avoids the need to define the elements of the matrices H or HTH explicitly.
qphess is not referenced if the problem is of type FP or LP, in which case qphess may be the method E04NFU/E54NFU.

A delegate of type E04NF_QPHESS.

istate
Type: System.Int32[]
An array of size [n+nclin]
On entry: need not be set if the (default) optional parameter Cold Start is used.
If the optional parameter Warm Start has been chosen, istate specifies the desired status of the constraints at the start of the feasibility phase. More precisely, the first n elements of istate refer to the upper and lower bounds on the variables, and the next mL elements refer to the general linear constraints (if any). Possible values for istate[j-1] are as follows:
istate[j-1]  Meaning
0  The corresponding constraint should not be in the initial working set.
1  The constraint should be in the initial working set at its lower bound.
2  The constraint should be in the initial working set at its upper bound.
3  The constraint should be in the initial working set as an equality. This value must not be specified unless bl[j-1]=bu[j-1].
The values -2, -1 and 4 are also acceptable but will be reset to zero by the method. If e04nf has been called previously with the same values of n and nclin, istate already contains satisfactory information. (See also the description of the optional parameter Warm Start.) The method also adjusts (if necessary) the values supplied in x to be consistent with istate. (A sketch of a typical warm-start set-up is given after this parameter's description.)
Constraint: -2 ≤ istate[j-1] ≤ 4, for j=1,2,…,n+nclin.
On exit: the status of the constraints in the working set at the point returned in x. The significance of each possible value of istate[j-1] is as follows:
istate[j-1]  Meaning
-2  The constraint violates its lower bound by more than the feasibility tolerance.
-1  The constraint violates its upper bound by more than the feasibility tolerance.
 0  The constraint is satisfied to within the feasibility tolerance, but is not in the working set.
 1  This inequality constraint is included in the working set at its lower bound.
 2  This inequality constraint is included in the working set at its upper bound.
 3  This constraint is included in the working set as an equality. This value of istate can occur only when bl[j-1]=bu[j-1].
 4  This corresponds to optimality being declared with x[j-1] being temporarily fixed at its current value. This value of istate can occur only when ifail=1 on exit.
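Relating to the Warm Start entry values described above, a minimal C# sketch of a hypothetical initial working set is:

// Hypothetical warm start: n = 3 variables, nclin = 2 general constraints.
int n = 3, nclin = 2;
int[] istate = new int[n + nclin];   // entries default to 0 (not in the working set)
istate[0] = 1;                       // x1 starts in the working set at its lower bound
istate[n] = 3;                       // first general constraint starts as an equality
                                     // (valid only if bl[n] == bu[n])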
x
Type: System.Double[]
An array of size [n]
On entry: an initial estimate of the solution.
On exit: the point at which e04nf terminated. If ifail=0, 1 or 4, x contains an estimate of the solution.
iter
Type: System.Int32%
On exit: the total number of iterations performed.
obj
Type: System.Double%
On exit: the value of the objective function at x if x is feasible, or the sum of infeasibilities at x otherwise. If the problem is of type FP and x is feasible, obj is set to zero.
ax
Type: System.Double[]
An array of size [max(1,nclin)]
On exit: the final values of the linear constraints Ax.
If nclin=0, ax is not referenced.
clamda
Type: System.Double[]
An array of size [n+nclin]
On exit: the values of the Lagrange multipliers for each constraint with respect to the current working set. The first n elements contain the multipliers for the bound constraints on the variables, and the next mL elements contain the multipliers for the general linear constraints (if any). If istate[j-1]=0 (i.e., constraint j is not in the working set), clamda[j-1] is zero. If x is optimal, clamda[j-1] should be non-negative if istate[j-1]=1, non-positive if istate[j-1]=2 and zero if istate[j-1]=4.
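As an illustrative (non-library) check of these sign conditions after a successful call, one might loop over the returned arrays as follows; the variable names simply mirror the parameters described above:

// Sketch: verify the sign of each multiplier against the working-set status.
for (int j = 0; j < n + nclin; j++)
{
    bool ok;
    if (istate[j] == 0)      ok = clamda[j] == 0.0;   // not in working set: multiplier is set to zero
    else if (istate[j] == 1) ok = clamda[j] >= 0.0;   // at lower bound: non-negative
    else if (istate[j] == 2) ok = clamda[j] <= 0.0;   // at upper bound: non-positive
    else                     ok = true;               // equality or temporarily fixed
    if (!ok) System.Console.WriteLine("Unexpected multiplier sign for constraint " + (j + 1));
}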
options
Type: NagLibrary.E04.e04nfOptions
An Object of type E04.e04nfOptions. Used to configure optional parameters to this method.
ifail
Type: System.Int32%
On exit: ifail=0 unless the method detects an error or a warning has been flagged (see [Error Indicators and Warnings]).

Description

e04nf is designed to solve a class of quadratic programming problems that are assumed to be stated in the following general form:
minimize_{x∈R^n} f(x)   subject to   l ≤ ( x ; Ax ) ≤ u,   (1)
where A is an mL by n matrix and f(x) may be specified in a variety of ways depending upon the particular problem to be solved. The available forms for f(x) are listed in Table 1, in which the prefixes FP, LP and QP stand for ‘feasible point’, ‘linear programming’ and ‘quadratic programming’ respectively and c is an n-element vector.
Problem type   f(x)                         Matrix H
FP             Not applicable               Not applicable
LP             c^T x                        Not applicable
QP1            (1/2) x^T H x                symmetric
QP2            c^T x + (1/2) x^T H x        symmetric
QP3            (1/2) x^T H^T H x            m by n upper trapezoidal
QP4            c^T x + (1/2) x^T H^T H x    m by n upper trapezoidal
Table 1
There is no restriction on H or H^T H apart from symmetry. If the quadratic function is convex, a global minimum is found; otherwise, a local minimum is found. The default problem type is QP2 and other objective functions are selected by using the optional parameter Problem Type. For problems of type FP, the objective function is omitted and the method attempts to find a feasible point for the set of constraints.
The constraints involving A are called the general constraints. Note that upper and lower bounds are specified for all the variables and for all the general constraints. An equality constraint can be specified by setting li = ui. If certain bounds are not present, the associated elements of l or u can be set to special values that will be treated as -∞ or +∞. (See the description of the optional parameter Infinite Bound Size.)
The defining feature of a quadratic function f(x) is that the second-derivative matrix ∇²f(x) (the Hessian matrix) is constant. For QP1 and QP2 (the default), ∇²f(x) = H; for QP3 and QP4, ∇²f(x) = H^T H; and for the LP case, ∇²f(x) = 0. If H is positive semidefinite, it is usually more efficient to use e04nc. If H is defined as the zero matrix, e04nf will still attempt to solve the resulting linear programming problem; however, this can be accomplished more efficiently by setting the optional parameter Problem Type=LP, or by using e04mf instead.
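As a quick check of this constancy (a standard calculation, not specific to e04nf), for the default QP2 objective:
f(x) = c^T x + (1/2) x^T H x,   ∇f(x) = c + Hx   (using the symmetry of H),   ∇²f(x) = H,
and replacing H by H^T H gives ∇²f(x) = H^T H for QP3 and QP4; since H^T H is positive semidefinite for any H, those two problem types always have a convex objective.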
You must supply an initial estimate of the solution.
In the QP case, you may supply H either explicitly as an m by n matrix, or implicitly in a method that computes the product Hx or H^T Hx for any given vector x.
In general, a successful run of e04nf will indicate one of three situations:
(i) a minimizer has been found;
(ii) the algorithm has terminated at a so-called dead-point; or
(iii) the problem has no bounded solution.
If a minimizer is found, and ∇²f(x) is positive definite or positive semidefinite, e04nf will obtain a global minimizer; otherwise, the solution will be a local minimizer (which may or may not be a global minimizer). A dead-point is a point at which the necessary conditions for optimality are satisfied but the sufficient conditions are not. At such a point, a feasible direction of decrease may or may not exist, so that the point is not necessarily a local solution of the problem. Verification of optimality in such instances requires further information, and is in general an NP-hard problem (see Pardalos and Schnitger (1988)). Termination at a dead-point can occur only if ∇²f(x) is not positive definite. If ∇²f(x) is positive semidefinite, the dead-point will be a weak minimizer (i.e., with a unique optimal objective value, but an infinite set of optimal x).
The method used by e04nf (see [Algorithmic Details]) is most efficient when many constraints or bounds are active at the solution.

References

Gill P E, Hammarling S, Murray W, Saunders M A and Wright M H (1986) Users' guide for LSSOL (Version 1.0) Report SOL 86-1 Department of Operations Research, Stanford University
Gill P E and Murray W (1978) Numerically stable methods for quadratic programming Math. Programming 14 349–372
Gill P E, Murray W, Saunders M A and Wright M H (1984) Procedures for optimization problems with a mixture of bounds and general linear constraints ACM Trans. Math. Software 10 282–298
Gill P E, Murray W, Saunders M A and Wright M H (1989) A practical anti-cycling procedure for linearly constrained optimization Math. Programming 45 437–474
Gill P E, Murray W, Saunders M A and Wright M H (1991) Inertia-controlling methods for general quadratic programming SIAM Rev. 33 1–36
Gill P E, Murray W and Wright M H (1981) Practical Optimization Academic Press
Pardalos P M and Schnitger G (1988) Checking local optimality in constrained quadratic programming is NP-hard Operations Research Letters 7 33–35

Error Indicators and Warnings

Note: e04nf may return useful information for one or more of the following detected errors or warnings.
Errors or warnings detected by the method:
Some error messages may refer to parameters that are dropped from this interface (LDA, LDH). In these cases, an error in another parameter has usually caused an incorrect value to be inferred.
ifail=1
The iterations were terminated at a dead-point. The necessary conditions for optimality are satisfied but the sufficient conditions are not. (The reduced gradient is negligible, the Lagrange multipliers are optimal, but HR is singular or there are some very small multipliers.) If ∇²f(x) is not positive definite, x is not necessarily a local solution of the problem and verification of optimality requires further information. If ∇²f(x) is positive semidefinite or the problem is of type LP, x gives the global minimum value of the objective function, but the final x is not unique.
ifail=2
The solution appears to be unbounded, i.e., the objective function is not bounded below in the feasible region. This value of ifail occurs if a step larger than Infinite Step Size (default value = 10^20) would have to be taken in order to continue the algorithm, or the next step would result in an element of x having magnitude larger than Infinite Bound Size (default value = 10^20).
ifail=3
No feasible point was found, i.e., it was not possible to satisfy all the constraints to within the feasibility tolerance. In this case, the constraint violations at the final x will reveal a value of the tolerance for which a feasible point will exist – for example, when the feasibility tolerance for each violated constraint exceeds its Slack (see [Description of the Printed Output]) at the final point. The modified problem (with an altered feasibility tolerance) may then be solved using a Warm Start. You should check that there are no constraint redundancies. If the data for the constraints are accurate only to the absolute precision σ, you should ensure that the value of the optional parameter Feasibility Tolerance (default value = √ε, where ε is the machine precision) is greater than σ. For example, if all elements of A are of order unity and are accurate only to three decimal places, the Feasibility Tolerance should be at least 10^-3.
ifail=4
The limiting number of iterations was reached before normal termination occurred.
The values of the optional parameters Feasibility Phase Iteration Limit (default value = max(50, 5(n+mL))) and Optimality Phase Iteration Limit (default value = max(50, 5(n+mL))) may be too small. If the method appears to be making progress (e.g., the objective function is being satisfactorily reduced), either increase the iterations limit and rerun e04nf or, alternatively, rerun e04nf using the Warm Start facility to specify the initial working set.
ifail=5
The reduced Hessian exceeds its assigned dimension. The algorithm needed to expand the reduced Hessian when it was already at its maximum dimension, as specified by the optional parameter Maximum Degrees of Freedom (default value=n).
The value of the optional parameter Maximum Degrees of Freedom is too small. Rerun e04nf with a larger value (possibly using the Warm Start facility to specify the initial working set).
ifail=6
An input parameter is invalid.
ifail=7
The designated problem type was not FP, LP, QP1, QP2, QP3 or QP4. Rerun e04nf with the optional parameter Problem Type set to one of these values.
Overflow
If the printed output before the overflow error contains a warning about serious ill-conditioning in the working set when adding the jth constraint, it may be possible to avoid the difficulty by increasing the magnitude of the Feasibility Tolerance (default value = √ε, where ε is the machine precision) and rerunning the program. If the message recurs even after this change, the offending linearly dependent constraint (with index ‘j’) must be removed from the problem.
ifail=-9000
An error occurred, see message report.
ifail=-6000
Invalid Parameters value
ifail=-4000
Invalid dimension for array value
ifail=-8000
Negative dimension for array value

Accuracy

e04nf implements a numerically stable active set strategy and returns solutions that are as accurate as the condition of the problem warrants on the machine.

Parallelism and Performance

None.

Further Comments

This section contains some comments on scaling and a description of the printed output.

Scaling

Sensible scaling of the problem is likely to reduce the number of iterations required and make the problem less sensitive to perturbations in the data, thus improving the condition of the problem. In the absence of better information it is usually sensible to make the Euclidean lengths of the rows of each constraint of comparable magnitude. See the E04 class and Gill et al. (1981) for further information and advice.
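A minimal C# sketch of one such normalization (a hypothetical pre-processing helper, not part of the Library) that rescales every general constraint row of a, together with the corresponding entries of bl and bu, to unit Euclidean length:

using System;

static class QpScaling
{
    // Divide each general constraint (row i of a, plus bl[n+i] and bu[n+i]) by the
    // Euclidean norm of that row, so all general constraints have comparable magnitude.
    // Bounds at +/- bigbnd are left untouched so they keep their 'infinite' meaning.
    public static void ScaleGeneralConstraints(double[,] a, double[] bl, double[] bu,
                                               int n, int nclin, double bigbnd)
    {
        for (int i = 0; i < nclin; i++)
        {
            double norm = 0.0;
            for (int j = 0; j < n; j++) norm += a[i, j] * a[i, j];
            norm = Math.Sqrt(norm);
            if (norm == 0.0) continue;                 // skip an all-zero row
            for (int j = 0; j < n; j++) a[i, j] /= norm;
            if (bl[n + i] > -bigbnd) bl[n + i] /= norm;
            if (bu[n + i] < bigbnd)  bu[n + i] /= norm;
        }
    }
}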

Description of the Printed Output

This section describes the intermediate printout and final printout produced by e04nf. The intermediate printout is a subset of the monitoring information produced by the method at every iteration (see [Description of Monitoring Information]). You can control the level of printed output (see the description of the optional parameter Print Level). Note that the intermediate printout and final printout are produced only if Print Level ≥ 10 (the default).
The following line of summary output (<80 characters) is produced at every iteration. In all cases, the values of the quantities printed are those in effect on completion of the given iteration.
Itn is the iteration count.
Step is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.
Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective is the value of the current objective function. If x is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x is feasible, Objective is the value of the objective function of (1). The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Norm Gz is ‖ZR^T gFR‖, the Euclidean norm of the reduced gradient with respect to ZR. During the optimality phase, this norm will be approximately zero after a unit step. (See [Definition of Search Direction] and [Main Iteration].)
The final printout includes a listing of the status of every variable and constraint.
The following describes the printout for each variable. A full stop (.) is printed for any numerical value that is zero.
Varbl gives the name (V) and index j, for j=1,2,…,n, of the variable.
State gives the state of the variable (FR if neither bound is in the working set, EQ if a fixed variable, LL if on its lower bound, UL if on its upper bound, TF if temporarily fixed at its current value). If Value lies outside the upper or lower bounds by more than the Feasibility Tolerance, State will be ++ or -- respectively.
A key is sometimes printed before State.
A Alternative optimum possible. The variable is active at one of its bounds, but its Lagrange multiplier is essentially zero. This means that if the variable were allowed to start moving away from its bound then there would be no change to the objective function. The values of the other free variables might change, giving a genuine alternative solution. However, if there are any degenerate variables (labelled D), the actual change might prove to be zero, since one of them could encounter a bound immediately. In either case the values of the Lagrange multipliers might also change.
D Degenerate. The variable is free, but it is equal to (or very close to) one of its bounds.
I Infeasible. The variable is currently violating one of its bounds by more than the Feasibility Tolerance.
Value is the value of the variable at the final iteration.
Lower Bound is the lower bound specified for the variable. None indicates that bl[j-1] ≤ -bigbnd.
Upper Bound is the upper bound specified for the variable. None indicates that bu[j-1] ≥ bigbnd.
Lagr Mult is the Lagrange multiplier for the associated bound. This will be zero if State is FR unless bl[j-1] ≤ -bigbnd and bu[j-1] ≥ bigbnd, in which case the entry will be blank. If x is optimal, the multiplier should be non-negative if State is LL and non-positive if State is UL.
Slack is the difference between the variable Value and the nearer of its (finite) bounds bl[j-1] and bu[j-1]. A blank entry indicates that the associated variable is not bounded (i.e., bl[j-1] ≤ -bigbnd and bu[j-1] ≥ bigbnd).
The meaning of the printout for general constraints is the same as that given above for variables, with ‘variable’ replaced by ‘constraint’, bl[j-1] and bu[j-1] replaced by bl[n+j-1] and bu[n+j-1] respectively, and with the following change in the heading:
L Con gives the name (L) and index j, for j=1,2,…,mL, of the linear constraint.
Note that movement off a constraint (as opposed to a variable moving away from its bound) can be interpreted as allowing the entry in the Slack column to become positive.
Numerical values are output with a fixed number of digits; they are not guaranteed to be accurate to this precision.

Example

This example minimizes the quadratic function f(x) = c^T x + (1/2) x^T H x, where
c = (-0.02, -0.2, -0.2, -0.2, -0.2, 0.04, 0.04)^T
H =
( 2   0   0   0   0   0   0 )
( 0   2   0   0   0   0   0 )
( 0   0   2   2   0   0   0 )
( 0   0   2   2   0   0   0 )
( 0   0   0   0   2   0   0 )
( 0   0   0   0   0  -2  -2 )
( 0   0   0   0   0  -2  -2 )
subject to the bounds
-0.01 ≤ x1 ≤ 0.01
-0.10 ≤ x2 ≤ 0.15
-0.01 ≤ x3 ≤ 0.03
-0.04 ≤ x4 ≤ 0.02
-0.10 ≤ x5 ≤ 0.05
-0.01 ≤ x6 ≤ 0.00
-0.01 ≤ x7 ≤ 0.00
and to the general constraints
x1 + x2 + x3 + x4 + x5 + x6 + x7 = -0.13
0.15x1 + 0.04x2 + 0.02x3 + 0.04x4 + 0.02x5 + 0.01x6 + 0.03x7 ≤ -0.0049
0.03x1 + 0.05x2 + 0.08x3 + 0.02x4 + 0.06x5 + 0.01x6 ≤ -0.0064
0.02x1 + 0.04x2 + 0.01x3 + 0.02x4 + 0.02x5 ≤ -0.0037
0.02x1 + 0.03x2 + 0.01x5 ≤ -0.0012
-0.0992 ≤ 0.70x1 + 0.75x2 + 0.80x3 + 0.75x4 + 0.80x5 + 0.97x6
-0.003 ≤ 0.02x1 + 0.06x2 + 0.08x3 + 0.12x4 + 0.02x5 + 0.01x6 + 0.97x7 ≤ -0.002
The initial point, which is infeasible, is
x0 = (-0.01, -0.03, 0.0, -0.01, -0.1, 0.02, 0.01)^T.
The optimal solution (to five figures) is
x* = (-0.01, -0.069865, 0.018259, -0.024261, -0.062006, 0.013805, 0.0040665)^T.
One bound constraint and four general constraints are active at the solution.
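A fragment of the corresponding data set-up in C# is sketched below (abbreviated and illustrative; the remaining arrays, the options object, the qphess argument and the call to e04nf itself follow the complete program e04nfe.cs and data file e04nfe.d referenced below):

// Data for the example above: only cvec, h and the initial x are shown here.
int n = 7, nclin = 7;

double[] cvec = { -0.02, -0.2, -0.2, -0.2, -0.2, 0.04, 0.04 };

double[,] h = new double[n, n];     // for the default QP2 only the diagonal and
h[0, 0] = 2.0;                      // upper triangle are actually referenced
h[1, 1] = 2.0;
h[2, 2] = 2.0; h[2, 3] = 2.0;
h[3, 2] = 2.0; h[3, 3] = 2.0;
h[4, 4] = 2.0;
h[5, 5] = -2.0; h[5, 6] = -2.0;
h[6, 5] = -2.0; h[6, 6] = -2.0;

double[] x = { -0.01, -0.03, 0.0, -0.01, -0.1, 0.02, 0.01 };   // initial estimate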

Example program (C#): e04nfe.cs

Example program data: e04nfe.d

Example program results: e04nfe.r

Algorithmic Details

This section contains a detailed description of the method used by e04nf.

Overview

e04nf is based on an inertia-controlling method that maintains a Cholesky factorization of the reduced Hessian (see below). The method is based on that of Gill and Murray (1978), and is described in detail by Gill et al. (1991). Here we briefly summarise the main features of the method. Where possible, explicit reference is made to the names of variables that are parameters of e04nf or appear in the printed output. e04nf has two phases:
(i) finding an initial feasible point by minimizing the sum of infeasibilities (the feasibility phase), and
(ii) minimizing the quadratic objective function within the feasible region (the optimality phase).
The computations in both phases are performed by the same methods. The two-phase nature of the algorithm is reflected by changing the function being minimized from the sum of infeasibilities to the quadratic objective function. The feasibility phase does not perform the standard simplex method (i.e., it does not necessarily find a vertex), except in the LP case when mL ≤ n. Once any iterate is feasible, all subsequent iterates remain feasible.
e04nf has been designed to be efficient when used to solve a sequence of related problems – for example, within a sequential quadratic programming method for nonlinearly constrained optimization (e.g., e04uf or e04wd). In particular, you may specify an initial working set (the indices of the constraints believed to be satisfied exactly at the solution); see the discussion of the optional parameter Warm Start.
In general, an iterative process is required to solve a quadratic program. (For simplicity, we shall always consider a typical iteration and avoid reference to the index of the iteration.) Each new iterate x̄ is defined by
x̄ = x + αp   (1)
where the step length α is a non-negative scalar and p is called the search direction.
At each point x, a working set of constraints is defined to be a linearly independent subset of the constraints that are satisfied ‘exactly’ (to within the tolerance defined by the optional parameter Feasibility Tolerance). The working set is the current prediction of the constraints that hold with equality at the solution of a linearly constrained QP problem. The search direction is constructed so that the constraints in the working set remain unaltered for any value of the step length. For a bound constraint in the working set, this property is achieved by setting the corresponding element of the search direction to zero. Thus, the associated variable is fixed, and specification of the working set induces a partition of x into fixed and free variables. During a given iteration, the fixed variables are effectively removed from the problem; since the relevant elements of the search direction are zero, the columns of A corresponding to fixed variables may be ignored.
Let mW denote the number of general constraints in the working set and let nFX denote the number of variables fixed at one of their bounds (mW and nFX are the quantities Lin and Bnd in the monitoring file output from e04nf; see [Description of Monitoring Information]). Similarly, let nFR (nFR=n-nFX) denote the number of free variables. At every iteration, the variables are reordered so that the last nFX variables are fixed, with all other relevant vectors and matrices ordered accordingly.

Definition of Search Direction

Let AFR denote the mW by nFR sub-matrix of general constraints in the working set corresponding to the free variables and let pFR denote the search direction with respect to the free variables only. The general constraints in the working set will be unaltered by any move along p if
AFR pFR = 0.   (2)
In order to compute pFR, the TQ factorization of AFR is used:
AFR QFR = ( 0   T ),   (3)
where T is a nonsingular mW by mW upper triangular matrix (i.e., tij = 0 if i > j), and the nonsingular nFR by nFR matrix QFR is the product of orthogonal transformations (see Gill et al. (1984)). If the columns of QFR are partitioned so that
QFR = ( Z   Y ),
where Y is nFR by mW, then the nZ (nZ = nFR − mW) columns of Z form a basis for the null space of AFR. Let nR be an integer such that 0 ≤ nR ≤ nZ, and let ZR denote a matrix whose nR columns are a subset of the columns of Z. (The integer nR is the quantity Zr in the monitoring output from e04nf. In many cases, ZR will include all the columns of Z.) The direction pFR will satisfy (2) if
pFR = ZR pR,   (4)
where pR is any nR-vector.
Let Q denote the n by n matrix
Q = ( QFR   0  )
    (  0   IFX ),
where IFX is the identity matrix of order nFX. Let HQ and gQ denote the n by n transformed Hessian and transformed gradient
HQ = Q^T H Q   and   gQ = Q^T (c + Hx)
and let the matrix of first nR rows and columns of HQ be denoted by HR and the vector of the first nR elements of gQ be denoted by gR. The quantities HR and gR are known as the reduced Hessian and reduced gradient of f(x), respectively. Roughly speaking, gR and HR describe the first and second derivatives of an unconstrained problem for the calculation of pR.
At each iteration, a triangular factorization of HR is available. If HR is positive definite, HR = R^T R, where R is the upper triangular Cholesky factor of HR. If HR is not positive definite, HR = R^T D R, where D = diag(1, 1, …, 1, μ), with μ ≤ 0.
The computation is arranged so that the reduced-gradient vector is a multiple of eR, a vector of all zeros except in the last (i.e., nRth) position. This allows the vector pR in (4) to be computed from a single back-substitution
R pR = γ eR,   (5)
where γ is a scalar that depends on whether or not the reduced Hessian is positive definite at x. In the positive definite case, x + p is the minimizer of the objective function subject to the constraints (bounds and general) in the working set treated as equalities. If HR is not positive definite, pR satisfies the conditions
pR^T HR pR < 0   and   gR^T pR ≤ 0,
which allow the objective function to be reduced by any positive step of the form x+αp.
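To see where the back-substitution (5) comes from in the positive definite case (a brief sketch using only the definitions above): the step pR solves HR pR = -gR with HR = R^T R, and the computation keeps gR as a multiple of eR, say gR = θ eR. Writing w = R pR and solving the lower triangular system R^T w = -θ eR by forward substitution makes every element of w zero except the last, so that w = γ eR with γ = -θ / rnn (rnn being the last diagonal element of R). The single back-substitution R pR = γ eR then recovers pR.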

Main Iteration

If the reduced gradient is zero, x is a constrained stationary point in the subspace defined by Z. During the feasibility phase, the reduced gradient will usually be zero only at a vertex (although it may be zero at non-vertices in the presence of constraint dependencies). During the optimality phase a zero reduced gradient implies that x minimizes the quadratic objective when the constraints in the working set are treated as equalities. At a constrained stationary point, Lagrange multipliers λC and λB for the general and bound constraints are defined from the equations
AFR^T λC = gFR   and   λB = gFX − AFX^T λC.   (6)
Given a positive constant δ of the order of the machine precision, a Lagrange multiplier λj corresponding to an inequality constraint in the working set is said to be optimal if λj ≤ δ when the associated constraint is at its upper bound, or if λj ≥ −δ when the associated constraint is at its lower bound. If a multiplier is nonoptimal, the objective function (either the true objective or the sum of infeasibilities) can be reduced by deleting the corresponding constraint (with index Jdel; see [Description of Monitoring Information]) from the working set.
If optimal multipliers occur during the feasibility phase and the sum of infeasibilities is nonzero, there is no feasible point, and you can force e04nf to continue until the minimum value of the sum of infeasibilities has been found; see the discussion of the optional parameter Minimum Sum of Infeasibilities. At such a point, the Lagrange multiplier λj corresponding to an inequality constraint in the working set will be such that −(1+δ) ≤ λj ≤ δ when the associated constraint is at its upper bound, and −δ ≤ λj ≤ 1+δ when the associated constraint is at its lower bound. Lagrange multipliers for equality constraints will satisfy |λj| ≤ 1+δ.
If the reduced gradient is not zero, Lagrange multipliers need not be computed and the nonzero elements of the search direction p are given by ZR pR (see (4) and (5)). The choice of step length is influenced by the need to maintain feasibility with respect to the satisfied constraints. If HR is positive definite and x + p is feasible, α will be taken as unity. In this case, the reduced gradient at x̄ will be zero, and Lagrange multipliers are computed. Otherwise, α is set to αM, the step to the ‘nearest’ constraint (with index Jadd; see [Description of Monitoring Information]), which is added to the working set at the next iteration.
Each change in the working set leads to a simple change to AFR: if the status of a general constraint changes, a row of AFR is altered; if a bound constraint enters or leaves the working set, a column of AFR changes. Explicit representations are recurred of the matrices T, QFR and R, and of the vectors Q^T g and Q^T c. The triangular factor R associated with the reduced Hessian is only updated during the optimality phase.
One of the most important features of e04nf is its control of the conditioning of the working set, whose nearness to linear dependence is estimated by the ratio of the largest to smallest diagonal elements of the TQ factor T (the printed value Cond T; see [Description of Monitoring Information]). In constructing the initial working set, constraints are excluded that would result in a large value of Cond T.
e04nf includes a rigorous procedure that prevents the possibility of cycling at a point where the active constraints are nearly linearly dependent (see Gill et al. (1989)). The main feature of the anti-cycling procedure is that the feasibility tolerance is increased slightly at the start of every iteration. This not only allows a positive step to be taken at every iteration, but also provides, whenever possible, a choice of constraints to be added to the working set. Let αM denote the maximum step at which x + αM p does not violate any constraint by more than its feasibility tolerance. All constraints at a distance α (α ≤ αM) along p from the current point are then viewed as acceptable candidates for inclusion in the working set. The constraint whose normal makes the largest angle with the search direction is added to the working set.

Choosing the Initial Working Set

At the start of the optimality phase, a positive definite HR can be defined if enough constraints are included in the initial working set. (The matrix with no rows and columns is positive definite by definition, corresponding to the case when AFR contains nFR constraints.) The idea is to include as many general constraints as necessary to ensure that the reduced Hessian is positive definite.
Let HZ denote the matrix of the first nZ rows and columns of the matrix HQ = Q^T H Q at the beginning of the optimality phase. A partial Cholesky factorization is used to find an upper triangular matrix R that is the factor of the largest positive definite leading sub-matrix of HZ. The use of interchanges during the factorization of HZ tends to maximize the dimension of R. (The condition of R may be controlled using the optional parameter Rank Tolerance.) Let ZR denote the columns of Z corresponding to R, and let Z be partitioned as Z = ( ZR   ZA ). A working set for which ZR defines the null space can be obtained by including the rows of ZA^T as ‘artificial constraints’. Minimization of the objective function then proceeds within the subspace defined by ZR, as described in [Definition of Search Direction].
The artificially augmented working set is given by
ĀFR = [ ZA^T ; AFR ],   (7)
(where [ · ; · ] denotes blocks stacked vertically), so that pFR will satisfy AFR pFR = 0 and ZA^T pFR = 0. By definition of the TQ factorization, ĀFR automatically satisfies the following:
ĀFR QFR = [ ZA^T ; AFR ] QFR = [ ZA^T ; AFR ] ( ZR   ZA   Y ) = ( 0   T̄ ),
where
T̄ = [ I   0 ; 0   T ],
and hence the TQ factorization of (7) is available trivially from T and QFR without additional expense.
The matrix ZA is not kept fixed, since its role is purely to define an appropriate null space; the TQ factorization can therefore be updated in the normal fashion as the iterations proceed. No work is required to ‘delete’ the artificial constraints associated with ZA when ZR^T gFR = 0, since this simply involves repartitioning QFR. The ‘artificial’ multiplier vector associated with the rows of ZA^T is equal to ZA^T gFR, and the multipliers corresponding to the rows of the ‘true’ working set are the multipliers that would be obtained if the artificial constraints were not present. If an artificial constraint is ‘deleted’ from the working set, an A appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).
The number of columns in ZA and ZR, the Euclidean norm of ZR^T gFR, and the condition estimator of R appear in the monitoring file output as Art, Zr, Norm Gz and Cond Rz respectively (see [Description of Monitoring Information]).
Under some circumstances, a different type of artificial constraint is used when solving a linear program. Although the algorithm of e04nf does not usually perform simplex steps (in the traditional sense), there is one exception: a linear program with fewer general constraints than variables (i.e., mL ≤ n). Use of the simplex method in this situation leads to savings in storage. At the starting point, the ‘natural’ working set (the set of constraints exactly or nearly satisfied at the starting point) is augmented with a suitable number of ‘temporary’ bounds, each of which has the effect of temporarily fixing a variable at its current value. In subsequent iterations, a temporary bound is treated as a standard constraint until it is deleted from the working set, in which case it is never added again. If a temporary bound is ‘deleted’ from the working set, an F (for ‘Fixed’) appears alongside the entry in the Jdel column of the monitoring file output (see [Description of Monitoring Information]).

Description of Monitoring Information

This section describes the long line of output (>80 characters) which forms part of the monitoring information produced by e04nf. (See also the description of the optional parameters Monitoring File and Print Level.) You can control the level of printed output.
To aid interpretation of the printed results the following convention is used for numbering the constraints: indices 1 through n refer to the bounds on the variables and indices n+1 through n+mL refer to the general constraints. When the status of a constraint changes, the index of the constraint is printed, along with the designation L (lower bound), U (upper bound), E (equality), F (temporarily fixed variable) or A (artificial constraint).
When Print Level ≥ 5 and Monitoring File ≥ 0, the following line of output is produced at every iteration on the unit number specified by the Monitoring File. In all cases the values of the quantities printed are those in effect on completion of the given iteration.
Itn is the iteration count.
Jdel is the index of the constraint deleted from the working set. If Jdel is zero, no constraint was deleted.
Jadd is the index of the constraint added to the working set. If Jadd is zero, no constraint was added.
Step is the step taken along the computed search direction. If a constraint is added during the current iteration, Step will be the step to the nearest constraint. When the problem is of type LP, the step can be greater than one during the optimality phase.
Ninf is the number of violated constraints (infeasibilities). This will be zero during the optimality phase.
Sinf/Objective is the value of the current objective function. If x is not feasible, Sinf gives a weighted sum of the magnitudes of constraint violations. If x is feasible, Objective is the value of the objective function of (1). The output line for the final iteration of the feasibility phase (i.e., the first iteration for which Ninf is zero) will give the value of the true objective at the first feasible point.
During the optimality phase the value of the objective function will be nonincreasing. During the feasibility phase the number of constraint infeasibilities will not increase until either a feasible point is found or the optimality of the multipliers implies that no feasible point exists. Once optimal multipliers are obtained the number of infeasibilities can increase, but the sum of infeasibilities will either remain constant or be reduced until the minimum sum of infeasibilities is found.
Bnd is the number of simple bound constraints in the current working set.
Lin is the number of general linear constraints in the current working set.
Art is the number of artificial constraints in the working set, i.e., the number of columns of ZA (see [Choosing the Initial Working Set]).
Zr is the number of columns of ZR (see [Definition of Search Direction]). Zr is the dimension of the subspace in which the objective function is currently being minimized. The value of Zr is the number of variables minus the number of constraints in the working set; i.e., Zr = n − (Bnd + Lin + Art).
The value of nZ, the number of columns of Z (see [Definition of Search Direction]), can be calculated as nZ = n − (Bnd + Lin). A zero value of nZ implies that x lies at a vertex of the feasible region.
Norm Gz is ‖ZR^T gFR‖, the Euclidean norm of the reduced gradient with respect to ZR. During the optimality phase, this norm will be approximately zero after a unit step.
NOpt is the number of nonoptimal Lagrange multipliers at the current point. NOpt is not printed if the current x is infeasible or no multipliers have been calculated. At a minimizer, NOpt will be zero.
Min Lm is the value of the Lagrange multiplier associated with the deleted constraint. If Min Lm is negative, a lower bound constraint has been deleted; if Min Lm is positive, an upper bound constraint has been deleted. If no multipliers are calculated during a given iteration Min Lm will be zero.
Cond T is a lower bound on the condition number of the working set.
Cond Rz is a lower bound on the condition number of the triangular factor R (the Cholesky factor of the current reduced Hessian; see [Definition of Search Direction]). If the problem is specified to be of type LP then Cond Rz is not printed.
Rzz is the last diagonal element μ of the matrix D associated with the R^T D R factorization of the reduced Hessian HR (see [Definition of Search Direction]). Rzz is only printed if HR is not positive definite (in which case μ ≠ 1). If the printed value of Rzz is small in absolute value then HR is approximately singular. A negative value of Rzz implies that the objective function has negative curvature on the current working set.

See Also