NAG Library Chapter Introduction

F08 (lapack)
Least Squares and Eigenvalue Problems (LAPACK)

 Contents

1  Scope of the Chapter
2  Background to the Problems
3  Recommendations on Choice and Use of Available Routines

1
Scope of the Chapter

This chapter provides routines for the solution of linear least squares problems, eigenvalue problems and singular value problems, as well as associated computations. Routines are provided for both real and complex data.
For a general introduction to the solution of linear least squares problems, you should turn first to Chapter F04. The decision trees, at the end of Chapter F04, direct you to the most appropriate routines in Chapters F04 or F08. Chapters F04 and F08 contain Black Box (or driver) routines which enable standard linear least squares problems to be solved by a call to a single routine.
For a general introduction to eigenvalue and singular value problems, you should turn first to Chapter F02. The decision trees, at the end of Chapter F02, direct you to the most appropriate routines in Chapters F02 or F08. Chapters F02 and F08 contain Black Box (or driver) routines which enable standard types of problem to be solved by a call to a single routine. Often routines in Chapter F02 call Chapter F08 routines to perform the necessary computational tasks.
The routines in this chapter (Chapter F08) handle only dense, band, tridiagonal and Hessenberg matrices (not matrices with more specialised structures, or general sparse matrices). The tables in Section 3 and the decision trees in Section 4 direct you to the most appropriate routines in Chapter F08.
The routines in this chapter have all been derived from the LAPACK project (see Anderson et al. (1999)). They have been designed to be efficient on a wide range of high-performance computers, without compromising efficiency on conventional serial machines.

2
Background to the Problems

This section is only a brief introduction to the numerical solution of linear least squares problems, eigenvalue and singular value problems. Consult a standard textbook for a more thorough discussion, for example Golub and Van Loan (2012).

2.1
Linear Least Squares Problems

The linear least squares problem is
\min_x \| b - Ax \|_2 ,   (1)
where A is an m by n matrix, b is a given m-element vector and x is an n-element solution vector.
In the most usual case m ≥ n and rank(A) = n, so that A has full rank and in this case the solution to problem (1) is unique; the problem is also referred to as finding a least squares solution to an overdetermined system of linear equations.
When m < n and rank(A) = m, there are an infinite number of solutions x which exactly satisfy b − Ax = 0. In this case it is often useful to find the unique solution x which minimizes ‖x‖_2, and the problem is referred to as finding a minimum norm solution to an underdetermined system of linear equations.
In the general case when we may have rank(A) < min(m, n) – in other words, A may be rank-deficient – we seek the minimum norm least squares solution x which minimizes both ‖x‖_2 and ‖b − Ax‖_2.
This chapter (Chapter F08) contains driver routines to solve these problems with a single call, as well as computational routines that can be combined with routines in Chapter F07 to solve these linear least squares problems. Chapter F04 also contains Black Box routines to solve these linear least squares problems in standard cases. The next two sections discuss the factorizations that can be used in the solution of linear least squares problems.
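As a hedged illustration of these three cases (using SciPy's LAPACK-based solver rather than the F08 routines themselves; the matrices below are arbitrary random examples), a single driver returns the least squares, minimum norm, or minimum norm least squares solution as appropriate:

    import numpy as np
    from scipy.linalg import lstsq   # wraps a LAPACK least squares driver

    rng = np.random.default_rng(0)
    A = rng.standard_normal((6, 3))   # m > n, full rank: overdetermined case
    b = rng.standard_normal(6)

    # lstsq returns the (minimum norm) least squares solution in all
    # three cases discussed above, together with the effective rank.
    x, res, rank, sv = lstsq(A, b)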

2.2
Orthogonal Factorizations and Least Squares Problems

A number of routines are provided for factorizing a general rectangular m by n matrix A, as the product of an orthogonal matrix (unitary if complex) and a triangular (or possibly trapezoidal) matrix.
A real matrix Q is orthogonal if Q^T Q = I; a complex matrix Q is unitary if Q^H Q = I. Orthogonal or unitary matrices have the important property that they leave the 2-norm of a vector invariant, so that
\| x \|_2 = \| Qx \|_2 ,
if Q is orthogonal or unitary. They usually help to maintain numerical stability because they do not amplify rounding errors.
Orthogonal factorizations are used in the solution of linear least squares problems. They may also be used to perform preliminary steps in the solution of eigenvalue or singular value problems, and are useful tools in the solution of a number of other problems.

2.2.1
QR factorization

The most common, and best known, of the factorizations is the QR factorization given by
A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} , \quad \text{if } m \ge n,
where R is an n by n upper triangular matrix and Q is an m by m orthogonal (or unitary) matrix. If A is of full rank n, then R is nonsingular. It is sometimes convenient to write the factorization as
A = \begin{pmatrix} Q_1 & Q_2 \end{pmatrix} \begin{pmatrix} R \\ 0 \end{pmatrix}
which reduces to
A = Q_1 R ,
where Q_1 consists of the first n columns of Q, and Q_2 the remaining m − n columns.
If m < n, R is trapezoidal, and the factorization can be written
A = Q \begin{pmatrix} R_1 & R_2 \end{pmatrix} , \quad \text{if } m < n,
where R_1 is upper triangular and R_2 is rectangular.
The QR factorization can be used to solve the linear least squares problem (1) when m ≥ n and A is of full rank, since
\| b - Ax \|_2 = \| Q^T b - Q^T Ax \|_2 = \left\| \begin{pmatrix} c_1 - Rx \\ c_2 \end{pmatrix} \right\|_2 ,
where
c \equiv \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} Q_1^T b \\ Q_2^T b \end{pmatrix} = Q^T b ;
and c_1 is an n-element vector. Then x is the solution of the upper triangular system
R x = c_1 .
The residual vector r is given by
r = b - Ax = Q \begin{pmatrix} 0 \\ c_2 \end{pmatrix} .
The residual sum of squares ‖r‖_2^2 may be computed without forming r explicitly, since
\| r \|_2 = \| b - Ax \|_2 = \| c_2 \|_2 .
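The solution process just described can be sketched with SciPy's QR routines standing in for the F08 routines (the 'economic' factorization returns Q_1 and R directly; the data are arbitrary):

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(1)
    m, n = 8, 4
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    Q1, R = qr(A, mode='economic')       # A = Q1 R, with Q1 m by n
    c1 = Q1.T @ b                        # c1 = Q1^T b
    x = solve_triangular(R, c1)          # solve R x = c1
    rss = np.linalg.norm(b - A @ x)**2   # residual sum of squares (formed
                                         # explicitly here only as a check)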

2.2.2
LQ factorization

The LQ factorization is given by
A = \begin{pmatrix} L & 0 \end{pmatrix} Q = \begin{pmatrix} L & 0 \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = L Q_1 , \quad \text{if } m \le n,
where L is m by m lower triangular, Q is n by n orthogonal (or unitary), Q_1 consists of the first m rows of Q, and Q_2 the remaining n − m rows.
The LQ factorization of A is essentially the same as the QR factorization of A^T (A^H if A is complex), since
A = \begin{pmatrix} L & 0 \end{pmatrix} Q \quad \Leftrightarrow \quad A^T = Q^T \begin{pmatrix} L^T \\ 0 \end{pmatrix} .
The LQ factorization may be used to find a minimum norm solution of an underdetermined system of linear equations Ax = b where A is m by n with m < n and has rank m. The solution is given by
x = Q^T \begin{pmatrix} L^{-1} b \\ 0 \end{pmatrix} .
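SciPy has no separate LQ routine but, as noted above, the LQ factorization of A is essentially the QR factorization of A^T, which gives this minimum norm solution sketch (arbitrary data; not the F08 code path):

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(2)
    m, n = 3, 7                             # m < n: underdetermined system
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)

    # LQ of A via QR of A^T: A^T = Qt Rt, so A = L Q1 with L = Rt^T, Q1 = Qt^T
    Qt, Rt = qr(A.T, mode='economic')
    L = Rt.T                                # m by m lower triangular
    y = solve_triangular(L, b, lower=True)  # y = L^{-1} b
    x = Qt @ y                              # minimum norm solution Q1^T (L^{-1} b)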

2.2.3
QR factorization with column pivoting

To solve a linear least squares problem (1) when A is not of full rank, or the rank of A is in doubt, we can perform either a QR factorization with column pivoting or a singular value decomposition.
The QR factorization with column pivoting is given by
A = Q \begin{pmatrix} R \\ 0 \end{pmatrix} P^T , \quad m \ge n,
where Q and R are as before and P is a (real) permutation matrix, chosen (in general) so that
| r_{11} | \ge | r_{22} | \ge \cdots \ge | r_{nn} |
and moreover, for each k,
| r_{kk} | \ge \| R_{k:j, j} \|_2 , \quad j = k+1, \dots, n.
If we put
R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}
where R_{11} is the leading k by k upper triangular sub-matrix of R then, in exact arithmetic, if rank(A) = k, the whole of the sub-matrix R_{22} in rows and columns k + 1 to n would be zero. In numerical computation, the aim must be to determine an index k, such that the leading sub-matrix R_{11} is well-conditioned, and R_{22} is negligible, so that
R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix} \approx \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix} .
Then k is the effective rank of A. See Golub and Van Loan (2012) for a further discussion of numerical rank determination.
The so-called basic solution to the linear least squares problem (1) can be obtained from this factorization as
x = P \begin{pmatrix} R_{11}^{-1} \hat{c}_1 \\ 0 \end{pmatrix} ,
where \hat{c}_1 consists of just the first k elements of c = Q^T b.
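A sketch of rank detection and the basic solution using SciPy's pivoted QR (the tolerance used to decide the effective rank is an illustrative choice, not a prescribed one):

    import numpy as np
    from scipy.linalg import qr, solve_triangular

    rng = np.random.default_rng(3)
    A = rng.standard_normal((8, 3))
    A = np.column_stack([A, A[:, 0] + A[:, 1]])   # 4th column dependent: rank 3
    b = rng.standard_normal(8)

    Q, R, piv = qr(A, mode='economic', pivoting=True)  # A[:, piv] = Q R
    tol = abs(R[0, 0]) * max(A.shape) * np.finfo(float).eps
    k = int(np.sum(np.abs(np.diag(R)) > tol))          # effective rank
    c = Q.T @ b
    xp = np.zeros(A.shape[1])
    xp[:k] = solve_triangular(R[:k, :k], c[:k])        # R11^{-1} c_1
    x = np.empty_like(xp)
    x[piv] = xp                                        # undo the permutation P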

2.2.4
Complete orthogonal factorization

The QR factorization with column pivoting does not enable us to compute a minimum norm solution to a rank-deficient linear least squares problem, unless R_{12} = 0. However, by applying further orthogonal (or unitary) transformations from the right to the upper trapezoidal matrix \begin{pmatrix} R_{11} & R_{12} \end{pmatrix}, R_{12} can be eliminated:
\begin{pmatrix} R_{11} & R_{12} \end{pmatrix} Z = \begin{pmatrix} T_{11} & 0 \end{pmatrix} .
This gives the complete orthogonal factorization
A P = Q \begin{pmatrix} T_{11} & 0 \\ 0 & 0 \end{pmatrix} Z^T
from which the minimum norm solution can be obtained as
x = P Z \begin{pmatrix} T_{11}^{-1} \hat{c}_1 \\ 0 \end{pmatrix} .

2.2.5
Updating a QR factorization

Section 2.2.1 gave the forms of the QR factorization of an m by n matrix A for the two cases m ≥ n and m < n. Taking first the case m ≥ n, the least squares solution of
A x = b = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} ,
where b_1 has n elements and b_2 has m − n, is the solution of
R x = Q_1^T b .
If the original system is now augmented by the addition of p rows so that we require the solution of
\begin{pmatrix} A \\ B \end{pmatrix} x = \begin{pmatrix} b \\ b_3 \end{pmatrix} ,
where B is p by n and b_3 has p elements, then this is equivalent to finding the least squares solution of
\hat{A} x = \begin{pmatrix} R \\ B \end{pmatrix} x = \begin{pmatrix} Q_1^T b \\ b_3 \end{pmatrix} = \hat{b} .
This now requires the QR factorization of the (n + p) by n triangular-rectangular matrix \hat{A}.
For the case m < n ≤ m + p, the least squares solution of the augmented system reduces to
\hat{A} x = \begin{pmatrix} B \\ R_1 \ R_2 \end{pmatrix} x = \begin{pmatrix} b_3 \\ Q^T b \end{pmatrix} = \hat{b} ,
where \hat{A} is pentagonal.
In both cases \hat{A} can be written as a special case of a triangular-pentagonal matrix consisting of an upper triangular part on top of a rectangular part which is itself on top of a trapezoidal part. In the first case there is no trapezoidal part, in the second case a zero upper triangular part can be added, and more generally the two cases can be combined.

2.2.6
Other factorizations

The QL and RQ factorizations are given by
A = Q \begin{pmatrix} 0 \\ L \end{pmatrix} , \quad \text{if } m \ge n,
and
A = \begin{pmatrix} 0 & R \end{pmatrix} Q , \quad \text{if } m \le n.
These factorizations are less commonly used than either the QR or LQ factorizations described above, but have applications in, for example, the computation of generalized QR factorizations.

2.3
The Singular Value Decomposition

The singular value decomposition (SVD) of an m by n matrix A is given by
A = U \Sigma V^T \quad (A = U \Sigma V^H \text{ in the complex case}),
where U and V are orthogonal (unitary) and \Sigma is an m by n diagonal matrix with real diagonal elements, \sigma_i, such that
\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_{\min(m, n)} \ge 0 .
The \sigma_i are the singular values of A and the first min(m, n) columns of U and V are the left and right singular vectors of A. The singular values and singular vectors satisfy
A v_i = \sigma_i u_i \quad \text{and} \quad A^T u_i = \sigma_i v_i \quad (\text{or } A^H u_i = \sigma_i v_i),
where u_i and v_i are the ith columns of U and V respectively.
The computation proceeds in the following stages.
1. The matrix A is reduced to bidiagonal form A = U_1 B V_1^T if A is real (A = U_1 B V_1^H if A is complex), where U_1 and V_1 are orthogonal (unitary if A is complex), and B is real and upper bidiagonal when m ≥ n and lower bidiagonal when m < n, so that B is nonzero only on the main diagonal and either on the first superdiagonal (if m ≥ n) or the first subdiagonal (if m < n).
2. The SVD of the bidiagonal matrix B is computed as B = U_2 \Sigma V_2^T, where U_2 and V_2 are orthogonal and \Sigma is diagonal as described above. The singular vectors of A are then U = U_1 U_2 and V = V_1 V_2.
If m ≫ n, it may be more efficient to first perform a QR factorization of A, and then compute the SVD of the n by n matrix R, since if A = QR and R = U \Sigma V^T, then the SVD of A is given by A = (QU) \Sigma V^T.
Similarly, if m ≪ n, it may be more efficient to first perform an LQ factorization of A.
This chapter supports two primary algorithms for computing the SVD of a bidiagonal matrix. They are:
(i) the divide and conquer algorithm;
(ii) the QR algorithm.
The divide and conquer algorithm is much faster than the QR algorithm if singular vectors of large matrices are required.
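Both bidiagonal SVD algorithms are reachable through SciPy's interface to LAPACK, which may serve as a sketch of the distinction (the random matrix is arbitrary):

    import numpy as np
    from scipy.linalg import svd

    rng = np.random.default_rng(4)
    A = rng.standard_normal((6, 4))

    # 'gesdd' selects the divide and conquer driver, 'gesvd' the
    # QR-algorithm driver; both return A = U diag(s) V^T.
    U, s, Vt = svd(A, full_matrices=False, lapack_driver='gesdd')
    assert np.allclose(A, (U * s) @ Vt)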

2.4
The Singular Value Decomposition and Least Squares Problems

The SVD may be used to find a minimum norm solution to a (possibly) rank-deficient linear least squares problem (1). The effective rank, k, of A can be determined as the number of singular values which exceed a suitable threshold. Let \hat{\Sigma} be the leading k by k sub-matrix of \Sigma, and \hat{V} be the matrix consisting of the first k columns of V. Then the solution is given by
x = \hat{V} \hat{\Sigma}^{-1} \hat{c}_1 ,
where \hat{c}_1 consists of the first k elements of c = U^T b = U_2^T U_1^T b.
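A sketch of this minimum norm solution with NumPy/SciPy (the rank threshold is an illustrative choice):

    import numpy as np
    from scipy.linalg import svd

    rng = np.random.default_rng(5)
    A = rng.standard_normal((8, 3))
    A = np.column_stack([A, A[:, 0] - A[:, 2]])   # rank-deficient: rank 3
    b = rng.standard_normal(8)

    U, s, Vt = svd(A, full_matrices=False)
    tol = s[0] * max(A.shape) * np.finfo(float).eps
    k = int(np.sum(s > tol))            # effective rank
    c1 = U[:, :k].T @ b                 # first k elements of c = U^T b
    x = Vt[:k].T @ (c1 / s[:k])         # x = Vhat @ inv(Sigmahat) @ c1hat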

2.5
Generalized Linear Least Squares Problems

The simple type of linear least squares problem described in Section 2.1 can be generalized in various ways.
1. Linear least squares problems with equality constraints:
\text{find } x \text{ to minimize } S = \| c - Ax \|_2^2 \quad \text{subject to} \quad Bx = d ,
where A is m by n and B is p by n, with p ≤ n ≤ m + p. The equations Bx = d may be regarded as a set of equality constraints on the problem of minimizing S. Alternatively the problem may be regarded as solving an overdetermined system of equations
\begin{pmatrix} A \\ B \end{pmatrix} x = \begin{pmatrix} c \\ d \end{pmatrix} ,
where some of the equations (those involving B) are to be solved exactly, and the others (those involving A) are to be solved in a least squares sense. The problem has a unique solution on the assumptions that B has full row rank p and the matrix formed by stacking A on top of B has full column rank n. (For linear least squares problems with inequality constraints, refer to Chapter E04.) A sketch of one way to solve this problem is given after this list.
2. General Gauss–Markov linear model problems:
\text{minimize } \| y \|_2 \quad \text{subject to} \quad d = Ax + By ,
where A is m by n and B is m by p, with n ≤ m ≤ n + p. When B = I, the problem reduces to an ordinary linear least squares problem. When B is square and nonsingular, it is equivalent to a weighted linear least squares problem:
\text{find } x \text{ to minimize } \| B^{-1} (d - Ax) \|_2 .
The problem has a unique solution on the assumptions that A has full column rank n, and the matrix (A, B) has full row rank m. Unless B is diagonal, for numerical stability it is generally preferable to solve a weighted linear least squares problem as a general Gauss–Markov linear model problem.
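The F08 drivers solve these problems via the generalized factorizations of Section 2.6; purely as an illustration of problem 1 (the LSE problem), here is a mathematically equivalent nullspace-method sketch in SciPy, with arbitrary data:

    import numpy as np
    from scipy.linalg import lstsq, null_space

    rng = np.random.default_rng(6)
    m, n, p = 8, 5, 2
    A = rng.standard_normal((m, n)); c = rng.standard_normal(m)
    B = rng.standard_normal((p, n)); d = rng.standard_normal(p)

    # Particular solution of Bx = d, then minimize over the null space of B.
    x0 = lstsq(B, d)[0]              # B x0 = d exactly (B has full row rank)
    N = null_space(B)                # columns span { x : B x = 0 }
    z = lstsq(A @ N, c - A @ x0)[0]  # least squares in the free directions
    x = x0 + N @ z                   # satisfies B x = d, minimizes ||c - A x||_2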

2.6
Generalized Orthogonal Factorization and Generalized Linear Least Squares Problems

2.6.1
Generalized QR Factorization

The generalized QR (GQR) factorization of an n by m matrix A and an n by p matrix B is given by the pair of factorizations
A = QR \quad \text{and} \quad B = QTZ ,
where Q and Z are respectively n by n and p by p orthogonal matrices (or unitary matrices if A and B are complex). R has the form
R = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix} , \quad \text{if } n \ge m,
or
R = \begin{pmatrix} R_{11} & R_{12} \end{pmatrix} , \quad \text{if } n < m,
where R_{11} is upper triangular (m by m in the first case, n by n in the second). T has the form
T = \begin{pmatrix} 0 & T_{12} \end{pmatrix} , \quad \text{if } n \le p,
or
T = \begin{pmatrix} T_{11} \\ T_{21} \end{pmatrix} , \quad \text{if } n > p,
where T_{12} (n by n) or T_{21} (p by p) is upper triangular.
Note that if B is square and nonsingular, the GQR factorization of A and B implicitly gives the QR factorization of the matrix B^{-1} A:
B^{-1} A = Z^T ( T^{-1} R )
without explicitly computing the matrix inverse B^{-1} or the product B^{-1} A (remembering that the inverse of an invertible upper triangular matrix and the product of two upper triangular matrices are again upper triangular).
The GQR factorization can be used to solve the general (Gauss–Markov) linear model problem (GLM) (see Section 2.5, but note that A and B are dimensioned differently there as m by n and p by n respectively). Using the GQR factorization of A and B, we rewrite the equation d = Ax + By as
Q^T d = Q^T A x + Q^T B y = R x + T Z y .
We partition this as
\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} = \begin{pmatrix} R_{11} \\ 0 \end{pmatrix} x + \begin{pmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} ,
where R_{11} is m by m, T_{11} is m by (p − n + m), T_{12} is m by (n − m), T_{22} is (n − m) by (n − m), and
\begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \equiv Q^T d , \quad \text{and} \quad \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} \equiv Z y .
The GLM problem is solved by setting
y_1 = 0 \quad \text{and} \quad y_2 = T_{22}^{-1} d_2 ,
from which we obtain the desired solutions
x = R_{11}^{-1} ( d_1 - T_{12} y_2 ) \quad \text{and} \quad y = Z^T \begin{pmatrix} 0 \\ y_2 \end{pmatrix} .

2.6.2
Generalized RQ Factorization

The generalized RQ (GRQ) factorization of an m by n matrix A and a p by n matrix B is given by the pair of factorizations
A = RQ \quad \text{and} \quad B = ZTQ ,
where Q and Z are respectively n by n and p by p orthogonal matrices (or unitary matrices if A and B are complex). R has the form
R = \begin{pmatrix} 0 & R_{12} \end{pmatrix} , \quad \text{if } m \le n,
or
R = \begin{pmatrix} R_{11} \\ R_{21} \end{pmatrix} , \quad \text{if } m > n,
where R_{12} (m by m) or R_{21} (n by n) is upper triangular. T has the form
T = \begin{pmatrix} T_{11} \\ 0 \end{pmatrix} , \quad \text{if } p \ge n,
or
T = \begin{pmatrix} T_{11} & T_{12} \end{pmatrix} , \quad \text{if } p < n,
where T_{11} is upper triangular (n by n in the first case, p by p in the second).
Note that if B is square and nonsingular, the GRQ factorization of A and B implicitly gives the RQ factorization of the matrix A B^{-1}:
A B^{-1} = ( R T^{-1} ) Z^T
without explicitly computing the matrix B^{-1} or the product A B^{-1} (remembering that the inverse of an invertible upper triangular matrix and the product of two upper triangular matrices are again upper triangular).
The GRQ factorization can be used to solve the linear equality-constrained least squares problem (LSE) (see Section 2.5). We use the GRQ factorization of B and A (note that B and A have swapped roles), written as
B = TQ \quad \text{and} \quad A = ZRQ .
We write the linear equality constraints Bx = d as
T Q x = d ,
which we partition as:
\begin{pmatrix} 0 & T_{12} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = d , \quad \text{where} \quad \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \equiv Q x ,
with T_{12} p by p, x_1 of length n − p and x_2 of length p. Therefore x_2 is the solution of the upper triangular system
T_{12} x_2 = d .
Furthermore,
\| Ax - c \|_2 = \| Z^T A x - Z^T c \|_2 = \| R Q x - Z^T c \|_2 .
We partition this expression as:
\begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} ,
where R_{11} is (n − p) by (n − p), R_{12} is (n − p) by p, R_{22} is (p + m − n) by p, and
\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} \equiv Z^T c .
To solve the LSE problem, we set
R_{11} x_1 + R_{12} x_2 - c_1 = 0 ,
which gives x_1 as the solution of the upper triangular system
R_{11} x_1 = c_1 - R_{12} x_2 .
Finally, the desired solution is given by
x = Q^T \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} .

2.6.3
Generalized Singular Value Decomposition (GSVD)

The generalized (or quotient) singular value decomposition of an m by n matrix A and a p by n matrix B is given by the pair of factorizations
A = U \Sigma_1 \begin{pmatrix} 0 & R \end{pmatrix} Q^T \quad \text{and} \quad B = V \Sigma_2 \begin{pmatrix} 0 & R \end{pmatrix} Q^T .
The matrices in these factorizations have the following properties:
U is m by m, V is p by p, Q is n by n, and all three matrices are orthogonal. If A and B are complex, these matrices are unitary instead of orthogonal, and Q^T should be replaced by Q^H in the pair of factorizations.
R is r by r, upper triangular and nonsingular. \begin{pmatrix} 0 & R \end{pmatrix} is r by n (in other words, the 0 is an r by (n − r) zero matrix). The integer r is the rank of the matrix formed by stacking A on top of B, and satisfies r ≤ n.
\Sigma_1 is m by r, \Sigma_2 is p by r, both are real, non-negative and diagonal, and \Sigma_1^T \Sigma_1 + \Sigma_2^T \Sigma_2 = I. Write \Sigma_1^T \Sigma_1 = diag(\alpha_1^2, \dots, \alpha_r^2) and \Sigma_2^T \Sigma_2 = diag(\beta_1^2, \dots, \beta_r^2), where \alpha_i and \beta_i lie in the interval from 0 to 1. The ratios \alpha_1 / \beta_1, \dots, \alpha_r / \beta_r are called the generalized singular values of the pair (A, B). If \beta_i = 0, then the generalized singular value \alpha_i / \beta_i is infinite.
\Sigma_1 and \Sigma_2 have the following detailed structures, depending on whether m ≥ r or m < r. In the first case, m ≥ r, then
\Sigma_1 = \begin{pmatrix} I & 0 \\ 0 & C \\ 0 & 0 \end{pmatrix} \quad \text{and} \quad \Sigma_2 = \begin{pmatrix} 0 & S \\ 0 & 0 \end{pmatrix} ,
where the column blocks have k and l columns, the row blocks of \Sigma_1 have k, l and m − k − l rows, and the row blocks of \Sigma_2 have l and p − l rows.
Here l is the rank of B, k = r − l, C and S are diagonal matrices satisfying C^2 + S^2 = I, and S is nonsingular. We may also identify \alpha_1 = \cdots = \alpha_k = 1, \alpha_{k+i} = c_{ii}, for i = 1, 2, \dots, l, \beta_1 = \cdots = \beta_k = 0, and \beta_{k+i} = s_{ii}, for i = 1, 2, \dots, l. Thus, the first k generalized singular values \alpha_1 / \beta_1, \dots, \alpha_k / \beta_k are infinite, and the remaining l generalized singular values are finite.
In the second case, when m < r,
\Sigma_1 = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \end{pmatrix}
and
\Sigma_2 = \begin{pmatrix} 0 & S & 0 \\ 0 & 0 & I \\ 0 & 0 & 0 \end{pmatrix} ,
where the column blocks have k, m − k and k + l − m columns, the row blocks of \Sigma_1 have k and m − k rows, and the row blocks of \Sigma_2 have m − k, k + l − m and p − l rows.
Again, l is the rank of B, k = r − l, C and S are diagonal matrices satisfying C^2 + S^2 = I, and S is nonsingular, and we may identify \alpha_1 = \cdots = \alpha_k = 1, \alpha_{k+i} = c_{ii}, for i = 1, 2, \dots, m − k, \alpha_{m+1} = \cdots = \alpha_r = 0, \beta_1 = \cdots = \beta_k = 0, \beta_{k+i} = s_{ii}, for i = 1, 2, \dots, m − k and \beta_{m+1} = \cdots = \beta_r = 1. Thus, the first k generalized singular values \alpha_1 / \beta_1, \dots, \alpha_k / \beta_k are infinite, and the remaining l generalized singular values are finite.
Here are some important special cases of the generalized singular value decomposition. First, if B is square and nonsingular, then r = n and the generalized singular value decomposition of A and B is equivalent to the singular value decomposition of A B^{-1}, where the singular values of A B^{-1} are equal to the generalized singular values of the pair (A, B):
A B^{-1} = ( U \Sigma_1 R Q^T ) ( V \Sigma_2 R Q^T )^{-1} = U ( \Sigma_1 \Sigma_2^{-1} ) V^T .
Second, for the matrix C, where
C \equiv \begin{pmatrix} A \\ B \end{pmatrix} ,
if the columns of C are orthonormal, then r = n, R = I and the generalized singular value decomposition of A and B is equivalent to the CS (Cosine–Sine) decomposition of C:
\begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} \Sigma_1 \\ \Sigma_2 \end{pmatrix} Q^T .
Third, the generalized eigenvalues and eigenvectors of A^T A - \lambda B^T B can be expressed in terms of the generalized singular value decomposition: Let
X = Q \begin{pmatrix} I & 0 \\ 0 & R^{-1} \end{pmatrix} .
Then
X^T A^T A X = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_1^T \Sigma_1 \end{pmatrix} \quad \text{and} \quad X^T B^T B X = \begin{pmatrix} 0 & 0 \\ 0 & \Sigma_2^T \Sigma_2 \end{pmatrix} .
Therefore, the columns of X are the eigenvectors of A^T A - \lambda B^T B, and ‘nontrivial’ eigenvalues are the squares of the generalized singular values (see also Section 2.8). ‘Trivial’ eigenvalues are those corresponding to the leading n − r columns of X, which span the common null space of A^T A and B^T B. The ‘trivial eigenvalues’ are not well defined.

2.6.4
The Full CS Decomposition of Orthogonal Matrices

In Section 2.6.3 the CS (Cosine-Sine) decomposition of an orthogonal matrix partitioned into two submatrices A and B was given by
\begin{pmatrix} A \\ B \end{pmatrix} = \begin{pmatrix} U & 0 \\ 0 & V \end{pmatrix} \begin{pmatrix} \Sigma_1 \\ \Sigma_2 \end{pmatrix} Q^T .
The full CS decomposition of an m by m orthogonal matrix X partitions X into four submatrices and factorizes as
\begin{pmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{pmatrix} = \begin{pmatrix} U_1 & 0 \\ 0 & U_2 \end{pmatrix} \begin{pmatrix} \Sigma_{11} & -\Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix} \begin{pmatrix} V_1 & 0 \\ 0 & V_2 \end{pmatrix}^T
where X_{11} is a p by q submatrix (which implies the dimensions of X_{12}, X_{21} and X_{22}); U_1, U_2, V_1 and V_2 are orthogonal matrices of dimensions p, m − p, q and m − q respectively; \Sigma_{11} is the p by q single-diagonal matrix
\Sigma_{11} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix} , \quad k_{11} = \min(p, q) ,
with row blocks of k_{11} − r, r and p − k_{11} rows and column blocks of k_{11} − r, r and q − k_{11} columns; \Sigma_{12} is the p by (m − q) single-diagonal matrix
\Sigma_{12} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix} , \quad k_{12} = \min(p, m - q) ,
with row blocks of p − k_{12}, r and k_{12} − r rows and column blocks of m − q − k_{12}, r and k_{12} − r columns; \Sigma_{21} is the (m − p) by q single-diagonal matrix
\Sigma_{21} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & S & 0 \\ 0 & 0 & I \end{pmatrix} , \quad k_{21} = \min(m - p, q) ,
with row blocks of m − p − k_{21}, r and k_{21} − r rows and column blocks of q − k_{21}, r and k_{21} − r columns; and \Sigma_{22} is the (m − p) by (m − q) single-diagonal matrix
\Sigma_{22} = \begin{pmatrix} I & 0 & 0 \\ 0 & C & 0 \\ 0 & 0 & 0 \end{pmatrix} , \quad k_{22} = \min(m - p, m - q) ,
with row blocks of k_{22} − r, r and m − p − k_{22} rows and column blocks of k_{22} − r, r and m − q − k_{22} columns, where r = min(p, m − p, q, m − q) and blocks whose indicated row or column count is zero are absent. The r by r diagonal matrices C and S are such that C^2 + S^2 = I.
This is equivalent to the simultaneous singular value decomposition of the four submatrices X_{11}, X_{12}, X_{21} and X_{22}.
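SciPy exposes the full CS decomposition (via the same family of LAPACK routines), which can serve as a sketch; the partition sizes p and q below are arbitrary:

    import numpy as np
    from scipy.linalg import cossin, qr

    rng = np.random.default_rng(7)
    X, _ = qr(rng.standard_normal((6, 6)))   # a random 6 by 6 orthogonal matrix

    p, q = 2, 3                              # X11 is p by q
    U, CS, Vt = cossin(X, p=p, q=q)          # X = U @ CS @ Vt, blocked as above
    assert np.allclose(X, U @ CS @ Vt)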

2.7
Symmetric Eigenvalue Problems

The symmetric eigenvalue problem is to find the eigenvalues, \lambda, and corresponding eigenvectors, z ≠ 0, such that
A z = \lambda z , \quad A = A^T , \quad \text{where } A \text{ is real.}
For the Hermitian eigenvalue problem we have
A z = \lambda z , \quad A = A^H , \quad \text{where } A \text{ is complex.}
For both problems the eigenvalues \lambda are real.
When all eigenvalues and eigenvectors have been computed, we write
A = Z \Lambda Z^T \quad (\text{or } A = Z \Lambda Z^H \text{ if complex}),
where \Lambda is a diagonal matrix whose diagonal elements are the eigenvalues, and Z is an orthogonal (or unitary) matrix whose columns are the eigenvectors. This is the classical spectral factorization of A.
The basic task of the symmetric eigenproblem routines is to compute values of \lambda and, optionally, corresponding vectors z for a given matrix A. This computation proceeds in the following stages.
1. The real symmetric or complex Hermitian matrix A is reduced to real tridiagonal form T. If A is real symmetric this decomposition is A = Q T Q^T with Q orthogonal and T symmetric tridiagonal. If A is complex Hermitian, the decomposition is A = Q T Q^H with Q unitary and T, as before, real symmetric tridiagonal.
2. Eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T are computed. If all eigenvalues and eigenvectors are computed, this is equivalent to factorizing T as T = S \Lambda S^T, where S is orthogonal and \Lambda is diagonal. The diagonal entries of \Lambda are the eigenvalues of T, which are also the eigenvalues of A, and the columns of S are the eigenvectors of T; the eigenvectors of A are the columns of Z = QS, so that A = Z \Lambda Z^T (Z \Lambda Z^H when A is complex Hermitian).
This chapter supports four primary algorithms for computing eigenvalues and eigenvectors of real symmetric matrices and complex Hermitian matrices. They are:
(i) the divide-and-conquer algorithm;
(ii) the QR algorithm;
(iii) bisection followed by inverse iteration;
(iv) the Relatively Robust Representation (RRR).
The divide-and-conquer algorithm is generally more efficient than the traditional QR algorithm for computing all eigenvalues and eigenvectors, but the RRR algorithm tends to be fastest of all. For further information and references see Anderson et al. (1999).
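All four algorithms can be selected through SciPy's symmetric eigensolver, which may serve as an illustrative sketch (the matrix is an arbitrary random symmetric example):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(8)
    A = rng.standard_normal((5, 5))
    A = (A + A.T) / 2                 # real symmetric

    # driver chooses the LAPACK algorithm: 'ev' (QR), 'evd' (divide and
    # conquer), 'evx' (bisection + inverse iteration), 'evr' (RRR).
    w, Z = eigh(A, driver='evr')
    assert np.allclose(A @ Z, Z * w)  # A Z = Z Lambda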

2.8
Generalized Symmetric-definite Eigenvalue Problems

This section is concerned with the solution of the generalized eigenvalue problems A z = \lambda B z, A B z = \lambda z, and B A z = \lambda z, where A and B are real symmetric or complex Hermitian and B is positive definite. Each of these problems can be reduced to a standard symmetric eigenvalue problem, using a Cholesky factorization of B as either B = L L^T or B = U^T U (L L^H or U^H U in the Hermitian case).
With B = L L^T, we have
A z = \lambda B z \quad \Rightarrow \quad ( L^{-1} A L^{-T} ) ( L^T z ) = \lambda ( L^T z ) .
Hence the eigenvalues of A z = \lambda B z are those of C y = \lambda y, where C is the symmetric matrix C = L^{-1} A L^{-T} and y = L^T z. In the complex case C is Hermitian with C = L^{-1} A L^{-H} and y = L^H z.
Table 1 summarises how each of the three types of problem may be reduced to standard form C y = \lambda y, and how the eigenvectors z of the original problem may be recovered from the eigenvectors y of the reduced problem. The table applies to real problems; for complex problems, transposed matrices must be replaced by conjugate-transposes.
Type of problem | Factorization of B | Reduction | Recovery of eigenvectors
1. A z = \lambda B z | B = L L^T | C = L^{-1} A L^{-T} | z = L^{-T} y
  | B = U^T U | C = U^{-T} A U^{-1} | z = U^{-1} y
2. A B z = \lambda z | B = L L^T | C = L^T A L | z = L^{-T} y
  | B = U^T U | C = U A U^T | z = U^{-1} y
3. B A z = \lambda z | B = L L^T | C = L^T A L | z = L y
  | B = U^T U | C = U A U^T | z = U^T y
Table 1
Reduction of generalized symmetric-definite eigenproblems to standard problems
When the generalized symmetric-definite problem has been reduced to the corresponding standard problem Cy=λy, this may then be solved using the routines described in the previous section. No special routines are needed to recover the eigenvectors z of the generalized problem from the eigenvectors y of the standard problem, because these computations are simple applications of Level 2 or Level 3 BLAS (see Chapter F06).
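As a sketch, SciPy's generalized symmetric-definite solver performs exactly this reduce/solve/recover sequence internally; its `type` argument selects among the three problem types of Table 1 (the matrices below are arbitrary, with B made positive definite):

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(9)
    A = rng.standard_normal((4, 4)); A = (A + A.T) / 2
    M = rng.standard_normal((4, 4)); B = M @ M.T + 4 * np.eye(4)  # pos. def.

    # type=1: A z = lambda B z; type=2: A B z = lambda z; type=3: B A z = lambda z
    w, Z = eigh(A, B, type=1)
    assert np.allclose(A @ Z, B @ Z * w)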

2.9
Packed Storage for Symmetric Matrices

Routines which handle symmetric matrices are usually designed so that they use either the upper or lower triangle of the matrix; it is not necessary to store the whole matrix. If either the upper or lower triangle is stored conventionally in the upper or lower triangle of a two-dimensional array, the remaining elements of the array can be used to store other useful data. However, that is not always convenient, and if it is important to economize on storage, the upper or lower triangle can be stored in a one-dimensional array of length n(n + 1)/2; that is, the storage is almost halved.
This storage format is referred to as packed storage; it is described in Section 3.3.2 in the F07 Chapter Introduction.
Routines designed for packed storage are usually less efficient, especially on high-performance computers, so there is a trade-off between storage and efficiency.
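A small sketch of the packed layout itself (assuming the column-packed ordering described in the F07 Chapter Introduction, written here with 0-based indices):

    import numpy as np

    n = 4
    A = np.arange(1.0, n * n + 1).reshape(n, n)
    A = np.triu(A) + np.triu(A, 1).T          # a symmetric test matrix

    # Pack the upper triangle by columns: a(i,j) -> ap[i + j*(j+1)/2], i <= j.
    ap = np.empty(n * (n + 1) // 2)
    for j in range(n):
        for i in range(j + 1):
            ap[i + j * (j + 1) // 2] = A[i, j]
    # ap holds n(n+1)/2 = 10 numbers instead of n*n = 16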

2.10
Band Matrices

A band matrix is one whose elements are confined to a relatively small number of subdiagonals or superdiagonals on either side of the main diagonal. Algorithms can take advantage of bandedness to reduce the amount of work and storage required. The storage scheme for band matrices is described in Section 3.3.4 in the F07 Chapter Introduction.
If the problem is the generalized symmetric definite eigenvalue problem A z = \lambda B z and the matrices A and B are additionally banded, the matrix C as defined in Section 2.8 is, in general, full. We can reduce the problem to a banded standard problem by modifying the definition of C thus:
C = X^T A X , \quad \text{where} \quad X = U^{-1} Q \text{ or } L^{-T} Q ,
where Q is an orthogonal matrix chosen to ensure that C has bandwidth no greater than that of A.
A further refinement is possible when A and B are banded, which halves the amount of work required to form C. Instead of the standard Cholesky factorization of B as U^T U or L L^T, we use a split Cholesky factorization B = S^T S, where
S = \begin{pmatrix} U_{11} & \\ M_{21} & L_{22} \end{pmatrix}
with U_{11} upper triangular and L_{22} lower triangular of order approximately n/2; S has the same bandwidth as B.
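For the standard banded symmetric problem, SciPy's band eigensolver illustrates the band storage convention and solver (a tridiagonal example with one superdiagonal):

    import numpy as np
    from scipy.linalg import eig_banded

    # Upper band-storage form: a_band[u + i - j, j] = a[i, j], here with
    # bandwidth u = 1 (a symmetric tridiagonal matrix).
    n, u = 5, 1
    a_band = np.zeros((u + 1, n))
    a_band[1, :] = 2.0        # main diagonal
    a_band[0, 1:] = -1.0      # superdiagonal

    w, Z = eig_banded(a_band, lower=False)   # eigenvalues and eigenvectors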

2.11
Nonsymmetric Eigenvalue Problems

The nonsymmetric eigenvalue problem is to find the eigenvalues, \lambda, and corresponding eigenvectors, v ≠ 0, such that
A v = \lambda v .
More precisely, a vector v as just defined is called a right eigenvector of A, and a vector u ≠ 0 satisfying
u^T A = \lambda u^T \quad ( u^H A = \lambda u^H \text{ when } u \text{ is complex} )
is called a left eigenvector of A.
A real matrix A may have complex eigenvalues, occurring as complex conjugate pairs.
This problem can be solved via the Schur factorization of A, defined in the real case as
A = Z T Z^T ,
where Z is an orthogonal matrix and T is an upper quasi-triangular matrix with 1 by 1 and 2 by 2 diagonal blocks, the 2 by 2 blocks corresponding to complex conjugate pairs of eigenvalues of A. In the complex case, the Schur factorization is
A = Z T Z^H ,
where Z is unitary and T is a complex upper triangular matrix.
The columns of Z are called the Schur vectors. For each k (1 ≤ k ≤ n), the first k columns of Z form an orthonormal basis for the invariant subspace corresponding to the first k eigenvalues on the diagonal of T. Because this basis is orthonormal, it is preferable in many applications to compute Schur vectors rather than eigenvectors. It is possible to order the Schur factorization so that any desired set of k eigenvalues occupy the k leading positions on the diagonal of T.
The two basic tasks of the nonsymmetric eigenvalue routines are to compute, for a given matrix A, all n values of λ and, if desired, their associated right eigenvectors v and/or left eigenvectors u, and the Schur factorization.
These two basic tasks can be performed in the following stages.
1. A general matrix A is reduced to upper Hessenberg form H which is zero below the first subdiagonal. The reduction may be written A = Q H Q^T with Q orthogonal if A is real, or A = Q H Q^H with Q unitary if A is complex.
2. The upper Hessenberg matrix H is reduced to Schur form T, giving the Schur factorization H = S T S^T (for H real) or H = S T S^H (for H complex). The matrix S (the Schur vectors of H) may optionally be computed as well. Alternatively S may be postmultiplied into the matrix Q determined in stage 1, to give the matrix Z = QS, the Schur vectors of A. The eigenvalues are obtained from the diagonal elements or diagonal blocks of T.
3. Given the eigenvalues, the eigenvectors may be computed in two different ways. Inverse iteration can be performed on H to compute the eigenvectors of H, and then the eigenvectors can be multiplied by the matrix Q in order to transform them to eigenvectors of A. Alternatively the eigenvectors of T can be computed, and optionally transformed to those of H or A if the matrix S or Z is supplied.
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix. This is discussed further in Section 2.14.6 below.
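A sketch of the Schur factorization and the eigenvalue/eigenvector computation using SciPy (arbitrary data; the real Schur form T is quasi-triangular as described above):

    import numpy as np
    from scipy.linalg import schur, eig

    rng = np.random.default_rng(10)
    A = rng.standard_normal((5, 5))

    T, Z = schur(A, output='real')             # A = Z T Z^T
    w, vl, vr = eig(A, left=True, right=True)  # eigenvalues, left/right vectors
    assert np.allclose(A @ vr, vr * w)         # A v = lambda v, column by column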

2.12
Generalized Nonsymmetric Eigenvalue Problem

The generalized nonsymmetric eigenvalue problem is to find the eigenvalues, \lambda, and corresponding eigenvectors, v ≠ 0, such that
A v = \lambda B v .
More precisely, a vector v as just defined is called a right eigenvector of the matrix pair (A, B), and a vector u ≠ 0 satisfying
u^T A = \lambda u^T B \quad ( u^H A = \lambda u^H B \text{ when } u \text{ is complex} )
is called a left eigenvector of the matrix pair (A, B).
If B is singular then the problem has one or more infinite eigenvalues \lambda = \infty, corresponding to B v = 0. Note that if A is nonsingular, then the equivalent problem \mu A v = B v is perfectly well defined and an infinite eigenvalue corresponds to \mu = 0. To deal with both finite (including zero) and infinite eigenvalues, the routines in this chapter do not compute \lambda explicitly, but rather return a pair of numbers (\alpha, \beta) such that if \beta ≠ 0
\lambda = \alpha / \beta ,
and if \alpha ≠ 0 and \beta = 0 then \lambda = \infty. \beta is always returned as real and non-negative. Of course, computationally an infinite eigenvalue may correspond to a small \beta rather than an exact zero.
For a given pair (A, B) the set of all the matrices of the form A - \lambda B is called a matrix pencil and \lambda and v are said to be an eigenvalue and eigenvector of the pencil A - \lambda B. If A and B are both singular and share a common null space then
\det( A - \lambda B ) \equiv 0
so that the pencil A - \lambda B is singular for all \lambda. In other words any \lambda can be regarded as an eigenvalue. In exact arithmetic a singular pencil will have \alpha = \beta = 0 for some (\alpha, \beta). Computationally if some pair (\alpha, \beta) is small then the pencil is singular, or nearly singular, and no reliance can be placed on any of the computed eigenvalues. Singular pencils can also manifest themselves in other ways; see, in particular, Sections 2.3.5.2 and 4.11.1.4 of Anderson et al. (1999) for further details.
The generalized eigenvalue problem can be solved via the generalized Schur factorization of the pair (A, B), defined in the real case as
A = Q S Z^T , \quad B = Q T Z^T ,
where Q and Z are orthogonal, T is upper triangular with non-negative diagonal elements and S is upper quasi-triangular with 1 by 1 and 2 by 2 diagonal blocks, the 2 by 2 blocks corresponding to complex conjugate pairs of eigenvalues. In the complex case, the generalized Schur factorization is
A = Q S Z^H , \quad B = Q T Z^H ,
where Q and Z are unitary and S and T are upper triangular, with T having real non-negative diagonal elements. The columns of Q and Z are called respectively the left and right generalized Schur vectors and span pairs of deflating subspaces of A and B, which are a generalization of invariant subspaces.
It is possible to order the generalized Schur factorization so that any desired set of k eigenvalues correspond to the k leading positions on the diagonals of the pair (S, T).
The two basic tasks of the generalized nonsymmetric eigenvalue routines are to compute, for a given pair (A, B), all n values of \lambda and, if desired, their associated right eigenvectors v and/or left eigenvectors u, and the generalized Schur factorization.
These two basic tasks can be performed in the following stages.
1. The matrix pair (A, B) is reduced to generalized upper Hessenberg form (H, R), where H is upper Hessenberg (zero below the first subdiagonal) and R is upper triangular. The reduction may be written as A = Q_1 H Z_1^T, B = Q_1 R Z_1^T in the real case with Q_1 and Z_1 orthogonal, and A = Q_1 H Z_1^H, B = Q_1 R Z_1^H in the complex case with Q_1 and Z_1 unitary.
2. The generalized upper Hessenberg form (H, R) is reduced to the generalized Schur form (S, T) using the generalized Schur factorization H = Q_2 S Z_2^T, R = Q_2 T Z_2^T in the real case with Q_2 and Z_2 orthogonal, and H = Q_2 S Z_2^H, R = Q_2 T Z_2^H in the complex case. The generalized Schur vectors of (A, B) are given by Q = Q_1 Q_2, Z = Z_1 Z_2. The eigenvalues are obtained from the diagonal elements (or blocks) of the pair (S, T).
3. Given the eigenvalues, the eigenvectors of the pair (S, T) can be computed, and optionally transformed to those of (H, R) or (A, B).
The accuracy with which eigenvalues can be obtained can often be improved by balancing a matrix pair. This is discussed further in Section 2.14.8 below.
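A sketch of the generalized Schur factorization and the (alpha, beta) representation of the eigenvalues using SciPy (arbitrary data, so B is nonsingular and all eigenvalues are finite here):

    import numpy as np
    from scipy.linalg import qz, eig

    rng = np.random.default_rng(11)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    S, T, Q, Z = qz(A, B, output='real')   # A = Q S Z^T, B = Q T Z^T
    assert np.allclose(A, Q @ S @ Z.T) and np.allclose(B, Q @ T @ Z.T)

    # Eigenvalues returned as (alpha, beta) pairs, as described above.
    w, vr = eig(A, B, homogeneous_eigvals=True)
    alpha, beta = w                        # w has shape (2, n)
    lam = alpha / beta                     # safe here since beta != 0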

2.13
The Sylvester Equation and the Generalized Sylvester Equation

The Sylvester equation is a matrix equation of the form
A X + X B = C ,
where A, B, and C are given matrices with A being m by m, B an n by n matrix and C, and the solution matrix X, m by n matrices. The solution of a special case of this equation occurs in the computation of the condition number for an invariant subspace, but a combination of routines in this chapter allows the solution of the general Sylvester equation.
Routines are also provided for solving a special case of the generalized Sylvester equations
A R - L B = C , \quad D R - L E = F ,
where (A, D), (B, E) and (C, F) are given matrix pairs, and R and L are the solution matrices.
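For the standard Sylvester equation, a sketch using SciPy's Schur-factorization-based solver (arbitrary data):

    import numpy as np
    from scipy.linalg import solve_sylvester

    rng = np.random.default_rng(12)
    A = rng.standard_normal((3, 3))    # m by m
    B = rng.standard_normal((2, 2))    # n by n
    C = rng.standard_normal((3, 2))    # m by n

    X = solve_sylvester(A, B, C)       # solves A X + X B = C
    assert np.allclose(A @ X + X @ B, C)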

2.14
Error and Perturbation Bounds and Condition Numbers

In this section we discuss the effects of rounding errors in the solution process and the effects of uncertainties in the data, on the solution to the problem. A number of the routines in this chapter return information, such as condition numbers, that allow these effects to be assessed. First we discuss some notation used in the error bounds of later sections.
The bounds usually contain the factor p(n) (or p(m, n)), which grows as a function of the matrix dimension n (or matrix dimensions m and n). It measures how errors can grow as a function of the matrix dimension, and represents a potentially different function for each problem. In practice, it usually grows just linearly; p(n) ≤ 10n is often true, although generally only much weaker bounds can be actually proved. We normally describe p(n) as a ‘modestly growing’ function of n. For detailed derivations of various p(n), see Golub and Van Loan (2012) and Wilkinson (1965).
For linear equation (see Chapter F07) and least squares solvers, we consider bounds on the relative error ‖x − x̂‖ / ‖x‖ in the computed solution x̂, where x is the true solution. For eigenvalue problems we consider bounds on the error |\lambda_i − \hat{\lambda}_i| in the ith computed eigenvalue \hat{\lambda}_i, where \lambda_i is the true ith eigenvalue. For singular value problems we similarly consider bounds |\sigma_i − \hat{\sigma}_i|.
Bounding the error in computed eigenvectors and singular vectors \hat{v}_i is more subtle because these vectors are not unique: even though we restrict ‖\hat{v}_i‖_2 = 1 and ‖v_i‖_2 = 1, we may still multiply them by arbitrary constants of absolute value 1. So to avoid ambiguity we bound the angular difference between \hat{v}_i and the true vector v_i, so that
\theta( v_i, \hat{v}_i ) = \text{acute angle between } v_i \text{ and } \hat{v}_i = \arccos | v_i^H \hat{v}_i | .   (2)
Here \arccos \theta is in the standard range: 0 ≤ \arccos \theta < \pi. When \theta( v_i, \hat{v}_i ) is small, we can choose a constant \alpha with absolute value 1 so that ‖\alpha v_i − \hat{v}_i‖_2 ≈ \theta( v_i, \hat{v}_i ).
In addition to bounds for individual eigenvectors, bounds can be obtained for the spaces spanned by collections of eigenvectors. These may be much more accurately determined than the individual eigenvectors which span them. These spaces are called invariant subspaces in the case of eigenvectors, because if v is any vector in the space, Av is also in the space, where A is the matrix. Again, we will use angle to measure the difference between a computed space \hat{S} and the true space S:
\theta( S, \hat{S} ) = \text{acute angle between } S \text{ and } \hat{S} = \max_{s \in S, s \ne 0} \min_{\hat{s} \in \hat{S}, \hat{s} \ne 0} \theta( s, \hat{s} ) \quad \text{or} \quad \max_{\hat{s} \in \hat{S}, \hat{s} \ne 0} \min_{s \in S, s \ne 0} \theta( s, \hat{s} )   (3)
\theta( S, \hat{S} ) may be computed as follows. Let S be a matrix whose columns are orthonormal and span S. Similarly let \hat{S} be an orthonormal matrix with columns spanning \hat{S}. Then
\theta( S, \hat{S} ) = \arccos \sigma_{\min}( S^H \hat{S} ) .
Finally, we remark on the accuracy of the bounds when they are large. Relative errors like ‖x̂ − x‖ / ‖x‖ and angular errors like \theta( \hat{v}_i, v_i ) are only of interest when they are much less than 1. Some stated bounds are not strictly true when they are close to 1, but rigorous bounds are much more complicated and supply little extra information in the interesting case of small errors. These bounds are indicated by using the symbol ≲, or ‘approximately less than’, instead of the usual ≤. Thus, when these bounds are close to 1 or greater, they indicate that the computed answer may have no significant digits at all, but do not otherwise bound the error.
A number of routines in this chapter return error estimates and/or condition number estimates directly. In other cases Anderson et al. (1999) gives code fragments to illustrate the computation of these estimates, and a number of the Chapter F08 example programs, for the driver routines, implement these code fragments.

2.14.1
Least squares problems

The conventional error analysis of linear least squares problems goes as follows. The problem is to find the x minimizing ‖Ax − b‖_2. Let x̂ be the solution computed using one of the methods described above. We discuss the most common case, where A is overdetermined (i.e., has more rows than columns) and has full rank.
Then the computed solution x̂ has a small normwise backward error. In other words x̂ minimizes ‖(A + E) x̂ − (b + f)‖_2, where
\max\left( \frac{\|E\|_2}{\|A\|_2}, \frac{\|f\|_2}{\|b\|_2} \right) \le p(n) \varepsilon
and p(n) is a modestly growing function of n and \varepsilon is the machine precision. Let \kappa_2(A) = \sigma_{\max}(A) / \sigma_{\min}(A), \rho = ‖Ax − b‖_2, and \sin \theta = \rho / ‖b‖_2. Then if p(n) \varepsilon is small enough, the error x̂ − x is bounded by
\frac{\| x - \hat{x} \|_2}{\| x \|_2} \lesssim p(n) \varepsilon \left( \frac{2 \kappa_2(A)}{\cos \theta} + \tan \theta \, \kappa_2^2(A) \right) .
If A is rank-deficient, the problem can be regularized by treating all singular values less than a user-specified threshold as exactly zero. See Golub and Van Loan (2012) for error bounds in this case, as well as for the underdetermined case.
The solution of the overdetermined, full-rank problem may also be characterised as the solution of the linear system of equations
\begin{pmatrix} I & A \\ A^T & 0 \end{pmatrix} \begin{pmatrix} r \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix} .
By solving this linear system (see Chapter F07) component-wise error bounds can also be obtained (see Arioli et al. (1989)).
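The quantities in the forward error bound are easy to evaluate from computed results; a sketch (taking p(n) ≈ n purely as an illustrative choice):

    import numpy as np
    from scipy.linalg import lstsq, svdvals

    rng = np.random.default_rng(13)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)
    x = lstsq(A, b)[0]

    s = svdvals(A)
    kappa2 = s[0] / s[-1]                        # kappa_2(A)
    rho = np.linalg.norm(b - A @ x)              # rho = ||A x - b||_2
    theta = np.arcsin(min(rho / np.linalg.norm(b), 1.0))
    eps = np.finfo(float).eps
    pn = A.shape[1]                              # illustrative p(n) ~ n
    bound = pn * eps * (2 * kappa2 / np.cos(theta)
                        + np.tan(theta) * kappa2 ** 2)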

2.14.2
The singular value decomposition

The usual error analysis of the SVD algorithm is as follows (see Golub and Van Loan (2012)).
The computed SVD, \hat{U} \hat{\Sigma} \hat{V}^T, is nearly the exact SVD of A + E, i.e., A + E = ( \hat{U} + \delta\hat{U} ) \hat{\Sigma} ( \hat{V} + \delta\hat{V} )^T is the true SVD, so that \hat{U} + \delta\hat{U} and \hat{V} + \delta\hat{V} are both orthogonal, where ‖E‖_2 / ‖A‖_2 ≤ p(m, n) \varepsilon, ‖\delta\hat{U}‖ ≤ p(m, n) \varepsilon, and ‖\delta\hat{V}‖ ≤ p(m, n) \varepsilon. Here p(m, n) is a modestly growing function of m and n and \varepsilon is the machine precision. Each computed singular value \hat{\sigma}_i differs from the true \sigma_i by an amount satisfying the bound
| \hat{\sigma}_i - \sigma_i | \le p(m, n) \varepsilon \sigma_1 .
Thus large singular values (those near \sigma_1) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed left singular vector \hat{u}_i and the true u_i satisfies the approximate bound
\theta( \hat{u}_i, u_i ) \lesssim \frac{p(m, n) \varepsilon \| A \|_2}{\mathrm{gap}_i}
where
\mathrm{gap}_i = \min_{j \ne i} | \sigma_i - \sigma_j |
is the absolute gap between \sigma_i and the nearest other singular value. Thus, if \sigma_i is close to other singular values, its corresponding singular vector u_i may be inaccurate. The same bound applies to the computed right singular vector \hat{v}_i and the true vector v_i. The gaps may be easily obtained from the computed singular values.
Let \hat{S} be the space spanned by a collection of computed left singular vectors { \hat{u}_i, i \in I }, where I is a subset of the integers from 1 to n. Let S be the corresponding true space. Then
\theta( \hat{S}, S ) \lesssim \frac{p(m, n) \varepsilon \| A \|_2}{\mathrm{gap}_I} ,
where
\mathrm{gap}_I = \min | \sigma_i - \sigma_j | \quad \text{for } i \in I, j \notin I
is the absolute gap between the singular values in I and the nearest other singular value. Thus, a cluster of close singular values which is far away from any other singular value may have a well determined space \hat{S} even if its individual singular vectors are ill-conditioned. The same bound applies to a set of right singular vectors { \hat{v}_i, i \in I }.
In the special case of bidiagonal matrices, the singular values and singular vectors may be computed much more accurately (see Demmel and Kahan (1990)). A bidiagonal matrix B has nonzero entries only on the main diagonal and the diagonal immediately above it (or immediately below it). Reduction of a dense matrix to bidiagonal form B can introduce additional errors, so the following bounds for the bidiagonal case do not apply to the dense case.
Using the routines in this chapter, each computed singular value of a bidiagonal matrix is accurate to nearly full relative accuracy, no matter how tiny it is, so that
| \hat{\sigma}_i - \sigma_i | \le p(m, n) \varepsilon \sigma_i .
The computed left singular vector \hat{u}_i has an angular error at most about
\theta( \hat{u}_i, u_i ) \lesssim \frac{p(m, n) \varepsilon}{\mathrm{relgap}_i}
where
\mathrm{relgap}_i = \min_{j \ne i} \frac{| \sigma_i - \sigma_j |}{\sigma_i + \sigma_j}
is the relative gap between \sigma_i and the nearest other singular value. The same bound applies to the right singular vector \hat{v}_i and v_i. Since the relative gap may be much larger than the absolute gap, this error bound may be much smaller than the previous one. The relative gaps may be easily obtained from the computed singular values.

2.14.3
The symmetric eigenproblem

The usual error analysis of the symmetric eigenproblem is as follows (see Parlett (1998)).
The computed eigendecomposition \hat{Z} \hat{\Lambda} \hat{Z}^T is nearly the exact eigendecomposition of A + E, i.e., A + E = ( \hat{Z} + \delta\hat{Z} ) \hat{\Lambda} ( \hat{Z} + \delta\hat{Z} )^T is the true eigendecomposition so that \hat{Z} + \delta\hat{Z} is orthogonal, where ‖E‖_2 / ‖A‖_2 ≤ p(n) \varepsilon and ‖\delta\hat{Z}‖_2 ≤ p(n) \varepsilon and p(n) is a modestly growing function of n and \varepsilon is the machine precision. Each computed eigenvalue \hat{\lambda}_i differs from the true \lambda_i by an amount satisfying the bound
| \hat{\lambda}_i - \lambda_i | \le p(n) \varepsilon \| A \|_2 .
Thus large eigenvalues (those near \max_i | \lambda_i | = ‖A‖_2) are computed to high relative accuracy and small ones may not be.
The angular difference between the computed unit eigenvector \hat{z}_i and the true z_i satisfies the approximate bound
\theta( \hat{z}_i, z_i ) \lesssim \frac{p(n) \varepsilon \| A \|_2}{\mathrm{gap}_i}
if p(n) \varepsilon is small enough, where
\mathrm{gap}_i = \min_{j \ne i} | \lambda_i - \lambda_j |
is the absolute gap between \lambda_i and the nearest other eigenvalue. Thus, if \lambda_i is close to other eigenvalues, its corresponding eigenvector z_i may be inaccurate. The gaps may be easily obtained from the computed eigenvalues.
Let \hat{S} be the invariant subspace spanned by a collection of eigenvectors { \hat{z}_i, i \in I }, where I is a subset of the integers from 1 to n. Let S be the corresponding true subspace. Then
\theta( \hat{S}, S ) \lesssim \frac{p(n) \varepsilon \| A \|_2}{\mathrm{gap}_I}
where
\mathrm{gap}_I = \min | \lambda_i - \lambda_j | \quad \text{for } i \in I, j \notin I
is the absolute gap between the eigenvalues in I and the nearest other eigenvalue. Thus, a cluster of close eigenvalues which is far away from any other eigenvalue may have a well determined invariant subspace \hat{S} even if its individual eigenvectors are ill-conditioned.
In the special case of a real symmetric tridiagonal matrix T, routines in this chapter can compute the eigenvalues and eigenvectors much more accurately. See Anderson et al. (1999) for further details.

2.14.4
The generalized symmetric-definite eigenproblem

The three types of problem to be considered are A − \lambda B, AB − \lambda I and BA − \lambda I. In each case A and B are real symmetric (or complex Hermitian) and B is positive definite. We consider each case in turn, assuming that routines in this chapter are used to transform the generalized problem to the standard symmetric problem, followed by the solution of the symmetric problem. In all cases
\mathrm{gap}_i = \min_{j \ne i} | \lambda_i - \lambda_j |
is the absolute gap between \lambda_i and the nearest other eigenvalue.
1. A − \lambda B. The computed eigenvalues \hat{\lambda}_i can differ from the true eigenvalues \lambda_i by an amount
| \hat{\lambda}_i - \lambda_i | \lesssim p(n) \varepsilon \| B^{-1} \|_2 \| A \|_2 .
The angular difference between the computed eigenvector \hat{z}_i and the true eigenvector z_i is
\theta( \hat{z}_i, z_i ) \lesssim \frac{p(n) \varepsilon \| B^{-1} \|_2 \| A \|_2 ( \kappa_2(B) )^{1/2}}{\mathrm{gap}_i} .
2. AB − \lambda I or BA − \lambda I. The computed eigenvalues \hat{\lambda}_i can differ from the true eigenvalues \lambda_i by an amount
| \hat{\lambda}_i - \lambda_i | \lesssim p(n) \varepsilon \| B \|_2 \| A \|_2 .
The angular difference between the computed eigenvector \hat{z}_i and the true eigenvector z_i is
\theta( \hat{z}_i, z_i ) \lesssim \frac{p(n) \varepsilon \| B \|_2 \| A \|_2 ( \kappa_2(B) )^{1/2}}{\mathrm{gap}_i} .
These error bounds are large when B is ill-conditioned with respect to inversion (\kappa_2(B) is large). It is often the case that the eigenvalues and eigenvectors are much better conditioned than indicated here. One way to get tighter bounds is effective when the diagonal entries of B differ widely in magnitude, as for example with a graded matrix.
1. A − \lambda B. Let D = diag( b_{11}^{-1/2}, \dots, b_{nn}^{-1/2} ) be a diagonal matrix. Then replace B by DBD and A by DAD in the above bounds.
2. AB − \lambda I or BA − \lambda I. Let D = diag( b_{11}^{-1/2}, \dots, b_{nn}^{-1/2} ) be a diagonal matrix. Then replace B by DBD and A by D^{-1} A D^{-1} in the above bounds.
Further details can be found in Anderson et al. (1999).

2.14.5
The nonsymmetric eigenproblem

The nonsymmetric eigenvalue problem is more complicated than the symmetric eigenvalue problem. In this section, we just summarise the bounds. Further details can be found in Anderson et al. (1999).
We let \hat{\lambda}_i be the ith computed eigenvalue and \lambda_i the ith true eigenvalue. Let \hat{v}_i be the corresponding computed right eigenvector, and v_i the true right eigenvector (so A v_i = \lambda_i v_i). If I is a subset of the integers from 1 to n, we let \lambda_I denote the average of the selected eigenvalues: \lambda_I = \left( \sum_{i \in I} \lambda_i \right) / \left( \sum_{i \in I} 1 \right), and similarly for \hat{\lambda}_I. We also let S_I denote the subspace spanned by { v_i, i \in I }; it is called a right invariant subspace because if v is any vector in S_I then Av is also in S_I. \hat{S}_I is the corresponding computed subspace.
The algorithms for the nonsymmetric eigenproblem are normwise backward stable: they compute the exact eigenvalues, eigenvectors and invariant subspaces of slightly perturbed matrices A + E, where ‖E‖ ≤ p(n) \varepsilon ‖A‖. Some of the bounds are stated in terms of ‖E‖_2 and others in terms of ‖E‖_F; one may use p(n) \varepsilon ‖A‖ for either quantity.
Routines are provided so that, for each ( \hat{\lambda}_i, \hat{v}_i ) pair the two values s_i and \mathrm{sep}_i, or for a selected subset I of eigenvalues the values s_I and \mathrm{sep}_I can be obtained, for which the error bounds in Table 2 are true for sufficiently small ‖E‖ (which is why they are called asymptotic):
Simple eigenvalue:   | \hat{\lambda}_i - \lambda_i | \lesssim \| E \|_2 / s_i
Eigenvalue cluster:   | \hat{\lambda}_I - \lambda_I | \lesssim \| E \|_2 / s_I
Eigenvector:   \theta( \hat{v}_i, v_i ) \lesssim \| E \|_F / \mathrm{sep}_i
Invariant subspace:   \theta( \hat{S}_I, S_I ) \lesssim \| E \|_F / \mathrm{sep}_I
Table 2
Asymptotic error bounds for the nonsymmetric eigenproblem
If the problem is ill-conditioned, the asymptotic bounds may only hold for extremely small ‖E‖. The global error bounds of Table 3 are guaranteed to hold for all ‖E‖_F < s × sep / 4:
Simple eigenvalue:   | \hat{\lambda}_i - \lambda_i | \le n \| E \|_2 / s_i   (holds for all E)
Eigenvalue cluster:   | \hat{\lambda}_I - \lambda_I | \le 2 \| E \|_2 / s_I   (requires \| E \|_F < s_I \times \mathrm{sep}_I / 4)
Eigenvector:   \theta( \hat{v}_i, v_i ) \le \arctan( 2 \| E \|_F / ( \mathrm{sep}_i - 4 \| E \|_F / s_i ) )   (requires \| E \|_F < s_i \times \mathrm{sep}_i / 4)
Invariant subspace:   \theta( \hat{S}_I, S_I ) \le \arctan( 2 \| E \|_F / ( \mathrm{sep}_I - 4 \| E \|_F / s_I ) )   (requires \| E \|_F < s_I \times \mathrm{sep}_I / 4)
Table 3
Global error bounds for the nonsymmetric eigenproblem

2.14.6
Balancing and condition for the nonsymmetric eigenproblem

There are two preprocessing steps one may perform on a matrix A in order to make its eigenproblem easier. The first is permutation, or reordering the rows and columns to make A more nearly upper triangular (closer to Schur form): A' = P A P^T, where P is a permutation matrix. If A is permutable to upper triangular form (or close to it), then no floating-point operations (or very few) are needed to reduce it to Schur form. The second is scaling by a diagonal matrix D to make the rows and columns of A more nearly equal in norm: A' = D A D^{-1}. Scaling can make the matrix norm smaller with respect to the eigenvalues, and so possibly reduce the inaccuracy contributed by roundoff (see Chapter 11 of Wilkinson and Reinsch (1971)). We refer to these two operations as balancing.
Permuting has no effect on the condition numbers or their interpretation as described previously. Scaling, however, does change their interpretation and further details can be found in Anderson et al. (1999).

2.14.7
The generalized nonsymmetric eigenvalue problem

The algorithms for the generalized nonsymmetric eigenvalue problem are normwise backward stable: they compute the exact eigenvalues (as the pairs (\alpha, \beta)), eigenvectors and deflating subspaces of slightly perturbed pairs (A + E, B + F), where
\| (E, F) \|_F \le p(n) \varepsilon \| (A, B) \|_F .
Asymptotic and global error bounds can be obtained, which are generalizations of those given in Tables 2 and 3. See Section 4.11 of Anderson et al. (1999) for details. Routines are provided to compute estimates of reciprocal condition numbers for eigenvalues and eigenspaces.

2.14.8
Balancing the generalized eigenvalue problem

As with the standard nonsymmetric eigenvalue problem, there are two preprocessing steps one may perform on a matrix pair A,B in order to make its eigenproblem easier; permutation and scaling, which together are referred to as balancing, as indicated in the following two steps.
1. The balancing routine first attempts to permute A and B to block upper triangular form by a similarity transformation:
P A P^T = F = \begin{pmatrix} F_{11} & F_{12} & F_{13} \\ & F_{22} & F_{23} \\ & & F_{33} \end{pmatrix} , \quad P B P^T = G = \begin{pmatrix} G_{11} & G_{12} & G_{13} \\ & G_{22} & G_{23} \\ & & G_{33} \end{pmatrix} ,
where P is a permutation matrix and F_{11}, F_{33}, G_{11} and G_{33} are upper triangular. Then the diagonal elements of the matrix pairs (F_{11}, G_{11}) and (F_{33}, G_{33}) are generalized eigenvalues of (A, B). The rest of the generalized eigenvalues are given by the matrix pair (F_{22}, G_{22}). Subsequent operations to compute the eigenvalues of (A, B) need only be applied to the matrix pair (F_{22}, G_{22}); this can save a significant amount of work if (F_{22}, G_{22}) is smaller than the original matrix pair (A, B). If no suitable permutation exists (as is often the case), then there is no gain in efficiency or accuracy.
2. The balancing routine applies a diagonal similarity transformation to (F, G), to make the rows and columns of (F_{22}, G_{22}) as close as possible in norm:
D F D^{-1} = \begin{pmatrix} I & & \\ & D_{22} & \\ & & I \end{pmatrix} F \begin{pmatrix} I & & \\ & D_{22}^{-1} & \\ & & I \end{pmatrix} , \quad D G D^{-1} = \begin{pmatrix} I & & \\ & D_{22} & \\ & & I \end{pmatrix} G \begin{pmatrix} I & & \\ & D_{22}^{-1} & \\ & & I \end{pmatrix} .
This transformation usually improves the accuracy of computed generalized eigenvalues and eigenvectors. However, there are exceptional occasions when this transformation increases the norm of the pencil; in this case accuracy could be lower with diagonal balancing.
See Anderson et al. (1999) for further details.

2.14.9
Other problems

Error bounds for other problems such as the generalized linear least squares problem and generalized singular value decomposition can be found in Anderson et al. (1999).

2.15
Block Partitioned Algorithms

A number of the routines in this chapter use what is termed a block partitioned algorithm. This means that at each major step of the algorithm a block of rows or columns is updated, and much of the computation is performed by matrix-matrix operations on these blocks. These matrix-matrix operations make efficient use of computer memory and are key to achieving high performance. See Golub and Van Loan (2012) or Anderson et al. (1999) for more about block partitioned algorithms.
The performance of a block partitioned algorithm varies to some extent with the block size – that is, the number of rows or columns per block. This is a machine-dependent constant, which is set to a suitable value when the library is implemented on each range of machines. Block size affects the amount of workspace that should be supplied to a particular routine. This is discussed in Section 3.4.3.

3
Recommendations on Choice and Use of Available Routines

3.1
Available Routines

The tables in the following sub-sections show the routines which are provided for performing different computations on different types of matrices. Each entry in the table gives the NAG routine name and the LAPACK double precision name (see Section 3.2).
Black box (or driver) routines are provided for the solution of most problems. In a number of cases there are simple drivers, which just return the solution to the problem, as well as expert drivers, which return additional information, such as condition number estimates, and may offer additional facilities such as balancing. The following sub-sections give tables for the driver routines.

3.1.1
Driver routines

3.1.1.1
Linear least squares problems (LLS)
Operation | real | complex
solve LLS using QR or LQ factorization | f08aaf (dgels) | f08anf (zgels)
solve LLS using complete orthogonal factorization | f08baf (dgelsy) | f08bnf (zgelsy)
solve LLS using SVD | f08kaf (dgelss) | f08knf (zgelss)
solve LLS using divide-and-conquer SVD | f08kcf (dgelsd) | f08kqf (zgelsd)
3.1.1.2
Generalized linear least squares problems (LSE and GLM)
Operation | real | complex
solve LSE problem using GRQ | f08zaf (dgglse) | f08znf (zgglse)
solve GLM problem using GQR | f08zbf (dggglm) | f08zpf (zggglm)
3.1.1.3
Symmetric eigenvalue problems (SEP)
Function and storage scheme | real | complex
simple driver | f08faf (dsyev) | f08fnf (zheev)
divide-and-conquer driver | f08fcf (dsyevd) | f08fqf (zheevd)
expert driver | f08fbf (dsyevx) | f08fpf (zheevx)
RRR driver | f08fdf (dsyevr) | f08frf (zheevr)
packed storage:
simple driver | f08gaf (dspev) | f08gnf (zhpev)
divide-and-conquer driver | f08gcf (dspevd) | f08gqf (zhpevd)
expert driver | f08gbf (dspevx) | f08gpf (zhpevx)
band matrix:
simple driver | f08haf (dsbev) | f08hnf (zhbev)
divide-and-conquer driver | f08hcf (dsbevd) | f08hqf (zhbevd)
expert driver | f08hbf (dsbevx) | f08hpf (zhbevx)
tridiagonal matrix:
simple driver | f08jaf (dstev) | –
divide-and-conquer driver | f08jcf (dstevd) | –
expert driver | f08jbf (dstevx) | –
RRR driver | f08jdf (dstevr) | –
3.1.1.4
Nonsymmetric eigenvalue problem (NEP)
Function and storage scheme | real | complex
simple driver for Schur factorization | f08paf (dgees) | f08pnf (zgees)
expert driver for Schur factorization | f08pbf (dgeesx) | f08ppf (zgeesx)
simple driver for eigenvalues/vectors | f08naf (dgeev) | f08nnf (zgeev)
expert driver for eigenvalues/vectors | f08nbf (dgeevx) | f08npf (zgeevx)
3.1.1.5
Singular value decomposition (SVD)
Function and storage scheme | real | complex
simple driver | f08kbf (dgesvd) | f08kpf (zgesvd)
divide-and-conquer driver | f08kdf (dgesdd) | f08krf (zgesdd)
simple driver for one-sided Jacobi SVD | f08kjf (dgesvj) | –
expert driver for one-sided Jacobi SVD | f08khf (dgejsv) | –
3.1.1.6
Generalized symmetric definite eigenvalue problems (GSEP)
Function and storage scheme | real | complex
simple driver | f08saf (dsygv) | f08snf (zhegv)
divide-and-conquer driver | f08scf (dsygvd) | f08sqf (zhegvd)
expert driver | f08sbf (dsygvx) | f08spf (zhegvx)
packed storage:
simple driver | f08taf (dspgv) | f08tnf (zhpgv)
divide-and-conquer driver | f08tcf (dspgvd) | f08tqf (zhpgvd)
expert driver | f08tbf (dspgvx) | f08tpf (zhpgvx)
band matrix:
simple driver | f08uaf (dsbgv) | f08unf (zhbgv)
divide-and-conquer driver | f08ucf (dsbgvd) | f08uqf (zhbgvd)
expert driver | f08ubf (dsbgvx) | f08upf (zhbgvx)
3.1.1.7
Generalized nonsymmetric eigenvalue problem (GNEP)
Function and storage scheme | real | complex
simple driver for Schur factorization | f08xcf (dgges3) | f08xqf (zgges3)
expert driver for Schur factorization | f08xbf (dggesx) | f08xpf (zggesx)
simple driver for eigenvalues/vectors | f08wcf (dggev3) | f08wqf (zggev3)
expert driver for eigenvalues/vectors | f08wbf (dggevx) | f08wpf (zggevx)
3.1.1.8
Generalized singular value decomposition (GSVD)
Function and storage scheme | real | complex
singular values/vectors | f08vcf (dggsvd3) | f08vqf (zggsvd3)

3.1.2
Computational routines

It is possible to solve problems by calling two or more routines in sequence. Some common sequences of routines are indicated in the tables in the following sub-sections; an asterisk (*) against a routine name means that the sequence of calls is illustrated in the example program for that routine.
3.1.2.1
Orthogonal factorizations
Routines are provided for QR factorization (with and without column pivoting), and for LQ, QL and RQ factorizations (without pivoting only), of a general real or complex rectangular matrix. A routine is also provided for the RQ factorization of a real or complex upper trapezoidal matrix. (LAPACK refers to this as the RZ factorization.)
The factorization routines do not form the matrix Q explicitly, but represent it as a product of elementary reflectors (see Section 3.3.6). Additional routines are provided to generate all or part of Q explicitly if it is required, or to apply Q in its factored form to another matrix (specifically to compute one of the matrix products QC, Q^T C, CQ or C Q^T with Q^T replaced by Q^H if C and Q are complex).
Factorization and matrix type | Factorize without pivoting | Factorize with pivoting | Factorize (blocked) | Generate matrix Q | Apply matrix Q | Apply Q (blocked)
QR factorization, real matrices | f08aef (dgeqrf) | f08bff (dgeqp3) | f08abf (dgeqrt) | f08aff (dorgqr) | f08agf (dormqr) | f08acf (dgemqrt)
QR factorization, real triangular-pentagonal | | | f08bbf (dtpqrt) | | | f08bcf (dtpmqrt)
LQ factorization, real matrices | f08ahf (dgelqf) | | | f08ajf (dorglq) | f08akf (dormlq) |
QL factorization, real matrices | f08cef (dgeqlf) | | | f08cff (dorgql) | f08cgf (dormql) |
RQ factorization, real matrices | f08chf (dgerqf) | | | f08cjf (dorgrq) | f08ckf (dormrq) |
RQ factorization, real upper trapezoidal matrices | f08bhf (dtzrzf) | | | | f08bkf (dormrz) |
QR factorization, complex matrices | f08asf (zgeqrf) | f08btf (zgeqp3) | f08apf (zgeqrt) | f08atf (zungqr) | f08auf (zunmqr) | f08aqf (zgemqrt)
QR factorization, complex triangular-pentagonal | | | f08bpf (ztpqrt) | | | f08bqf (ztpmqrt)
LQ factorization, complex matrices | f08avf (zgelqf) | | | f08awf (zunglq) | f08axf (zunmlq) |
QL factorization, complex matrices | f08csf (zgeqlf) | | | f08ctf (zungql) | f08cuf (zunmql) |
RQ factorization, complex matrices | f08cvf (zgerqf) | | | f08cwf (zungrq) | f08cxf (zunmrq) |
RQ factorization, complex upper trapezoidal matrices | f08bvf (ztzrzf) | | | | f08bxf (zunmrz) |
To solve linear least squares problems, as described in Section 2.2.1 or Section 2.2.3, routines based on the QR factorization can be used:
real data, full-rank problem f08aaf (dgels), f08aef (dgeqrf) and f08agf (dormqr), f08abf (dgeqrt) and f08acf (dgemqrt), f06yjf (dtrsm)
complex data, full-rank problem f08anf (zgels), f08asf (zgeqrf) and f08auf (zunmqr), f08apf (zgeqrt) and f08aqf (zgemqrt), f06zjf (ztrsm)
real data, rank-deficient problem f08bff (dgeqp3)*, f06yjf (dtrsm), f08agf (dormqr)
complex data, rank-deficient problem f08btf (zgeqp3)*, f06zjf (ztrsm), f08auf (zunmqr)
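The following sketch shows how the first of these real sequences fits together. It is not a complete NAG example program: the subroutine name is illustrative, the data are assumed to be supplied by the caller, and the fixed workspace size 64*n is simply a generous choice (a workspace query, as described in Section 3.4.3, could be used instead).
    Subroutine lls_qr(m,n,a,lda,b,info)
!      Sketch: solve the full-rank least squares problem
!      minimize ||b-Ax||2, with m >= n = rank(A), using
!      f08aef (dgeqrf), f08agf (dormqr) and f06yjf (dtrsm).
!      On exit, b(1:n) holds the solution x.
       Implicit None
       Integer, Intent (In)             :: m, n, lda
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n), b(m)
       Double Precision                 :: tau(n), work(64*n)
!      Factorize A = QR
       Call dgeqrf(m,n,a,lda,tau,work,size(work),info)
       If (info/=0) Return
!      Form c = Q^T b, overwriting b
       Call dormqr('Left','Transpose',m,1,n,a,lda,tau,b,m,work, &
                   size(work),info)
       If (info/=0) Return
!      Solve Rx = c(1:n), R being the upper triangle of A
       Call dtrsm('Left','Upper','No transpose','Non-unit',n,1,1.0d0, &
                  a,lda,b,m)
    End Subroutine lls_qr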
To find the minimum norm solution of underdetermined systems of linear equations, as described in Section 2.2.2, routines based on the LQ factorization can be used:
real data, full-rank problem f08ahf (dgelqf)*, f06yjf (dtrsm), f08akf (dormlq)
complex data, full-rank problem f08avf (zgelqf)*, f06zjf (ztrsm), f08axf (zunmlq)
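A corresponding sketch for the real minimum norm problem (again illustrative only: the subroutine name is invented and the workspace size is a generous guess):
    Subroutine minnorm_lq(m,n,a,lda,b,x,info)
!      Sketch: find the minimum norm solution of Ax = b, with
!      m <= n and rank(A) = m, using f08ahf (dgelqf),
!      f06yjf (dtrsm) and f08akf (dormlq).
       Implicit None
       Integer, Intent (In)             :: m, n, lda
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n), b(m)
       Double Precision, Intent (Out)   :: x(n)
       Double Precision                 :: tau(m), work(64*n)
!      Factorize A = LQ
       Call dgelqf(m,n,a,lda,tau,work,size(work),info)
       If (info/=0) Return
!      Solve Ly = b, L being the lower triangle of A
       Call dtrsm('Left','Lower','No transpose','Non-unit',m,1,1.0d0, &
                  a,lda,b,m)
!      Form x = Q^T (y, 0)
       x(1:m) = b(1:m)
       x(m+1:n) = 0.0d0
       Call dormlq('Left','Transpose',n,1,m,a,lda,tau,x,n,work, &
                   size(work),info)
    End Subroutine minnorm_lq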
3.1.2.2
Generalized orthogonal factorizations
Routines are provided for the generalized QR and RQ factorizations of real and complex matrix pairs.
  Factorize
Generalized QR factorization, real matrices f08zef (dggqrf)
Generalized RQ factorization, real matrices f08zff (dggrqf)
Generalized QR factorization, complex matrices f08zsf (zggqrf)
Generalized RQ factorization, complex matrices f08ztf (zggrqf)
3.1.2.3
Singular value problems
Routines are provided to reduce a general real or complex rectangular matrix A to real bidiagonal form B by an orthogonal transformation A=QBPT (or by a unitary transformation A=QBPH if A is complex). Different routines allow a full matrix A to be stored conventionally (see Section 3.3.1), or a band matrix to use band storage (see Section 3.3.4 in the F07 Chapter Introduction).
The routines for reducing full matrices do not form the matrix Q or P explicitly; additional routines are provided to generate all or part of them, or to apply them to another matrix, as with the routines for orthogonal factorizations. Explicit generation of Q or P is required before using the bidiagonal QR algorithm to compute left or right singular vectors of A.
The routines for reducing band matrices have options to generate Q or P if required.
Further routines are provided to compute all or part of the singular value decomposition of a real bidiagonal matrix; the same routines can be used to compute the singular value decomposition of a real or complex matrix that has been reduced to bidiagonal form.
                     Reduce to           Generate            Apply               Reduce band         SVD of bidiagonal   SVD of bidiagonal
                     bidiagonal          matrix Q            matrix Q            matrix to           form (QR            form (divide
                     form                or PT               or P                bidiagonal form     algorithm)          and conquer)
real matrices        f08kef (dgebrd)     f08kff (dorgbr)     f08kgf (dormbr)     f08lef (dgbbrd)     f08mef (dbdsqr)     f08mdf (dbdsdc)
complex matrices     f08ksf (zgebrd)     f08ktf (zungbr)     f08kuf (zunmbr)     f08lsf (zgbbrd)     f08msf (zbdsqr)
Given the singular values, f08flf (ddisna) is provided to compute the reciprocal condition numbers for the left or right singular vectors of a real or complex matrix.
To compute the singular values and vectors of a rectangular matrix, as described in Section 2.3, use the following sequence of calls:
Rectangular matrix (standard storage)
real matrix, singular values and vectors f08kef (dgebrd), f08kff (dorgbr)*, f08mef (dbdsqr)
complex matrix, singular values and vectors f08ksf (zgebrd), f08ktf (zungbr)*, f08msf (zbdsqr)
Rectangular matrix (banded)
real matrix, singular values and vectors f08lef (dgbbrd), f08kff (dorgbr), f08mef (dbdsqr)
complex matrix, singular values and vectors f08lsf (zgbbrd), f08ktf (zungbr), f08msf (zbdsqr)
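As an illustration, the following sketch computes the singular values and left singular vectors of a real matrix by the first sequence above. The subroutine name and workspace size are illustrative; right singular vectors would additionally require a copy of the relevant part of the f08kef (dgebrd) output before f08kff (dorgbr) overwrites it.
    Subroutine svd_left(m,n,a,lda,d,info)
!      Sketch: singular values and left singular vectors of a real
!      m by n matrix A (m >= n), using f08kef (dgebrd),
!      f08kff (dorgbr) and f08mef (dbdsqr).
       Implicit None
       Integer, Intent (In)             :: m, n, lda
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n)
       Double Precision, Intent (Out)   :: d(n)
       Double Precision                 :: e(n-1), tauq(n), taup(n),   &
                                           work(64*(m+n)), dum(1,1)
!      Reduce A to bidiagonal form, A = Q B P^T
       Call dgebrd(m,n,a,lda,d,e,tauq,taup,work,size(work),info)
       If (info/=0) Return
!      Overwrite A by the first n columns of Q
       Call dorgbr('Q',m,n,n,a,lda,tauq,work,size(work),info)
       If (info/=0) Return
!      SVD of B; the columns of A are updated to the left singular
!      vectors of the original matrix
       Call dbdsqr('Upper',n,0,m,0,d,e,dum,1,a,lda,dum,1,work,info)
!      d(1:n) holds the singular values in descending order
    End Subroutine svd_left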
To use the singular value decomposition to solve a linear least squares problem, as described in Section 2.4, the following routines are required:
real data f06yaf (dgemm), f08kef (dgebrd), f08kff (dorgbr), f08kgf (dormbr), f08mef (dbdsqr)
complex data f06zaf (zgemm), f08ksf (zgebrd), f08ktf (zungbr), f08kuf (zunmbr), f08msf (zbdsqr)
3.1.2.4
Generalized singular value decomposition
Routines are provided to compute the generalized SVD of a real or complex matrix pair A,B in upper trapezoidal form. Routines are also provided to reduce a general real or complex matrix pair to the required upper trapezoidal form.
                     Reduce to             Generalized SVD
                     trapezoidal form      of trapezoidal form
real matrices        f08vgf (dggsvp3)      f08yef (dtgsja)
complex matrices     f08vuf (zggsvp3)      f08ysf (ztgsja)
Routines are provided for the full CS decomposition of orthogonal and unitary matrices expressed as 2 by 2 partitions of submatrices. For real orthogonal matrices the CS decomposition is performed by f08raf (dorcsd), while for unitary matrices the equivalent routine is f08rnf (zuncsd).
3.1.2.5
Symmetric eigenvalue problems
Routines are provided to reduce a real symmetric or complex Hermitian matrix A to real tridiagonal form T by an orthogonal similarity transformation A=QTQT (or by a unitary transformation A=QTQH if A is complex). Different routines allow a full matrix A to be stored conventionally (see Section 3.3.1 in the F07 Chapter Introduction) or in packed storage (see Section 3.3.2 in the F07 Chapter Introduction); or a band matrix to use band storage (see Section 3.3.4 in the F07 Chapter Introduction).
The routines for reducing full matrices do not form the matrix Q explicitly; additional routines are provided to generate Q, or to apply it to another matrix, as with the routines for orthogonal factorizations. Explicit generation of Q is required before using the QR algorithm to find all the eigenvectors of A; application of Q to another matrix is required after eigenvectors of T have been found by inverse iteration, in order to transform them to eigenvectors of A.
The routines for reducing band matrices have an option to generate Q if required.
                                                 Reduce to            Generate             Apply
                                                 tridiagonal form     matrix Q             matrix Q
real symmetric matrices                          f08fef (dsytrd)      f08fff (dorgtr)      f08fgf (dormtr)
real symmetric matrices (packed storage)         f08gef (dsptrd)      f08gff (dopgtr)      f08ggf (dopmtr)
real symmetric band matrices                     f08hef (dsbtrd)
complex Hermitian matrices                       f08fsf (zhetrd)      f08ftf (zungtr)      f08fuf (zunmtr)
complex Hermitian matrices (packed storage)      f08gsf (zhptrd)      f08gtf (zupgtr)      f08guf (zupmtr)
complex Hermitian band matrices                  f08hsf (zhbtrd)
Given the eigenvalues, f08flf (ddisna) is provided to compute the reciprocal condition numbers for the eigenvectors of a real symmetric or complex Hermitian matrix.
A variety of routines are provided to compute eigenvalues and eigenvectors of the real symmetric tridiagonal matrix T, some computing all eigenvalues and eigenvectors, some computing selected eigenvalues and eigenvectors. The same routines can be used to compute eigenvalues and eigenvectors of a real symmetric or complex Hermitian matrix which has been reduced to tridiagonal form.
Eigenvalues and eigenvectors of real symmetric tridiagonal matrices:
The original (non-reduced) matrix is Real Symmetric or Complex Hermitian
all eigenvalues (root-free QR algorithm) f08jff (dsterf)
all eigenvalues (root-free QR algorithm called by divide-and-conquer) f08jcf (dstevd) or f08jhf (dstedc)
selected eigenvalues (bisection) f08jjf (dstebz)
selected eigenvalues (RRR) f08jlf (dstegr)
The original (non-reduced) matrix is Real Symmetric
all eigenvalues and eigenvectors (QR algorithm) f08jef (dsteqr)
all eigenvalues and eigenvectors (divide-and-conquer) f08jcf (dstevd) or f08jhf (dstedc)
all eigenvalues and eigenvectors (positive definite case) f08jgf (dpteqr)
selected eigenvectors (inverse iteration) f08jkf (dstein)
selected eigenvalues and eigenvectors (RRR) f08jlf (dstegr)
The original (non-reduced) matrix is Complex Hermitian
all eigenvalues and eigenvectors (QR algorithm) f08jsf (zsteqr)
all eigenvalues and eigenvectors (divide and conquer) f08jvf (zstedc)
all eigenvalues and eigenvectors (positive definite case) f08juf (zpteqr)
selected eigenvectors (inverse iteration) f08jxf (zstein)
selected eigenvalues and eigenvectors (RRR) f08jyf (zstegr)
The following sequences of calls may be used to compute various combinations of eigenvalues and eigenvectors, as described in Section 2.7.
Sequences for computing eigenvalues and eigenvectors
Real Symmetric matrix (standard storage)
all eigenvalues and eigenvectors (using divide-and-conquer) f08fcf (dsyevd)
all eigenvalues and eigenvectors (using QR algorithm) f08fef (dsytrd), f08fff (dorgtr)*, f08jef (dsteqr) (see the code sketch following these lists)
selected eigenvalues and eigenvectors (bisection and inverse iteration) f08fef (dsytrd), f08fgf (dormtr), f08jjf (dstebz), f08jkf (dstein)*
selected eigenvalues and eigenvectors (RRR) f08fef (dsytrd), f08fgf (dormtr), f08jlf (dstegr)
Real Symmetric matrix (packed storage)
all eigenvalues and eigenvectors (using divide-and-conquer) f08gcf (dspevd)
all eigenvalues and eigenvectors (using QR algorithm) f08gef (dsptrd), f08gff (dopgtr) and f08jef (dsteqr)
selected eigenvalues and eigenvectors (bisection and inverse iteration) f08gef (dsptrd), f08ggf (dopmtr), f08jjf (dstebz), f08jkf (dstein)*
selected eigenvalues and eigenvectors (RRR) f08gef (dsptrd), f08ggf (dopmtr), f08jlf (dstegr)
Real Symmetric banded matrix
all eigenvalues and eigenvectors (using divide-and-conquer) f08hcf (dsbevd)
all eigenvalues and eigenvectors (using QR algorithm) f08hef (dsbtrd)*, f08jef (dsteqr)
Complex Hermitian matrix (standard storage)
all eigenvalues and eigenvectors (using divide-and-conquer) f08fqf (zheevd)
all eigenvalues and eigenvectors (using QR algorithm) f08fsf (zhetrd), f08ftf (zungtr)*, f08jsf (zsteqr)
selected eigenvalues and eigenvectors (bisection and inverse iteration) f08fsf (zhetrd), f08fuf (zunmtr), f08jjf (dstebz), f08jxf (zstein)*
selected eigenvalues and eigenvectors (RRR) f08fsf (zhetrd), f08fuf (zunmtr), f08jyf (zstegr)
Complex Hermitian matrix (packed storage)
all eigenvalues and eigenvectors (using divide-and-conquer) f08gqf (zhpevd)
all eigenvalues and eigenvectors (using QR algorithm) f08gsf (zhptrd), f08gtf (zupgtr)*, f08jsf (zsteqr)
selected eigenvalues and eigenvectors (bisection and inverse iteration) f08gsf (zhptrd), f08guf (zupmtr), f08jjf (dstebz), f08jxf (zstein)*
selected eigenvalues and eigenvectors (RRR) f08gsf (zhptrd), f08guf (zupmtr) and f08jyf (zstegr)
Complex Hermitian banded matrix
all eigenvalues and eigenvectors (using divide-and-conquer) f08hqf (zhbevd)
all eigenvalues and eigenvectors (using QR algorithm) f08hsf (zhbtrd)*, f08jsf (zsteqr)
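The sketch promised above implements the QR algorithm sequence for a real symmetric matrix in standard storage. The subroutine name and workspace size are illustrative, and the upper triangle of A is assumed to be supplied.
    Subroutine symm_evd(n,a,lda,d,info)
!      Sketch: all eigenvalues and eigenvectors of a real symmetric
!      matrix, using f08fef (dsytrd), f08fff (dorgtr) and
!      f08jef (dsteqr).
       Implicit None
       Integer, Intent (In)             :: n, lda
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n)
       Double Precision, Intent (Out)   :: d(n)
       Double Precision                 :: e(n-1), tau(n-1), work(64*n)
!      Reduce A to tridiagonal form, A = Q T Q^T
       Call dsytrd('Upper',n,a,lda,d,e,tau,work,size(work),info)
       If (info/=0) Return
!      Overwrite A by Q
       Call dorgtr('Upper',n,a,lda,tau,work,size(work),info)
       If (info/=0) Return
!      Eigenvalues (in d, ascending) and eigenvectors (columns of A)
       Call dsteqr('V',n,d,e,a,lda,work,info)
    End Subroutine symm_evd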
3.1.2.6
Generalized symmetric-definite eigenvalue problems
Routines are provided for reducing each of the problems Ax=λBx, ABx=λx or BAx=λx to an equivalent standard eigenvalue problem Cy=λy. Different routines allow the matrices to be stored either conventionally or in packed storage. The positive definite matrix B must first be factorized using a routine from Chapter F07. There is also a routine which reduces the problem Ax=λBx, where A and B are banded, to an equivalent banded standard eigenvalue problem; this uses a split Cholesky factorization, for which a routine in Chapter F08 is provided.
                               Reduce to            Reduce to             Reduce to
                               standard problem     standard problem      standard problem
                                                    (packed storage)      (band matrices)
real symmetric matrices        f08sef (dsygst)      f08tef (dspgst)       f08uef (dsbgst)
complex Hermitian matrices     f08ssf (zhegst)      f08tsf (zhpgst)       f08usf (zhbgst)
The equivalent standard problem can then be solved using the routines discussed in Section 3.1.2.5. For example, to compute all the eigenvalues, the following routines must be called:
real symmetric-definite problem f07fdf (dpotrf), f08sef (dsygst)*, f08fef (dsytrd), f08jff (dsterf)
real symmetric-definite problem, packed storage f07gdf (dpptrf), f08tef (dspgst)*, f08gef (dsptrd), f08jff (dsterf)
real symmetric-definite banded problem f08uff (dpbstf)*, f08uef (dsbgst)*, f08hef (dsbtrd), f08jff (dsterf)
complex Hermitian-definite problem f07frf (zpotrf), f08ssf (zhegst)*, f08fsf (zhetrd), f08jff (dsterf)
complex Hermitian-definite problem, packed storage f07grf (zpptrf), f08tsf (zhpgst)*, f08gsf (zhptrd), f08jff (dsterf)
complex Hermitian-definite banded problem f08utf (zpbstf)*, f08usf (zhbgst)*, f08hsf (zhbtrd), f08jff (dsterf)
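For instance, the first of these sequences might be assembled as follows (a sketch only; the subroutine name and workspace size are illustrative, and the upper triangles of A and B are assumed to be supplied):
    Subroutine gsep_values(n,a,lda,b,ldb,d,info)
!      Sketch: all eigenvalues of Ax = lambda*Bx, with A symmetric
!      and B symmetric positive definite, using f07fdf (dpotrf),
!      f08sef (dsygst), f08fef (dsytrd) and f08jff (dsterf).
       Implicit None
       Integer, Intent (In)             :: n, lda, ldb
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n), b(ldb,n)
       Double Precision, Intent (Out)   :: d(n)
       Double Precision                 :: e(n-1), tau(n-1), work(64*n)
!      Cholesky factorization B = U^T U
       Call dpotrf('Upper',n,b,ldb,info)
       If (info/=0) Return
!      Reduce to the standard problem Cy = lambda*y (itype = 1)
       Call dsygst(1,'Upper',n,a,lda,b,ldb,info)
       If (info/=0) Return
!      Reduce C to tridiagonal form and find its eigenvalues
       Call dsytrd('Upper',n,a,lda,d,e,tau,work,size(work),info)
       If (info/=0) Return
       Call dsterf(n,d,e,info)
!      d(1:n) holds the eigenvalues in ascending order
    End Subroutine gsep_values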
If eigenvectors are computed, the eigenvectors of the equivalent standard problem must be transformed back to those of the original generalized problem, as indicated in Section 2.8; routines from Chapter F06 may be used for this.
3.1.2.7
Nonsymmetric eigenvalue problems
Routines are provided to reduce a general real or complex matrix A to upper Hessenberg form H by an orthogonal similarity transformation A=QHQT (or by a unitary transformation A=QHQH if A is complex).
These routines do not form the matrix Q explicitly; additional routines are provided to generate Q, or to apply it to another matrix, as with the routines for orthogonal factorizations. Explicit generation of Q is required before using the QR algorithm on H to compute the Schur vectors; application of Q to another matrix is needed after eigenvectors of H have been computed by inverse iteration, in order to transform them to eigenvectors of A.
Routines are also provided to balance the matrix before reducing it to Hessenberg form, as described in Section 2.14.6. Companion routines are required to transform Schur vectors or eigenvectors of the balanced matrix to those of the original matrix.
                     Reduce to            Generate             Apply                Balance              Backtransform
                     Hessenberg form      matrix Q             matrix Q                                  vectors after balancing
real matrices        f08nef (dgehrd)      f08nff (dorghr)      f08ngf (dormhr)      f08nhf (dgebal)      f08njf (dgebak)
complex matrices     f08nsf (zgehrd)      f08ntf (zunghr)      f08nuf (zunmhr)      f08nvf (zgebal)      f08nwf (zgebak)
Routines are provided to compute the eigenvalues and all or part of the Schur factorization of an upper Hessenberg matrix. Eigenvectors may be computed either from the upper Hessenberg form by inverse iteration, or from the Schur form by back-substitution; these approaches are equally satisfactory for computing individual eigenvectors, but the latter may provide a more accurate basis for a subspace spanned by several eigenvectors.
Additional routines estimate the sensitivities of computed eigenvalues and eigenvectors, as discussed in Section 2.14.5.
                     Eigenvalues and          Eigenvectors from        Eigenvectors from        Sensitivities of
                     Schur factorization      Hessenberg form          Schur factorization      eigenvalues and
                     (QR algorithm)           (inverse iteration)                               eigenvectors
real matrices        f08pef (dhseqr)          f08pkf (dhsein)          f08qkf (dtrevc)          f08qlf (dtrsna)
complex matrices     f08psf (zhseqr)          f08pxf (zhsein)          f08qxf (ztrevc)          f08qyf (ztrsna)
Finally, routines are provided for reordering the Schur factorization, so that eigenvalues appear in any desired order on the diagonal of the Schur form. The routines f08qff (dtrexc) and f08qtf (ztrexc) simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. The routines f08qgf (dtrsen) and f08quf (ztrsen) perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the invariant subspace corresponding to the specified cluster of eigenvalues. These routines can also compute the sensitivities of the cluster of eigenvalues and the invariant subspace.
                     Reorder                  Reorder Schur factorization,
                     Schur factorization      find basis for invariant subspace
                                              and estimate sensitivities
real matrices        f08qff (dtrexc)          f08qgf (dtrsen)
complex matrices     f08qtf (ztrexc)          f08quf (ztrsen)
The following sequences of calls may be used to compute various combinations of eigenvalues, Schur vectors and eigenvectors, as described in Section 2.11:
real matrix, all eigenvalues and Schur factorization f08nef (dgehrd), f08nff (dorghr)*, f08pef (dhseqr)
real matrix, all eigenvalues and selected eigenvectors f08nef (dgehrd), f08ngf (dormhr), f08pef (dhseqr), f08pkf (dhsein)
real matrix, all eigenvalues and eigenvectors (with balancing) f08nhf (dgebal)*, f08nef (dgehrd), f08nff (dorghr), f08njf (dgebak), f08pef (dhseqr), f08pkf (dhsein)
complex matrix, all eigenvalues and Schur factorization f08nsf (zgehrd), f08ntf (zunghr)*, f08psf (zhseqr)
complex matrix, all eigenvalues and selected eigenvectors f08nsf (zgehrd), f08nuf (zunmhr), f08psf (zhseqr), f08pxf (zhsein)*
complex matrix, all eigenvalues and eigenvectors (with balancing) f08nvf (zgebal)*, f08nsf (zgehrd), f08ntf (zunghr), f08nwf (zgebak), f08psf (zhseqr), f08pxf (zhsein)
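As an illustration, the first of these sequences might be coded as in the following sketch (the subroutine name is invented, the workspace size is a generous fixed choice, and balancing is omitted):
    Subroutine real_schur(n,a,lda,z,ldz,wr,wi,info)
!      Sketch: all eigenvalues and the Schur factorization
!      A = Z T Z^T of a real general matrix, using f08nef (dgehrd),
!      f08nff (dorghr) and f08pef (dhseqr).
       Implicit None
       Integer, Intent (In)             :: n, lda, ldz
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n)
       Double Precision, Intent (Out)   :: z(ldz,n), wr(n), wi(n)
       Double Precision                 :: tau(n-1), work(64*n)
!      Reduce A to upper Hessenberg form, A = Q H Q^T
       Call dgehrd(n,1,n,a,lda,tau,work,size(work),info)
       If (info/=0) Return
!      Copy the reflectors to Z and generate Q explicitly
       z(1:n,1:n) = a(1:n,1:n)
       Call dorghr(n,1,n,z,ldz,tau,work,size(work),info)
       If (info/=0) Return
!      QR algorithm: the Schur form T overwrites the Hessenberg part
!      of A, Z is updated to the Schur vectors, eigenvalues in (wr,wi)
       Call dhseqr('S','V',n,1,n,a,lda,wr,wi,z,ldz,work,size(work),info)
    End Subroutine real_schur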
3.1.2.8
Generalized nonsymmetric eigenvalue problems
Routines are provided to reduce a real or complex matrix pair A1,R1, where A1 is general and R1 is upper triangular, to generalized upper Hessenberg form by orthogonal transformations A1=Q1HZ1T, R1=Q1RZ1T (or by unitary transformations A1=Q1HZ1H, R1=Q1RZ1H in the complex case). These routines can optionally return Q1 and/or Z1. Note that to transform a general matrix pair A,B to the form A1,R1, a QR factorization of B (B=Q~R1) should first be performed and the matrix A1 obtained as A1=Q~TA (see Section 3.1.2.1 above).
Routines are also provided to balance a general matrix pair before reducing it to generalized Hessenberg form, as described in Section 2.14.8. Companion routines are provided to transform vectors of the balanced pair to those of the original matrix pair.
                     Reduce to               Balance              Backtransform
                     generalized                                  vectors after
                     Hessenberg form                              balancing
real matrices        f08wff (dgghd3)         f08whf (dggbal)      f08wjf (dggbak)
complex matrices     f08wtf (zgghd3)         f08wvf (zggbal)      f08wwf (zggbak)
Routines are provided to compute the eigenvalues (as the pairs α,β) and all or part of the generalized Schur factorization of a generalized upper Hessenberg matrix pair. Eigenvectors may be computed from the generalized Schur form by back-substitution.
Additional routines estimate the sensitivities of computed eigenvalues and eigenvectors.
                     Eigenvalues and          Eigenvectors from        Sensitivities of
                     generalized Schur        generalized Schur        eigenvalues and
                     factorization            factorization            eigenvectors
                     (QZ algorithm)
real matrices        f08xef (dhgeqz)          f08ykf (dtgevc)          f08ylf (dtgsna)
complex matrices     f08xsf (zhgeqz)          f08yxf (ztgevc)          f08yyf (ztgsna)
Finally, routines are provided for reordering the generalized Schur factorization so that eigenvalues appear in any desired order on the diagonal of the generalized Schur form. f08yff (dtgexc) and f08ytf (ztgexc) simply swap two diagonal elements or blocks, and may need to be called repeatedly to achieve a desired order. f08ygf (dtgsen) and f08yuf (ztgsen) perform the whole reordering process for the important special case where a specified cluster of eigenvalues is to appear at the top of the generalized Schur form; if the Schur vectors are reordered at the same time, they yield an orthonormal basis for the deflating subspace corresponding to the specified cluster of eigenvalues. These routines can also compute the sensitivities of the cluster of eigenvalues and the deflating subspace.
                     Reorder generalized      Reorder generalized Schur
                     Schur factorization      factorization, find basis for
                                              deflating subspace and
                                              estimate sensitivities
real matrices        f08yff (dtgexc)          f08ygf (dtgsen)
complex matrices     f08ytf (ztgexc)          f08yuf (ztgsen)
The following sequences of calls may be used to compute various combinations of eigenvalues, generalized Schur vectors and eigenvectors:
real matrix pair, all eigenvalues (with balancing) f08aef (dgeqrf), f08agf (dormqr) (or f08abf (dgeqrt), f08acf (dgemqrt)), f08wff (dgghd3), f08whf (dggbal), f08xef (dhgeqz)*
real matrix pair, all eigenvalues and generalized Schur factorization f08aef (dgeqrf), f08aff (dorgqr), f08agf (dormqr) (or f08abf (dgeqrt), f08acf (dgemqrt)), f08wff (dgghd3), f08xef (dhgeqz)
real matrix pair, all eigenvalues and eigenvectors (with balancing) f06qff, f06qhf, f08aef (dgeqrf), f08aff (dorgqr), f08agf (dormqr) (or f08abf (dgeqrt), f08acf (dgemqrt)), f08wff (dgghd3), f08whf (dggbal), f08xef (dhgeqz), f08ykf (dtgevc)*, f08wjf (dggbak)
complex matrix pair, all eigenvalues (with balancing) f08asf (zgeqrf), f08auf (zunmqr) (or f08apf (zgeqrt), f08aqf (zgemqrt)), f08wtf (zgghd3), f08wvf (zggbal), f08xsf (zhgeqz)*
complex matrix pair, all eigenvalues and generalized Schur factorization f08asf (zgeqrf), f08atf (zungqr), f08auf (zunmqr) (or f08apf (zgeqrt), f08aqf (zgemqrt)), f08wtf (zgghd3), f08xsf (zhgeqz)
complex matrix pair, all eigenvalues and eigenvectors (with balancing) f06tff, f06thf, f08asf (zgeqrf), f08atf (zungqr), f08auf (zunmqr) (or f08apf (zgeqrt), f08aqf (zgemqrt)), f08wtf (zgghd3), f08wvf (zggbal), f08xsf (zhgeqz), f08yxf (ztgevc)*, f08wwf (zggbak)
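The eigenvalues-only case, without balancing, might be assembled as in the following sketch (the subroutine name and workspace size are illustrative):
    Subroutine gnep_values(n,a,lda,b,ldb,alphar,alphai,beta,info)
!      Sketch: all eigenvalues of Ax = lambda*Bx for a real general
!      matrix pair, using f08aef (dgeqrf), f08agf (dormqr),
!      f08wff (dgghd3) and f08xef (dhgeqz).
       Implicit None
       Integer, Intent (In)             :: n, lda, ldb
       Integer, Intent (Out)            :: info
       Double Precision, Intent (Inout) :: a(lda,n), b(ldb,n)
       Double Precision, Intent (Out)   :: alphar(n), alphai(n), beta(n)
       Integer                          :: j
       Double Precision                 :: tau(n), work(64*n), dum(1,1)
!      Factorize B = Q~ R and form A := Q~^T A
       Call dgeqrf(n,n,b,ldb,tau,work,size(work),info)
       If (info/=0) Return
       Call dormqr('Left','Transpose',n,n,n,b,ldb,tau,a,lda,work, &
                   size(work),info)
       If (info/=0) Return
!      Zero the strictly lower triangle of B, which held the reflectors
       Do j = 1, n - 1
          b(j+1:n,j) = 0.0d0
       End Do
!      Reduce (A,B) to generalized upper Hessenberg form (no Q or Z)
       Call dgghd3('N','N',n,1,n,a,lda,b,ldb,dum,1,dum,1,work, &
                   size(work),info)
       If (info/=0) Return
!      QZ algorithm, eigenvalues only
       Call dhgeqz('E','N','N',n,1,n,a,lda,b,ldb,alphar,alphai,beta, &
                   dum,1,dum,1,work,size(work),info)
!      The eigenvalues are (alphar(j) + i*alphai(j))/beta(j)
    End Subroutine gnep_values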
3.1.2.9
The Sylvester equation and the generalized Sylvester equation
Routines are provided to solve the real or complex Sylvester equation AX±XB=C, where A and B are upper quasi-triangular if real, or upper triangular if complex. To solve the general form of the Sylvester equation in which A and B are general square matrices, A and B must be reduced to upper (quasi-) triangular form by the Schur factorization, using routines described in Section 3.1.2.7. For more details, see the documents for the routines listed below.
  Solve the Sylvester equation
real matrices f08qhf (dtrsyl)
complex matrices f08qvf (ztrsyl)
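Once A and B are in the required (quasi-)triangular form, a single call suffices; in the real case (a sketch, with argument names assumed to be declared and set by the caller):
!   Sketch: solve A X + X B = scale*C for X, with A (m by m) and
!   B (n by n) in real Schur form. X overwrites C, and scale (<= 1)
!   is chosen by the routine to avoid overflow.
    Call dtrsyl('No transpose','No transpose',1,m,n,a,lda,b,ldb, &
                c,ldc,scale,info)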
Routines are also provided to solve the real or complex generalized Sylvester equations
AR-LB=C ,   DR-LE=F ,  
where the pairs A,D and B,E are in generalized Schur form. To solve the general form of the generalized Sylvester equation in which A,D and B,E are general matrix pairs, A,D and B,E must first be reduced to generalized Schur form.
  Solve the generalized Sylvester equation
real matrices f08yhf (dtgsyl)
complex matrices f08yvf (ztgsyl)

3.2
NAG Names and LAPACK Names

As well as the NAG routine name (beginning F08), the tables in Section 3.1 show the LAPACK routine names in double precision.
The routines may be called either by their NAG or LAPACK names. When using the NAG Library, the double precision form of the LAPACK name must be used (beginning with D- or Z-).
References to Chapter F08 routines in the manual normally include the LAPACK double precision names, for example f08aef (dgeqrf). The LAPACK routine names follow a simple scheme (which is similar to that used for the BLAS in Chapter F06). Each name has the structure XYYZZZ, where the components have the following meanings:
the initial letter X indicates the data type (real or complex) and precision:
S – real, single precision (in Fortran 77, REAL)
D – real, double precision (in Fortran 77, DOUBLE PRECISION)
C – complex, single precision (in Fortran 77, COMPLEX)
Z – complex, double precision (in Fortran 77, COMPLEX*16 or DOUBLE COMPLEX)
the second and third letters YY indicate the type of the matrix A or matrix pair A,B (and in some cases the storage scheme):
BD – bidiagonal
DI – diagonal
GB – general band
GE – general
GG – general pair (B may be triangular)
HB – (complex) Hermitian band
HE – Hermitian
HG – generalized upper Hessenberg
HP – Hermitian (packed storage)
HS – upper Hessenberg
OP – (real) orthogonal (packed storage)
OR – (real) orthogonal
PT – symmetric or Hermitian positive definite tridiagonal
SB – (real) symmetric band
SP – symmetric (packed storage)
ST – (real) symmetric tridiagonal
SY – symmetric
TG – triangular pair (one may be quasi-triangular)
TP – triangular-pentagonal
TR – triangular (or quasi-triangular)
UN – (complex) unitary
UP – (complex) unitary (packed storage)
the last three letters ZZZ indicate the computation performed. For example, QRF is a QR factorization.
Thus the routine DGEQRF performs a QR factorization of a real general matrix; the corresponding routine for a complex general matrix is ZGEQRF.

3.3
Matrix Storage Schemes

In this chapter the following storage schemes are used for matrices:
    conventional storage in a two-dimensional array;
    packed storage for symmetric or triangular matrices;
    band storage for band matrices;
    storage of tridiagonal and bidiagonal matrices in one-dimensional arrays.
These storage schemes are compatible with those used in Chapters F06 and F07, but different schemes for packed, band and tridiagonal storage are used in a few older routines in Chapters F01, F02, F03 and F04.

3.3.1
Conventional storage

Please see Section 3.3.1 in the F07 Chapter Introduction for full details.

3.3.2
Packed storage

Please see Section 3.3.2 in the F07 Chapter Introduction for full details.

3.3.3
Band storage

Please see Section 3.3.4 in the F07 Chapter Introduction for full details.

3.3.4
Tridiagonal and bidiagonal matrices

A symmetric tridiagonal or bidiagonal matrix is stored in two one-dimensional arrays, one of length n containing the diagonal elements, and one of length n-1 containing the off-diagonal elements. (Older routines in Chapter F02 store the off-diagonal elements in elements 2:n of a vector of length n.)

3.3.5
Real diagonal elements of complex matrices

Please see Section 3.3.6 in the F07 Chapter Introduction for full details.

3.3.6
Representation of orthogonal or unitary matrices

A real orthogonal or complex unitary matrix (usually denoted Q) is often represented in the NAG Library as a product of elementary reflectors – also referred to as elementary Householder matrices (usually denoted Hi). For example,
Q = H1H2…Hk.  
You need not be aware of the details, because routines are provided to work with this representation, either to generate all or part of Q explicitly, or to multiply a given matrix by Q or QT (QH in the complex case) without forming Q explicitly.
Nevertheless, the following further details may occasionally be useful.
An elementary reflector (or elementary Householder matrix) H of order n is a unitary matrix of the form
H=I-τvvH (4)
where τ is a scalar and v is an n-element vector, with |τ|2‖v‖22=2×Re(τ); v is often referred to as the Householder vector. Often v has several leading or trailing zero elements, but for the purpose of this discussion assume that H has no such special structure.
There is some redundancy in the representation (4), which can be removed in various ways. The representation used in Chapter F08 and in LAPACK (which differs from those used in some of the routines in Chapters F01, F02, F04 and F06) sets v1=1; hence v1 need not be stored. In real arithmetic, 1≤τ≤2, except that τ=0 implies H=I.
In complex arithmetic, τ may be complex, and satisfies 1≤Re(τ)≤2 and |τ-1|≤1. Thus a complex H is not Hermitian (as it is in other representations), but it is unitary, which is the important property. The advantage of allowing τ to be complex is that, given an arbitrary complex vector x, H can be computed so that
HHx = β(1,0,…,0)T  
with real β. This is useful, for example, when reducing a complex Hermitian matrix to real symmetric tridiagonal form, or a complex rectangular matrix to real bidiagonal form.
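Applying a single reflector is inexpensive: in the real case Hx = x - τ(vTx)v. The following sketch shows the computation directly (v, x, tau and the scalar vtx are assumed to be declared elsewhere; v(1)=1 is stored implicitly, as described above):
!   Sketch: overwrite x by Hx, where H = I - tau*v*v^T and the
!   leading element v(1) = 1 is not stored explicitly.
    vtx = x(1) + dot_product(v(2:n),x(2:n))
    x(1) = x(1) - tau*vtx
    x(2:n) = x(2:n) - tau*vtx*v(2:n)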

3.4
Argument Conventions

3.4.1
Option Arguments

Most routines in this chapter have one or more option arguments, of type CHARACTER. The descriptions in Section 5 of the routine documents refer only to upper case values (for example uplo='U' or uplo='L'); however in every case, the corresponding lower case characters may be supplied (with the same meaning). Any other value is illegal.
A longer character string can be passed as the actual argument, making the calling program more readable, but only the first character is significant. (This is a feature of Fortran 77.) For example:
Call DSYTRD('Upper',...)

3.4.2
Problem dimensions

It is permissible for the problem dimensions (for example, m or n) to be passed as zero, in which case the computation (or part of it) is skipped. Negative dimensions are regarded as an error.

3.4.3
Length of work arrays

A number of routines implementing block algorithms require workspace sufficient to hold one block of rows or columns of the matrix if they are to achieve optimum levels of performance – for example, workspace of size n×nb, where nb is the optimal block size. In such cases, the actual declared length of the work array must be passed as a separate argument lwork, which immediately follows work in the argument-list.
The blocked routines in this chapter allow you to perform a workspace query. In this case the routine only calculates the optimal size of the work array, and returns this value as the first entry of the work array. You are strongly encouraged to perform such a query before using a particular routine. The routine will still perform correctly when less workspace is provided: it simply uses the largest block size allowed by the amount of workspace supplied, as long as this is likely to give better performance than the unblocked algorithm.
If lwork indicates that there is insufficient workspace to perform the unblocked algorithm, this is regarded as an illegal value of lwork, and is treated like any other illegal argument value (see Section 3.4.4).
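For example, a workspace query for f08aef (dgeqrf) follows the standard LAPACK convention of passing lwork = -1, as in the following sketch (m, n, a, lda, tau and info are assumed to be declared and set):
    Double Precision              :: wdum(1)
    Double Precision, Allocatable :: work(:)
    Integer                       :: lwork
!   Query: no computation is performed; the optimal workspace size
!   is returned in wdum(1)
    Call dgeqrf(m,n,a,lda,tau,wdum,-1,info)
    lwork = nint(wdum(1))
    Allocate (work(lwork))
!   The actual computation, with optimal workspace
    Call dgeqrf(m,n,a,lda,tau,work,lwork,info)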

3.4.4
Error-handling and the Diagnostic Argument INFO

Routines in this chapter do not use the usual NAG Library error-handling mechanism, involving the argument IFAIL. Instead they have a diagnostic argument INFO. (Thus they preserve complete compatibility with the LAPACK specification.)
Whereas IFAIL is an Input/Output argument and must be set before calling a routine, INFO is purely an Output argument and need not be set before entry.
INFO indicates the success or failure of the computation, as follows:
    info = 0: successful termination;
    info < 0: failure in the form of an illegal value of one of the arguments;
    info > 0: failure in the course of the computation.
If the routine document specifies that the routine may terminate with info>0, then it is essential to test info on exit from the routine. (This corresponds to a soft failure in terms of the usual NAG error-handling terminology.) No error message is output.
All routines check that input arguments such as n or lda or option arguments of type CHARACTER have permitted values. If an illegal value of the ith argument is detected, info is set to -i, a message is output, and execution of the program is terminated. (This corresponds to a hard failure in the usual NAG terminology.) In some implementations, especially when linking to vendor versions of LAPACK, execution of the program may continue, in which case, it is essential to test info on exit from the routine.
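A typical test might look like the following sketch (the routine and argument names are illustrative):
    Call dsteqr('V',n,d,e,z,ldz,work,info)
    If (info>0) Then
!      Soft failure: the algorithm did not converge
       Write (*,*) 'dsteqr failed with info =', info
       Stop
    End If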

3.5
Normalizing Output Vectors

In cases where a routine computes a set of orthogonal or unitary vectors, e.g., eigenvectors or an orthogonal matrix factorization, it is possible for these vectors to differ between implementations, but still be correct. Under a strict normalization that enforces uniqueness of solution, these different solutions can be shown to be the same under that normalization. For example, an eigenvector v is computed such that ‖v‖2=1. However, the vector αv, where α is a scalar such that |α|=1, is also an eigenvector. So for symmetric eigenproblems where eigenvectors are real valued, α=1 or -1; and for complex eigenvectors, α can lie anywhere on the unit circle in the complex plane, α=exp(iθ).
Another example is in the computation of the singular value decomposition of a matrix. Consider the factorization
A = (UK)Σ(VK)H ,  
where K is a diagonal matrix with elements on the unit circle. Then UK and VK are corresponding left and right singular vectors of A for any such choice of K.
The example programs for routines in Chapter F08 take care to perform post-processing normalizations, in such cases as those highlighted above, so that a unique set of results can be displayed over many implementations of the NAG Library (see Section 10 in f08yxf (ztgevc)). Similar care should be taken to obtain unique vectors and matrices when calling routines in Chapter F08, particularly when these are used in equivalence tests.
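One such normalization, for a computed complex eigenvector v of unit length, is to scale it so that its element of largest absolute value becomes real and positive, as in the following sketch (v is assumed to be a declared complex array of length n, and k an integer):
!   Sketch: rotate v so that its largest-magnitude element becomes
!   real and positive; the result no longer depends on the arbitrary
!   factor exp(i*theta) in the computed vector.
    k = maxloc(abs(v(1:n)),1)
    v(1:n) = v(1:n)*(conjg(v(k))/abs(v(k)))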

4
Decision Trees

The following decision trees are principally for the computational (general purpose) routines. See Section 3.1.1.1 for tables of the driver (black box) routines.

4.1
General Purpose Routines (eigenvalues and eigenvectors)

Tree 1: Real Symmetric Eigenvalue Problems

Are eigenvalues only required?
   yes: Are all the eigenvalues required?
      yes: Is A tridiagonal?
         yes: f08jcf or f08jff
         no: Is A a band matrix?
            yes: (f08hef and f08jff) or f08hcf
            no: Is one triangle of A stored as a linear array?
               yes: (f08gef and f08jff) or f08gcf
               no: (f08fef and f08jff) or f08faf or f08fcf
      no: Is A tridiagonal?
         yes: f08jjf
         no: Is A a band matrix?
            yes: f08hef and f08jjf
            no: Is one triangle of A stored as a linear array?
               yes: f08gef and f08jjf
               no: (f08fef and f08jjf) or f08fbf
   no: Are all eigenvalues and eigenvectors required?
      yes: Is A tridiagonal?
         yes: f08jef, f08jcf, f08jhf or f08jlf
         no: Is A a band matrix?
            yes: (f08hef and f08jef) or f08hcf
            no: Is one triangle of A stored as a linear array?
               yes: (f08gef, f08gff and f08jef) or f08gcf
               no: (f08fef, f08fff and f08jef) or f08faf or f08fcf
      no: Is A tridiagonal?
         yes: f08jjf, f08jkf or f08jlf
         no: Is one triangle of A stored as a linear array?
            yes: f08gef, f08jjf, f08jkf and f08ggf
            no: (f08fef, f08jjf, f08jkf and f08fgf) or f08fbf

Tree 2: Real Generalized Symmetric-definite Eigenvalue Problems

Are eigenvalues only required?
   yes: Are all the eigenvalues required?
      yes: Are A and B band matrices?
         yes: f08uff, f08uef, f08hef and f08jff
         no: Are A and B stored with one triangle as a linear array?
            yes: f07gdf, f08tef, f08gef and f08jff
            no: f07fdf, f08sef, f08fef and f08jff
      no: Are A and B band matrices?
         yes: f08uff, f08uef, f08hef and f08jjf
         no: Are A and B stored with one triangle as a linear array?
            yes: f07gdf, f08tef, f08gef and f08jjf
            no: f07fdf, f08sef, f08gef and f08jjf
   no: Are all eigenvalues and eigenvectors required?
      yes: Are A and B stored with one triangle as a linear array?
         yes: f07gdf, f08tef, f08gef, f08gff, f08jef and f06plf
         no: f07fdf, f08sef, f08fef, f08fff, f08jef and f06yjf
      no: Are A and B band matrices?
         yes: f08uff, f08uef, f08hef, f08jkf and f06yjf
         no: Are A and B stored with one triangle as a linear array?
            yes: f07gdf, f08tef, f08gef, f08jjf, f08jkf, f08ggf and f06plf
            no: f07fdf, f08sef, f08fef, f08jjf, f08jkf, f08fgf and f06yjf
Note: the routines for band matrices only handle the problem Ax=λBx; the other routines handle all three types of problems (Ax=λBx, ABx=λx or BAx=λx) except that, if the problem is BAx=λx and eigenvectors are required, f06phf must be used instead of f06plf and f06yff instead of f06yjf.

Tree 3: Real Nonsymmetric Eigenvalue Problems

Are eigenvalues only required?
   yes: Is A an upper Hessenberg matrix?
      yes: f08pef
      no: f08naf or f08nbf or (f08nhf, f08nef and f08pef)
   no: Is the Schur factorization of A required?
      yes: Is A an upper Hessenberg matrix?
         yes: f08pef
         no: f08nbf or (f08nef, f08nff, f08pef or f08njf)
      no: Are all eigenvectors required?
         yes: Is A an upper Hessenberg matrix?
            yes: f08pef or f08qkf
            no: f08naf or f08nbf or (f08nhf, f08nef, f08nff, f08pef, f08qkf or f08njf)
         no: Is A an upper Hessenberg matrix?
            yes: f08pef or f08pkf
            no: f08nhf, f08nef, f08pef, f08pkf, f08ngf or f08njf

Tree 4: Real Generalized Nonsymmetric Eigenvalue Problems

Are eigenvalues only required?
   yes: Are A and B in generalized upper Hessenberg form?
      yes: f08xef
      no: f08wbf, or f08whf and f08wcf
   no: Is the generalized Schur factorization of A and B required?
      yes: Are A and B in generalized upper Hessenberg form?
         yes: f08xef
         no: f08xbf or f08xcf
      no: Are A and B in generalized upper Hessenberg form?
         yes: f08xef and f08ykf
         no: f08wbf, or f08whf, f08wcf and f08wjf

Tree 5: Complex Hermitian Eigenvalue Problems

Are eigenvalues only required?
   yes: Are all the eigenvalues required?
      yes: Is A a band matrix?
         yes: (f08hsf and f08jff) or f08hqf
         no: Is one triangle of A stored as a linear array?
            yes: (f08gsf and f08jff) or f08gqf
            no: (f08fsf and f08jff) or f08fqf
      no: Is A a band matrix?
         yes: f08hsf and f08jjf
         no: Is one triangle of A stored as a linear array?
            yes: f08gsf and f08jjf
            no: f08fsf and f08jjf
   no: Are all eigenvalues and eigenvectors required?
      yes: Is A a band matrix?
         yes: (f08hsf and f08jsf) or f08hqf
         no: Is one triangle of A stored as a linear array?
            yes: (f08gsf, f08gtf and f08jsf) or f08gqf
            no: (f08fsf, f08ftf and f08jsf) or f08fqf
      no: Is one triangle of A stored as a linear array?
         yes: f08gsf, f08jjf, f08jxf and f08guf
         no: f08fsf, f08jjf, f08jxf and f08fuf

Tree 6: Complex Generalized Hermitian-definite Eigenvalue Problems

Are eigenvalues only required?
   yes: Are all eigenvalues required?
      yes: Are A and B stored with one triangle as a linear array?
         yes: f07grf, f08tsf, f08gsf and f08jff
         no: f07frf, f08ssf, f08fsf and f08jff
      no: Are A and B stored with one triangle as a linear array?
         yes: f07grf, f08tsf, f08gsf and f08jjf
         no: f07frf, f08ssf, f08gsf and f08jjf
   no: Are all eigenvalues and eigenvectors required?
      yes: Are A and B stored with one triangle as a linear array?
         yes: f07grf, f08tsf, f08gsf, f08gtf and f06psf
         no: f07frf, f08ssf, f08fsf, f08ftf, f08jsf and f06zjf
      no: Are A and B stored with one triangle as a linear array?
         yes: f07grf, f08tsf, f08gsf, f08jjf, f08jxf, f08guf and f06slf
         no: f07frf, f08ssf, f08fsf, f08jjf, f08jxf, f08fuf and f06zjf

Tree 7: Complex non-Hermitian Eigenvalue Problems

Are eigenvalues only required?
   yes: Is A an upper Hessenberg matrix?
      yes: f08psf
      no: f08nvf, f08nsf and f08psf
   no: Is the Schur factorization of A required?
      yes: Is A an upper Hessenberg matrix?
         yes: f08psf
         no: f08nsf, f08ntf, f08psf and f08nwf
      no: Are all eigenvectors required?
         yes: Is A an upper Hessenberg matrix?
            yes: f08psf and f08qxf
            no: f08nvf, f08nsf, f08ntf, f08psf, f08qxf and f08nwf
         no: Is A an upper Hessenberg matrix?
            yes: f08psf and f08pxf
            no: f08nvf, f08nsf, f08psf, f08pxf, f08nuf and f08nwf

Tree 8: Complex Generalized non-Hermitian Eigenvalue Problems

Are eigenvalues only required?
   yes: Are A and B in generalized upper Hessenberg form?
      yes: f08xsf
      no: f08wpf, or f08wqf and f08wvf
   no: Is the generalized Schur factorization of A and B required?
      yes: Are A and B in generalized upper Hessenberg form?
         yes: f08xsf
         no: f08xpf or f08xqf
      no: Are A and B in generalized upper Hessenberg form?
         yes: f08xsf and f08yxf
         no: f08wpf, or f08wvf, f08wqf and f08wwf

4.2
General Purpose Routines (singular value decomposition)

Tree 9: Singular Value Decomposition of a Matrix

Is A a complex matrix?
   yes: Is A banded?
      yes: f08lsf and f08msf
      no: Are singular values only required?
         yes: f08ksf and f08msf
         no: f08ksf, f08ktf and f08msf
   no: Is A bidiagonal?
      yes: f08mef
      no: Is A banded?
         yes: f08lef and f08mef
         no: Are singular values only required?
            yes: f08kef and f08mef
            no: f08kef, f08kff and f08mef

Tree 10: Singular Value Decomposition of a Matrix Pair

Are A and B complex matrices?
   yes: f08vqf
   no: f08vcf

5
Functionality Index

Backtransformation of eigenvectors from those of balanced forms, 
    complex matrix f08nwf (zgebak)
    real matrix f08njf (dgebak)
Backtransformation of generalized eigenvectors from those of balanced forms, 
    complex matrix f08wwf (zggbak)
    real matrix f08wjf (dggbak)
Balancing, 
    complex general matrix f08nvf (zgebal)
    complex general matrix pair f08wvf (zggbal)
    real general matrix f08nhf (dgebal)
    real general matrix pair f08whf (dggbal)
Eigenvalue problems for condensed forms of matrices, 
    complex Hermitian matrix, 
        eigenvalues and eigenvectors, 
            band matrix, 
                all/selected eigenvalues and eigenvectors by root-free QR algorithm f08hpf (zhbevx)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08hqf (zhbevd)
                all eigenvalues and eigenvectors by root-free QR algorithm f08hnf (zhbev)
            general matrix, 
                all/selected eigenvalues and eigenvectors by root-free QR algorithm f08fpf (zheevx)
                all/selected eigenvalues and eigenvectors by root-free QR algorithm, using packed storage f08gpf (zhpevx)
                all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08frf (zheevr)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08fqf (zheevd)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08gqf (zhpevd)
                all eigenvalues and eigenvectors by root-free QR algorithm f08fnf (zheev)
                all eigenvalues and eigenvectors by root-free QR algorithm, using packed storage f08gnf (zhpev)
        eigenvalues only, 
            band matrix, 
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08hpf (zhbevx)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08hnf (zhbev)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage f08hqf (zhbevd)
            general matrix, 
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08fpf (zheevx)
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage f08gpf (zhpevx)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08fnf (zheev)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage f08gnf (zhpev)
    complex upper Hessenberg matrix, reduced from complex general matrix, 
        eigenvalues and Schur factorization f08psf (zhseqr)
        selected right and/or left eigenvectors by inverse iteration f08pxf (zhsein)
    real bidiagonal matrix, 
        singular value decomposition, 
            after reduction from complex general matrix f08msf (zbdsqr)
            after reduction from real general matrix f08mef (dbdsqr)
            after reduction from real general matrix, using divide-and-conquer f08mdf (dbdsdc)
    real symmetric matrix, 
        eigenvalues and eigenvectors, 
            band matrix, 
                all/selected eigenvalues and eigenvectors by root-free QR algorithm f08hbf (dsbevx)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08hcf (dsbevd)
                all eigenvalues and eigenvectors by root-free QR algorithm f08haf (dsbev)
            general matrix, 
                all/selected eigenvalues and eigenvectors by root-free QR algorithm f08fbf (dsyevx)
                all/selected eigenvalues and eigenvectors by root-free QR algorithm, using packed storage f08gbf (dspevx)
                all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08fdf (dsyevr)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08fcf (dsyevd)
                all eigenvalues and eigenvectors by a divide-and-conquer algorithm, using packed storage f08gcf (dspevd)
                all eigenvalues and eigenvectors by root-free QR algorithm f08faf (dsyev)
                all eigenvalues and eigenvectors by root-free QR algorithm, using packed storage f08gaf (dspev)
        eigenvalues only, 
            band matrix, 
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08hbf (dsbevx)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08haf (dsbev)
            general matrix, 
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08fbf (dsyevx)
                all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage f08gbf (dspevx)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08faf (dsyev)
                all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm, using packed storage f08gaf (dspev)
    real symmetric tridiagonal matrix, 
        eigenvalues and eigenvectors, 
            after reduction from complex Hermitian matrix, 
                all/selected eigenvalues and eigenvectors, using Relatively Robust Representations f08jyf (zstegr)
                all eigenvalues and eigenvectors f08jsf (zsteqr)
                all eigenvalues and eigenvectors, positive definite matrix f08juf (zpteqr)
                all eigenvalues and eigenvectors, using divide-and-conquer f08jvf (zstedc)
                selected eigenvectors by inverse iteration f08jxf (zstein)
            all/selected eigenvalues and eigenvectors, using Relatively Robust Representations f08jlf (dstegr)
            all/selected eigenvalues and eigenvectors by root-free QR algorithm f08jbf (dstevx)
            all/selected eigenvalues and eigenvectors using Relatively Robust Representations f08jdf (dstevr)
            all eigenvalues and eigenvectors f08jef (dsteqr)
            all eigenvalues and eigenvectors, by divide-and-conquer f08jhf (dstedc)
            all eigenvalues and eigenvectors, positive definite matrix f08jgf (dpteqr)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08jcf (dstevd)
            all eigenvalues and eigenvectors by root-free QR algorithm f08jaf (dstev)
            selected eigenvectors by inverse iteration f08jkf (dstein)
        eigenvalues only, 
            all/selected eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08jbf (dstevx)
            all eigenvalues by root-free QR algorithm f08jff (dsterf)
            all eigenvalues by the Pal–Walker–Kahan variant of the QL or QR algorithm f08jaf (dstev)
        selected eigenvalues only f08jjf (dstebz)
    real upper Hessenberg matrix, reduced from real general matrix, 
        eigenvalues and Schur factorization f08pef (dhseqr)
        selected right and/or left eigenvectors by inverse iteration f08pkf (dhsein)
Eigenvalue problems for nonsymmetric matrices, 
    complex matrix, 
        all eigenvalues, Schur form, Schur vectors and reciprocal condition numbers f08ppf (zgeesx)
        all eigenvalues, Schur form and Schur vectors f08pnf (zgees)
        all eigenvalues and left/right eigenvectors f08nnf (zgeev)
        all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers f08npf (zgeevx)
    real matrix, 
        all eigenvalues, real Schur form, Schur vectors and reciprocal condition numbers f08pbf (dgeesx)
        all eigenvalues, real Schur form and Schur vectors f08paf (dgees)
        all eigenvalues and left/right eigenvectors f08naf (dgeev)
        all eigenvalues and left/right eigenvectors, plus balancing transformation and reciprocal condition numbers  f08nbf (dgeevx)
Eigenvalues and generalized Schur factorization, 
    complex generalized upper Hessenberg form f08xsf (zhgeqz)
    real generalized upper Hessenberg form f08xef (dhgeqz)
General Gauss–Markov linear model, 
    solves a complex general Gauss–Markov linear model problem f08zpf (zggglm)
    solves a real general Gauss–Markov linear model problem f08zbf (dggglm)
Generalized eigenvalue problems for condensed forms of matrices, 
    complex Hermitian-definite eigenproblems, 
        banded matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08uqf (zhbgvd)
            all eigenvalues and eigenvectors by reduction to tridiagonal form f08unf (zhbgv)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form f08upf (zhbgvx)
        general matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08sqf (zhegvd)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format f08tqf (zhpgvd)
            all eigenvalues and eigenvectors by reduction to tridiagonal form f08snf (zhegv)
            all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tnf (zhpgv)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form f08spf (zhegvx)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tpf (zhpgvx)
    real symmetric-definite eigenproblems, 
        banded matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08ucf (dsbgvd)
            all eigenvalues and eigenvectors by reduction to tridiagonal form f08uaf (dsbgv)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form f08ubf (dsbgvx)
        general matrices, 
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm f08scf (dsygvd)
            all eigenvalues and eigenvectors by a divide-and-conquer algorithm, packed storage format f08tcf (dspgvd)
            all eigenvalues and eigenvectors by reduction to tridiagonal form f08saf (dsygv)
            all eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08taf (dspgv)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form f08sbf (dsygvx)
            selected eigenvalues and eigenvectors by reduction to tridiagonal form, packed storage format f08tbf (dspgvx)
Generalized eigenvalue problems for nonsymmetric matrix pairs, 
    complex nonsymmetric matrix pairs, 
        all eigenvalues, generalized Schur form, Schur vectors and reciprocal condition numbers f08xpf (zggesx)
        all eigenvalues, generalized Schur form and Schur vectors, deprecated f08xnf (zgges)
        all eigenvalues, generalized Schur form and Schur vectors, using level 3 BLAS f08xqf (zgges3)
        all eigenvalues and left/right eigenvectors, deprecated f08wnf (zggev)
        all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers f08wpf (zggevx)
        all eigenvalues and left/right eigenvectors, using level 3 BLAS f08wqf (zggev3)
    real nonsymmetric matrix pairs, 
        all eigenvalues, generalized real Schur form and left/right Schur vectors, deprecated f08xaf (dgges)
        all eigenvalues, generalized real Schur form and left/right Schur vectors, plus reciprocal condition numbers f08xbf (dggesx)
        all eigenvalues, generalized real Schur form and left/right Schur vectors, using level 3 BLAS f08xcf (dgges3)
        all eigenvalues and left/right eigenvectors, deprecated f08waf (dggev)
        all eigenvalues and left/right eigenvectors, plus the balancing transformation and reciprocal condition numbers f08wbf (dggevx)
        all eigenvalues and left/right eigenvectors, using level 3 BLAS f08wcf (dggev3)
Generalized QR factorization, 
    complex matrices f08zsf (zggqrf)
    real matrices f08zef (dggqrf)
Generalized RQ factorization, 
    complex matrices f08ztf (zggrqf)
    real matrices f08zff (dggrqf)
Generalized singular value decomposition, 
    after reduction from complex general matrix, 
        complex triangular or trapezoidal matrix pair f08ysf (ztgsja)
    after reduction from real general matrix, 
        real triangular or trapezoidal matrix pair f08yef (dtgsja)
    complex matrix pair, deprecated f08vnf (zggsvd)
    complex matrix pair, using level 3 BLAS f08vqf (zggsvd3)
    partitioned orthogonal matrix (CS decomposition) f08raf (dorcsd)
    partitioned unitary matrix (CS decomposition) f08rnf (zuncsd)
    real matrix pair, deprecated f08vaf (dggsvd)
    real matrix pair, using level 3 BLAS f08vcf (dggsvd3)
    reduction of a pair of general matrices to triangular or trapezoidal form, 
        complex matrices, deprecated f08vsf (zggsvp)
        complex matrices, using level 3 BLAS f08vuf (zggsvp3)
        real matrices, deprecated f08vef (dggsvp)
        real matrices, using level 3 BLAS f08vgf (dggsvp3)
Least squares problems, 
    complex matrices, 
        apply orthogonal matrix f08bxf (zunmrz)
        minimum norm solution using a complete orthogonal factorization f08bnf (zgelsy)
        minimum norm solution using the singular value decomposition f08knf (zgelss)
        minimum norm solution using the singular value decomposition (divide-and-conquer) f08kqf (zgelsd)
        reduction of upper trapezoidal matrix to upper triangular form f08bvf (ztzrzf)
    real matrices, 
        apply orthogonal matrix f08bkf (dormrz)
        minimum norm solution using a complete orthogonal factorization f08baf (dgelsy)
        minimum norm solution using the singular value decomposition f08kaf (dgelss)
        minimum norm solution using the singular value decomposition (divide-and-conquer) f08kcf (dgelsd)
        reduction of upper trapezoidal matrix to upper triangular form f08bhf (dtzrzf)
Least squares problems with linear equality constraints, 
    complex matrices, 
        minimum norm solution subject to linear equality constraints using a generalized RQ factorization f08znf (zgglse)
    real matrices, 
        minimum norm solution subject to linear equality constraints using a generalized RQ factorization f08zaf (dgglse)
Left and right eigenvectors of a pair of matrices, 
    complex upper triangular matrices f08yxf (ztgevc)
    real quasi-triangular matrices f08ykf (dtgevc)
LQ factorization and related operations, 
    complex matrices, 
        apply unitary matrix f08axf (zunmlq)
        factorization f08avf (zgelqf)
        form all or part of unitary matrix f08awf (zunglq)
    real matrices, 
        apply orthogonal matrix f08akf (dormlq)
        factorization f08ahf (dgelqf)
        form all or part of orthogonal matrix f08ajf (dorglq)
Operations on eigenvectors of a real symmetric or complex Hermitian matrix, or singular vectors of a general matrix, 
    estimate condition numbers f08flf (ddisna)
Operations on generalized Schur factorization of a general matrix pair, 
    complex matrix, 
        estimate condition numbers of eigenvalues and/or eigenvectors f08yyf (ztgsna)
        re-order Schur factorization f08ytf (ztgexc)
        re-order Schur factorization, compute generalized eigenvalues and condition numbers f08yuf (ztgsen)
    real matrix, 
        estimate condition numbers of eigenvalues and/or eigenvectors f08ylf (dtgsna)
        re-order Schur factorization f08yff (dtgexc)
        re-order Schur factorization, compute generalized eigenvalues and condition numbers f08ygf (dtgsen)
Operations on Schur factorization of a general matrix, 
    complex matrix, 
        compute left and/or right eigenvectors f08qxf (ztrevc)
        estimate sensitivities of eigenvalues and/or eigenvectors f08qyf (ztrsna)
        re-order Schur factorization f08qtf (ztrexc)
        re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities f08quf (ztrsen)
    real matrix, 
        compute left and/or right eigenvectors f08qkf (dtrevc)
        estimate sensitivities of eigenvalues and/or eigenvectors f08qlf (dtrsna)
        re-order Schur factorization f08qff (dtrexc)
        re-order Schur factorization, compute basis of invariant subspace, and estimate sensitivities f08qgf (dtrsen)
Overdetermined and underdetermined linear systems, 
    complex matrices, 
        solves an overdetermined or underdetermined complex linear system f08anf (zgels)
    real matrices, 
        solves an overdetermined or underdetermined real linear system f08aaf (dgels)
Performs a reduction of eigenvalue problems to condensed forms, and related operations, 
    real rectangular band matrix to upper bidiagonal form f08lef (dgbbrd)
QL factorization and related operations, 
    complex matrices, 
        apply unitary matrix f08cuf (zunmql)
        factorization f08csf (zgeqlf)
        form all or part of unitary matrix f08ctf (zungql)
    real matrices, 
        apply orthogonal matrix f08cgf (dormql)
        factorization f08cef (dgeqlf)
        form all or part of orthogonal matrix f08cff (dorgql)
QR factorization and related operations, 
    complex matrices, 
        general matrices, 
            apply unitary matrix f08auf (zunmqr)
            apply unitary matrix, explicitly blocked f08aqf (zgemqrt)
            factorization f08asf (zgeqrf)
            factorization, with column pivoting, using BLAS-3 f08btf (zgeqp3)
            factorization, explicitly blocked f08apf (zgeqrt)
            factorization, with column pivoting f08bsf (zgeqpf)
            form all or part of unitary matrix f08atf (zungqr)
        triangular-pentagonal matrices, 
            apply unitary matrix f08bqf (ztpmqrt)
            factorization f08bpf (ztpqrt)
    real matrices, 
        general matrices, 
            apply orthogonal matrix f08agf (dormqr)
            apply orthogonal matrix, explicitly blocked f08acf (dgemqrt)
            factorization f08aef (dgeqrf)
            factorization, explicitly blocked f08abf (dgeqrt)
            factorization, with column pivoting f08bef (dgeqpf)
            factorization, with column pivoting, using level 3 BLAS f08bff (dgeqp3)
            form all or part of orthogonal matrix f08aff (dorgqr)
        triangular-pentagonal matrices, 
            apply orthogonal matrix f08bcf (dtpmqrt)
            factorization f08bbf (dtpqrt)
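
The QR entries above follow the same factorize/form/apply pattern as the LQ and QL entries. A minimal sketch, with illustrative data and workspace, computes R with f08aef (dgeqrf) and the first n columns of Q with f08aff (dorgqr), both called by their LAPACK names:

    ! A minimal QR factorization sketch: R via dgeqrf, Q1 via dorgqr.
    Program qr_sketch
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Integer, Parameter :: m = 4, n = 3, lwork = 64*n
      Real (Kind=wp) :: a(m,n), tau(n), work(lwork)
      Integer :: info
      External :: dgeqrf, dorgqr
      a = Reshape((/1.0_wp,1.0_wp,1.0_wp,1.0_wp, &
                    1.0_wp,2.0_wp,3.0_wp,4.0_wp, &
                    1.0_wp,4.0_wp,9.0_wp,16.0_wp/),(/m,n/))
      Call dgeqrf(m,n,a,m,tau,work,lwork,info)    ! R in the upper triangle of a
      If (info/=0) Stop 'dgeqrf failed'
      Call dorgqr(m,n,n,a,m,tau,work,lwork,info)  ! a now holds Q1, the first n columns of Q
      If (info/=0) Stop 'dorgqr failed'
      Print *, 'Q1 =', a
    End Program qr_sketch
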
Reduction of a pair of general matrices to generalized upper Hessenberg form, 
    orthogonal reduction, real matrices, deprecated f08wef (dgghrd)
    orthogonal reduction, real matrices, using level 3 BLAS f08wff (dgghd3)
    unitary reduction, complex matrices, deprecated f08wsf (zgghrd)
    unitary reduction, complex matrices, using level 3 BLAS f08wtf (zgghd3)
Reduction of eigenvalue problems to condensed forms, and related operations, 
    complex general matrix to upper Hessenberg form, 
        apply unitary matrix f08nuf (zunmhr)
        form unitary matrix f08ntf (zunghr)
        reduce to Hessenberg form f08nsf (zgehrd)
    complex Hermitian band matrix to real symmetric tridiagonal form f08hsf (zhbtrd)
    complex Hermitian matrix to real symmetric tridiagonal form, 
        apply unitary matrix f08fuf (zunmtr)
        apply unitary matrix, packed storage f08guf (zupmtr)
        form unitary matrix f08ftf (zungtr)
        form unitary matrix, packed storage f08gtf (zupgtr)
        reduce to tridiagonal form f08fsf (zhetrd)
        reduce to tridiagonal form, packed storage f08gsf (zhptrd)
    complex rectangular band matrix to real upper bidiagonal form f08lsf (zgbbrd)
    complex rectangular matrix to real bidiagonal form, 
        apply unitary matrix f08kuf (zunmbr)
        form unitary matrix f08ktf (zungbr)
        reduce to bidiagonal form f08ksf (zgebrd)
    real general matrix to upper Hessenberg form, 
        apply orthogonal matrix f08ngf (dormhr)
        form orthogonal matrix f08nff (dorghr)
        reduce to Hessenberg form f08nef (dgehrd)
    real rectangular band matrix to upper bidiagonal form f08lef (dgbbrd)
    real rectangular matrix to bidiagonal form, 
        apply orthogonal matrix f08kgf (dormbr)
        form orthogonal matrix f08kff (dorgbr)
        reduce to bidiagonal form f08kef (dgebrd)
    real symmetric band matrix to symmetric tridiagonal form f08hef (dsbtrd)
    real symmetric matrix to symmetric tridiagonal form, 
        apply orthogonal matrix f08fgf (dormtr)
        apply orthogonal matrix, packed storage f08ggf (dopmtr)
        form orthogonal matrix f08fff (dorgtr)
        form orthogonal matrix, packed storage f08gff (dopgtr)
        reduce to tridiagonal form f08fef (dsytrd)
        reduce to tridiagonal form, packed storage f08gef (dsptrd)
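
As an example of these reductions, the sketch below reduces a real symmetric matrix to tridiagonal form T with f08fef (dsytrd), called by its LAPACK name; the diagonal and off-diagonal of T are returned in separate arrays. The matrix values and workspace length are illustrative only.

    ! A minimal sketch reducing a real symmetric matrix to tridiagonal form.
    Program tred_sketch
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Integer, Parameter :: n = 3, lwork = 64*n
      Real (Kind=wp) :: a(n,n), d(n), e(n-1), tau(n-1), work(lwork)
      Integer :: info
      External :: dsytrd
      ! A symmetric matrix; only the lower triangle is referenced with uplo = 'L'.
      a = Reshape((/2.0_wp,1.0_wp,0.5_wp,1.0_wp,2.0_wp,1.0_wp,0.5_wp,1.0_wp,2.0_wp/),(/n,n/))
      Call dsytrd('L',n,a,n,d,e,tau,work,lwork,info)
      If (info/=0) Stop 'dsytrd failed'
      Print *, 'Diagonal of T:     ', d
      Print *, 'Off-diagonal of T: ', e
    End Program tred_sketch
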
Reduction of generalized eigenproblems to standard eigenproblems, 
    complex Hermitian-definite banded generalized eigenproblem Ax = λBx f08usf (zhbgst)
    complex Hermitian-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx f08ssf (zhegst)
    complex Hermitian-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx, packed storage f08tsf (zhpgst)
    real symmetric-definite banded generalized eigenproblem Ax = λBx f08uef (dsbgst)
    real symmetric-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx f08sef (dsygst)
    real symmetric-definite generalized eigenproblem Ax = λBx, ABx = λx or BAx = λx, packed storage f08tef (dspgst)
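
These routines assume that B has already been factorized by a Cholesky factorization (a Chapter F07 task). A sketch reducing the real symmetric-definite problem Ax = λBx to a standard eigenproblem with f08sef (dsygst), using the LAPACK names and illustrative data, might read:

    ! A minimal sketch: Cholesky-factorize B, then reduce A*x = lambda*B*x.
    Program sygst_sketch
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Integer, Parameter :: n = 2
      Real (Kind=wp) :: a(n,n), b(n,n)
      Integer :: info
      External :: dpotrf, dsygst
      a = Reshape((/2.0_wp,1.0_wp,1.0_wp,3.0_wp/),(/n,n/))  ! symmetric A
      b = Reshape((/4.0_wp,1.0_wp,1.0_wp,2.0_wp/),(/n,n/))  ! symmetric positive definite B
      Call dpotrf('L',n,b,n,info)        ! B = L*L**T (Chapter F07 routine)
      If (info/=0) Stop 'dpotrf failed'
      Call dsygst(1,'L',n,a,n,b,n,info)  ! a is overwritten by C = inv(L)*A*inv(L**T)
      If (info/=0) Stop 'dsygst failed'
      Print *, 'Standard-form matrix C:', a
    End Program sygst_sketch
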
RQ factorization and related operations, 
    complex matrices, 
        apply unitary matrix f08cxf (zunmrq)
        factorization f08cvf (zgerqf)
        form all or part of unitary matrix f08cwf (zungrq)
    real matrices, 
        apply orthogonal matrix f08ckf (dormrq)
        factorization f08chf (dgerqf)
        form all or part of orthogonal matrix f08cjf (dorgrq)
Singular value decomposition, 
    complex matrix, 
        using a divide-and-conquer algorithm f08krf (zgesdd)
        using bidiagonal QR iteration f08kpf (zgesvd)
    real matrix, 
        preconditioned Jacobi SVD using fast scaled rotations and de Rijk's pivoting f08khf (dgejsv)
        using a divide-and-conquer algorithm f08kdf (dgesdd)
        using bidiagonal QR iteration f08kbf (dgesvd)
        using fast scaled rotations and de Rijk's pivoting f08kjf (dgesvj)
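
For example, the singular values and full singular vectors of a real matrix can be computed with f08kbf (dgesvd), called below by its LAPACK name; the data and workspace length are illustrative only.

    ! A minimal SVD sketch: A = U * Sigma * V**T via dgesvd.
    Program svd_sketch
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Integer, Parameter :: m = 4, n = 3, lwork = 64*(m+n)
      Real (Kind=wp) :: a(m,n), s(n), u(m,m), vt(n,n), work(lwork)
      Integer :: info
      External :: dgesvd
      a = Reshape((/1.0_wp,0.0_wp,0.0_wp,2.0_wp, &
                    0.0_wp,3.0_wp,0.0_wp,0.0_wp, &
                    0.0_wp,0.0_wp,4.0_wp,0.0_wp/),(/m,n/))
      Call dgesvd('A','A',m,n,a,m,s,u,m,vt,n,work,lwork,info)
      If (info/=0) Stop 'dgesvd failed'
      Print *, 'Singular values:', s   ! in descending order
    End Program svd_sketch
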
Solve generalized Sylvester equation, 
    complex matrices f08yvf (ztgsyl)
    real matrices f08yhf (dtgsyl)
Solve reduced form of Sylvester matrix equation, 
    complex matrices f08qvf (ztrsyl)
    real matrices f08qhf (dtrsyl)
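
These routines expect A and B already in (quasi-)triangular Schur form, as produced by the Schur factorization routines above. A minimal sketch for the real reduced Sylvester equation A*X + X*B = scale*C with f08qhf (dtrsyl), using the LAPACK name and illustrative data:

    ! A minimal sketch solving A*X + X*B = scale*C for triangular A and B.
    Program trsyl_sketch
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Integer, Parameter :: m = 2, n = 2
      Real (Kind=wp) :: a(m,m), b(n,n), c(m,n), scale
      Integer :: info
      External :: dtrsyl
      a = Reshape((/1.0_wp,0.0_wp,2.0_wp,3.0_wp/),(/m,m/))  ! upper triangular
      b = Reshape((/4.0_wp,0.0_wp,1.0_wp,5.0_wp/),(/n,n/))  ! upper triangular
      c = 1.0_wp                                            ! right-hand side (illustrative)
      Call dtrsyl('N','N',1,m,n,a,m,b,n,c,m,scale,info)
      If (info/=0) Stop 'dtrsyl failed'
      Print *, 'scale =', scale, '  X =', c                 ! X overwrites c
    End Program trsyl_sketch
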
Split Cholesky factorization, 
    complex Hermitian positive definite band matrix f08utf (zpbstf)
    real symmetric positive definite band matrix f08uff (dpbstf)

6
Auxiliary Routines Associated with Library Routine Arguments

f08paz (nagf_lapack_dgees_dummy_select)
See the description of the argument select in f08paf (dgees) and f08pbf (dgeesx).
f08pnz (nagf_lapack_zgees_dummy_select)
See the description of the argument select in f08pnf (zgees) and f08ppf (zgeesx).
f08xaz (nagf_lapack_dgges_dummy_selctg)
See the description of the argument selctg in f08xaf (dgges), f08xbf (dggesx) and f08xcf (dgges3).
f08xnz (nagf_lapack_zgges_dummy_selctg)
See the description of the argument selctg in f08xnf (zgges), f08xpf (zggesx) and f08xqf (zgges3).
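
These dummy routines may be supplied when no eigenvalue ordering is requested. When ordering is requested, a user-supplied logical function is needed instead; the sketch below (the name select_lhp is illustrative, not a library name) selects eigenvalues wr + i*wi in the open left half-plane, in the form required for the select argument of f08paf (dgees).

    ! An illustrative user-supplied selection function for dgees.
    Logical Function select_lhp(wr, wi)
      Implicit None
      Integer, Parameter :: wp = Kind(1.0D0)
      Real (Kind=wp), Intent (In) :: wr, wi
      ! wi is unused here but is required by the interface.
      select_lhp = wr < 0.0_wp
    End Function select_lhp
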

7
Routines Withdrawn or Scheduled for Withdrawal

None.

8
References

Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia
Arioli M, Duff I S and de Rijk P P M (1989) On the augmented system approach to sparse least squares problems Numer. Math. 55 667–684
Demmel J W and Kahan W (1990) Accurate singular values of bidiagonal matrices SIAM J. Sci. Statist. Comput. 11 873–912
Golub G H and Van Loan C F (2012) Matrix Computations (4th Edition) Johns Hopkins University Press, Baltimore
Moler C B and Stewart G W (1973) An algorithm for generalized matrix eigenproblems SIAM J. Numer. Anal. 10 241–256
Parlett B N (1998) The Symmetric Eigenvalue Problem SIAM, Philadelphia
Stewart G W and Sun J-G (1990) Matrix Perturbation Theory Academic Press, London
Ward R C (1981) Balancing the generalized eigenvalue problem SIAM J. Sci. Statist. Comput. 2 141–152
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H and Reinsch C (1971) Handbook for Automatic Computation II, Linear Algebra Springer-Verlag
© The Numerical Algorithms Group Ltd, Oxford, UK. 2017