NAG Library Routine Document
F02SDF
1 Purpose
F02SDF finds the eigenvector corresponding to a given real eigenvalue for the generalized problem $Ax=\lambda Bx$, or for the standard problem $Ax=\lambda x$, where $A$ and $B$ are real band matrices.
2 Specification
SUBROUTINE F02SDF (N, MA1, MB1, A, LDA, B, LDB, SYM, RELEP, RMU, VEC, D, IWORK, WORK, LWORK, IFAIL)
INTEGER N, MA1, MB1, LDA, LDB, IWORK(N), LWORK, IFAIL
REAL (KIND=nag_wp) A(LDA,N), B(LDB,N), RELEP, RMU, VEC(N), D(30), WORK(LWORK)
LOGICAL SYM

3 Description
Given an approximation $\mu $ to a real eigenvalue $\lambda $ of the generalized eigenproblem $Ax=\lambda Bx$, F02SDF attempts to compute the corresponding eigenvector by inverse iteration.
F02SDF first computes lower and upper triangular factors, $L$ and $U$, of $A-\mu B$, using Gaussian elimination with interchanges, and then solves the equation $Ux=e$, where $e={\left(1,1,1,\dots ,1\right)}^{\mathrm{T}}$ – this is the first half iteration.
There are then three possible courses of action depending on the input value of
${\mathbf{D}}\left(1\right)$.
 ${\mathbf{D}}\left(1\right)=0$.
This setting should be used if $\lambda $ is an ill-conditioned eigenvalue (provided the matrix elements do not vary widely in order of magnitude). In this case it is essential to accept only a vector found after one half iteration, and $\mu $ must be a very good approximation to $\lambda $. If acceptable growth is achieved in the solution of $Ux=e$, then the normalized $x$ is accepted as the eigenvector. If not, columns of an orthogonal matrix are tried in turn in place of $e$. If none of these give acceptable growth, the routine fails, indicating that $\mu $ was not a sufficiently good approximation to $\lambda $.
 ${\mathbf{D}}\left(1\right)>0$.
This setting should be used if $\mu $ is moderately close to an eigenvalue which is not ill-conditioned (provided the matrix elements do not differ widely in order of magnitude). If acceptable growth is achieved in the solution of $Ux=e$, the normalized $x$ is accepted as the eigenvector. If not, inverse iteration is performed. Up to $30$ iterations are allowed to achieve a vector and a correction to $\mu $ which together give acceptably small residuals.
 ${\mathbf{D}}\left(1\right)<0$.
This setting should be used if the elements of $A$ and $B$ vary widely in order of magnitude. Inverse iteration is performed, but a different convergence criterion is used.
See
Section 8.3 for further details.
Note that the bandwidth of the matrix $A$ must not be less than the bandwidth of $B$. If this is not so, either $A$ must be filled out with zeros, or matrices $A$ and $B$ may be reversed and $1/\mu $ supplied as an approximation to the eigenvalue $1/\lambda $. Also it is assumed that $A$ and $B$ each have the same number of subdiagonals as superdiagonals. If this is not so, they must be filled out with zeros. If $A$ and $B$ are both symmetric, only the lower triangles need be supplied.
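As an illustration of the scheme just described, here is a rough sketch in Python (Python rather than Fortran for brevity; it uses dense matrices and repeated `numpy.linalg.solve` calls in place of the routine's single banded $LU$ factorization, and the function name and tolerances are hypothetical):

```python
import numpy as np

def inverse_iteration(A, B, mu, max_iter=30, tol=1e-10):
    # Hypothetical sketch of inverse iteration for A x = lambda B x
    # near the approximation mu; dense arithmetic only.
    M = A - mu * B
    # First half iteration: solve (A - mu*B) x = e, e = (1,...,1)^T.
    # (F02SDF itself back-substitutes U x = e using only the U factor.)
    x = np.linalg.solve(M, np.ones(A.shape[0]))
    x = x / x[np.argmax(np.abs(x))]        # largest element becomes +1
    beta = np.inf
    for _ in range(max_iter):
        y = np.linalg.solve(M, B @ x)      # (A - mu*B) y_{r+1} = B x_r
        beta = y[np.argmax(np.abs(x))]     # element of y matching the +1 in x
        x_new = y / y[np.argmax(np.abs(y))]
        if np.linalg.norm(x_new - x, np.inf) < tol:
            x = x_new
            break
        x = x_new
    return x, mu + 1.0 / beta              # vector and improved eigenvalue
```

This corresponds to the ${\mathbf{D}}\left(1\right)>0$ path; the ${\mathbf{D}}\left(1\right)=0$ path instead accepts only a vector found after the half iteration.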
4 References
Peters G and Wilkinson J H (1979) Inverse iteration, ill-conditioned equations and Newton's method SIAM Rev. 21 339–360
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1972) Inverse iteration in theory and practice Symposia Mathematica Volume X 361–379 Istituto Nazionale di Alta Matematica, Monograf, Bologna
Wilkinson J H (1974) Notes on inverse iteration and ill-conditioned eigensystems Acta Univ. Carolin. Math. Phys. 1–2 173–177
Wilkinson J H (1979) Kronecker's canonical form and the $QZ$ algorithm Linear Algebra Appl. 28 285–303
5 Parameters
 1: N – INTEGER Input
On entry: $n$, the order of the matrices $A$ and $B$.
Constraint:
${\mathbf{N}}\ge 1$.
 2: MA1 – INTEGER Input
On entry: the value ${m}_{A}+1$, where ${m}_{A}$ is the number of nonzero lines on each side of the diagonal of $A$. Thus the total bandwidth of $A$ is $2{m}_{A}+1$.
Constraint:
$1\le {\mathbf{MA1}}\le {\mathbf{N}}$.
 3: MB1 – INTEGER Input
On entry: if
${\mathbf{MB1}}\le 0$,
$B$ is assumed to be the unit matrix. Otherwise
MB1 must specify the value
${m}_{B}+1$, where
${m}_{B}$ is the number of nonzero lines on each side of the diagonal of
$B$. Thus the total bandwidth of
$B$ is
$2{m}_{B}+1$.
Constraint:
${\mathbf{MB1}}\le {\mathbf{MA1}}$.
 4: A(LDA,N) – REAL (KIND=nag_wp) array Input/Output
On entry: the
$n$ by
$n$ band matrix
$A$. The
${m}_{A}$ subdiagonals must be stored in the first
${m}_{A}$ rows of the array; the diagonal in the (
${m}_{A}+1$)th row; and the
${m}_{A}$ superdiagonals in rows
${m}_{A}+2$ to
$2{m}_{A}+1$. Each row of the matrix must be stored in the corresponding column of the array. For example, if
$n=6$ and
${m}_{A}=2$ the storage scheme is:

$\begin{array}{cccccc}*& *& {a}_{31}& {a}_{42}& {a}_{53}& {a}_{64}\\ *& {a}_{21}& {a}_{32}& {a}_{43}& {a}_{54}& {a}_{65}\\ {a}_{11}& {a}_{22}& {a}_{33}& {a}_{44}& {a}_{55}& {a}_{66}\\ {a}_{12}& {a}_{23}& {a}_{34}& {a}_{45}& {a}_{56}& *\\ {a}_{13}& {a}_{24}& {a}_{35}& {a}_{46}& *& *\end{array}$

Elements of the array marked $*$ need not be set. The following code assigns the matrix elements within the band to the correct elements of the array:
   DO 20 J = 1, N
      DO 10 I = MAX(1,J-MA1+1), MIN(N,J+MA1-1)
         A(I-J+MA1,J) = matrix(J,I)
10    CONTINUE
20 CONTINUE
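For illustration, the same packing can be sketched in Python (a hypothetical helper, not part of the NAG interface), which makes it easy to check the mapping array(I−J+MA1, J) ← matrix(J, I):

```python
import numpy as np

def pack_band(mat, ma1):
    # Pack an n-by-n band matrix `mat` (with ma1 - 1 nonzero lines on
    # each side of the diagonal) into the (2*ma1 - 1)-by-n array layout
    # described above: matrix row J is stored in array column J, with
    # element (J, I) placed in array row I - J + MA1.
    n = mat.shape[0]
    a = np.zeros((2 * ma1 - 1, n))
    for j in range(1, n + 1):                          # 1-based, as in Fortran
        for i in range(max(1, j - ma1 + 1), min(n, j + ma1 - 1) + 1):
            a[i - j + ma1 - 1, j - 1] = mat[j - 1, i - 1]
    return a
```

Positions marked $*$ in the diagram are simply left at zero here.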
If
${\mathbf{SYM}}=\mathrm{.TRUE.}$ (i.e., both
$A$ and
$B$ are symmetric), only the lower triangle of
$A$ need be stored in the first
MA1 rows of the array.
On exit: details of the factorization of $A-\stackrel{~}{\lambda }B$, where $\stackrel{~}{\lambda }$ is an estimate of the eigenvalue.
 5: LDA – INTEGER Input
On entry: the first dimension of the array
A as declared in the (sub)program from which F02SDF is called.
Constraint:
${\mathbf{LDA}}\ge 2\times {\mathbf{MA1}}-1$.
 6: B(LDB,N) – REAL (KIND=nag_wp) array Input/Output
On entry: if
${\mathbf{MB1}}>0$,
B must contain the
$n$ by
$n$ band matrix
$B$, stored in the same way as
$A$. If
${\mathbf{SYM}}=\mathrm{.TRUE.}$, only the lower triangle of
$B$ need be stored in the first
MB1 rows of the array.
If ${\mathbf{MB1}}\le 0$, the array is not used.
On exit: elements in the top-left corner, and in the bottom-right corner if ${\mathbf{SYM}}=\mathrm{.FALSE.}$, are set to zero; otherwise the array is unchanged.
 7: LDB – INTEGER Input
On entry: the first dimension of the array
B as declared in the (sub)program from which F02SDF is called.
Constraints:
 if ${\mathbf{SYM}}=\mathrm{.FALSE.}$, ${\mathbf{LDB}}\ge 2\times {\mathbf{MB1}}-1$;
 if ${\mathbf{SYM}}=\mathrm{.TRUE.}$, ${\mathbf{LDB}}\ge {\mathbf{MB1}}$.
 8: SYM – LOGICAL Input
On entry: if
${\mathbf{SYM}}=\mathrm{.TRUE.}$, both
$A$ and
$B$ are assumed to be symmetric and only their lower triangles need be stored. Otherwise
SYM must be set to .FALSE..
 9: RELEP – REAL (KIND=nag_wp) Input
On entry: the relative error of the coefficients of the given matrices
$A$ and
$B$. If the value of
RELEP is less than the
machine precision, the
machine precision is used instead.
 10: RMU – REAL (KIND=nag_wp) Input
On entry: $\mu $, an approximation to the eigenvalue for which the corresponding eigenvector is required.
 11: VEC(N) – REAL (KIND=nag_wp) array Output
On exit: the eigenvector, normalized so that the largest element is unity, corresponding to the improved eigenvalue ${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$.
 12: D($30$) – REAL (KIND=nag_wp) array Input/Output
On entry:
${\mathbf{D}}\left(1\right)$ must be set to indicate the type of problem (see
Section 3):
 ${\mathbf{D}}\left(1\right)>0.0$
 Indicates a well-conditioned eigenvalue.
 ${\mathbf{D}}\left(1\right)=0.0$
 Indicates an ill-conditioned eigenvalue.
 ${\mathbf{D}}\left(1\right)<0.0$
 Indicates that the matrices have elements varying widely in order of magnitude.
On exit: if
${\mathbf{D}}\left(1\right)\ne 0.0$ on entry, the successive corrections to
$\mu $ are given in
${\mathbf{D}}\left(\mathit{i}\right)$, for
$\mathit{i}=1,2,\dots ,k$, where
$k+1$ is the total number of iterations performed. The final correction is also given in the last position,
${\mathbf{D}}\left(30\right)$, of the array. The remaining elements of
D are set to zero.
If ${\mathbf{D}}\left(1\right)=0.0$ on entry, no corrections to $\mu $ are computed and
${\mathbf{D}}\left(\mathit{i}\right)$ is set to $0.0$, for $\mathit{i}=1,2,\dots ,30$. Thus in all three cases the best available approximation to the eigenvalue is ${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$.
 13: IWORK(N) – INTEGER array Workspace
 14: WORK(LWORK) – REAL (KIND=nag_wp) array Workspace
 15: LWORK – INTEGER Input
On entry: the dimension of the array
WORK as declared in the (sub)program from which F02SDF is called.
Constraints:
 if ${\mathbf{D}}\left(1\right)\ne 0.0$, ${\mathbf{LWORK}}\ge {\mathbf{N}}\times \left({\mathbf{MA1}}+1\right)$;
 if ${\mathbf{D}}\left(1\right)=0.0$, ${\mathbf{LWORK}}\ge 2\times {\mathbf{N}}$.
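The workspace requirement above can be encoded in a small helper (an illustrative Python convenience, not part of the NAG interface):

```python
def min_lwork(n, ma1, d1):
    # Minimum LWORK per the constraints above:
    #   N*(MA1+1) when D(1) != 0.0 (inverse iteration is performed),
    #   2*N       when D(1) == 0.0 (half-iteration-only path).
    return n * (ma1 + 1) if d1 != 0.0 else 2 * n
```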
 16: IFAIL – INTEGER Input/Output

On entry:
IFAIL must be set to
$0$,
$-1$ or $1$. If you are unfamiliar with this parameter you should refer to
Section 3.3 in the Essential Introduction for details.
For environments where it might be inappropriate to halt program execution when an error is detected, the value
$-1$ or $1$ is recommended. If the output of error messages is undesirable, then the value
$1$ is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is
$0$.
When the value $\mathbf{-1}$ or $\mathbf{1}$ is used it is essential to test the value of IFAIL on exit.
On exit:
${\mathbf{IFAIL}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see
Section 6).
6 Error Indicators and Warnings
If on entry
${\mathbf{IFAIL}}={\mathbf{0}}$ or
${\mathbf{-1}}$, explanatory error messages are output on the current error message unit (as defined by
X04AAF).
Errors or warnings detected by the routine:
 ${\mathbf{IFAIL}}=1$
On entry, ${\mathbf{N}}<1$,
or ${\mathbf{MA1}}<1$,
or ${\mathbf{MA1}}>{\mathbf{N}}$,
or ${\mathbf{LDA}}<2\times {\mathbf{MA1}}-1$,
or ${\mathbf{LDB}}<{\mathbf{MB1}}$ when ${\mathbf{SYM}}=\mathrm{.TRUE.}$,
or ${\mathbf{LDB}}<2\times {\mathbf{MB1}}-1$ when ${\mathbf{SYM}}=\mathrm{.FALSE.}$ (LDB is not checked if ${\mathbf{MB1}}\le 0$).
 ${\mathbf{IFAIL}}=2$
On entry, ${\mathbf{MA1}}<{\mathbf{MB1}}$. Either fill out A with zeros, or reverse the roles of A and B, and replace RMU by its reciprocal, i.e., solve $Bx={\lambda}^{-1}Ax\text{.}$
 ${\mathbf{IFAIL}}=3$
On entry,  ${\mathbf{LWORK}}<2\times {\mathbf{N}}$ when ${\mathbf{D}}\left(1\right)=0.0$, 
or  ${\mathbf{LWORK}}<{\mathbf{N}}\times \left({\mathbf{MA1}}+1\right)$ when ${\mathbf{D}}\left(1\right)\ne 0.0$. 
 ${\mathbf{IFAIL}}=4$
$A$ is null. If
$B$ is nonsingular, all the eigenvalues are zero and any set of
N orthogonal vectors forms the eigensolution.
 ${\mathbf{IFAIL}}=5$
$B$ is null. If $A$ is nonsingular, all the eigenvalues are infinite, and the columns of the unit matrix are eigenvectors.
 ${\mathbf{IFAIL}}=6$
On entry,  $A$ and $B$ are both null. The eigensolution is arbitrary. 
 ${\mathbf{IFAIL}}=7$
${\mathbf{D}}\left(1\right)\ne 0.0$ on entry and convergence is not achieved in
$30$ iterations. Either the eigenvalue is ill-conditioned or
RMU is a poor approximation to the eigenvalue. See
Section 8.3.
 ${\mathbf{IFAIL}}=8$
${\mathbf{D}}\left(1\right)=0.0$ on entry and no eigenvector has been found after
$\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{N}},5\right)$ back-substitutions.
RMU is not a sufficiently good approximation to the eigenvalue.
 ${\mathbf{IFAIL}}=9$
${\mathbf{D}}\left(1\right)<0.0$ on entry and
RMU is too inaccurate for the solution to converge.
7 Accuracy
The eigensolution is exact for some problem

$\left(A+E\right)x=\lambda \left(B+F\right)x\text{,}$

where $\Vert E\Vert ,\Vert F\Vert $ are of the order of
$\eta \left(\Vert A\Vert +\left|\mu \right|\Vert B\Vert \right)$, where
$\eta $ is the value used for
RELEP.
8 Further Comments
8.1 Timing
The time taken by F02SDF is approximately proportional to $n{\left(2{m}_{A}+1\right)}^{2}$ for factorization, and to $n\left(2{m}_{A}+1\right)$ for each iteration.
8.2 Storage
The storage of the matrices $A$ and $B$ is designed for efficiency on a paged machine.
F02SDF will work with full matrices but it will do so inefficiently, particularly in respect of storage requirements.
8.3 Algorithmic Details
Inverse iteration is performed according to the rule

$\left(A-\mu B\right){y}_{r+1}=B{x}_{r}\text{, }{x}_{r+1}=\frac{1}{{\alpha}_{r+1}}{y}_{r+1}\text{,}$

where ${\alpha}_{r+1}$ is the element of ${y}_{r+1}$ of largest magnitude.
Thus:

$\left(A-\mu B\right){x}_{r+1}=\frac{1}{{\alpha}_{r+1}}B{x}_{r}\text{.}$

Hence the residual corresponding to ${x}_{r+1}$ is very small if $\left|{\alpha}_{r+1}\right|$ is very large (see
Peters and Wilkinson (1979)). The first half iteration,
$U{y}_{1}=e$, corresponds to taking
${L}^{-1}PB{x}_{0}=e$.
If
$\mu $ is a very accurate eigenvalue, then there should always be an initial vector
${x}_{0}$ such that one half iteration gives a small residual and thus a good eigenvector. If the eigenvalue is ill-conditioned, then second and subsequent iterated vectors may not be even remotely close to an eigenvector of a neighbouring problem (see pages 374–376 of
Wilkinson (1972) and
Wilkinson (1974)). In this case it is essential to accept only a vector obtained after one half iteration.
However, for well-conditioned eigenvalues, there is no loss in performing more than one iteration (see page 376 of
Wilkinson (1972)), and indeed it will be necessary to iterate if
$\mu $ is not such a good approximation to the eigenvalue. When the iteration has converged,
${y}_{r+1}$ will be some multiple of
${x}_{r}$,
${y}_{r+1}={\beta}_{r+1}{x}_{r}$, say.
Therefore

${\beta}_{r+1}\left(A-\mu B\right){x}_{r}=B{x}_{r}\text{,}$

giving

$A{x}_{r}=\left(\mu +\frac{1}{{\beta}_{r+1}}\right)B{x}_{r}\text{.}$

Thus
$\mu +\frac{1}{{\beta}_{r+1}}$ is a better approximation to the eigenvalue.
${\beta}_{r+1}$ is obtained as the element of
${y}_{r+1}$ which corresponds to the element of largest magnitude,
$+1$, in
${x}_{r}$. The routine terminates when
$\Vert \left(A-\left(\mu +\frac{1}{{\beta}_{r}}\right)B\right){x}_{r}\Vert $ is of the order of the
machine precision relative to
$\Vert A\Vert +\left|\mu \right|\Vert B\Vert $.
If the elements of $A$ and $B$ vary widely in order of magnitude, then $\Vert A\Vert $ and $\Vert B\Vert $ are excessively large and a different convergence test is required. The routine terminates when the difference between successive corrections to $\mu $ is small relative to $\mu $.
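These two termination tests can be sketched in Python for illustration (the norms used and the constant multiple `c` of machine precision are assumptions, not the routine's exact internal choices):

```python
import numpy as np

def residual_converged(A, B, mu, beta, x, c=10.0):
    # Standard test (sketch): the residual of the corrected eigenvalue,
    # ||(A - (mu + 1/beta) B) x||, measured against ||A|| + |mu| ||B||.
    eps = np.finfo(float).eps
    lam = mu + 1.0 / beta
    resid = np.linalg.norm((A - lam * B) @ x, np.inf)
    scale = np.linalg.norm(A, np.inf) + abs(mu) * np.linalg.norm(B, np.inf)
    return resid <= c * eps * scale

def correction_converged(prev_corr, corr, mu, c=10.0):
    # Alternative test for widely-scaled matrices (sketch): successive
    # corrections 1/beta_r to mu differ by little relative to mu.
    eps = np.finfo(float).eps
    return abs(corr - prev_corr) <= c * eps * abs(mu)
```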
In practice one does not necessarily know if the given problem is well-conditioned or ill-conditioned. In order to provide some information on the condition of the eigenvalue or the accuracy of
$\mu $ in the event of failure, successive values of
$\frac{1}{{\beta}_{r}}$ are stored in the vector
D when
${\mathbf{D}}\left(1\right)$ is nonzero on input. If these values appear to be converging steadily, then it is likely that
$\mu $ was a poor approximation to the eigenvalue and it is worth trying again with
${\mathbf{RMU}}+{\mathbf{D}}\left(30\right)$ as the initial approximation. If the values in
D vary considerably in magnitude, then the eigenvalue is ill-conditioned.
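The diagnostic reading of D suggested above might be automated along these lines (a hypothetical helper; the "steadily converging" criterion used here is an assumption, not part of the routine):

```python
def corrections_converging(d):
    # d holds the 30 elements of D on exit; D(1..k) are the successive
    # corrections 1/beta_r and the trailing elements are zero.
    # Ignore D(30), which repeats the final correction.
    vals = [abs(v) for v in d[:29] if v != 0.0]
    # "Steadily converging" (assumed criterion): each correction is
    # smaller in magnitude than the one before it.
    return len(vals) >= 2 and all(b < a for a, b in zip(vals, vals[1:]))
```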
A discussion of the significance of the singularity of
$A$ and/or
$B$ is given in relation to the
$QZ$ algorithm in
Wilkinson (1979).
9 Example
Given the generalized eigenproblem
$Ax=\lambda Bx$, where $A$ and $B$ are the band matrices supplied in the example program data,
find the eigenvector corresponding to the approximate eigenvalue
$12.33$.
Although
$B$ is symmetric,
$A$ is not, so
SYM must be set to .FALSE. and all the elements of
$B$ in the band must be supplied to the routine.
$A$ has
$1$ subdiagonal and
$2$ superdiagonals, so
MA1 must be set to
$3$ and
$A$ filled out with an additional subdiagonal of zeros. Each row of the matrices is read in as data in turn.
9.1 Program Text
Program Text (f02sdfe.f90)
9.2 Program Data
Program Data (f02sdfe.d)
9.3 Program Results
Program Results (f02sdfe.r)