
# NAG Toolbox: nag_eigen_real_band_geneig (f02sd)

## Purpose

nag_eigen_real_band_geneig (f02sd) finds the eigenvector corresponding to a given real eigenvalue for the generalized problem Ax = λBx, or for the standard problem Ax = λx, where A and B are real band matrices.

## Syntax

```
[a, b, vec, d, ifail] = f02sd(ma1, mb1, a, b, sym, relep, rmu, d, 'n', n)
[a, b, vec, d, ifail] = nag_eigen_real_band_geneig(ma1, mb1, a, b, sym, relep, rmu, d, 'n', n)
```

## Description

Given an approximation μ to a real eigenvalue λ of the generalized eigenproblem Ax = λBx, nag_eigen_real_band_geneig (f02sd) attempts to compute the corresponding eigenvector by inverse iteration.
nag_eigen_real_band_geneig (f02sd) first computes lower and upper triangular factors, L and U, of A − μB, using Gaussian elimination with interchanges, and then solves the equation Ux = e, where e = (1,1,1,…,1)^T; this is the first half iteration.
There are then three possible courses of action, depending on the input value of d(1).
1. d(1) = 0.
This setting should be used if λ is an ill-conditioned eigenvalue (provided the matrix elements do not vary widely in order of magnitude). In this case it is essential to accept only a vector found after one half iteration, and μ must be a very good approximation to λ. If acceptable growth is achieved in the solution of Ux = e, then the normalized x is accepted as the eigenvector. If not, columns of an orthogonal matrix are tried in turn in place of e. If none of these gives acceptable growth, the function fails, indicating that μ was not a sufficiently good approximation to λ.
2. d(1) > 0.
This setting should be used if μ is moderately close to an eigenvalue which is not ill-conditioned (provided the matrix elements do not differ widely in order of magnitude). If acceptable growth is achieved in the solution of Ux = e, the normalized x is accepted as the eigenvector. If not, inverse iteration is performed. Up to 30 iterations are allowed to achieve a vector and a correction to μ which together give acceptably small residuals.
3. d(1) < 0.
This setting should be used if the elements of A and B vary widely in order of magnitude. Inverse iteration is performed, but a different convergence criterion is used.
See Section [Algorithmic Details] for further details.
Note that the bandwidth of the matrix A must not be less than the bandwidth of B. If this is not so, either A must be filled out with zeros, or the matrices A and B may be reversed and 1/μ supplied as an approximation to the eigenvalue 1/λ. It is also assumed that A and B each have the same number of subdiagonals as superdiagonals; if not, they must be filled out with zeros. If A and B are both symmetric, only the upper triangles need be supplied.
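The iteration described above can be sketched in a few lines. The following Python fragment is purely illustrative: it uses a dense solver on small assumed test matrices (A = diag(1,2,3), B = I) and a hypothetical helper name `inverse_iteration`, whereas f02sd factorizes A − μB once into banded L and U factors.

```python
import numpy as np

def inverse_iteration(A, B, mu, iters=30):
    """Illustrative dense inverse iteration for A x = lambda B x.

    f02sd factorizes A - mu*B once into banded L and U factors; this
    sketch simply calls a dense solver at each step.
    """
    n = A.shape[0]
    # first half iteration: solve with e = (1,1,...,1) as right-hand side
    x = np.linalg.solve(A - mu * B, np.ones(n))
    x = x / x[np.argmax(np.abs(x))]              # largest element normalized to unity
    for _ in range(iters):
        y = np.linalg.solve(A - mu * B, B @ x)   # (A - mu*B) y_{r+1} = B x_r
        beta = y[np.argmax(np.abs(x))]           # element of y matching the +1 in x
        x_new = y / y[np.argmax(np.abs(y))]
        if np.allclose(x_new, x):
            break
        x = x_new
    return x, mu + 1.0 / beta                    # improved eigenvalue mu + 1/beta

# A = diag(1,2,3), B = I: eigenvalues are 1, 2 and 3
A = np.diag([1.0, 2.0, 3.0])
B = np.eye(3)
vec, lam = inverse_iteration(A, B, mu=1.9)       # converges to the nearby eigenvalue 2
```

Inverse iteration converges to the eigenvalue nearest μ, so a rough guess of 1.9 locks onto the eigenvalue 2 and its eigenvector.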

## References

Peters G and Wilkinson J H (1979) Inverse iteration, ill-conditioned equations and Newton's method SIAM Rev. 21 339–360
Wilkinson J H (1965) The Algebraic Eigenvalue Problem Oxford University Press, Oxford
Wilkinson J H (1972) Inverse iteration in theory and practice Symposia Mathematica Volume X 361–379 Istituto Nazionale di Alta Matematica, Monograf, Bologna
Wilkinson J H (1974) Notes on inverse iteration and ill-conditioned eigensystems Acta Univ. Carolin. Math. Phys. 1–2 173–177
Wilkinson J H (1979) Kronecker's canonical form and the QZ algorithm Linear Algebra Appl. 28 285–303

## Parameters

### Compulsory Input Parameters

1:     ma1 – int64/int32/nag_int scalar
The value mA + 1, where mA is the number of nonzero lines on each side of the diagonal of A. Thus the total bandwidth of A is 2mA + 1.
Constraint: 1 ≤ ma1 ≤ n.
2:     mb1 – int64/int32/nag_int scalar
If mb1 ≤ 0, B is assumed to be the unit matrix. Otherwise mb1 must specify the value mB + 1, where mB is the number of nonzero lines on each side of the diagonal of B. Thus the total bandwidth of B is 2mB + 1.
Constraint: mb1 ≤ ma1.
3:     a(lda,n) – double array
lda, the first dimension of the array, must satisfy the constraint lda ≥ 2 × ma1 − 1.
The n by n band matrix A. The mA subdiagonals must be stored in the first mA rows of the array; the diagonal in the (mA + 1)th row; and the mA superdiagonals in rows mA + 2 to 2mA + 1. Each row of the matrix must be stored in the corresponding column of the array. For example, if n = 6 and mA = 2 the storage scheme is:
```
 *    *   a31  a42  a53  a64
 *   a21  a32  a43  a54  a65
a11  a22  a33  a44  a55  a66
a12  a23  a34  a45  a56   *
a13  a24  a35  a46   *    *
```
Elements of the array marked * need not be set. The following code assigns the matrix elements within the band to the correct elements of the array:
```matlab
for j = 1:n
  for i = max(1, j-ma1+1):min(n, j+ma1-1)
    a(j-i+ma1, i) = matrix(i, j);
  end
end
```
If sym = true (i.e., both A and B are symmetric), only the lower triangle of A need be stored in the first ma1 rows of the array.
4:     b(ldb,n) – double array
ldb, the first dimension of the array, must satisfy the constraint
• if sym = false, ldb ≥ 2 × mb1 − 1;
• if sym = true, ldb ≥ mb1.
If mb1 > 0, b must contain the n by n band matrix B, stored in the same way as A. If sym = true, only the lower triangle of B need be stored in the first mb1 rows of the array.
If mb1 ≤ 0, the array is not used.
5:     sym – logical scalar
If sym = true, both A and B are assumed to be symmetric and only their upper triangles need be stored. Otherwise sym must be set to false.
6:     relep – double scalar
The relative error of the coefficients of the given matrices A$A$ and B$B$. If the value of relep is less than the machine precision, the machine precision is used instead.
7:     rmu – double scalar
μ, an approximation to the eigenvalue for which the corresponding eigenvector is required.
8:     d(30) – double array
d(1) must be set to indicate the type of problem (see Section [Description]):
d(1) > 0.0
Indicates a well-conditioned eigenvalue.
d(1) = 0.0
Indicates an ill-conditioned eigenvalue.
d(1) < 0.0
Indicates that the matrices have elements varying widely in order of magnitude.
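To make the storage scheme for a and b concrete, here is a small Python sketch (the helper name `pack_band` is hypothetical, and 0-based indexing replaces the 1-based MATLAB convention) that packs a full band matrix into the 2 × ma1 − 1 by n array described under parameter a. It uses the 5 by 5 band matrix from the Example section, so the result can be compared against the array `a` passed to f02sd there.

```python
import numpy as np

def pack_band(M, ma1):
    """Pack an n-by-n band matrix into 2*ma1-1 rows: the subdiagonals in
    the first ma1-1 rows, the diagonal in row ma1, and the superdiagonals
    below it (1-based row numbering as in the parameter description)."""
    n = M.shape[0]
    packed = np.zeros((2 * ma1 - 1, n))
    for j in range(n):                                   # 0-based columns of M
        for i in range(max(0, j - ma1 + 1), min(n, j + ma1)):
            packed[j - i + ma1 - 1, i] = M[i, j]         # row i of M -> column i
    return packed

# the 5-by-5 band matrix used in the Example section (ma1 = 3, mA = 2)
A = np.array([[ 1.,  1.,  2.,  0.,  0.],
              [-1.,  2.,  1.,  2.,  0.],
              [ 0., -1.,  3.,  1.,  2.],
              [ 0.,  0., -1.,  4.,  1.],
              [ 0.,  0.,  0., -1.,  5.]])
packed = pack_band(A, ma1=3)
# packed reproduces the array `a` supplied to f02sd in the Example section
```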

### Optional Input Parameters

1:     n – int64/int32/nag_int scalar
Default: The second dimension of the arrays a, b. (An error is raised if these dimensions are not equal.)
n$n$, the order of the matrices A$A$ and B$B$.
Constraint: n ≥ 1.

### Input Parameters Omitted from the MATLAB Interface

lda ldb iwork work lwork

### Output Parameters

1:     a(lda,n) – double array
lda ≥ 2 × ma1 − 1.
Details of the factorization of A − λ̄B, where λ̄ is an estimate of the eigenvalue.
2:     b(ldb,n) – double array
Elements in the top-left corner, and in the bottom-right corner if sym = false, are set to zero; otherwise the array is unchanged.
3:     vec(n) – double array
The eigenvector, normalized so that the largest element is unity, corresponding to the improved eigenvalue rmu + d(30).
4:     d(30) – double array
If d(1) ≠ 0.0 on entry, the successive corrections to μ are given in d(i), for i = 1, 2, …, k, where k + 1 is the total number of iterations performed. The final correction is also given in the last position, d(30), of the array. The remaining elements of d are set to zero.
If d(1) = 0.0 on entry, no corrections to μ are computed and d(i) is set to 0.0, for i = 1, 2, …, 30. Thus in all three cases the best available approximation to the eigenvalue is rmu + d(30).
5:     ifail – int64/int32/nag_int scalar
ifail = 0 unless the function detects an error (see [Error Indicators and Warnings]).

## Error Indicators and Warnings

Errors or warnings detected by the function:
ifail = 1
 On entry, n < 1, or ma1 < 1, or ma1 > n, or lda < 2 × ma1 − 1, or ldb < mb1 when sym = true, or ldb < 2 × mb1 − 1 when sym = false (ldb is not checked if mb1 ≤ 0).
ifail = 2
 On entry, ma1 < mb1. Either fill out a with zeros, or reverse the roles of a and b, and replace rmu by its reciprocal, i.e., solve Bx = λ⁻¹Ax.
ifail = 3
 On entry, lwork < 2 × n when d(1) = 0.0, or lwork < n × (ma1 + 1) when d(1) ≠ 0.0.
ifail = 4
A is null. If B is nonsingular, all the eigenvalues are zero and any set of n orthogonal vectors forms the eigensolution.
ifail = 5
B is null. If A is nonsingular, all the eigenvalues are infinite, and the columns of the unit matrix are eigenvectors.
ifail = 6
 On entry, A and B are both null. The eigensolution is arbitrary.
ifail = 7
d(1) ≠ 0.0 on entry and convergence is not achieved in 30 iterations. Either the eigenvalue is ill-conditioned or rmu is a poor approximation to the eigenvalue. See Section [Algorithmic Details].
ifail = 8
d(1) = 0.0 on entry and no eigenvector has been found after min(n, 5) back-substitutions. rmu is not a sufficiently good approximation to the eigenvalue.
ifail = 9
d(1) < 0.0 on entry and rmu is too inaccurate for the solution to converge.

## Accuracy

The eigensolution is exact for some problem
 (A + E)x = μ(B + F)x,
where ‖E‖ and ‖F‖ are of the order of η(‖A‖ + μ‖B‖), and η is the value used for relep.

### Timing

The time taken by nag_eigen_real_band_geneig (f02sd) is approximately proportional to n(2mA + 1)² for factorization, and to n(2mA + 1) for each iteration.

### Storage

The storage of the matrices A$A$ and B$B$ is designed for efficiency on a paged machine.
nag_eigen_real_band_geneig (f02sd) will work with full matrices but it will do so inefficiently, particularly in respect of storage requirements.

### Algorithmic Details

Inverse iteration is performed according to the rule
 (A − μB)y_{r+1} = Bx_r
 x_{r+1} = (1/α_{r+1}) y_{r+1}
where α_{r+1} is the element of y_{r+1} of largest magnitude.
Thus:
 (A − μB)x_{r+1} = (1/α_{r+1}) Bx_r.
Hence the residual corresponding to x_{r+1} is very small if |α_{r+1}| is very large (see Peters and Wilkinson (1979)). The first half iteration, Uy₁ = e, corresponds to taking L⁻¹PBx₀ = e.
If μ is a very accurate eigenvalue, then there should always be an initial vector x₀ such that one half iteration gives a small residual and thus a good eigenvector. If the eigenvalue is ill-conditioned, then second and subsequent iterated vectors may not be even remotely close to an eigenvector of a neighbouring problem (see pages 374–376 of Wilkinson (1972) and Wilkinson (1974)). In this case it is essential to accept only a vector obtained after one half iteration.
However, for well-conditioned eigenvalues, there is no loss in performing more than one iteration (see page 376 of Wilkinson (1972)), and indeed it will be necessary to iterate if μ is not such a good approximation to the eigenvalue. When the iteration has converged, y_{r+1} will be some multiple of x_r, say y_{r+1} = β_{r+1}x_r.
Therefore
 (A − μB)β_{r+1}x_r = Bx_r,
giving
 (A − (μ + 1/β_{r+1})B)x_r = 0.
Thus μ + 1/β_{r+1} is a better approximation to the eigenvalue. β_{r+1} is obtained as the element of y_{r+1} which corresponds to the element of largest magnitude, +1, in x_r. The function terminates when ‖(A − (μ + 1/β_r)B)x_r‖ is of the order of the machine precision relative to ‖A‖ + |μ|‖B‖.
If the elements of A and B vary widely in order of magnitude, then ‖A‖ and ‖B‖ are excessively large and a different convergence test is required. The function terminates when the difference between successive corrections to μ is small relative to μ.
In practice one does not necessarily know whether the given problem is well-conditioned or ill-conditioned. To provide some information on the condition of the eigenvalue, or on the accuracy of μ in the event of failure, successive values of 1/β_r are stored in the vector d when d(1) is nonzero on input. If these values appear to be converging steadily, then it is likely that μ was a poor approximation to the eigenvalue, and it is worth trying again with rmu + d(30) as the initial approximation. If the values in d vary considerably in magnitude, then the eigenvalue is ill-conditioned.
A discussion of the significance of the singularity of A and/or B is given in relation to the QZ algorithm in Wilkinson (1979).
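The diagnostic role of the corrections 1/β_r can be illustrated with a small dense Python sketch (the matrices, the guess μ = 2.6 and the helper name `corrections` are assumptions, not part of f02sd): for a merely rough μ the recorded corrections settle to a steady value, and restarting from μ plus the last correction, the analogue of rmu + d(30), recovers the eigenvalue.

```python
import numpy as np

# Assumed test problem: A = diag(1,2,3), B = I, so the eigenvalues are 1, 2, 3.
A = np.diag([1.0, 2.0, 3.0])
B = np.eye(3)

def corrections(mu, iters=8):
    """Record the successive corrections 1/beta_r, as f02sd stores them
    in d when d(1) is nonzero on entry (dense illustrative version)."""
    n = A.shape[0]
    x = np.linalg.solve(A - mu * B, np.ones(n))     # first half iteration
    x = x / x[np.argmax(np.abs(x))]                 # largest element set to +1
    d = []
    for _ in range(iters):
        y = np.linalg.solve(A - mu * B, B @ x)      # inverse iteration step
        d.append(1.0 / y[np.argmax(np.abs(x))])     # correction 1/beta_r
        x = y / y[np.argmax(np.abs(y))]
    return d

d = corrections(2.6)       # rough guess for the true eigenvalue 3
improved = 2.6 + d[-1]     # analogue of rmu + d(30): close to 3
```

The corrections here are essentially constant from the start, the "converging steadily" pattern described above, which signals a poor μ rather than an ill-conditioned eigenvalue.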

## Example

```matlab
function nag_eigen_real_band_geneig_example
ma1 = int64(3);
mb1 = int64(2);
a = [0, 0, 0, 0, 0;
0, -1, -1, -1, -1;
1, 2, 3, 4, 5;
1, 1, 1, 1, 0;
2, 2, 2, 0, 0];
b = [0, 1, 2, 2, 1;
5, 4, 3, 2, 1;
1, 2, 2, 1, 0];
sym = false;
relep = 0;
rmu = -12.33;
d = zeros(30, 1);
d(1) = 1;
[aOut, bOut, vec, dOut, ifail] = nag_eigen_real_band_geneig(ma1, mb1, a, b, sym, relep, rmu, d)
```
```

aOut =

0.0160    0.0204    0.0423    0.0883   68.1366
13.3300   25.2983   28.6600   17.3300         0
2.0000    2.0000   13.3300         0         0
0         0         0         0         0
0         0         0         0         0

bOut =

0     1     2     2     1
5     4     3     2     1
1     2     2     1     0

vec =

-0.0572
0.3951
-0.8427
1.0000
-0.6540

dOut =

-0.0094
-0.0094
-0.0094
-0.0094
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
-0.0094

ifail =

0

```
```matlab
function f02sd_example
ma1 = int64(3);
mb1 = int64(2);
a = [0, 0, 0, 0, 0;
0, -1, -1, -1, -1;
1, 2, 3, 4, 5;
1, 1, 1, 1, 0;
2, 2, 2, 0, 0];
b = [0, 1, 2, 2, 1;
5, 4, 3, 2, 1;
1, 2, 2, 1, 0];
sym = false;
relep = 0;
rmu = -12.33;
d = zeros(30, 1);
d(1) = 1;
[aOut, bOut, vec, dOut, ifail] = f02sd(ma1, mb1, a, b, sym, relep, rmu, d)
```
```

aOut =

0.0160    0.0204    0.0423    0.0883   68.1366
13.3300   25.2983   28.6600   17.3300         0
2.0000    2.0000   13.3300         0         0
0         0         0         0         0
0         0         0         0         0

bOut =

0     1     2     2     1
5     4     3     2     1
1     2     2     1     0

vec =

-0.0572
0.3951
-0.8427
1.0000
-0.6540

dOut =

-0.0094
-0.0094
-0.0094
-0.0094
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
0
-0.0094

ifail =

0

```