The method adopted by most algorithms to reduce a matrix A to a diagonal form is to apply a sequence of similarity transformations:
$$ A \;\rightarrow\; P_1^{-1} \cdot A \cdot P_1 \;\rightarrow\; P_2^{-1} \cdot P_1^{-1} \cdot A \cdot P_1 \cdot P_2 \;\rightarrow\; \text{etc.} \qquad (B.1) $$
So that eventually we have a diagonal matrix:
$$ D = V^T \cdot A \cdot V \qquad (B.2) $$
with
$$ V = P_1 \cdot P_2 \cdot P_3 \cdots \qquad (B.3) $$
For the Jacobi method, these transformations consist of orthogonal similarities. Each Jacobi transformation is a plane rotation designed to annihilate one of the off-diagonal matrix elements. Successive transformations undo previously set zeros, but the off-diagonal elements nevertheless get smaller and smaller, until the matrix can be considered diagonal for a given precision. Moreover, the product $V$ of the transformations in equation B.3 is the matrix of eigenvectors. The Jacobi method is absolutely foolproof for all real symmetric matrices. For matrices of order greater than about 10, the algorithm becomes slower than other algorithms. However, the Jacobi algorithm is much simpler than the more efficient methods.
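As a quick illustration (a minimal numpy sketch, not part of the original text), one can check that an orthogonal similarity transformation leaves the eigenvalues of a symmetric matrix unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random real symmetric matrix A.
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# A random orthogonal matrix P (the Q factor of a QR decomposition).
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))

# For orthogonal P, P^{-1} = P^T, so the similarity P^T A P
# preserves the eigenvalues of A.
A1 = P.T @ A @ P
print(np.linalg.eigvalsh(A))
print(np.linalg.eigvalsh(A1))  # same values up to roundoff
```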
The basic Jacobi rotation $P_{pq}$ is a matrix of the form:
$$ P_{pq} = \left[ \begin{array}{ccccccc} 1 & & & & & & \\ & \ddots & & & & & \\ & & c & \cdots & s & & \\ & & \vdots & 1 & \vdots & & \\ & & -s & \cdots & c & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{array} \right] \qquad (B.4) $$
The numbers $c$ and $s$ are the cosine and sine of a rotation angle $\phi$:
$$ c = \cos\phi, \qquad s = \sin\phi \qquad (B.5) $$
The Jacobi rotation is used to transform the matrix $A$ with the similarity:
$$ A' = P_{pq}^T \cdot A \cdot P_{pq} \qquad (B.6) $$
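As a concrete sketch (my own, with a hypothetical helper name `jacobi_rotation`), the rotation of equation B.4 and the similarity of equation B.6 can be written in numpy, and one can verify that only rows and columns $p$ and $q$ are affected:

```python
import numpy as np

def jacobi_rotation(n, p, q, phi):
    """Plane rotation P_pq of equation B.4: the identity, except
    c = cos(phi) at (p, p) and (q, q), s = sin(phi) at (p, q), -s at (q, p)."""
    P = np.eye(n)
    c, s = np.cos(phi), np.sin(phi)
    P[p, p] = P[q, q] = c
    P[p, q], P[q, p] = s, -s
    return P

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

P = jacobi_rotation(5, 1, 3, 0.3)
A1 = P.T @ A @ P  # the similarity of equation B.6

# Entries outside rows/columns 1 and 3 are untouched.
mask = np.ones((5, 5), dtype=bool)
mask[[1, 3], :] = False
mask[:, [1, 3]] = False
print(np.allclose(A1[mask], A[mask]))  # True
```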
Now, $P_{pq}^T$ changes only rows $p$ and $q$ of $A$, while $P_{pq}$ changes only its columns $p$ and $q$. The result of the product on each side is, for $r \ne p$, $r \ne q$:
$$ a'_{rp} = c\,a_{rp} - s\,a_{rq} \qquad (B.7) $$
$$ a'_{rq} = c\,a_{rq} + s\,a_{rp} \qquad (B.8) $$
and otherwise:
$$ a'_{pp} = c^2\,a_{pp} + s^2\,a_{qq} - 2sc\,a_{pq} \qquad (B.9) $$
$$ a'_{qq} = s^2\,a_{pp} + c^2\,a_{qq} + 2sc\,a_{pq} \qquad (B.10) $$
$$ a'_{pq} = (c^2 - s^2)\,a_{pq} + sc\,(a_{pp} - a_{qq}) \qquad (B.11) $$
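The element formulas B.7-B.11 can be checked against the full matrix product; the following sketch (mine, not from the text) updates only the affected entries and compares the result with $P_{pq}^T \cdot A \cdot P_{pq}$:

```python
import numpy as np

n, p, q = 5, 1, 3
c, s = np.cos(0.3), np.sin(0.3)

rng = np.random.default_rng(2)
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

A1 = A.copy()
for r in range(n):
    if r != p and r != q:
        A1[r, p] = c * A[r, p] - s * A[r, q]  # B.7
        A1[r, q] = c * A[r, q] + s * A[r, p]  # B.8
        A1[p, r] = A1[r, p]                   # A' stays symmetric
        A1[q, r] = A1[r, q]
A1[p, p] = c**2 * A[p, p] + s**2 * A[q, q] - 2 * s * c * A[p, q]  # B.9
A1[q, q] = s**2 * A[p, p] + c**2 * A[q, q] + 2 * s * c * A[p, q]  # B.10
A1[p, q] = (c**2 - s**2) * A[p, q] + s * c * (A[p, p] - A[q, q])  # B.11
A1[q, p] = A1[p, q]

# Full similarity for comparison.
P = np.eye(n)
P[p, p] = P[q, q] = c
P[p, q], P[q, p] = s, -s
print(np.allclose(A1, P.T @ A @ P))  # True
```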
Let us remember that we want to annihilate the off-diagonal elements. $A$ and $A'$ are similar, so they have the same Frobenius norm $\|A\|_F$ (the square root of the sum of squares of all components). However, we can choose $\phi$ such that $a'_{pq} = 0$, in which case $A'$ has a larger sum of squares on the diagonal. As we will see further on, this is important for the convergence of the method. In order to optimize this effect, $a_{pq}$ should be the largest off-diagonal component, called the pivot. From equation B.11, $a'_{pq} = 0$ yields:
$$ \frac{c^2 - s^2}{2sc} = \frac{a_{qq} - a_{pp}}{2\,a_{pq}} \qquad (B.12) $$
from which we define $\theta$:
$$ \theta \equiv \cot 2\phi = \frac{a_{qq} - a_{pp}}{2\,a_{pq}} \qquad (B.13) $$
If we let $t \equiv s/c$, the definition of $\theta$ can be rewritten as
$$ t^2 + 2t\theta - 1 = 0 \qquad (B.14) $$
We choose the smaller root of this equation, for which the rotation angle is less than $\pi/4$. This choice gives the most stable reduction. The smaller root is:
$$ t = \frac{\mathrm{sgn}(\theta)}{|\theta| + \sqrt{\theta^2 + 1}} \qquad (B.15) $$
If $\theta$ is so large that $\theta^2$ would overflow on the computer, we set $t = 1/(2\theta)$. It now follows that
$$ c = \frac{1}{\sqrt{t^2 + 1}}, \qquad s = t\,c \qquad (B.16) $$
To minimize numerical roundoff error, equation B.11 is replaced by
$$ a'_{pq} = 0 \qquad (B.17) $$
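A sketch of the stable computation of $t$, $c$, and $s$ from equations B.13-B.16 (the function name and overflow threshold are my own choices):

```python
import numpy as np

def rotation_coefficients(a_pp, a_qq, a_pq):
    """Return (t, c, s) annihilating a_pq, following equations B.13-B.16."""
    theta = (a_qq - a_pp) / (2.0 * a_pq)  # B.13
    if abs(theta) > 1e150:                # theta^2 would overflow: use t = 1/(2 theta)
        t = 1.0 / (2.0 * theta)
    else:
        # copysign handles theta = 0, where the smaller root of B.14 is t = 1.
        t = np.copysign(1.0, theta) / (abs(theta) + np.sqrt(theta**2 + 1.0))  # B.15
    c = 1.0 / np.sqrt(t**2 + 1.0)         # B.16
    return t, c, t * c

# The resulting rotation zeroes the pivot: plug (c, s) back into B.11.
t, c, s = rotation_coefficients(2.0, -1.0, 0.7)
print((c**2 - s**2) * 0.7 + s * c * (2.0 - (-1.0)))  # ~0
```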
The idea in the remaining equations B.7 - B.11 is to set the new quantity equal to the old quantity plus a small correction. Thus we can use $a'_{pq} = 0$ to eliminate $a_{qq}$ from B.9, giving
$$ a'_{pp} = a_{pp} - t\,a_{pq} \qquad (B.18) $$
Similarly,
$$ a'_{qq} = a_{qq} + t\,a_{pq} \qquad (B.19) $$
$$ a'_{rp} = a_{rp} - s\,(a_{rq} + \tau\,a_{rp}) \qquad (B.20) $$
$$ a'_{rq} = a_{rq} + s\,(a_{rp} - \tau\,a_{rq}) \qquad (B.21) $$
where $\tau$ is defined as
$$ \tau \equiv \frac{s}{1 + c} \qquad (B.22) $$
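A single full Jacobi rotation using the roundoff-minimized updates B.17-B.22 might look like the following sketch (the routine name and in-place convention are mine):

```python
import numpy as np

def apply_rotation(A, p, q, t, c, s):
    """Update the symmetric matrix A in place per equations B.17-B.22."""
    n = A.shape[0]
    tau = s / (1.0 + c)          # B.22
    a_pq = A[p, q]
    A[p, p] -= t * a_pq          # B.18
    A[q, q] += t * a_pq          # B.19
    A[p, q] = A[q, p] = 0.0      # B.17
    for r in range(n):
        if r != p and r != q:
            a_rp, a_rq = A[r, p], A[r, q]
            A[r, p] = a_rp - s * (a_rq + tau * a_rp)  # B.20
            A[r, q] = a_rq + s * (a_rp - tau * a_rq)  # B.21
            A[p, r], A[q, r] = A[r, p], A[r, q]       # keep A symmetric

# Example: annihilate the (0, 2) element, with (t, c, s) from B.13-B.16.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2
theta = (A[2, 2] - A[0, 0]) / (2.0 * A[0, 2])
t = np.copysign(1.0, theta) / (abs(theta) + np.sqrt(theta**2 + 1.0))
c = 1.0 / np.sqrt(t**2 + 1.0)
apply_rotation(A, 0, 2, t, c, t * c)
print(A[0, 2])  # 0.0
```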
One can see the convergence of the Jacobi method by considering the sum of the squares of the off-diagonal elements:
$$ S = \sum_{r \ne s} |a_{rs}|^2 \qquad (B.23) $$
Using equations B.7 - B.11 and the similarity properties, we have:
$$ S' = S - 2\,|a_{pq}|^2 \qquad (B.24) $$
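Equation B.24 is easy to confirm numerically for a single rotation; a small sketch (mine, not from the text):

```python
import numpy as np

def off_diag_sq(A):
    """S of equation B.23: sum of squares of the off-diagonal elements."""
    return np.sum(A**2) - np.sum(np.diag(A)**2)

rng = np.random.default_rng(4)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2

p, q = 0, 3
theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])
t = np.copysign(1.0, theta) / (abs(theta) + np.sqrt(theta**2 + 1.0))
c = 1.0 / np.sqrt(t**2 + 1.0)
s = t * c
P = np.eye(5)
P[p, p] = P[q, q] = c
P[p, q], P[q, p] = s, -s
A1 = P.T @ A @ P

# S' = S - 2 a_pq^2, equation B.24: both printed values agree to roundoff.
print(off_diag_sq(A1), off_diag_sq(A) - 2.0 * A[p, q]**2)
```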
The sequence of $S$'s thus decreases monotonically. Since the sequence is bounded below by zero, and since we can choose $a_{pq}$ to be whatever element we want, the sequence can be made to converge to zero. Eventually one obtains a matrix $D$ that is diagonal to machine precision. The diagonal elements give the eigenvalues of the original matrix $A$, since
$$ D = V^T \cdot A \cdot V \qquad (B.25) $$
where
$$ V = P_1 \cdot P_2 \cdot P_3 \cdots \qquad (B.26) $$
the $P_i$'s being the successive Jacobi rotation matrices. The columns of $V$ are the eigenvectors.
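Putting the pieces together, here is a compact, unoptimized Jacobi eigensolver sketch in numpy, with the pivot chosen as the largest off-diagonal element as recommended above (the function name and tolerance are my own choices); it is checked against numpy's symmetric eigensolver:

```python
import numpy as np

def jacobi_eigensystem(A, tol=1e-12, max_rotations=10_000):
    """Diagonalize a real symmetric matrix by Jacobi rotations.

    Returns (eigenvalues, V), the eigenvectors being the columns of V,
    so that D = V^T A V as in equations B.25 and B.26."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)  # accumulates P1 . P2 . P3 ... (equation B.26)
    for _ in range(max_rotations):
        # Pivot: largest off-diagonal element in magnitude.
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:
            break
        theta = (A[q, q] - A[p, p]) / (2.0 * A[p, q])                         # B.13
        t = np.copysign(1.0, theta) / (abs(theta) + np.sqrt(theta**2 + 1.0))  # B.15
        c = 1.0 / np.sqrt(t**2 + 1.0)                                         # B.16
        s = t * c
        P = np.eye(n)
        P[p, p] = P[q, q] = c
        P[p, q], P[q, p] = s, -s
        A = P.T @ A @ P   # the similarity of equation B.6
        V = V @ P
    return np.diag(A), V

rng = np.random.default_rng(5)
B = rng.standard_normal((6, 6))
A0 = (B + B.T) / 2
w, V = jacobi_eigensystem(A0)
print(np.allclose(np.sort(w), np.linalg.eigvalsh(A0)))    # True
print(np.allclose(V.T @ A0 @ V, np.diag(w), atol=1e-10))  # True
```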