Restructuring iterative algorithms under sparse conditions
Abstract—This paper discusses determining whether a given sparse matrix is diagonally dominant by solving a set of simultaneous equations with iterative algorithms, namely the Jacobi and Gauss-Seidel methods. The time complexity and scalability of the algorithms are analyzed. The results indicate that the algorithms are scalable and efficient.
I. Introduction
Iterative schemes require time to achieve sufficient accuracy and are reserved for large systems of equations in which the majority of the matrix elements are zero. Often the algorithms are tailor-made to take advantage of special structure, such as band matrices. Practical uses include applications in circuit analysis, boundary value problems, and partial differential equations.
Iteration is a popular technique for finding the roots of equations, and a generalization of fixed-point iteration can be applied to systems of linear equations to produce accurate results. Jacobi iteration is attributed to Carl Jacobi (1804-1851), and Gauss-Seidel iteration is attributed to Johann Carl Friedrich Gauss (1777-1855) and Philipp Ludwig von Seidel (1821-1896).
In numerical linear algebra, the Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations and is similar to the Jacobi method. Though it can be applied to any matrix with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant or symmetric and positive definite.
Diagonally dominant matrix:
A matrix A is (row) diagonally dominant if, in every row, the magnitude of the diagonal entry is at least as large as the sum of the magnitudes of the off-diagonal entries in that row: |a_ii| ≥ Σ_{j≠i} |a_ij| for all i. For example, the 3×3 matrix with rows (4, 1, 2), (1, 5, 3), and (2, 1, 6) is diagonally dominant, since 4 ≥ 1+2, 5 ≥ 1+3, and 6 ≥ 2+1.
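For illustration, the row-dominance test can be written as a short Python check (a minimal sketch using NumPy; the helper name is_diagonally_dominant is an assumption, not part of the paper):

    import numpy as np

    def is_diagonally_dominant(A):
        # True if, in every row, |a_ii| >= sum of |a_ij| for j != i.
        A = np.asarray(A, dtype=float)
        diag = np.abs(np.diag(A))
        off_diag = np.sum(np.abs(A), axis=1) - diag
        return bool(np.all(diag >= off_diag))

    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 5.0, 3.0],
                  [2.0, 1.0, 6.0]])
    print(is_diagonally_dominant(A))  # True for the example above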
Symmetric matrix:
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Let A be a symmetric matrix; then A = A^T. The entries of a symmetric matrix are symmetric with respect to the main diagonal: if the entries are written as A = (a_ij), then a_ij = a_ji for all indices i and j. For example, the 3×3 matrix with rows (1, 7, 3), (7, 4, 5), and (3, 5, 6) is symmetric.
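Since a symmetric matrix equals its transpose, the property can be verified in one line (a minimal NumPy sketch; the example matrix is illustrative):

    import numpy as np

    A = np.array([[1, 7, 3],
                  [7, 4, 5],
                  [3, 5, 6]])
    # A symmetric matrix is equal to its own transpose.
    print(np.array_equal(A, A.T))  # True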
Sparse matrix:
The sparse-matrix representation stores every non-zero element of the original matrix together with its row and column numbers. In this representation the first column holds the row number of a non-zero element, the second column holds its column number in the original matrix, and the third column holds the non-zero value itself. In the first row, the first column gives the number of rows of the original matrix, the second column gives the number of columns, and the third column gives the number of non-zero elements. The representation of an original matrix A as its sparse matrix As is illustrated in the sketch below.
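The following Python sketch builds this triplet representation from a dense matrix (the function name to_sparse and the example matrix are assumptions for illustration; row and column numbers are 1-based, as described above):

    import numpy as np

    def to_sparse(A):
        # Row 0 holds (rows, cols, number of non-zeros); each following
        # row holds (row number, column number, value) of one non-zero.
        A = np.asarray(A)
        rows, cols = A.shape
        triplets = [(i + 1, j + 1, A[i, j])
                    for i in range(rows)
                    for j in range(cols)
                    if A[i, j] != 0]
        return np.array([(rows, cols, len(triplets))] + triplets)

    A = np.array([[0, 0, 5],
                  [4, 0, 0],
                  [0, 3, 0]])
    print(to_sparse(A))
    # [[3 3 3]    <- 3x3 matrix with 3 non-zero elements
    #  [1 3 5]
    #  [2 1 4]
    #  [3 2 3]]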
II. Algorithm development
Algorithm development for the iterative methods is explained using the Jacobi method and the Gauss-Seidel method; for both methods the input is a sparse matrix, as explained below.
Jacobi method:
The Jacobi method is an algorithm for determining the solution of a system of linear equations in which the diagonal element of each row dominates the remaining entries in absolute value. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges.
Given a square system of n linear equations:
    Ax = b,
where A = (a_ij) is the n×n coefficient matrix, x is the vector of unknowns, and b is the right-hand-side vector. Then A can be decomposed into a diagonal component D and the remainder R:
    A = D + R.
The solution is then obtained iteratively via
    x^(k+1) = D^(-1) (b - R x^(k)).
The element-based formula is thus:
    x_i^(k+1) = (1/a_ii) (b_i - Σ_{j≠i} a_ij x_j^(k)),   i = 1, ..., n.
Note that the computation of x_i^(k+1) requires each element of x^(k) except x_i^(k) itself. Unlike in the Gauss-Seidel method, we cannot overwrite x_i^(k) with x_i^(k+1), as that value will be needed by the rest of the computation. The minimum amount of storage is therefore two vectors of size n.
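A minimal NumPy sketch of this iteration (the function name jacobi, the tolerance, and the iteration cap are illustrative assumptions); note the two storage vectors x and x_new, matching the remark above:

    import numpy as np

    def jacobi(A, b, tol=1e-10, max_iter=500):
        # Element-based update: x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii.
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n)        # current iterate x^(k)
        x_new = np.empty(n)    # next iterate x^(k+1)
        for _ in range(max_iter):
            for i in range(n):
                sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
                x_new[i] = (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x_new - x) < tol:   # convergence check
                return x_new.copy()
            x, x_new = x_new, x    # swap the two storage vectors
        return x

    A = [[4.0, 1.0, 2.0], [1.0, 5.0, 3.0], [2.0, 1.0, 6.0]]
    b = [7.0, 9.0, 9.0]
    print(jacobi(A, b))  # approximately [1. 1. 1.]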
Gauss-Seidel method:
The Gauss-Seidel method, also known as the Liebmann method or the method of successive displacement, is an iterative method used to solve a linear system of equations and is similar to the Jacobi method. Though it can be applied to any matrix (here, a sparse matrix) with non-zero elements on the diagonal, convergence is only guaranteed if the matrix is either diagonally dominant or symmetric and positive definite.
Given a square system of n linear equations with unknown x:
    Ax = b,
where A = (a_ij) is an n×n matrix and b is the right-hand-side vector. Then A can be decomposed into a lower triangular component L* (which includes the diagonal) and a strictly upper triangular component U:
    A = L* + U.
The system of linear equations may be rewritten as:
    L* x = b - U x.
The Gauss-Seidel method is an iterative technique that solves the left-hand side of this expression for x, using the previous value of x on the right-hand side. Analytically, this may be written as:
    x^(k+1) = (L*)^(-1) (b - U x^(k)).
However, by taking advantage of the triangular form, the elements of x^(k+1) can be computed sequentially using forward substitution:
    x_i^(k+1) = (1/a_ii) (b_i - Σ_{j<i} a_ij x_j^(k+1) - Σ_{j>i} a_ij x_j^(k)),   i = 1, ..., n.
Note that the computation of x_i^(k+1) uses the elements of x^(k+1) that have already been computed (j < i) and only the elements of x^(k) that have not yet been updated (j > i), so the iterate can be overwritten in place.
The procedure is generally
continued until the changes made by iteration are below some tolerance.
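A minimal NumPy sketch of this scheme (names and defaults are illustrative assumptions); unlike the Jacobi iteration, a single vector x is updated in place, so entries with j < i already belong to the new iterate:

    import numpy as np

    def gauss_seidel(A, b, tol=1e-10, max_iter=500):
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        n = len(b)
        x = np.zeros(n)              # single storage vector
        for _ in range(max_iter):
            x_old = x.copy()         # kept only for the stopping test
            for i in range(n):
                # x[j] holds new values for j < i and old values for j > i.
                sigma = sum(A[i, j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:   # change below tolerance
                break
        return x

    A = [[4.0, 1.0, 2.0], [1.0, 5.0, 3.0], [2.0, 1.0, 6.0]]
    b = [7.0, 9.0, 9.0]
    print(gauss_seidel(A, b))  # approximately [1. 1. 1.]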
III. Algorithm and analysis of the algorithm
Algorithm Jacobi method:
choose an initial guess x^(0) to the solution
k = 0
check if convergence is reached
while convergence not reached do
    for i := 1 to n
        σ = 0
        for j := 1 to n
            if j ≠ i then
                σ = σ + a_ij x_j^(k)
            end if
        end (j-loop)
        x_i^(k+1) = (b_i - σ) / a_ii
    end (i-loop)
    check if convergence is reached
    k = k + 1
end (while convergence condition not reached)
Convergence
The standard convergence condition (for any iterative method) is that the spectral radius of the iteration matrix is less than 1:
    ρ(D^(-1) R) < 1.
This method is guaranteed to converge if the sparse matrix A is strictly or irreducibly diagonally dominant. Strict row diagonal dominance means that, for each row, the absolute value of the diagonal term is greater than the sum of the absolute values of the other terms:
    |a_ii| > Σ_{j≠i} |a_ij|.
The Jacobi method sometimes converges even if these conditions are not satisfied.
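The spectral-radius condition can be checked numerically; a minimal sketch for the Jacobi iteration matrix D^(-1)R (the helper name spectral_radius is an assumption):

    import numpy as np

    def spectral_radius(A):
        # rho(D^-1 R) for the Jacobi splitting A = D + R.
        A = np.asarray(A, dtype=float)
        D = np.diag(np.diag(A))        # diagonal component D
        R = A - D                      # remainder R
        M = np.linalg.solve(D, R)      # iteration matrix D^-1 R
        return max(abs(np.linalg.eigvals(M)))

    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 5.0, 3.0],
                  [2.0, 1.0, 6.0]])
    print(spectral_radius(A) < 1)  # True: Jacobi converges for this A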
Algorithm Gauss-Seidel method:
Inputs: A, b
Output: φ
choose an initial guess φ to the solution
repeat until convergence
    for i ← 1 to n
        σ ← 0
        for j ← 1 to n
            if j ≠ i then
                σ ← σ + a_ij φ_j
            end if
        end (j-loop)
        φ_i ← (1/a_ii)(b_i - σ)
    end (i-loop)
    check if convergence is reached
end (repeat)
IV. Experimentation and profiling
A priori analysis:
Analyzing the algorithm and calculating its computation time before execution is termed a priori analysis. After an analytical review of the algorithm, its calculated time complexity is O(n²).
Computing the time complexity:

Jacobi method:

Statement                  | Frequency | Best case | Worst case
for i ← 1 to n             | n-1       | n-1       | n
σ ← 0                      | 1         | 1         | 1
for j ← 1 to n             | n-1       | n-1       | n
if i ≠ j then              | 1         | n-1       | n
σ ← σ + a_ij φ_j           | 3         | 3         | 3
φ_i ← (1/a_ii)(b_i - σ)    | 4         | n-1       | n

Figure 4.1: Time complexity for the Jacobi method.
T_best ∝ (n-1)(n-1) + 1 + (n-1)(n-1) + (n-1) + 4(n-1)
       = 2(n-1)² + 5(n-1) + 1
       = 2n² + n - 2,
so T_best = Ω(n²).
Gauss-Seidel method:

Statement                                      | Frequency | Best case | Worst case
initial guess x^(0) to the solution, k = 0     | 1         | 1         | 1
check if convergence is reached                | 1         | 1         | 1
while convergence not reached do               | n         | n-1       | n
for i ← 1 to n                                 | n         | n-1       | n
σ = 0                                          | 1         | 1         | 1
for j ← 1 to n                                 | n         | n-1       | n
if j ≠ i then                                  | 1         | 1         | 1
σ = σ + a_ij x_j^(k)                           | 3         | 1         | 1
x_i^(k+1) = (b_i - σ) / a_ii                   | 3         | 1         | 1
check if convergence is reached                | 1         | 1         | 1
k = k + 1                                      | 2         | 1         | 1
loop (while convergence condition not reached) | n         | n-1       | n

Figure 4.2: Time complexity for the Gauss-Seidel method.
T_best ∝ 4n² - 4n + 10,
so T_best = Ω(n²).
A posteriori analysis:
A posteriori analysis is carried out after the algorithm has been executed on a computer; this analysis is also called profiling.
Jacobi method:
T_worst ∝ (n-1)n + 1 + (n-1)n + n + 9 + 4n = 2n² + 3n + 10,
so T_worst = O(n²).
Gauss-Seidel method:
T_worst ∝ 4n² + 13,
so T_worst = O(n²).
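The a posteriori timings can be gathered by running the solver on data sets of increasing size; the sketch below (the sizes, random test systems, and vectorized sweep are illustrative assumptions) times Jacobi sweeps and should show roughly quadrupled time when n doubles, consistent with O(n²) per sweep:

    import time
    import numpy as np

    def jacobi_sweep_time(n, iters=100):
        # Time `iters` Jacobi sweeps on a random diagonally dominant system.
        rng = np.random.default_rng(0)
        A = rng.random((n, n)) + n * np.eye(n)   # force diagonal dominance
        b = rng.random(n)
        D = np.diag(A)                # diagonal entries as a vector
        R = A - np.diag(D)            # remainder
        x = np.zeros(n)
        start = time.perf_counter()
        for _ in range(iters):
            x = (b - R @ x) / D       # one vectorized Jacobi sweep
        return time.perf_counter() - start

    for n in (100, 200, 400, 800):
        print(n, jacobi_sweep_time(n))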
The figures below show the time-complexity graphs for the iterative methods under different data sets.
Figure 5.1: Time complexity from the a priori analysis.
Figure 5.2: Time complexity from the a posteriori analysis.
V. Summary / conclusion
After analyzing the proposed algorithms for the given problem, the best-case, average-case, and worst-case time complexities are the same for both the Jacobi and the Gauss-Seidel methods. The time complexity of the iterative algorithms is O(n²).