# Substitution Inverse of Sparse Matrices

Original Publication Date: 1973-Jun-01

Included in the Prior Art Database: 2005-Feb-26

## Publishing Venue

IBM

## Related People

Crowder, HP: AUTHOR [+2]

## Abstract

Given a sparse matrix A, it is required to construct therefrom a procedure which will efficiently solve the linear equations Ax = y, when y is given. It is supposed that the equations will be solved for many different instances of y, but that A will remain fixed. The procedure is usually more economical than other known algorithms.


**This is the abbreviated version, containing approximately 31% of the total text.**


A nonsingular square matrix is called atomic if it differs from the identity matrix in exactly one entry. It is of type I if the exceptional entry is ±1, and of type II if it is a number r ≠ 0, ±1. Thus if T is atomic, for some indices i, j: Type I: T = I, except T(ij) = ±1; Type II: T = I, except T(ij) = r, r ≠ 0, ±1.

Let the matrix A and the atomic matrix T have the same order. If i = j above, then TA results from A by multiplying row i of A by ±1 or r, according to the type of T, and AT results from A by the same operation on column i. When i ≠ j, TA adds row j of A to row i or subtracts it (type I), or adds r times row j to row i (type II); AT is formed by the corresponding column operations. It is noted that the inverse of an atomic matrix is itself atomic.
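As a concrete illustration (the helper `atomic` and the sample matrix are my own, not from the disclosure), the following NumPy sketch shows an off-diagonal type II factor acting as a row operation, and the fact that its inverse is again atomic:

```python
import numpy as np

def atomic(n, i, j, value):
    """n x n atomic matrix: the identity except that entry (i, j) = value.
    Type I:  value = +1 or -1;  Type II: value = r with r != 0, +1, -1."""
    T = np.eye(n)
    T[i, j] = value
    return T

A = np.array([[2., 0., 1.],
              [4., 1., 0.],
              [0., 3., 5.]])

# Type II with i = 1, j = 0, r = -2: TA adds r times row 0 to row 1 of A.
T = atomic(3, 1, 0, -2.0)
TA = T @ A                       # row 1 becomes [0, 1, -2]
assert np.allclose(TA[1], A[1] - 2.0 * A[0])

# The inverse of an atomic matrix is atomic: for i != j, negate the entry.
assert np.allclose(np.linalg.inv(T), atomic(3, 1, 0, 2.0))
```

Note that this particular factor also creates a zero in position (1, 0), which is exactly the kind of step the building strategy below exploits.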

Let A be a nonsingular matrix of order n. There exist atomic matrices F(1),..,F(p) and G(1),..,G(q) so that the matrix Ā = F(p)..F(2)F(1) A G(1)G(2)..G(q) is a permutation of the identity matrix of order n. (This is well known; it may be done entirely by the "elementary row operations", which consist in atomic matrices of type II applied on the left only, i.e., q = 0.) Noting that A⁻¹ = G(1)G(2)..G(q) Ā⁻¹ F(p)..F(2)F(1), it may be seen that for any n-vector y, the system of linear equations Ax = y can be solved for x by the calculation

x = G(1)(G(2)(..(G(q)(Ā⁻¹(F(p)(..(F(2)(F(1)y))..))))..))

performed as indicated by the parentheses. The algorithm which consists in so performing that calculation is called the substitution form of the inverse. The task of obtaining the atomic matrices F(1),.., G(q) will be called that of building the inverse; in the language of computational complexity, it is known as "preconditioning".
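The construction can be sketched in NumPy (the function names, the elimination order, and the no-pivoting assumption are mine; the disclosure does not prescribe a particular sequence of factors). This sketch uses left factors only, so q = 0, and reduces A all the way to the identity; applying the stored factors to y in order then yields x:

```python
import numpy as np

def build_substitution_inverse(A):
    """Reduce A to the identity by left atomic factors only (q = 0).
    A sketch assuming every pivot B[k, k] is nonzero when its column is
    eliminated (no pivoting).  Returns the factors F(1), .., F(p) as
    (i, j, value) triples, so that F(p)..F(1) A = I."""
    B = A.astype(float).copy()
    n = B.shape[0]
    factors = []                     # (i, j, value): T = I except T[i, j] = value
    for k in range(n):
        if B[k, k] != 1.0:           # type II diagonal factor: make pivot 1
            r = 1.0 / B[k, k]
            factors.append((k, k, r))
            B[k] *= r
        for i in range(n):           # type II off-diagonal: clear column k
            if i != k and B[i, k] != 0.0:
                r = -B[i, k]
                factors.append((i, k, r))
                B[i] += r * B[k]
    return factors

def solve(factors, y):
    """Substitution form of the inverse: x = F(p)(..(F(2)(F(1) y))..).
    Each off-diagonal factor costs one multiplication and one addition."""
    x = np.array(y, dtype=float)
    for i, j, r in factors:
        if i == j:
            x[i] *= r
        else:
            x[i] += r * x[j]
    return x

A = np.array([[2., 0., 1.],
              [4., 1., 0.],
              [0., 3., 5.]])
y = np.array([3., 5., 8.])
x = solve(build_substitution_inverse(A), y)
assert np.allclose(A @ x, y)
```

Counting the type II factors emitted by `build_substitution_inverse` gives exactly the multiplication count discussed next; the building cost is paid once, while `solve` is repeated for each new y.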

It is noted that in general the number of multiplications required to solve for x by the formula above will be exactly the number of atomic matrices of type II among F(1),..,G(q). The goal is, for a given A, to build a substitution inverse for which that number is as small as possible. (No economizing is attempted here on the other arithmetic operations, additions and subtractions, whose number usually will not exceed the number of multiplications.) The general strategy for doing this will be: given the partly transformed matrix B = F..A..G, try to find an atomic T so that either TB or BT has at least one more zero than does B.
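One greedy step of that strategy can be sketched as follows (a hypothetical illustration restricted to row factors acting on the left; the function name and tolerance are mine, not the disclosure's procedure):

```python
import numpy as np

def find_zero_creating_factor(B, tol=1e-12):
    """Search for an atomic T (acting on rows) such that TB has at least one
    more zero than B.  Returns (i, j, r), meaning T = I except T[i, j] = r,
    or None if no zero-creating row factor exists."""
    n = B.shape[0]
    for i in range(n):
        zeros_in_row = np.sum(B[i] == 0)
        for j in range(n):
            if i == j:
                continue
            for k in range(n):
                # Multiplier that annihilates entry (i, k) using row j.
                if B[j, k] != 0 and B[i, k] != 0:
                    r = -B[i, k] / B[j, k]
                    new_row = B[i] + r * B[j]
                    new_row[np.abs(new_row) < tol] = 0.0
                    if np.sum(new_row == 0) > zeros_in_row:
                        return i, j, r
    return None

B = np.array([[2., 1.],
              [4., 3.]])
step = find_zero_creating_factor(B)   # (0, 1, -0.5): add -0.5 * row 1 to row 0
assert step == (0, 1, -0.5)
```

Repeating such steps for as long as a zero-creating factor can be found is what drives the multiplication count down; as noted next, it is not known whether such a factor always exists.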

If this were always possible, then a substitution inverse could be found requiring no more multiplications than the number of nonzero entries of A (such an inverse is called ideal). It is not known whether that is always possible, and the only known wa...