A Chordal Preconditioner for Large Scale Optimization

IP.com Disclosure Number: IPCOM000148380D
Original Publication Date: 1986-Jun-30
Included in the Prior Art Database: 2007-Mar-29
Document File: 46 page(s) / 2M

Publishing Venue

Software Patent Institute

Related People

Coleman, Thomas F.: AUTHOR

Abstract

A Chordal Preconditioner for Large Scale Optimization* Thomas F. Coleman

This text was extracted from a PDF file.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 8% of the total text.

A Chordal Preconditioner for Large Scale Optimization*

Thomas F. Coleman

TR 86-762 June 1986

Department of Computer Science
Cornell University
Ithaca, NY 14853

* This research was supported in part by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under contract DE-AC02-83ER10369 and grant DE-FG02-86ER25013.

A Chordal Preconditioner for Large Scale Optimization

Thomas F. Coleman

Computer Science Department
Cornell University
Ithaca, New York 14853

June 22, 1986

Abstract: We propose an automatic preconditioning scheme for large sparse numerical optimization. The strategy is based on an examination of the sparsity pattern of the Hessian matrix: using a graph-theoretic heuristic, a block diagonal approximation to the Hessian matrix is induced. The blocks are submatrices of the Hessian matrix; furthermore, each block is chordal. That is, under a positive definiteness assumption, each block can be Cholesky factored without creating any new nonzeroes (fill). Therefore the preconditioner is space efficient. We conduct a number of numerical experiments to determine the effectiveness of the preconditioner in the context of a linear conjugate gradient algorithm for optimization.
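
The structural claim above, that a chordal block admits a Cholesky factorization with no fill, can be checked numerically. The following sketch is illustrative only and is not the paper's construction: it builds an arbitrary positive definite band matrix (whose adjacency graph is an interval graph, hence chordal, with the natural ordering as a perfect elimination ordering) and verifies that the Cholesky factor's nonzero pattern stays inside the original lower-triangular pattern.

```python
# Illustrative only (not the paper's construction): a dense-banded SPD matrix
# has a chordal adjacency graph (an interval graph), and with the natural
# ordering its Cholesky factor creates no entries outside the original
# lower-triangular nonzero pattern, i.e. no fill.
import numpy as np

n, bw = 8, 2                                   # dimension and bandwidth
rng = np.random.default_rng(0)
H = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - bw), i + 1):     # populate the band symmetrically
        H[i, j] = H[j, i] = rng.uniform(0.1, 1.0)
H += n * np.eye(n)                             # diagonal shift: safely positive definite

L = np.linalg.cholesky(H)                      # lower-triangular factor, H = L @ L.T
pattern_H = np.tril(H) != 0
pattern_L = L != 0
assert np.all(pattern_L <= pattern_H)          # factor pattern contained in H's: no fill
print("nonzeros in tril(H):", int(pattern_H.sum()), "| nonzeros in L:", int(pattern_L.sum()))
```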

0. Introduction

    In this paper we are concerned with the minimization of a smooth function f : R^n → R in the circumstance where n is large and the Hessian matrix H(x) is sparse. An efficient method for such a problem must exploit sparsity at every turn: in particular, attention must be paid both to the determination of H(x) (or an approximation) as well as to subsequent computations involving H(x) - especially the solution of linear systems. Theory, algorithms, and software for the efficient sparse finite-difference approximation to H(x) have recently been developed (e.g. Powell and Toint [1979]; Coleman and Moré [1983, 1984]; Coleman, Garbow, and Moré [1984, 1985]; Coleman and Cai [1986]); we propose no improvements here. Indeed, the algorithm proposed here can use this previous work in a straightforward way. The focus of the present work is on the exploitation of sparsity after H has been determined (perhaps via a sparse finite-difference procedure). We assume throughout that the sparsity pattern of H is both known and fixed (for all x).
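
As a rough illustration of the cited finite-difference work (a minimal sketch only, not the cited algorithms: the greedy grouping, the helper names group_columns and fd_sparse_hessian, and the step size are assumptions), structurally orthogonal columns of H can be estimated together from a single extra gradient evaluation:

```python
# Minimal sketch: `grad` is assumed to return the gradient of f, and
# `pattern` is a Boolean array with pattern[i, j] True iff H[i, j] may be
# nonzero.  Columns that share no row are perturbed together, so one extra
# gradient evaluation recovers a whole group of Hessian columns at once.
import numpy as np

def group_columns(pattern):
    """Greedily place structurally orthogonal columns in the same group."""
    n = pattern.shape[1]
    groups, used_rows = [], []
    for j in range(n):
        for g, rows in zip(groups, used_rows):
            if not np.any(pattern[:, j] & rows):   # no shared row: join this group
                g.append(j)
                rows |= pattern[:, j]
                break
        else:
            groups.append([j])
            used_rows.append(pattern[:, j].copy())
    return groups

def fd_sparse_hessian(grad, x, pattern, h=1e-6):
    n = x.size
    H = np.zeros((n, n))
    g0 = grad(x)
    for group in group_columns(pattern):
        d = np.zeros(n)
        d[group] = h
        dg = (grad(x + d) - g0) / h                # one gradient difference per group
        for j in group:
            rows = pattern[:, j]
            H[rows, j] = dg[rows]                  # rows of column j are not mixed with others
    return H
```

The cited methods obtain far smaller numbers of groups, in particular by exploiting the symmetry of H; the sketch only conveys why the sparsity pattern must be known in advance.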

    A Newton-like method for minimization involves the repeated solution of linear systems

When x is close to a local minimizer x_* of f, then we can expect H to be positive definite; in such a case the method of conjugate gradients...
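
A minimal sketch of the preconditioned conjugate gradient iteration for the Newton system, written here as H s = -g (a notational assumption): M_solve is a placeholder for applying the inverse of a preconditioner, which with the scheme described in the abstract would amount to block-wise triangular solves against the no-fill Cholesky factors.

```python
# Minimal preconditioned CG sketch for H s = -g with H symmetric positive
# definite.  M_solve(r) applies the inverse of the preconditioner; the toy
# usage substitutes a Jacobi (diagonal) preconditioner simply to show where
# the preconditioner solve enters the iteration.
import numpy as np

def pcg(H, g, M_solve, tol=1e-10, maxiter=200):
    s = np.zeros_like(g)
    r = -g                              # residual of H s = -g at s = 0
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Hp = H @ p
        alpha = rz / (p @ Hp)
        s += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) <= tol * np.linalg.norm(g):
            break
        z = M_solve(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p       # new search direction
    return s

# Toy usage on a small SPD system.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)
g = rng.standard_normal(5)
s = pcg(H, g, M_solve=lambda r: r / np.diag(H))   # Jacobi stand-in preconditioner
assert np.allclose(H @ s, -g)
```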