
A Globally Convergent Parallel Algorithm for Zeros of Polynomial Systems

IP.com Disclosure Number: IPCOM000128714D
Original Publication Date: 1986-Dec-31
Included in the Prior Art Database: 2005-Sep-16
Document File: 13 page(s) / 43K

Publishing Venue

Software Patent Institute

Related People

Alexander P. Morgan: AUTHOR
Layne T. Watson: AUTHOR


This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 9% of the total text.



TR 86

A Globally Convergent Parallel Algorithm for Zeros of Polynomial Systems

Alexander P. Morgan and Layne T. Watson

Abstract.

Certain classes of nonlinear systems of equations, such as polynomial systems, have properties that make them particularly amenable to solution on distributed computing systems. Some algorithms, considered unfavorably on a single-processor serial computer, may be excellent on a distributed system. This paper considers the solution of polynomial systems of equations via a globally convergent homotopy algorithm on a hypercube. Some computational results are reported.
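To make the abstract's terminology concrete, the following is a minimal sketch of generic homotopy continuation for a single univariate polynomial; it is not the algorithm developed in this paper, and the target polynomial f, the start system g, and the step and iteration counts are arbitrary illustrative choices. Each known root of the start system x^3 - 1 is followed to a root of f as the homotopy parameter t moves from 0 to 1:

import cmath

def f(x):  return x**3 - 2*x - 5          # illustrative target polynomial
def fp(x): return 3*x**2 - 2              # its derivative
def g(x):  return x**3 - 1                # start system with known roots
def gp(x): return 3*x**2

def track(x, steps=200):
    # Follow one root of g to a root of f along H(x,t) = (1-t) g(x) + t f(x),
    # stepping t and correcting with a few Newton iterations at each step.
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(5):
            H  = (1 - t) * g(x) + t * f(x)
            dH = (1 - t) * gp(x) + t * fp(x)
            x -= H / dH
    return x

starts = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]   # cube roots of unity
for x0 in starts:
    root = track(x0)
    print(root, abs(f(root)))

Each start root generates its own path, and the paths are independent of one another, which is one reason such computations map naturally onto distributed machines: separate paths can be tracked on separate processors.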

1. Introduction.

Supercomputing capability can be achieved in several different ways: sheer hardware speed, algorithmic efficiency, pipelines and vector processors, or multiprocessor systems. Many (perhaps too many) different computer architectures have already been realized, and computational experience on these machines is accumulating. The reality is that very fast hardware is very expensive, and is likely to remain so, and access to huge vector computers like CRAY-2's is and will remain limited for some time to come. Many supercomputer systems developed at universities have both severely limited access and formidable programming problems. Despite federal initiatives and satellite links, using national supercomputer centers is awkward at best, and has the flavor of monolithic university central computer centers, only on a national scale.

The hypercube concept, that of cheap, independent processors connected in a reasonably efficient yet still manageable topology, seems to offer an opportunity for "supercomputing for the masses". A hypercube computer consists of 2^n processors (nodes), each with memory, floating-point hardware, and (possibly) communication hardware. The nodes are independent and asynchronous, and connected to each other like the corners of an n-dimensional cube.

For the purpose of discussion here, there are three classes of nonlinear systems of equations: (1) large systems with sparse Jacobian matrices, (2) small transcendental (nonpolynomial) systems with dense Jacobian matrices, and (3) small polynomial systems with dense Jacobian matrices. Sparsity for small problems is not significant, and large systems with dense Jacobian matrices are intractable, so these two classes are not counted. Of course medium-sized problems are also of practical interest, and the boundaries between small, medium, and large change with computer hardware technology and algorithmic development. Depending on algorithmic efficiency, hardware capability, and the significance of sparsity, a medium-sized problem is treated as though it belongs to one of the above three classes anyway, so there is no need for a "medium" class. Large sparse nonlinear systems of equations, such as ...
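To make the node-connection rule above concrete, here is a small illustrative sketch (not taken from the paper): label the 2^n nodes with n-bit integers; two nodes are directly connected exactly when their labels differ in a single bit, so every node has n neighbors and any two nodes are at most n communication hops apart.

def neighbors(node, n):
    # Labels of the n nodes directly wired to `node` in an n-cube:
    # flip each of the n bits of the label in turn.
    return [node ^ (1 << bit) for bit in range(n)]

n = 3   # a 3-cube: 2**3 = 8 nodes, the corners of an ordinary cube
for node in range(2 ** n):
    print(f"{node:0{n}b} <->", [f"{m:0{n}b}" for m in neighbors(node, n)])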