Message Length Effects for Solving Polynomial Systems on a Hypercube

IP.com Disclosure Number: IPCOM000128715D
Original Publication Date: 1986-Dec-31
Included in the Prior Art Database: 2005-Sep-16
Document File: 15 page(s) / 46K

Publishing Venue

Software Patent Institute

Related People

Wolfgang Pelz: AUTHOR [+4]

Abstract

Comparisons between problems solved on uniprocessor systems and those solved on distributed computing systems generally ignore the overhead associated with information transfer from one processor to another. This paper considers the solution of polynomial systems of equations via a globally convergent homotopy algorithm on a hypercube and some timing results for different situations.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 9% of the total text.

TR 86: Message Length Effects for Solving Polynomial Systems on a Hypercube

Wolfgang Pelz and Layne T. Watson

1. Introduction.

Supercomputing capability can be achieved in several different ways: sheer hardware speed, algorithmic efficiency, pipelines and vector processors, or multiprocessor systems. Many (perhaps too many) different computer architectures have already been realized, and computational experience on these machines is accumulating. The reality is that very fast hardware is very expensive, and is likely to remain so, and access to huge vector computers like CRAY-2's is and will remain limited for some time to come. Many supercomputer systems developed at universities have both severely limited access and formidable programming problems. Despite federal initiatives and satellite links, using national supercomputer centers is awkward at best and has the flavor of monolithic university central computer centers, only on a national scale. The hypercube concept, that of cheap, independent processors connected in a reasonably efficient yet still manageable topology, seems to offer an opportunity for "supercomputing for the masses" [25].

A hypercube computer consists of 2^n processors (nodes), each with memory, floating-point hardware, and (possibly) communication hardware. The nodes are independent and asynchronous, and connected to each other like the corners of an n-dimensional cube. At first glance it appears that the time needed to solve a problem on one processor is reduced by a factor of 2^n for a hypercube with that number of processors. Of course this reduction is only theoretical, since it assumes that the computations are equally divided among the nodes and it ignores the time associated with communication among nodes. Typically the overhead for this communication is considered to be small relative to the time used for computational purposes.
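The node labeling and the idealized 2^n speedup described above can be sketched as follows. This is an illustrative model, not code from the paper: the function names and the simple per-step communication cost t_comm are assumptions made for the example.

```python
def neighbors(node: int, n: int) -> list[int]:
    """In an n-dimensional hypercube, two of the 2**n nodes are connected
    exactly when their binary labels differ in one bit, so each node has n
    neighbors, reached by flipping each bit of its label in turn."""
    return [node ^ (1 << bit) for bit in range(n)]


def speedup(t_compute: float, t_comm: float, n: int) -> float:
    """Idealized speedup on 2**n nodes: the work divides evenly
    (t_compute / 2**n), but each node also pays a communication cost
    t_comm (an assumed lump cost, for illustration only)."""
    p = 2 ** n
    return t_compute / (t_compute / p + t_comm)


# A 3-cube has 8 nodes; node 0 (binary 000) neighbors 001, 010, 100.
print(neighbors(0, 3))            # [1, 2, 4]
# With zero communication cost the speedup equals the node count ...
print(speedup(1000.0, 0.0, 3))    # 8.0
# ... but even modest overhead erodes the theoretical factor of 2**n.
print(speedup(1000.0, 25.0, 3))
```

The XOR-based neighbor rule is the standard consequence of the corner-of-a-cube wiring: adjacent corners differ in exactly one coordinate.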
For the purpose of discussion here, there are three classes of nonlinear systems of equations: (1) large systems with sparse Jacobian matrices, (2) small transcendental (nonpolynomial) systems with dense Jacobian matrices, and (3) small polynomial systems with dense Jacobian matrices. Sparsity for small problems is not significant, and large systems with dense Jacobian matrices are intractable, so these two classes are not counted. Of course medium-sized problems are also of practical interest, and the boundaries
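As a concrete instance of class (3), the sketch below evaluates a small 2x2 polynomial system and its dense Jacobian. The particular system and the function names are invented for illustration and do not come from the paper; the homotopy algorithm itself is not reproduced here.

```python
def F(x: float, y: float) -> list[float]:
    """A hypothetical 2x2 polynomial system F(x, y) = 0.
    By construction, (x, y) = (1, 1) is a solution."""
    return [x**2 + y**2 - 2.0,
            x**2 - y]


def jacobian(x: float, y: float) -> list[list[float]]:
    """The Jacobian of F is dense: every partial derivative is
    generally nonzero, unlike the sparse Jacobians of class (1)."""
    return [[2.0 * x, 2.0 * y],
            [2.0 * x, -1.0]]


print(F(1.0, 1.0))         # [0.0, 0.0] -- (1, 1) solves the system
print(jacobian(1.0, 1.0))  # [[2.0, 2.0], [2.0, -1.0]] -- all entries filled
```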
