Metaparallelism - Use of Computation Priority

IP.com Disclosure Number: IPCOM000105712D
Original Publication Date: 1993-Sep-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 6 page(s) / 236K

Publishing Venue

IBM

Related People

Ekanadham, K: AUTHOR [+2]

Abstract

Metaparallelism is a process that determines the form of parallelism that is to be used in a specific application. Metaparallelism has two interfaces:

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 23% of the total text.

      Metaparallelism is a process that determines the form of
parallelism that is to be used in a specific application.
Metaparallelism has two interfaces:

o   Information derived from prior executions.
o   Explicit statements made in the program or compiler output that
    bear on the form of parallelism.

Metaparallelism uses aspects of program behavior as it relates to the
capabilities of the Metaparallel Processor to cope with this behavior
to determine the type of parallelism that is to be pursued.
Metaparallelism employs speculation, that is, allocation of
resources to computations without a guarantee that these computations
are required, in order to complete the application in less time.

The forms of parallelism that metaparallelism can select from are:

o   Path-oriented forms of parallelism.
o   Path-oriented forms of parallelism with speculation.
o   Computation-oriented parallelism with bifurcation at branches.
o   A set of independent paths that intercommunicate by sending
    messages to each other.
o   A combination of the above.

Metaparallelism employs means at its disposal to alter the form of
parallelism specified by the programmer/compiler at the source level
and to notify the programmer about significant aspects that interfere
with the parallelization of the application.

      There are two distinct types of parallelism which can be
categorized as Coarse Grained (CG) parallelism and Fine Grained (FG)
parallelism.  Fine grained parallelism operates on the instruction
level and partitions a putative instruction stream that has a single
logical register file and a single memory hierarchy among several
processor elements.  As such, fine-grained parallelism allows
successive instructions to be executed in parallel and requires that
the result of such executions conform to a RUBRIC OF SEQUENTIAL
CORRECTNESS.  Another implication of this is that the memory
hierarchy that supports fine-grained parallelism is common to all
processor elements that share the same putative instruction stream.

      The basic computational entity within coarse-grained
parallelism is a THREAD which is given a name.  Each THREAD is said
to comprise a sequence of steps (beads) which are one of the
following types:

1.  Compute Step (Using Local Memory/Registers)
2.  Conditional Fork and Thread(Name) Creation
3.  Send Buffer to Name
4.  Wait & Receive Buffer

These threads are called CSR because of the compute-send-receive
aspect of their structure.  The definition of the COMPUTE-STEP
involves a long sequence of instructions that operate within the
context of a local memory composed of private registers and
a private memory hierarchy.  The operation of the SEND-BUFFER and
WAIT&RECEIVE-BUFFER is performed in conjunction with the local memory
associated with the named-THREAD and different named-THREADS can have
different templates for realizing the structu...