
Metaparallelism

IP.com Disclosure Number: IPCOM000105418D
Original Publication Date: 1993-Jul-01
Included in the Prior Art Database: 2005-Mar-19
Document File: 6 page(s) / 267K

Publishing Venue

IBM

Related People

Ekanadham, K: AUTHOR [+2]

Abstract

There are two distinct types of parallelism, which can be categorized as Coarse-Grained (CG) parallelism and Fine-Grained (FG) parallelism. Fine-grained parallelism operates at the instruction level and partitions a putative instruction stream that has a single logical register file and a single memory hierarchy among several processor elements. As such, fine-grained parallelism allows successive instructions to be executed in parallel and requires that the results of such executions conform to a RUBRIC OF SEQUENTIAL CORRECTNESS. A further implication is that the memory hierarchy that supports fine-grained parallelism is common to all processor elements that share the same putative instruction stream.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 17% of the total text.

Metaparallelism

      There are two distinct types of parallelism, which can be
categorized as Coarse-Grained (CG) parallelism and Fine-Grained (FG)
parallelism.  Fine-grained parallelism operates at the instruction
level and partitions a putative instruction stream that has a single
logical register file and a single memory hierarchy among several
processor elements.  As such, fine-grained parallelism allows
successive instructions to be executed in parallel and requires that
the results of such executions conform to a RUBRIC OF SEQUENTIAL
CORRECTNESS.  A further implication is that the memory hierarchy
that supports fine-grained parallelism is common to all processor
elements that share the same putative instruction stream.
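
      The rubric of sequential correctness can be made concrete with
a small sketch.  The Go program below is illustrative only and is
not taken from the disclosure; the Instr type and the function names
runSequential and conformsToSequentialCorrectness are invented for
this example.  It executes a toy instruction stream against a single
logical register file in program order, and states the rubric as a
check that any parallel schedule must reproduce that sequential
result.

package main

import "fmt"

// Instr is a toy register-transfer instruction: Dst = Op(SrcA, SrcB).
// (Hypothetical representation chosen for this sketch.)
type Instr struct {
    Dst, SrcA, SrcB int
    Op              func(a, b int) int
}

// runSequential gives the reference semantics: a single logical
// register file with instructions applied strictly in program order.
func runSequential(prog []Instr, regs []int) []int {
    out := append([]int(nil), regs...)
    for _, in := range prog {
        out[in.Dst] = in.Op(out[in.SrcA], out[in.SrcB])
    }
    return out
}

// conformsToSequentialCorrectness states the rubric: whatever schedule
// the processor elements used, the architected result must equal the
// purely sequential one.
func conformsToSequentialCorrectness(parallelResult []int, prog []Instr, regs []int) bool {
    ref := runSequential(prog, regs)
    for i := range ref {
        if parallelResult[i] != ref[i] {
            return false
        }
    }
    return true
}

func main() {
    add := func(a, b int) int { return a + b }
    prog := []Instr{
        {Dst: 2, SrcA: 0, SrcB: 1, Op: add}, // r2 = r0 + r1
        {Dst: 3, SrcA: 2, SrcB: 1, Op: add}, // r3 = r2 + r1 (uses the result above)
    }
    regs := []int{1, 2, 0, 0}
    result := runSequential(prog, regs)
    fmt.Println(result, conformsToSequentialCorrectness(result, prog, regs)) // [1 2 3 5] true
}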

      The basic computational entity within coarse-grained
parallelism is a THREAD, which is given a name.  Each THREAD is said
to comprise a sequence of steps (beads), each of which is one of the
following types:

1.  COMPUTE STEP (USING LOCAL MEMORY/REGISTERS)
2.  CONDITIONAL FORK AND THREAD(NAME) CREATION
3.  SEND BUFFER TO NAME
4.  WAIT & RECEIVE BUFFER

These threads are called CSR because of the compute-send-receive
aspect of their structure.  The COMPUTE-STEP involves a long
sequence of instructions that operate within the context of a local
memory comprising private registers and a private memory hierarchy.
The SEND-BUFFER and WAIT&RECEIVE-BUFFER steps are performed in
conjunction with the local memory associated with the named-THREAD,
and different named-THREADS can have different templates for
realizing the structure of the local memory within the common
hardware.  An important parameter of such coarse-grained parallelism
is the ratio of the COMPUTE-STEP time to the SEND-BUFFER time.
Coarse-grained parallelism usually involves a distributed memory
system in which each CSR is supported by its own private memory.
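
      A minimal sketch of one way a CSR thread could be realized is
given below.  It is illustrative only and not taken from the
disclosure: thread names are mapped to buffered Go channels standing
in for the per-thread buffers, every identifier (Buffer,
sendBufferTo, waitAndReceive, worker) is invented for this example,
and the conditional fork is reduced to a check that there is enough
work to amortize the SEND-BUFFER time against the COMPUTE-STEP time.

package main

import (
    "fmt"
    "sync"
)

// Buffer is the unit exchanged by SEND-BUFFER and WAIT&RECEIVE-BUFFER.
// (Hypothetical representation chosen for this sketch.)
type Buffer []int

// Each named THREAD gets its own receive queue; the map stands in for
// the naming mechanism, not for any particular hardware structure.
var (
    mailboxes = map[string]chan Buffer{}
    mu        sync.Mutex
)

func mailbox(name string) chan Buffer {
    mu.Lock()
    defer mu.Unlock()
    if _, ok := mailboxes[name]; !ok {
        mailboxes[name] = make(chan Buffer, 4)
    }
    return mailboxes[name]
}

// sendBufferTo corresponds to step 3, SEND BUFFER TO NAME; it delivers
// a private copy so the receiver's local memory stays private.
func sendBufferTo(name string, b Buffer) {
    mailbox(name) <- append(Buffer(nil), b...)
}

// waitAndReceive corresponds to step 4, WAIT & RECEIVE BUFFER.
func waitAndReceive(name string) Buffer { return <-mailbox(name) }

// worker is one CSR thread: a COMPUTE STEP over its local memory,
// bracketed by a receive and a send.
func worker(name, parent string, wg *sync.WaitGroup) {
    defer wg.Done()
    local := waitAndReceive(name) // arrives as a private copy
    sum := 0
    for _, v := range local { // COMPUTE STEP (local memory only)
        sum += v
    }
    sendBufferTo(parent, Buffer{sum})
}

func main() {
    var wg sync.WaitGroup
    data := Buffer{1, 2, 3, 4, 5, 6}

    // CONDITIONAL FORK AND THREAD(NAME) CREATION: fork only if there
    // is enough work to justify the SEND-BUFFER cost.
    forked := 0
    if len(data) > 1 {
        for i, name := range []string{"left", "right"} {
            wg.Add(1)
            forked++
            go worker(name, "main", &wg)
            sendBufferTo(name, data[i*len(data)/2:(i+1)*len(data)/2])
        }
    }
    total := 0
    for i := 0; i < forked; i++ {
        total += waitAndReceive("main")[0]
    }
    wg.Wait()
    fmt.Println(total) // 21
}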

Examples of FG parallelism are:

o   MSIS - Multisequencing a Single Instruction Stream

          MSIS is a uniprocessor organization in which a set of
    processing elements (PE) working in concert execute Segments of
    the instruction stream.  The Segments are either P-Segments,
    normal uniprocessor instruction-stream portions that are
    processed in the E-MODE of MSIS and produce Z-Segments, or
    Z-Segments, which are processed in Z-MODE by MSIS.  The main
    difference between E-MODE and Z-MODE is that during E-MODE each
    PE sees all instructions in the Segment and executes the ones
    assigned to it, whereas during Z-MODE a PE sees only the
    instructions assigned to it (see the sketch at the end of this
    example).

          Because all PEs see all instructions in E-MODE, each PE
    can create the Z-CODE it will require to re-execute the Segment
    as a Z-Segment; the Z-CODE is stored in the Z-CACHE, and
    associated with instructions in the Z-CODE are S-LISTS and
    D-LISTS as appropriate.  An S-LIST...
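
      The following sketch illustrates only the E-MODE/Z-MODE
distinction described above and is not taken from the disclosure.
The Instr type, the precomputed PE assignment, and the names eMode
and zMode are assumptions made for this example; the actual MSIS
machinery (Z-CACHE, S-LISTS, D-LISTS) is not modeled.

package main

import "fmt"

// Instr stands in for one instruction of a P-Segment.  The assignment
// of instructions to PEs is shown as a precomputed field; the actual
// MSIS assignment rule is not part of this sketch.
type Instr struct {
    Text string
    PE   int
}

// eMode: every PE scans the whole P-Segment, executes only the
// instructions assigned to it, and records them as its private Z-CODE
// for the Segment.
func eMode(segment []Instr, numPE int) map[int][]Instr {
    zCode := make(map[int][]Instr, numPE)
    for pe := 0; pe < numPE; pe++ {
        for _, in := range segment { // each PE sees ALL instructions
            if in.PE == pe {
                // ... execute the instruction here ...
                zCode[pe] = append(zCode[pe], in) // remember it for Z-MODE
            }
        }
    }
    return zCode
}

// zMode: on re-execution each PE replays only its own Z-CODE; it never
// sees the rest of the Segment.
func zMode(zCode map[int][]Instr) {
    for pe, code := range zCode {
        for _, in := range code {
            fmt.Printf("PE%d executes %s\n", pe, in.Text)
        }
    }
}

func main() {
    segment := []Instr{{"i0", 0}, {"i1", 1}, {"i2", 0}, {"i3", 1}}
    zMode(eMode(segment, 2))
}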