Optimizing Tree-Structured Code to Determine Computational Step Time

IP.com Disclosure Number: IPCOM000105865D
Original Publication Date: 1993-Sep-01
Included in the Prior Art Database: 2005-Mar-20
Document File: 4 page(s) / 156K

Publishing Venue

IBM

Related People

Ekanadham, K: AUTHOR [+2]

Abstract

There are two distinct types of parallelism which can be categorized as Coarse Grained (CG) parallelism and Fine Grained (FG) parallelism. Fine-grained parallelism operates on the instruction level and partitions a putative instruction stream that has a single logical register file and a single memory hierarchy among several processor elements. As such, fine-grained parallelism allows successive instructions to be executed in parallel and requires that the result of such executions conform to a RUBRIC OF SEQUENTIAL CORRECTNESS. Another implication of this is that the memory hierarchy that supports fine-grained parallelism is common to all processor elements that share the same putative instruction stream.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 36% of the total text.

Optimizing Tree-Structured Code to Determine Computational Step Time

      There are two distinct types of parallelism which can be
categorized as Coarse Grained (CG) parallelism and Fine Grained (FG)
parallelism.  Fine-grained parallelism operates on the instruction
level and partitions a putative instruction stream that has a single
logical register file and a single memory hierarchy among several
processor elements.  As such, fine-grained parallelism allows
successive instructions to be executed in parallel and requires that
the result of such executions conform to a RUBRIC OF SEQUENTIAL
CORRECTNESS.  Another implication of this is that the memory
hierarchy that supports fine-grained parallelism is common to all
processor elements that share the same putative instruction stream.

      The basic computational entity within coarse-grained
parallelism is a THREAD, which is given a name.  Each THREAD is said
to comprise a sequence of steps (beads), each of which is one of the
following types:

1.  Compute Step (Using Local Memory/Registers)

2.  Conditional Fork and Thread(Name) Creation

3.  Send Buffer to Name

4.  Wait & Receive Buffer

These threads are called CSR threads because of the
compute-send-receive aspect of their structure.  The COMPUTE-STEP
consists of a long sequence of instructions that operate within the
context of a local memory, which comprises private registers and a
private memory hierarchy.  The SEND-BUFFER and WAIT&RECEIVE-BUFFER
operations are performed in conjunction with the local memory
associated with the named THREAD, and different named THREADs can
have different templates for realizing the structure of the local
memory within the common hardware.  An important parameter of such
coarse-grained parallelism is the ratio of the COMPUTE-STEP time to
the SEND-BUFFER time.  Coarse-grained parallelism usually involves a
distributed memory system in which each CSR is supported by its own
private memory.
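      The compute-send-receive structure above can be sketched in
code.  The following is a minimal illustration only, assuming a
shared table of per-thread mailboxes standing in for the named
buffers; the names mailboxes, send_buffer, and wait_receive are
hypothetical and do not appear in the disclosure:

```python
import queue
import threading

# Hypothetical sketch: each named THREAD owns a private mailbox.
# The mailbox table is the only shared structure; all computation
# happens on state local to the thread, as in a CSR thread.
mailboxes = {"main": queue.Queue(), "adder": queue.Queue()}

def send_buffer(dest, buf):
    # SEND-BUFFER step: deposit a buffer for the named thread.
    mailboxes[dest].put(buf)

def wait_receive(name):
    # WAIT&RECEIVE-BUFFER step: block until a buffer arrives.
    return mailboxes[name].get()

def adder_thread():
    # COMPUTE-STEP: a stand-in for a long instruction sequence that
    # operates only on local state received in the buffer.
    local = wait_receive("adder")
    send_buffer("main", sum(local))

# Creating and starting the thread plays the role of the
# Thread(Name)-creation step.
t = threading.Thread(target=adder_thread)
t.start()
send_buffer("adder", [1, 2, 3])
result = wait_receive("main")
t.join()
print(result)
```

In this sketch the ratio of COMPUTE-STEP time to SEND-BUFFER time is
governed by the length of the work inside adder_thread relative to
the queue hand-off cost.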

      To determine the run-time of a set of CSRs from the source
effectively, the computational step time of the compiled code must be
projected accurately.  A language that represents computations as a
tree structure allows the nodes to be processed in post-order and all
conventional compiler optimizations to be applied, so that the source
can be used to project the computational step time accurately.  If a
program can be cast in terms of a tree structure, optimization
techniques that reduce its instruction count allow the source code to
project the computational step time associated with the threads that
represent the nodes of the tree.  Without such source-derived
optimization, run-time estimates for alternative equivalent forms of
a set of CSRs would be worthless for projecting least-cost solutions
involving optimized compiled code.
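      As a rough sketch of this idea, the following post-order pass
over a hypothetical tuple-encoded expression tree folds constant
subtrees (a conventional compiler optimization) and counts the
surviving operators as a proxy for computational step time.  The tree
encoding and all names here are assumptions for illustration, not
taken from the disclosure:

```python
import operator

OPS = {"+": operator.add, "*": operator.mul}

def fold(node):
    # Post-order constant folding: process both children first,
    # then try to collapse this node into a single constant.
    if not isinstance(node, tuple):        # leaf: constant or variable
        return node
    op, left, right = node
    left, right = fold(left), fold(right)
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return OPS[op](left, right)        # both children constant: fold
    return (op, left, right)

def step_count(node):
    # Crude compute-step cost model: one instruction per surviving
    # operator node in the tree.
    if not isinstance(node, tuple):
        return 0
    return 1 + step_count(node[1]) + step_count(node[2])

# (2 * 3) + (x * (1 + 1)): two subtrees fold to constants.
tree = ("+", ("*", 2, 3), ("*", "x", ("+", 1, 1)))
opt = fold(tree)                           # ('+', 6, ('*', 'x', 2))
print(step_count(tree), step_count(opt))   # 4 2
```

Because the folding is applied to the source tree itself, the reduced
operator count gives a step-time estimate that reflects the optimized
compiled code rather than the unoptimized source.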

      The representation of...