
Parallel Handling of Logically Serial Task

IP.com Disclosure Number: IPCOM000101931D
Original Publication Date: 1990-Sep-01
Included in the Prior Art Database: 2005-Mar-17
Document File: 3 page(s) / 99K

Publishing Venue

IBM

Related People

Holm, WG: AUTHOR [+4]

Abstract

Serialized processing in a task can create a bottleneck in a set of tasks if it is not forwarding its completed work fast enough to keep the following task busy. The technique described here allows a task to work using parallelism to speed its throughput, while appearing to work serially to the rest of an order-dependent set of tasks. Multiple subtasks and a "serialization list" are used for this purpose.

This text was extracted from an ASCII text file.
This is the abbreviated version, containing approximately 52% of the total text.

Parallel Handling of Logically Serial Task

       Serialized processing in a task can create a bottleneck
in a set of tasks if it is not forwarding its completed work fast
enough to keep the following task busy.  The technique described here
allows a task to work using parallelism to speed its throughput,
while appearing to work serially to the rest of an order-dependent
set of tasks.  Multiple subtasks and a "serialization list" are used
for this purpose.

      Within a set of tasks, each task is performing a step leading
to the completion of a unit of work.  Multiple units of work are
sequentially processed through this set of tasks to perform a
function.  Consider the set of tasks shown in Fig. 1.

      Task A receives the input for a unit of work, performs its
share of the processing, and passes a request for work to Task B.
Task B receives the request from Task A, processes it, and passes its
output in the form of a request to Task C.  Task C receives a request
for work from Task B, processes it, and completes a unit of work.  If
Task B encounters a bottleneck, it prevents Task C from working at
its maximum capability.  This slows the completion of each unit of
work and, in turn, the entire function.
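
      The flow of Fig. 1 can be sketched in code.  The fragment below
is a minimal illustration assuming Python threads and queues; the task
bodies, the number of units of work, and the delay in Task B are
illustrative assumptions, not part of the original disclosure.

import queue
import threading
import time

a_to_b = queue.Queue()   # requests passed from Task A to Task B
b_to_c = queue.Queue()   # requests passed from Task B to Task C
completed = []

def task_a():
    for unit in range(5):
        a_to_b.put("unit-%d" % unit)   # Task A's share of the processing
    a_to_b.put(None)                   # no more units of work

def task_b():
    while (request := a_to_b.get()) is not None:
        time.sleep(0.05)               # serialized processing: the bottleneck
        b_to_c.put(request)
    b_to_c.put(None)

def task_c():
    while (request := b_to_c.get()) is not None:
        completed.append(request)      # a unit of work is complete

threads = [threading.Thread(target=t) for t in (task_a, task_b, task_c)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(completed)   # Task C finishes no faster than Task B forwards work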

      To eliminate the bottleneck in Task B, multiple subtasks
(Subtask B(1), Subtask B(2), ..., Subtask B(n)) can be created to
work on requests in parallel.  These subtasks are identical in
function, performing the original function of Task B.  They share a
common input queue and wait on this common input queue when they are
not busy.  Task B becomes a "controlling" task which receives
requests, sends them to subtasks, receives responses from subtasks,
and sends on completed requests to Task C.  The design then becomes
the set of tasks shown in Fig. 2.
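
      As an illustration of this design, the sketch below (again
assuming Python threads and queues) shows Task B acting as a
controlling task over identical subtasks that share one input queue;
the subtask count, request count, and message strings are
illustrative assumptions.

import queue
import threading

N_SUBTASKS = 3
N_REQUESTS = 5
subtask_input = queue.Queue()    # common input queue shared by the subtasks
subtask_output = queue.Queue()   # responses returned to controlling Task B
to_task_c = queue.Queue()        # completed requests forwarded to Task C

def subtask_b(worker_id):
    # Identical subtasks: each performs the original function of Task B
    # and waits on the common input queue when it is not busy.
    while (request := subtask_input.get()) is not None:
        subtask_output.put("B(%d) processed %s" % (worker_id, request))

def controlling_task_b():
    # Receives requests, sends them to the subtasks, receives responses,
    # and sends completed requests on to Task C.
    for i in range(N_REQUESTS):
        subtask_input.put("request-%d" % i)
    for _ in range(N_REQUESTS):
        to_task_c.put(subtask_output.get())   # arrival order is NOT preserved
    for _ in range(N_SUBTASKS):
        subtask_input.put(None)               # stop the subtasks

workers = [threading.Thread(target=subtask_b, args=(w + 1,))
           for w in range(N_SUBTASKS)]
controller = threading.Thread(target=controlling_task_b)
for t in workers + [controller]:
    t.start()
for t in workers + [controller]:
    t.join()
while not to_task_c.empty():
    print(to_task_c.get())   # responses can appear out of request order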

      However, this design does not guarantee that Task B will output
requests in the same order in which they arrived.  For example, if
Subtask B(1) receives Request 1 and Subtask B(2) receives Request 2,
Subtask B(2) could finish its work before Subtask B(1).

      A "serialization" list, under the control of Task B, was
invented so that Task B can appear to work serially to the rest of
the set of tasks.  Its function is to maintain the order of requests
using the following features (a code sketch follows the list):
1.  New requests are always added to the bottom of the list and are
marked incomplete.  The controlling task then sends each request to
the common subtask input queue.
2.  Requests can only be forward...
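
      The remaining features are cut off in this abbreviated extract.
The sketch below therefore assumes the completion rule implied above:
an entry is marked complete when its subtask response arrives, and
entries are forwarded to Task C only from the top of the list, so
output order matches arrival order.  The data structures and names
are illustrative assumptions.

import collections
import queue
import random
import threading
import time

N_SUBTASKS = 3
subtask_input = queue.Queue()    # common input queue shared by the subtasks
subtask_output = queue.Queue()   # responses returned to controlling Task B
to_task_c = queue.Queue()        # completed requests forwarded to Task C

def subtask_b(worker_id):
    while (item := subtask_input.get()) is not None:
        seq, request = item
        time.sleep(random.random() / 100)     # subtasks finish in any order
        subtask_output.put((seq, "B(%d) processed %s" % (worker_id, request)))

def controlling_task_b(requests):
    serialization_list = collections.OrderedDict()   # the serialization list
    # Feature 1: add each new request to the bottom of the list, marked
    # incomplete, then send it to the common subtask input queue.
    for seq, request in enumerate(requests):
        serialization_list[seq] = None                # None == incomplete
        subtask_input.put((seq, request))
    # Assumed completion rule: mark an entry complete when its response
    # arrives; forward entries only from the top of the list, in order.
    remaining = len(requests)
    while remaining:
        seq, response = subtask_output.get()
        serialization_list[seq] = response            # mark complete
        while serialization_list and \
                next(iter(serialization_list.values())) is not None:
            _, done = serialization_list.popitem(last=False)
            to_task_c.put(done)                       # forwarded in order
            remaining -= 1
    for _ in range(N_SUBTASKS):
        subtask_input.put(None)                       # stop the subtasks

workers = [threading.Thread(target=subtask_b, args=(w + 1,))
           for w in range(N_SUBTASKS)]
controller = threading.Thread(target=controlling_task_b,
                              args=(["request-%d" % i for i in range(5)],))
for t in workers + [controller]:
    t.start()
for t in workers + [controller]:
    t.join()
while not to_task_c.empty():
    print(to_task_c.get())   # printed in the original request order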