
Constrained Parallelism to Control the Effects of Serial Resource Contention

IP.com Disclosure Number: IPCOM000079801D
Original Publication Date: 1973-Sep-01
Included in the Prior Art Database: 2005-Feb-26
Document File: 6 page(s) / 90K

Publishing Venue

IBM

Related People

Edel, TR: AUTHOR [+2]

Abstract

Concurrent requestors of the same function may execute serially or with some degree of controlled parallelism. The question remains: when, and to what extent, should a function execute requestors in parallel? The breakeven point as to whether a function should be programmed serially or in parallel is illustrated by Fig. 1.



Constrained Parallelism to Control the Effects of Serial Resource Contention


The average state of User X's execution is halfway through Serial Function A when User Y wants the same function. If the execution time through Serial Function A is 2/3 as long as through Parallel Function A (due to the overhead of the gate setting that Parallel Function A needs for accessing serial resources), then the time for User Y to execute Serial Function A is 1/3 (remaining for X) + 2/3 (total time of Serial Function A), or 1.

The breakeven point, in order to maintain Y's relative priority in the system, is when Serial Function A is 2/3 as long as Parallel Function A. Any time less than 2/3 clearly indicates that Function A should be serial when the number of concurrent requestors is two. Note that the total system load is also 1/3 greater for parallel execution, since each requestor then requires a total time of 1 rather than 2/3. The obvious point demonstrated is that parallelism, of itself, is not always desirable.
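The breakeven arithmetic above can be restated as a minimal sketch. The function name and the normalization of Parallel Function A to one time unit are illustrative assumptions, not part of the original disclosure:

from fractions import Fraction

def serial_completion_time_for_y(serial_ratio):
    # Time for User Y to finish Serial Function A when User X is, on
    # average, halfway through it at the moment Y makes the request.
    remaining_for_x = serial_ratio / 2       # X's unfinished half
    return remaining_for_x + serial_ratio    # wait for X, then run Y

# At the breakeven ratio of 2/3, Y's time through Serial Function A
# equals the normalized time of 1 through Parallel Function A.
print(serial_completion_time_for_y(Fraction(2, 3)))   # prints 1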

What is desirable is the intelligent restriction of parallelism.

Thus, the number of concurrent requestors for a particular function, which is only known during execution, would dictate the process creation criteria. Fig. 2 illustrates this procedure.

In this way, front-end or transparent monitoring intelligence for a function would dictate the degree of parallelism for that function. Conversely, that intelligence might want to curtail the degree of parallelism (existing, or requested to exist) due to the contention overload generated by processes competing for the same function.
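The disclosure gives no program text for this front-end monitor; the following is a minimal modern sketch, assuming a counting semaphore as the gate. The class name FunctionMonitor and the parameter max_parallel are illustrative assumptions:

import threading

class FunctionMonitor:
    # Front-end gate that restricts how many requestors of one
    # function may execute in parallel at any moment.
    def __init__(self, max_parallel=1):
        self._gate = threading.Semaphore(max_parallel)

    def run(self, function, *args):
        # Requestors beyond the limit wait here, i.e. they are
        # serialized behind the requestors already executing.
        with self._gate:
            return function(*args)

# max_parallel=1 forces Function A to run serially; a larger value
# permits a controlled degree of parallelism.
monitor = FunctionMonitor(max_parallel=2)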

The following algorithm describes how parallelism should be controlled for each particular function deemed critical to the operation of the system.
1) Like any intelligence that attempts to predict the future, data describing past activity is maintained by the function's monitor controller, which calculates the time through a function when:
. one asynchronous task exists
. two asynchronous tasks exist
. three, four, five, etc. asynchronous tasks exist.
2) The following equation is used to maintain a matrix (Fig. 3) for the execution time through a function that consists of two, three, four, or more concurrent tasks:

Actual Time = Useful Asynchronous Task Time + Task Overhead + Task WAIT Time due to SRRs (Serial Reusable Resources)
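As a minimal sketch of how the monitor controller might keep such a matrix, assuming the three components of the equation are measured separately for each completion (the class name FunctionTimingMatrix and its method names are illustrative, not part of the disclosure):

from collections import defaultdict

class FunctionTimingMatrix:
    # Average observed time through one function, keyed by the number
    # of asynchronous tasks active when the measurement was taken.
    def __init__(self):
        self._totals = defaultdict(float)   # n_tasks -> summed actual time
        self._counts = defaultdict(int)     # n_tasks -> number of samples

    def record(self, n_tasks, useful_time, task_overhead, srr_wait_time):
        # Actual Time = Useful Asynchronous Task Time
        #               + Task Overhead + Task WAIT Time due to SRRs
        actual_time = useful_time + task_overhead + srr_wait_time
        self._totals[n_tasks] += actual_time
        self._counts[n_tasks] += 1

    def average_time(self, n_tasks):
        # Undefined until at least one sample exists for n_tasks.
        if self._counts[n_tasks] == 0:
            return None
        return self._totals[n_tasks] / self._counts[n_tasks]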

[Note that Task WAIT Time due to SRRs (both internal to the function and external intersects with other functions) and system overhead (i.e., promotion of task create, destroy, switch, etc.) will increase as the number of asynchronous tasks for that function increases. Long SRR intersects, like I/O, will especially affect the...