
Task Scheduling Algorithm for a Teleprocessing Communications Controller

IP.com Disclosure Number: IPCOM000081018D
Original Publication Date: 1974-Mar-01
Included in the Prior Art Database: 2005-Feb-27
Document File: 4 page(s) / 19K

Publishing Venue

IBM

Related People

Duffie, CA: AUTHOR

Abstract

This description presents a task scheduling algorithm such that, when it is integrated with the storage management algorithm described in the publication entitled "Dynamic Storage Management for a Teleprocessing Communications Controller", IBM Technical Disclosure Bulletin, Vol. 16, No. 10, March 1974, pages 3345 to 3348, two benefits are realized. The first is that maximum performance and minimum response times are achieved, and the second is that the probability of a deadlock (deadly embrace) situation is minimized.

Task Scheduling Algorithm for a Teleprocessing Communications Controller

Assume a computer architecture that supports five prioritized levels of interrupts, where the four highest levels serve I/O, external, supervisor call (SVC), timer, hardware check, and initial program load (IPL) interrupts. The fifth level always runs enabled and serves the computational processes that drive the telecommunications network. The described scheduling algorithm controls the order in which the computational processes at the fifth interrupt level are invoked for execution.
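As an illustration only, this level structure might be expressed roughly as follows in C; the identifiers (irq_level, next_ready, level5_dispatcher, and so on) are hypothetical and not part of the disclosure. The sketch shows only that the lowest level, whenever the four interrupt-serving levels are quiet, lets the scheduling algorithm choose which computational process runs next.

    #include <stddef.h>

    /* Hypothetical sketch of the five prioritized levels described above. */
    enum irq_level {
        LEVEL_1 = 1,   /* levels 1-4 (highest priority) serve I/O, external,  */
        LEVEL_2,       /* SVC, timer, hardware check, and IPL interrupts; the */
        LEVEL_3,       /* exact assignment per level is not given here        */
        LEVEL_4,
        LEVEL_5        /* lowest level: always enabled, runs the tasks        */
    };

    struct task;                          /* a computational process           */
    extern struct task *next_ready(void); /* assumed scheduler entry point     */
    extern void run(struct task *t);      /* execute one task until it yields  */

    /* Level-5 dispatch loop: the scheduling algorithm decides, via
       next_ready(), the order in which the computational tasks are invoked. */
    void level5_dispatcher(void)
    {
        for (;;) {
            struct task *t = next_ready();
            if (t != NULL)
                run(t);
        }
    }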

Further assume a multithread programming architecture, where one to n processes must execute to accomplish a given function on a given resource, where that resource may be a line, group, terminal, or component. Within such an architecture, two basic problems must be solved by the scheduling algorithm: parallel processing and contention resolution. The computational processes will dynamically vary in their relationships to each other. In some cases, a hold-up condition in one process may cause a subsequent hold-up condition in all processes associated with the same resource.

In other cases, processes can continue to execute in parallel, and a hold-up condition in one may have no effect on related processes. Contention for machine cycles, programs, storage, etc., must be resolved when requests for more than one teleprocessing resource are being operated upon. Contention resolution must take into consideration such factors as line status, line speed, and terminal/application priorities.
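A minimal sketch of contention resolution, assuming a simple weighting of the factors named above (line status, line speed, terminal/application priority), might look like the following; the structure fields and the ranking rule are illustrative assumptions, not the disclosed algorithm.

    /* Illustrative only: rank competing requests by the factors named above.
       Field names and weights are assumptions made for this sketch. */
    struct pending_request {
        int line_active;        /* line status: nonzero if the line is usable  */
        int line_speed_bps;     /* line speed in bits per second               */
        int terminal_priority;  /* terminal/application priority, higher wins  */
    };

    static int contention_rank(const struct pending_request *r)
    {
        if (!r->line_active)
            return -1;                    /* requests on inactive lines lose    */
        /* Higher-priority terminals win; line speed breaks ties. */
        return r->terminal_priority * 1000 + r->line_speed_bps / 1000;
    }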

A computational process, hereafter referred to as a task, is defined as the union of a queue of requests (or stimuli) to be operated upon, and the code that operates upon that queue. Code is considered reentrant when it is associated with more than one input queue. The dispatching unit is the queue control block, the control block that binds the code to the input queue.
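Expressed as a data structure, the dispatching unit might look roughly like the following; the field names and layout are assumptions, since the disclosure gives none, but the idea is the one stated above: the queue control block binds an input queue of requests to the code that operates on it, and reentrant code is simply a routine referenced by more than one such block.

    /* Sketch of a queue control block (QCB): binds the input queue of
       requests (stimuli) to the code that operates upon it. */
    struct request {
        struct request *next;               /* next stimulus on the input queue */
        /* request-specific data would follow */
    };

    struct queue_control_block {
        struct request *queue_head;         /* input queue of requests          */
        struct request *queue_tail;
        void (*code)(struct queue_control_block *self);  /* processing routine  */
        struct queue_control_block *next_dispatch;       /* link in a system
                                                            dispatching queue   */
    };

    /* Reentrant code: the same routine may be installed as the `code` member
       of several QCBs, each with its own input queue. */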

A task can exist in one of five logical states: active, pending, wait, ready, and limbo. The active state is where the task is in execution and in control of the machine's resources. The pending state is where the task's queue control block is in one of the system dispatching queues, waiting to be dispatched. The wait state can be voluntary or involuntary; it is where task processing has been temporarily suspended, with the processing environment (i.e., registers, instruction address, condition...
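For reference, the five logical states listed above could be written out as follows; the identifier names are assumptions made for this sketch.

    /* The five logical task states named in the text. */
    enum task_state {
        TASK_ACTIVE,    /* in execution, in control of the machine's resources */
        TASK_PENDING,   /* queue control block on a system dispatching queue,
                           waiting to be dispatched                             */
        TASK_WAIT,      /* processing temporarily suspended (voluntary or
                           involuntary), environment saved                      */
        TASK_READY,
        TASK_LIMBO
    };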