Browse Prior Art Database

Multiple Front End Communication Processor and Multiple Host System

IP.com Disclosure Number: IPCOM000085889D
Original Publication Date: 1976-Jun-01
Included in the Prior Art Database: 2005-Mar-03
Document File: 3 page(s) / 29K

Publishing Venue

IBM

Related People

Schnick, T: AUTHOR

Abstract

The multiple front end communication processor and multiple host environment has historically posed, and continues to pose, one of the most significant logical design problems. Described is a solution to this problem: an approach that achieves the objectives of availability and modularity and, in addition, distributes the processing load and line handling load more evenly among several front end communication processors and hosts.

Essentially, the solution to these problems is the following. Each front end communication processor (or controller) and each host has its own memory and processing power but, in addition, each has access to a shared queue container, which holds the queues of events to be processed, as in Fig. 1. The queue container memory may also hold the data to be forwarded.

The queuing structure is flexible according to the needs of a particular configuration. One embodiment of the queuing structure could include one queue for each host and each front end processor, plus one queue called the front end processor queue or network queue, the elements of which are received from the hosts and are processed through the front end processors as in Fig. 2.
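The embodiment above can be sketched in code. Python is used purely for illustration, and the class and attribute names are hypothetical; the disclosure describes the structure only at the level of Figs. 1 and 2:

```python
from queue import Queue


class QueueContainer:
    """Shared memory holding one event queue per host and per front end
    processor (FEP), plus the single network queue fed by the hosts."""

    def __init__(self, host_ids, fep_ids):
        self.host_queues = {h: Queue() for h in host_ids}  # one queue per host
        self.fep_queues = {f: Queue() for f in fep_ids}    # one queue per FEP
        # The shared "front end processor queue" (network queue):
        self.network_queue = Queue()
```

In the actual system this container would live in memory addressable by every host and front end processor; a thread-safe in-process queue merely stands in for that shared store here.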

The hosts need not decide which front end processor can process a request; that is, the hosts perform no routing function. They merely place events on the single front end processor queue. Each front end processor examines that queue. If the examining front end processor can process a queue element, it does so; if not, it places the element on the queue of a front end processor that can handle it. If more than one front end processor offers access to a particular destination, whichever front end processor reaches the front end processor queue first handles the request. This results in load leveling of the front end processors and lines, and in improved availability.
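The examination step just described can be sketched as follows. The `reachable` map (which destinations each front end processor serves) and the event's `dest` field are illustrative assumptions, not details given in the disclosure:

```python
from collections import namedtuple
from queue import Queue

# A minimal event: destination plus payload (hypothetical fields).
Event = namedtuple("Event", ["dest", "data"])


def examine_network_queue(fep_id, network_queue, fep_queues, reachable, process):
    """Take one event off the shared network queue; handle it here if this
    FEP reaches the destination, else pass it to a FEP that does."""
    event = network_queue.get()       # whichever FEP calls get() first wins
    if event.dest in reachable[fep_id]:
        process(event)                # this FEP offers access to the destination
    else:
        # Forward to the queue of a front end processor that can handle it.
        for other, dests in reachable.items():
            if event.dest in dests:
                fep_queues[other].put(event)
                break
```

Because every front end processor pulls from the same queue, whichever one gets there first takes the element, which is the load-leveling behavior described above.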

Requests destined for a host are processed by a front end processor and placed on that host's queue. If more than one host can process a request, a queue exists that is examined by all such hosts; whichever host reaches the queue first handles the request. If one of these hosts fails, the others carry on, potentially undisturbed. This results in host load leveling and availability.
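The host side can be sketched the same way: several hosts race on one shared queue, and a thread-safe queue guarantees each request is claimed by exactly one of them. The host identifiers and timeout are illustrative assumptions:

```python
import threading
from queue import Empty, Queue


def host_worker(host_id, shared_queue, handled):
    """One host repeatedly claims requests until the shared queue drains."""
    while True:
        try:
            req = shared_queue.get(timeout=0.1)
        except Empty:
            return                       # no more work; this host idles
        handled.append((host_id, req))   # first host to get() wins the request


shared = Queue()
for r in range(6):
    shared.put(r)                        # requests placed by front end processors

handled = []
hosts = [threading.Thread(target=host_worker, args=(h, shared, handled))
         for h in ("H1", "H2")]
for t in hosts:
    t.start()
for t in hosts:
    t.join()
# Every request is handled by exactly one host; if one host thread had
# died, the other would still drain the queue undisturbed.
```

This mirrors the availability claim: removing one consumer of the shared queue slows service but does not stop it.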

In a message switching environment, requests received by one front end processor may be destined for a Network Addressable Unit (NAU) accessible via...