Browse Prior Art Database

A Flexible Distributed Testbed for Real-Time Applications

IP.com Disclosure Number: IPCOM000131537D
Original Publication Date: 1982-Oct-01
Included in the Prior Art Database: 2005-Nov-11
Document File: 17 page(s) / 60K

Publishing Venue

Software Patent Institute

Related People

William C. McDonald: AUTHOR [+4]

Abstract

A reconfigurable multi-microcomputer system with 64 nodes and 64 shared memories is being developed incrementally. Numerous experiments have already been run on the existing nucleus network. The complexity and sophistication of the real-time data processing problems encountered in computer-based weapon systems severely tax all aspects of advanced data processing technology. Typically, these systems impose requirements in reliability, availability, cost, performance, and growth that stress today's data processing technology.1 Furthermore, data processing solutions are required for a wide range of system concepts and operational environments.2 Although proven conventional techniques such as pipelining and cache memory have significantly improved computer performance, and advances in circuit technology have increased processor capacity, current and projected needs still represent significant challenges.

This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 6% of the total text.

Page 1 of 17

THIS DOCUMENT IS AN APPROXIMATE REPRESENTATION OF THE ORIGINAL.

This record contains textual material that is copyright © 1982 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society http://www.computer.org/ (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

A Flexible Distributed Testbed for Real-Time Applications

William C. McDonald, System Development Corporation; R. Wayne Smith, TRW, Inc.

A reconfigurable multi-microcomputer system with 64 nodes and 64 shared memories is being developed incrementally. Numerous experiments have already been run on the existing nucleus network.

The complexity and sophistication of the real-time data processing problems encountered in computer-based weapon systems severely tax all aspects of advanced data processing technology. Typically, these systems impose requirements in reliability, availability, cost, performance, and growth that stress today's data processing technology.1 Furthermore, data processing solutions are required for a wide range of system concepts and operational environments.2 Although proven conventional techniques such as pipelining and cache memory have significantly improved computer performance, and advances in circuit technology have increased processor capacity, current and projected needs still represent significant challenges.

In real-time systems such as those for ballistic missile defense, the data processing problem is dominated by the necessity of meeting port-to-port response times while achieving total system throughput requirements. Response times may be as low as a few milliseconds with throughput requirements ranging up to tens of millions of instructions per second. Frequently, the data processing system must also remain dormant for months, activate on minutes' notice, and operate unattended with ultrareliability for periods ranging from a few hours to several days.
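The tension between these two figures can be made concrete with a simple instruction budget: a deadline in seconds times an aggregate throughput in instructions per second bounds how much work the whole network can do before the response is due. The sketch below uses hypothetical numbers from the ranges the article cites; the function name is ours, not the testbed's.

```python
# Illustrative budget: how many instructions fit inside a port-to-port
# deadline at a given aggregate throughput. The specific numbers are
# hypothetical examples within the ranges the article mentions.

def instruction_budget(deadline_s: float, throughput_ips: float) -> int:
    """Instructions executable network-wide before the deadline expires."""
    return int(deadline_s * throughput_ips)

# A 5 ms response deadline on a system delivering 10 MIPS in aggregate:
budget = instruction_budget(5e-3, 10e6)
print(budget)  # 50000 instructions, shared across all cooperating nodes
```

Note that this budget covers everything on the port-to-port path, including operating-system and communication overhead, which is why the per-technique overheads discussed below matter so much.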

Distributed computing promises to satisfy the requirements of these systems by utilizing moderately priced, contemporary hardware in networks. To achieve this promise, research and development is being pursued in all aspects of real-time distributed computing technology.3,4 Many techniques have been proposed for achieving reliability through redundancy, detecting and recovering from errors, distributing and managing shared data, communicating reliably between processes, allocating and scheduling resources in the presence of failures and overloads, and achieving high throughput through architectural innovation. These techniques must be proven experimentally before they can be used in a real-time system. Individually and collectively they impose overheads that create problems in satisfying real-time requirements. As a result, solutions to the problems of real-time distributed computing must be proven in a realistic environment for the...
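The first technique in that list, reliability through redundancy, also illustrates the overhead problem the paragraph closes on. A minimal sketch of triple modular redundancy with majority voting follows; it is a generic illustration of the idea under our own naming, not the testbed's mechanism, and the replicas here are plain functions standing in for redundant nodes.

```python
from collections import Counter
from typing import Callable, List

def tmr_vote(replicas: List[Callable[[int], int]], x: int) -> int:
    """Run the same computation on all replicas and return the majority
    answer, masking a single faulty replica at the cost of redundant work."""
    results = [f(x) for f in replicas]  # 3x compute overhead for triple redundancy
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return value

good = lambda x: x * x
faulty = lambda x: x * x + 1  # simulated single-node fault
print(tmr_vote([good, good, faulty], 4))  # 16: the fault is masked
```

The masking comes at a threefold compute cost plus the voting step itself, which is exactly the kind of overhead that must be measured experimentally against real-time deadlines rather than assumed acceptable.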