Browse Prior Art Database

Data Base Machines with Large Content-Addressable Blocks and Structural Information Processors Disclosure Number: IPCOM000131380D
Original Publication Date: 1979-Mar-01
Included in the Prior Art Database: 2005-Nov-10
Document File: 18 page(s) / 63K

Publishing Venue

Software Patent Institute

Related People

Douglas S. Kerr: AUTHOR [+3]



This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 6% of the total text.

Page 1 of 18


This record contains textual material that is copyright © 1979 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Contact the IEEE Computer Society (714-821-8380) for copies of the complete work that was the source of this textual material and for all use beyond that as a record from the SPI Database.

Data Base Machines with Large Content-Addressable Blocks and Structural Information Processors

Douglas S. Kerr

Ohio State University


(Image Omitted: The advent of very large data bases is prompting DBMS designers to take advantage of past advances in specialized processor and memory hardware technologies.)

The design of a hardware backend machine to replace conventional data base management systems has been influenced by the characteristics of conventional systems, changes in processor and memory technologies, and previous work on data base machines.

Conventional data base management systems can be characterized by the size of their data bases, the amount of software they require, and the underlying hardware that executes that software. The hardware consists of general-purpose computers and secondary storage devices such as moving-head disks. The software is very large, since it must maintain the large data bases; create and maintain data base directories and indices; include access methods for locating the directories, retrieving the indices, computing the location of the data relevant to a user request, and locating that data; and, finally, transfer the data into main memory to find the answer.
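The multi-step access path described above can be sketched in miniature. This is an illustrative model only, not code from the article: the directory, index, and page structures and their names are assumptions chosen to mirror the steps (locate directory, retrieve index, compute data locations, transfer data).

```python
# Hypothetical in-memory model of a conventional DBMS access path.
directory = {"employees": "idx_employees"}           # catalog: table -> index name
indexes = {"idx_employees": {"dept=sales": [0, 2]}}  # index: predicate -> page numbers
pages = [                                            # "secondary storage" as pages of records
    [{"name": "Ann", "dept": "sales"}],
    [{"name": "Bob", "dept": "hr"}],
    [{"name": "Cid", "dept": "sales"}],
]

def lookup(table, predicate):
    index_name = directory[table]              # 1. locate the directory entry
    page_ids = indexes[index_name][predicate]  # 2. retrieve the index
    records = []
    for pid in page_ids:                       # 3. compute the relevant locations
        records.extend(pages[pid])             # 4. transfer the data into main memory
    return records

print(lookup("employees", "dept=sales"))
```

Every step here is a software module in a real system, which is why the article characterizes the DBMS software as "very large."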

To find the few indices that indicate the locations of the relevant data, many indices must be brought from secondary storage into main memory. Then, to find the few records that satisfy a user request for information, many records must be transferred into main memory and searched individually. Thus, a conventional DBMS (Figure 1) is often CPU-bound and short of main-memory cycles. Furthermore, I/O traffic between secondary storage and main memory is heavy; the computation and I/O traffic result in execution of a large number of software access modules and in frequent, high-volume movement of data over the channel.
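The imbalance described above (many records moved to find few answers) can be made concrete with a small sketch. The record layout and selectivity here are assumptions for illustration, not figures from the article:

```python
# Illustrative sketch: a selection query on a conventional system moves
# every candidate record over the channel into main memory, then filters
# it with CPU cycles, even though only a small fraction qualifies.
records = [{"id": i, "dept": "sales" if i % 100 == 0 else "other"}
           for i in range(10_000)]

transferred = 0
matches = []
for rec in records:                 # each record crosses the channel
    transferred += 1
    if rec["dept"] == "sales":      # CPU-side filtering in main memory
        matches.append(rec)

print(transferred, len(matches))    # 10000 records moved to find 100
```

The 100:1 ratio of records transferred to records returned is exactly the kind of traffic that motivates moving the search closer to the data.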

One way to free the CPU and memory cycles and relieve the channel traffic of the host computer is to off-load the data base and much of the DBMS software to a backend machine (see Figure 2). The backend machine may be a conventional minicomputer on which a "software backend" is implemented. XDMS [1] showed the practicality of such an approach, and several other software implementations have followed [2-5]. One of these, the Datacomputer [6], is available on the Arpanet. At least one commercial software backend may be available soon [7].
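The host/backend split can be sketched as two cooperating components. The class names and interface below are assumptions used only to illustrate the division of labor; the point is that the query, not the raw data, crosses the host-backend boundary:

```python
# Hedged sketch of the backend idea: the host ships the query to the
# backend machine, which searches next to the data and returns only the
# qualifying records, so the channel carries answers rather than raw data.
class Backend:
    def __init__(self, records):
        self.records = records           # the data base lives with the backend

    def execute(self, predicate):
        # filtering happens on the backend's own processor and memory
        return [r for r in self.records if predicate(r)]

class Host:
    def __init__(self, backend):
        self.backend = backend

    def query(self, predicate):
        # only the result set crosses the host/backend channel
        return self.backend.execute(predicate)

backend = Backend([{"id": i, "dept": "sales" if i % 100 == 0 else "other"}
                   for i in range(10_000)])
host = Host(backend)
print(len(host.query(lambda r: r["dept"] == "sales")))  # 100
```

Under this split, the host's CPU, main memory, and channel are freed for its own workload, which is the motivation the article gives for the backend approach.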

The overall cost of a computer system using a software backend may be reasonable because a minicomputer is less expensive than an upgrade of the host computer. Ho...