Hierarchical Memory

IP.com Disclosure Number: IPCOM000077402D
Original Publication Date: 1972-Jul-01
Included in the Prior Art Database: 2005-Feb-25
Document File: 3 page(s) / 57K

Publishing Venue

IBM

Related People

Beausoleil, WF: AUTHOR

Abstract

A hierarchical storage system is shown in Fig. 1. The storage system has several levels of hierarchy which substantially increase performance, and reduce overrun by the channels. The system comprises a backing store 10 operating through a transient buffer 12 to a high-speed buffer 14. The high-speed buffer 14 is partitioned into two dedicated areas B2 and B3. Area B3 is dedicated to I/O channels 16, 18 and buffer area B2 is dedicated to CPU 20. A further high-speed buffer B1 is also provided for the CPU.

This is the abbreviated version, containing approximately 49% of the total text.


It may not be justified for a small I/O processor or channel to operate out of the high-speed first-level buffer B1, which communicates with the main CPU 20. A more efficient attachment is to a dedicated buffer at a higher, and therefore slower, level in the hierarchy. This significantly reduces interference with the main processor at the first level and, because the buffer is dedicated, yields performance gains for the I/O in the form of minimum overrun.

A smaller page size for a given buffer size is required to achieve optimum cost/performance, compared to the page size that would normally be provided at the second level of the hierarchy. The two page sizes are illustrated by the partitioning of buffer 14 into sections B3 and B2; the page size in B3 is smaller than that in B2.
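The partitioning described above can be sketched as follows. This is a minimal illustrative model, not the disclosed hardware: the class name, the directory-as-dictionary representation, and all sizes are assumptions chosen to show how one physical buffer can hold two areas with different page boundaries.

```python
# Hypothetical model of the partitioned second-level buffer of Fig. 1:
# one buffer split into area B2 (dedicated to the CPU, larger pages)
# and area B3 (dedicated to the I/O channels, smaller pages).
class PartitionedBuffer:
    def __init__(self, b2_pages, b2_page_size, b3_pages, b3_page_size):
        # Each area keeps its own capacity, page size, and page
        # directory (page address -> page contents).
        self.areas = {
            "B2": {"page_size": b2_page_size, "capacity": b2_pages, "pages": {}},
            "B3": {"page_size": b3_page_size, "capacity": b3_pages, "pages": {}},
        }

    def page_address(self, area, addr):
        # The same storage address maps to a different page boundary
        # in each area, because the page sizes differ.
        return addr - (addr % self.areas[area]["page_size"])

# Example sizes (illustrative only).
buf = PartitionedBuffer(b2_pages=8, b2_page_size=4096, b3_pages=32, b3_page_size=256)
```

With these example sizes, storage address 5000 falls in the page starting at 4096 for area B2 but in the page starting at 4864 for area B3, which is exactly why each partition needs its own directory.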

A flow diagram (Fig. 2) illustrates the controls necessary to handle two different page sizes existing at the same time in the same buffer level. This flow chart applies either to physically separate buffers or to a single buffer that has been partitioned, as shown in Fig. 1. Controls can also be provided that allow the page size for a partition to be changed dynamically while the system is operating, providing a means of optimizing performance for a particular application.
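A dynamic page-size change of the kind suggested above could look like the following sketch. The disclosure does not describe the mechanism, so the invalidation policy and data layout here are assumptions: changing the boundary invalidates the partition's existing entries (a real design would first write modified pages back to the backing store) and the directory is then refilled at the new page size.

```python
# Hypothetical control for changing a partition's page size while the
# system runs. 'area' is a partition descriptor as a dict:
# {"page_size": int, "pages": {page_address: contents}}.
def set_page_size(area, new_page_size):
    # Old directory entries are aligned to the old page boundary and
    # would be meaningless under the new one, so invalidate them.
    # (Assumption: write-back of modified pages is handled elsewhere.)
    area["pages"].clear()
    area["page_size"] = new_page_size

# Example: retune the I/O partition from 256- to 512-byte pages.
b3_area = {"page_size": 256, "pages": {4864: "old page"}}
set_page_size(b3_area, 512)
```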

Referring to Fig. 2, a flow chart of a typical operation is shown. An arrow within a block means "transfer to". A request 30 is tested in decision block 32 to determine whether it was received from the channel or the CPU. If yes, the request is from a channel, and the I/O buffer B3 is tested at decision block 34 to see whether the requested data is stored in the I/O buffer. If it is, the word in B3 is transferred to the channel, and the directory D3 (not shown) associated with buffer B3 is updated at logic block 36.

If in decision block 34 the requested data is not located in buffer B3, the no route is taken and buffer B2 is tested at decision block 38. If the data is in buffer B2, the B2 word is transferred to the channel in block 40; however, the chronology bits associated with B2 are not updated. If the data is not in B2, the no route is taken and buffer B1 is tested in decision block 42. If the data is in B1, the B1 word is transferred to the channel, but the chronology bits are not updated, in logic block 44. If the data word is...
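The channel-request path of Fig. 2 (blocks 34 through 44) can be sketched as a lookup cascade. The function name and the dictionary/list representation of the buffers and chronology bits are assumptions for illustration; the key point from the disclosure is that only a hit in the dedicated I/O buffer B3 updates chronology (replacement) state, so channel traffic never disturbs the CPU's locality in B2 or B1. The extracted text breaks off before describing the full-miss case, so that path is left as a plain miss here.

```python
# Sketch of the Fig. 2 channel-request lookup cascade: B3, then B2,
# then B1. Buffers are modeled as dicts (address -> word); the
# chronology list stands in for directory D3's chronology (LRU) bits.
def channel_fetch(addr, b3, b2, b1, chronology):
    if addr in b3:
        chronology.append(addr)   # hit in B3: update directory D3's chronology bits
        return b3[addr]
    if addr in b2:
        return b2[addr]           # hit in B2: transfer, but do NOT update chronology
    if addr in b1:
        return b1[addr]           # hit in B1: likewise, chronology untouched
    return None                   # full miss: not covered by the extracted text

# Illustrative contents.
b3, b2, b1 = {10: "w3"}, {20: "w2"}, {30: "w1"}
chron = []
```

For example, fetching address 10 returns "w3" and records 10 in the chronology, while fetching address 20 returns "w2" and leaves the chronology unchanged.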