
MP Shared Processor Memory

IP.com Disclosure Number: IPCOM000045373D
Original Publication Date: 1983-Mar-01
Included in the Prior Art Database: 2005-Feb-06
Document File: 4 page(s) / 44K

Related People

Fletcher, RP: AUTHOR [+3]


The sharing of data in a multiprocessor (MP) environment creates difficulties. This article describes store-through approaches to sharing at the second level in a memory hierarchy.



In fast processors, main memory may become large, and its speed may drop if all processors are to have the same access time to it. Since a small memory can be packaged close to a processor and accessed efficiently, it is natural to ask whether a "dedicated local memory" can be designed for each processor. However, sharing then becomes a problem: some form of communication must be provided between the processor memories (PMs).

Hardware-implementable schemes for MP sharing at the second level in a memory hierarchy are described. It is desirable to avoid direct communication between the two local PMs and any cross-invalidation of pages and lines. (Note that cross-invalidation is antithetical to the property of locality, which underlies data staging in hierarchies.) There must, however, be communication about lines that have been changed: when a processor obtains a page from system memory (SM), the page must be at least as current as the copy reflected in the other processor's memory.

In what follows an initial hypothetical store-through scheme and two approaches for implementing it are described.

Hypothetical "Store-Through" Scheme. A shared-cache scheme with store-through local caches has been found to have many attractive practical features and promises to provide good performance for sharing at the cache level. This notion is developed here for sharing at the next level, where each processor has a private PM.

Consider a hypothetical scheme with a "very large" shared cache as shown in Figure 1. This cache can be thought of as being infinite in size, always having room for new store references, and as being accessible by both processors in a short time. This shared cache would provide the needed communication between PMs by funneling any changes back to the SM. Thus the SM, together with the shared cache, is totally up to date with all changes that have been made.
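The funneling just described can be sketched in a few lines. The Python below, its names, and the line-granularity model are illustrative assumptions, not part of the original disclosure.

```python
# Sketch of the "infinite" shared-cache invariant: the SM, together with
# the changed lines held in the shared cache, reflects every store made
# by either processor. All names are hypothetical.

system_memory = {"L0": "old0", "L1": "old1"}
shared_cache = {}                      # "infinite": never evicts a changed line

def store_through(line, value):
    shared_cache[line] = value         # funnel the change through the shared cache
    system_memory[line] = value        # ...and on to the SM

def current_view():
    # Either processor obtains the up-to-date state from SM plus shared cache.
    return {**system_memory, **shared_cache}

store_through("L1", "new1")            # one processor changes line L1
assert current_view() == {"L0": "old0", "L1": "new1"}
```

Because every store passes through both structures, the SM is never stale, which is what lets a processor later fetch a page at least as current as the other processor's copy.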

Before a changed page is moved out of a PM, its changed lines in the shared cache could first be moved to the PM; alternatively, the changed lines could be merged into the page en route to the SM. Note that when a changed page is moved back to SM, no data movement need occur if the same page is also in the other PM; the page can merely be invalidated in the first PM. The actual writing back need only occur when both processors have finished with the page.
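The move-out rule above can be sketched as follows. This model, its function name, and the dict-of-lines page representation are assumptions made for illustration, not part of the disclosure.

```python
# Sketch of page move-out: pages are dicts of line -> value, and `changed`
# records lines altered through the shared cache, keyed by page.

def move_page_out(page_id, pm, other_pm, changed, sm):
    """Remove `page_id` from `pm`. If the other processor still holds the
    page, no data movement occurs: the copy is merely invalidated and the
    write-back is deferred until both processors are done with the page.
    Otherwise, the changed lines are merged into the page en route to SM."""
    page = pm.pop(page_id)                      # invalidate in this PM
    if page_id in other_pm:
        return                                  # defer the actual write-back
    sm[page_id] = {**page, **changed.get(page_id, {})}

pm_a = {"P": {"L0": 1, "L1": 2}}
pm_b = {}                                       # other processor no longer holds P
changed = {"P": {"L1": 9}}                      # L1 was changed via the shared cache
sm = {}
move_page_out("P", pm_a, pm_b, changed, sm)
assert sm["P"] == {"L0": 1, "L1": 9}            # merged copy written back to SM
```

Had `pm_b` still held page P, the function would have returned after the invalidation, leaving SM untouched, which is the deferred write-back case described above.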

This hypothetical scheme, then, allows sharing in a simple manner. The problem becomes one of providing an implementable version of the infinite shared cache. Two approaches are discussed below.

Shared Processor Memory (Level 2). This approach features a shared PM as a backing store for the finite shared cache (Figure 2). When a line is moved from the shared cache, it is placed in the shared PM. At the same time, the un...