Dynamic Memory Structure

IP.com Disclosure Number: IPCOM000085588D
Original Publication Date: 1976-Apr-01
Included in the Prior Art Database: 2005-Mar-02
Document File: 4 page(s) / 47K

Publishing Venue

IBM

Related People

Tang, DT: AUTHOR [+2]

In references [1], [2] and [3], a special kind of memory, called a dynamic memory, is described. The dynamic memory includes a loop of n cells, each of which can hold one data word. One cell, called the window, is the only cell in the memory whose contents can be read or written externally.

Throughout the present description, the window is assumed to be cell 0, as illustrated in Fig. 1. However, the contents of the cells in memory can be internally rearranged by applying a sequence of operations called memory transformations. Each transformation is a permutation of the contents of the n cells. For example, if the permutation Pi = (i(0), i(1), ..., i(n-1)) is applied to an n-cell dynamic memory, then in one step the contents of cell j are transferred to cell i(j) for all j, 0 ≤ j < n.
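
As a rough illustration of this model (a minimal Python sketch, not part of the original disclosure), the n cells can be represented as a list, cell 0 treated as the window, and a transformation applied by moving the contents of cell j to cell Pi(j):

    # Minimal sketch of an n-cell dynamic memory with a window at cell 0.
    # Names and structure are illustrative only; they do not come from the disclosure.

    def apply_transformation(cells, pi):
        """Move the contents of cell j to cell pi(j), for every j."""
        n = len(cells)
        result = [None] * n
        for j in range(n):
            result[pi(j)] = cells[j]
        return result

    def read_window(cells):
        """Only cell 0, the window, is externally readable or writable."""
        return cells[0]

    n = 8
    cells = ["datum %d" % j for j in range(n)]                   # datum j initially in cell j
    cells = apply_transformation(cells, lambda j: (j - 1) % n)   # one cyclic shift
    print(read_window(cells))                                    # the window now shows "datum 1"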

In reference [2], the following two transformations are proposed (n = 2^k):

(Image Omitted)

By means of these two transformations, the contents of any cell can be brought to the window in no more than k = log2(n) steps.

However, if sequential access is desired, this memory structure is not as efficient as that proposed in reference [3]. Suppose datum j denotes the initial contents of cell j. To access a block of data, say datum j, datum j+1, ..., datum j+b-1, the cell which contains datum j must first be located. Then the cell which contains datum j+1 must be located, and so on. Therefore, in the worst case, b log2(n) steps may be needed.

In reference [3], another pair of transformations is proposed for n = 2^k - 1, by which any datum j can be accessed in no more than 2 log2(n+1) - 2 steps and any block of b ≥ 2 data can be accessed in no more than 3 log2(n+1) + b - 4 steps. Thus this memory structure is more efficient for sequential access.

The transformations are:

    Pi(m)(i) = 2i (mod n),        (3)
    Pi(c)(i) = (i-1) (mod n),     (4)

for i = 0, 1, ..., n-1, where x (mod y) = x - y[x/y] ([x/y] is the integer part of x/y). These transformations are further generalized to arbitrary radices [3].
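
The random-access bound quoted above can be checked by brute force for small n. The Python sketch below is not the access algorithm of reference [3]; it simply performs a breadth-first search over sequences of transformations (3) and (4), tracking the position of a single datum, and compares the worst-case number of steps with the 2 log2(n+1) - 2 bound.

    # Brute-force check (not the algorithm of reference [3]): find, for every cell,
    # the shortest sequence of transformations (3) and (4) that brings its datum to
    # the window, and compare the worst case with the quoted bound 2*log2(n+1) - 2.
    from collections import deque

    def steps_to_window(n):
        # Under (3) a datum at position x moves to 2x mod n; under (4) to (x-1) mod n.
        # Search backwards from position 0, so the predecessors of y are (y+1) mod n
        # and (y*inv2) mod n, where inv2 = (n+1)/2 is the inverse of 2 modulo the odd n.
        inv2 = (n + 1) // 2
        dist = {0: 0}
        queue = deque([0])
        while queue:
            y = queue.popleft()
            for x in ((y + 1) % n, (y * inv2) % n):
                if x not in dist:
                    dist[x] = dist[y] + 1
                    queue.append(x)
        return dist

    for k in range(2, 9):
        n = 2 ** k - 1
        worst = max(steps_to_window(n).values())
        print(n, worst, 2 * k - 2)      # worst-case steps never exceed the quoted bound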

In the first part of this description, transformation (3) is replaced by a class of transformations C, while (4) is kept intact. Any member of C together with (4) constitutes a new memory structure which has the same kind of random and sequential access characteristics as (3) and (4); some members of C give better average-case performance than (3) and (4), with equal worst-case performance.

In the second part of this description, another memory structure is discussed which results in a further improvement in performance. In fact, just one more transformation is added to the memory structure considered above:

    Pi(a)(i) = (i+1) (mod n).     (5)
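
To get a rough, purely empirical sense of that improvement (again only a sketch, not a result from the disclosure), the brute-force search above can be repeated with transformation (5) also allowed, and the worst-case and average number of steps compared:

    # Same breadth-first search as before, optionally allowing transformation (5),
    # under which a datum at position x moves to (x+1) mod n.  The printed figures
    # come from this sketch only, not from the original disclosure.
    from collections import deque

    def steps_to_window(n, with_pi_a=False):
        inv2 = (n + 1) // 2                          # inverse of 2 modulo the odd n
        dist = {0: 0}
        queue = deque([0])
        while queue:
            y = queue.popleft()
            preds = [(y + 1) % n, (y * inv2) % n]    # predecessors under (4) and (3)
            if with_pi_a:
                preds.append((y - 1) % n)            # predecessor under (5)
            for x in preds:
                if x not in dist:
                    dist[x] = dist[y] + 1
                    queue.append(x)
        return dist

    n = 2 ** 6 - 1
    for with_pi_a in (False, True):
        d = steps_to_window(n, with_pi_a)
        print(with_pi_a, max(d.values()), sum(d.values()) / n)   # worst case, average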

Clearly, this is Pi(c) in the reverse direction.

I. THE CLASS OF MEMORY TRANSFORMATIONS C.

Let n = 2^k - 1 and k = cd, where c, d are positive integers. Define:

    Pi(m)^(d)(i) = 2^((c-1)d) i (mod n),     (6)

for i = 0, 1, ..., n-1, and let C = {Pi(m)^(d) | d = 1, 2, ...}. If i is represented as a binary number, then the binary representation of Pi(m)^(d)(i) is...
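
The text is truncated at this point, but the effect of (6) on the binary representation can still be illustrated. The following Python sketch assumes the reading of (6) reconstructed above and checks, for one choice of k = cd, that multiplying by 2^((c-1)d) modulo 2^k - 1 amounts to a cyclic shift of the k-bit binary representation of i:

    # Sketch only, assuming the reconstruction of (6) above: for n = 2^k - 1 and
    # k = c*d, multiplication by 2^((c-1)*d) modulo n cyclically shifts the k-bit
    # binary representation of i by (c-1)*d positions.
    def pi_m_d(i, k, c, d):
        n = 2 ** k - 1
        return (pow(2, (c - 1) * d, n) * i) % n

    def cyclic_left_shift(i, k, s):
        """Cyclically shift the k-bit representation of i left by s positions."""
        s %= k
        if s == 0:
            return i
        return ((i << s) | (i >> (k - s))) & (2 ** k - 1)

    k, c, d = 6, 3, 2                      # n = 63, shift amount (c-1)*d = 4
    for i in range(2 ** k - 1):
        assert pi_m_d(i, k, c, d) == cyclic_left_shift(i, k, (c - 1) * d)
    print("(6) acts as a cyclic shift of the binary representation for n =", 2 ** k - 1)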