
ON THE ORGANIZATION OF MEMORY

IP.com Disclosure Number: IPCOM000128353D
Original Publication Date: 1973-Dec-31
Included in the Prior Art Database: 2005-Sep-15
Document File: 9 page(s) / 27K

Publishing Venue

Software Patent Institute

Related People

Andrzej Ehrenfeucht: AUTHOR [+4]


This text was extracted from a PDF file.
This is the abbreviated version, containing approximately 20% of the total text.



ON THE ORGANIZATION OF MEMORY

by Andrzej Ehrenfeucht* and Jan Mycielski**
*Department of Computer Science, **Department of Mathematics
University of Colorado, Boulder, Colorado 80302
Report #CU-CS-014-73, March 1973

Abstract. A new learning algorithm is presented which may have applications in the theory of natural and artificial intelligence.

This work was supported by N. S. F. Grants GJ-660 and GP-19405.

1. Introduction.

In order to construct machines which could display an intelligence somewhat like that of animals or man, or to understand how the brain functions, one must develop a model of the organization of memory. Several models have been proposed [1,4,6,7,11, where other references are given]. Here we propose still another one, which is new in that it tends to explain how the brain learns to interpret correctly (or act purposefully upon) the immense amount of information which it obtains continuously from the senses.

For simplicity, we assume at first that the brain serves only to answer questions which admit a "yes" or "no" answer, and that the questions are sequences of 0's and 1's of a fixed length m. Our questions depict the totality of data (parameters) which the brain gets at a given time (ours is a discrete time model), and hence m is very large (see section 5, remarks 1 and 2, concerning the possible meanings of m). At the beginning the brain does not know what to say and answers arbitrarily, but it gets a "reward" when the answer is right and a "punishment" when the answer is wrong. Thus it gets post facto information about the right answer and attempts to produce correct answers to the questions which follow.

By the problem of organization of memory we understand the problem of defining an algorithm to answer new questions on account of past experience. Various statistical estimation procedures and classical methods of interpolation and approximation of functions seemingly would apply to this problem. But as information theory shows [3], they lose their power when the dimension m is large, say m > 100. Our algorithm is free from this defect. This is so because we exploit the following natural assumption: few of the parameters are important for solving any given question (see remarks 6 and 7 below concerning the validity and the interpretation of this assumption), although different parameters are needed to solve different questions. The brain probably develops a method of selecting the important parameters of any given question; so does our algorithm.
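The reward/punishment interaction just described can be sketched in a few lines of Python. Everything specific in this sketch is our illustrative assumption, not the paper's algorithm: the hidden answer rule (a parity over a few "important" coordinates), the constants M and K, and the purely memorizing learner, which is exactly the kind of method that fails to generalize when m is large.

```python
import random

M = 100  # question length (the paper's m; illustrative value)
K = 3    # "few important parameters" (the paper's sparsity assumption)
random.seed(0)

# Hidden ground truth: the correct answer depends on only K of the M bits.
# The parity rule here is purely an illustrative choice of ours.
important = random.sample(range(M), K)

def correct_answer(question):
    """Yes/no (1/0) answer determined by a few important coordinates."""
    return sum(question[i] for i in important) % 2

class MemorizingLearner:
    """A naive model: answer arbitrarily on unseen questions, and
    record the right answer post facto when feedback arrives."""
    def __init__(self):
        self.memory = {}

    def answer(self, question):
        return self.memory.get(tuple(question), random.randint(0, 1))

    def feedback(self, question, right_answer):
        # "Reward"/"punishment" collapses to being told the right answer.
        self.memory[tuple(question)] = right_answer

learner = MemorizingLearner()
wrong = 0
for _ in range(1000):
    q = [random.randint(0, 1) for _ in range(M)]
    truth = correct_answer(q)
    if learner.answer(q) != truth:
        wrong += 1          # the "punishment" events
    learner.feedback(q, truth)

# With m = 100, a question essentially never repeats, so pure memorization
# answers at chance level; selecting the K important coordinates is what
# would make generalization possible.
```

The point of the sketch is the failure mode: the memorizing learner stays near 50% error because the question space has 2^m elements, which is the large-dimension obstacle the paragraph above attributes to classical estimation methods.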

We shall not attempt in this paper to define a neural network capable of performing our algorithm, nor to theorize on how the nervous tissue could do it, since, although easy, this would seem to us premature (see section 4).

2. The Algorithm.

Let X be the space of questions, i.e., a set of sequences of 0's and 1's of length m, and let R be the set of all possible responses to a question (from now...