
High speed memory communication with block lending Disclosure Number: IPCOM000015776D
Original Publication Date: 2002-Jul-01
Included in the Prior Art Database: 2003-Jun-21

Publishing Venue



The hybrid architecture described below combines a memory communication facility with a memory management facility. It allows nearly unlimited communication bandwidth through shared memory and naturally balances the available buffer space in favor of the slower communication end point. If there is only one communication end point, the hybrid architecture simply manages memory for that thread or process. If two or more threads or processes use the architecture, they can send allocated buffers to each other in a point-to-point fashion. The facility has two main parts: the main communication buffer area and the reserved buffer areas for the end points. Before communication begins, each thread or process involved in communication would allocate, or 'reserve', the number of blocks it requires to do its work. This reservation is analogous to allocating memory from a shared memory management facility.

The invention is a method of communication through an area of memory that is shared between two processes or threads. Typical forms of shared memory communication involve copying data to and from the communication layer, which adds overhead and consumes CPU time. This both slows down communication and takes CPU time away from other important tasks.

Example: a process or thread performs asynchronous I/O and transfers the resulting data to another process or thread. Many forms of communication would provide the necessary features (TCP/IP, named pipes, shared memory, etc.); shared memory is usually chosen for its speed and low overhead. A typical efficient shared memory communication layer allows one process/thread to read directly into the shared area by reserving a number of blocks with locks, latches, or another mechanism. The process/thread on the other side then protects the area, copies the data out into a private area, and releases the protection.
To absorb bursts of activity on either side, the main buffer area should be large enough, and split into a number of separate communication elements, so that any communication end point can keep transferring information as long as there is space left in the buffer area. The main problem with this method is that each side must reserve space inside the shared communication area, and at least one side must copy data into and out of a private memory area. For asynchronous I/O, one process may read into its private memory area and then copy the data into the communication area; the receiving process similarly copies the data out of the communication area into its own private memory. Copying memory in this way consumes CPU time that could be used to do real work; this type of shared memory communication typically tops out at about 100-200 MB/s between two processes or threads. Another drawback of shared memory communication layers that copy memory is that a certain amount of memory must be dedicated to the communication layer: the more memory allocated to the shared communication area, the less chance that one communication end point has to wait for the other. This memory is essentially wasted, since its only purpose is to stage data in transit from one side of the communication layer to the other.