
An Address Caching Mechanism for an Off-Chip Serial Bus Disclosure Number: IPCOM000022648D
Original Publication Date: 2004-Mar-23
Included in the Prior Art Database: 2004-Mar-23
Document File: 2 page(s) / 18K


This article presents an address caching mechanism that reduces the memory and I/O access latency over a serial link.

This is the abbreviated version, containing approximately 59% of the total text.


An Address Caching Mechanism for an Off-Chip Serial Bus

Disclosed is a caching mechanism for memory and I/O accesses over a serial connection. With cache memories placed at the sending and receiving ends, the address field of the packet payload is compressed and decompressed, shortening the access latency.

The mechanism is based on the fact that memory and I/O accesses exhibit temporal and spatial locality in their address references. The higher-order portion of the address is cached in the base address cache (at the sending end) and the base address array (at the receiving end), and transferred in compressed form over the serial bus, as shown in Figure 1.
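The idea can be sketched as follows. This is an illustrative model only: the cache size, division position, replacement policy, and packet layout are assumptions, not details taken from the disclosure.

```python
# Hypothetical sketch of the base-address caching scheme. Sizes, field
# names, and the packet format are illustrative assumptions.

DIVISION_POSITION = 16   # low-order bits sent verbatim (assumed value)
CACHE_SLOTS = 8          # entries in the base address cache/array (assumed)


class AddressCompressor:
    """Sender side: caches high-order address bits and emits either a
    short packet (cache hit) or a full-address packet (cache miss)."""

    def __init__(self):
        self.slots = [None] * CACHE_SLOTS
        self.next_victim = 0  # round-robin replacement (assumed policy)

    def compress(self, addr):
        base = addr >> DIVISION_POSITION
        low = addr & ((1 << DIVISION_POSITION) - 1)
        if base in self.slots:
            # Hit: only a slot index plus the low-order bits go on the wire.
            return ("hit", self.slots.index(base), low)
        # Miss: install the base locally and send it with its slot number,
        # so the receiver's base address array stays in sync.
        slot = self.next_victim
        self.slots[slot] = base
        self.next_victim = (slot + 1) % CACHE_SLOTS
        return ("miss", slot, base, low)


class AddressDecompressor:
    """Receiver side: mirrors the sender's cache in its base address array."""

    def __init__(self):
        self.slots = [None] * CACHE_SLOTS

    def decompress(self, packet):
        if packet[0] == "hit":
            _, slot, low = packet
            return (self.slots[slot] << DIVISION_POSITION) | low
        _, slot, base, low = packet
        self.slots[slot] = base
        return (base << DIVISION_POSITION) | low
```

In this sketch, the first access to a region transfers the full address and fills a slot at both ends; later accesses within the same region (here, the same 64 KiB aligned block) transfer only a small index plus the low-order bits, which is where the latency saving comes from.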

[Figure suppressed: the base address cache and base address array connected by the serial bus, with the division position marked at each end.]

Figure 1. Address caching and compression

The division position, which defines how many higher-order bits are cached, can be set arbitrarily, because a serial bus is flexible in terms of packet length. It can even be changed on the fly by sending a special packet dedicated to division-position control.
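One way such a control packet could work is sketched below; the packet format, field names, and the decision to invalidate the cache on a change are assumptions rather than details from the disclosure. Cached bases are only meaningful for the division position they were split at, so a plausible design flushes both ends when the position moves.

```python
# Illustrative sketch of on-the-fly division-position control.
# The ("set_division", n) packet format is an assumption.

class DivisionControl:
    """One end of the link: tracks the current division position and
    flushes its base-address slots when the position changes, since
    cached bases are only valid for the split they were made at."""

    def __init__(self, division_position=16, slots=8):
        self.division_position = division_position
        self.slots = [None] * slots

    def handle_control_packet(self, packet):
        kind, new_position = packet
        if kind == "set_division":
            self.division_position = new_position
            # Invalidate stale bases on both ends after a change.
            self.slots = [None] * len(self.slots)
```

A wider division position trades a larger cached base (fewer bits on the wire per hit) against coarser regions (potentially more misses), so being able to retune it at runtime lets the link adapt to the observed access pattern.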

[Figure suppressed: a memory address map split into high-order and low-order address bits at both ends, with regions mapped to I/O devices 1, 2, and 3 interleaved with unmapped gaps ("nothing mapped"); the base address cache and base address array hold the high-order portions of the mapped regions.]



Figure 2. Memory address map and caching scheme

Figure 2 is an example of a typical addr...