
Concurrent Memory Read Controller for the PCI Express Bridge

IP.com Disclosure Number: IPCOM000083101D
Publication Date: 2005-Feb-28
Document File: 3 page(s) / 349K

Publishing Venue

The IP.com Prior Art Database

Abstract

Disclosed is a method that uses multiple memory read request engines to split requests from the secondary bus into small fragments, then interleave the fragments through a single request interface on the primary bus. Benefits include reducing the latency of memory read requests issued by devices.


Concurrent Memory Read Controller for the PCI Express Bridge


Background

Conventional bridge designs queue memory read requests from multiple PCI Express devices in a single channel, then complete the requests one by one on the primary bus. As a result, a small memory read request queued behind a large one must wait for the entire large request to complete before it can be served.
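The head-of-line blocking described above can be illustrated with a minimal FIFO model. The request sizes and the one-cycle-per-64-byte service rate below are assumptions for illustration, not parameters from the disclosure:

```python
# Hypothetical model of a single-channel bridge: requests complete strictly
# in order, so a small read queued behind a large one inherits the large
# read's service time as added latency.

def fifo_completion_times(request_sizes, bytes_per_cycle=64):
    """Return the cycle at which each queued read request completes."""
    t, done = 0, []
    for size in request_sizes:
        t += -(-size // bytes_per_cycle)  # ceiling division: cycles to serve
        done.append(t)
    return done

# A 64-byte read queued behind a 4 KiB read waits 64 cycles for one cycle of work.
print(fifo_completion_times([4096, 64]))  # [64, 65]
```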

General Description

The disclosed method defines a concurrent memory read request architecture in the transaction layer of a PCI Express bridge. It consists of multiple memory read request engines that split requests from the secondary bus into small fragments, then interleave the fragments through a single request interface on the primary bus (see Figures 1 and 2). A round-robin arbiter allocates a time slot for each active engine to present its fragment on the primary bus request interface. This architecture reduces memory read latency for PCI Express devices by allowing small memory reads to be interleaved between large ones.
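The arbitration scheme above can be sketched behaviorally (this is an illustrative model, not the disclosed logic); each engine holds the fragments of one macro read, and the round-robin arbiter grants one engine per time slot on the single primary-bus request interface:

```python
# Behavioral sketch of round-robin fragment interleaving across multiple
# read request engines sharing one primary-bus request interface.
# Engine count and fragment labels are assumptions for illustration.

def round_robin_interleave(engine_fragments):
    """Yield (engine_id, fragment) in round-robin order across active engines."""
    queues = [list(frags) for frags in engine_fragments]
    while any(queues):
        for engine_id, q in enumerate(queues):
            if q:  # idle engines are skipped; no time slot is wasted on them
                yield engine_id, q.pop(0)

# Engine 0 holds a large read (4 fragments); engine 1 holds a small read (1 fragment).
# The small read completes after the second slot instead of waiting for all of engine 0.
order = list(round_robin_interleave([["A0", "A1", "A2", "A3"], ["B0"]]))
print(order)  # [(0, 'A0'), (1, 'B0'), (0, 'A1'), (0, 'A2'), (0, 'A3')]
```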


Each engine can handle a memory read request independently, in a DMA-like manner. The original memory read requests from the PCI Express device (known as macro read requests) are first stored in the request engine; the macro read requests are then split into smaller fragments (known as...
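The splitting step can be sketched as follows. The 256-byte fragment size and the (address, length) request form are assumptions chosen for illustration; the disclosure does not specify them:

```python
# Hypothetical sketch of a request engine splitting one stored macro memory
# read request into address-contiguous fragments of bounded size.

def split_macro_read(addr, length, fragment_bytes=256):
    """Split one macro read into a list of (address, length) fragments."""
    frags = []
    while length > 0:
        chunk = min(fragment_bytes, length)  # last fragment may be short
        frags.append((addr, chunk))
        addr += chunk
        length -= chunk
    return frags

print(split_macro_read(0x1000, 600))
# [(4096, 256), (4352, 256), (4608, 88)]
```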