
Method for a zero page accelerator Disclosure Number: IPCOM000005820D
Publication Date: 2001-Nov-08
Document File: 6 page(s) / 1K

Publishing Venue

The Prior Art Database


Disclosed is a method for a zero page accelerator. Benefits include improved system performance.

This text was extracted from a Microsoft Word document.
At least one non-text object (such as an image or picture) has been suppressed.
This is the abbreviated version, containing approximately 35% of the total text.



              Most modern operating systems implement multiprocessing (ability to run multiple applications simultaneously) and paging (granular translation of linear addresses to physical addresses). Within this context, several optimizations are made by operating system (OS) software to accelerate certain tasks using the unique properties of paging. One of these optimizations is the zero page optimization.

              When an application requests a memory allocation, the OS must supply the requested amount of memory. In some cases, the request may be very large (hundreds of megabytes or more). If such a large allocation is actually performed up front, thousands of physical memory pages are consumed and significant CPU time is spent initializing (clearing to 0) those pages. Figure 1 depicts a 40 KB request (ten 4 KB pages) that has been immediately mapped to ten physical pages. The entire allocation is then cleared before being returned to the application.

              If the allocation in Figure 1 were large enough, numerous physical memory pages would be ripped from other active processes to satisfy the request. When this happens, those processes' working sets (critical sets of memory pages) are removed from memory, and performance is severely impacted. Performance is regained by letting the application proceed as though the entire allocation has been completed up front, when in reality it has not. This apparent allocation is the result of mapping all pages of the allocation to a single read-only zero page. The application may read any page of the allocation and function as though the OS has allocated and cleared the page, as depicted in Figure 2.

              Only when the application writes to a page in the allocation does the OS allocate a new physical page of memory and reassign the address to the new page. In this way, the large allocation is broken up and made in small increments, as the application demands, instead of all at once. The page fault raised by a write to the read-only zero-page mapping is what drives this operation. Figure 3 shows the zero page allocation of Figure 2 after the application has written to the 10th page.

              Operating systems implement the zero page optimization for three reasons: performance, reliability, and security.

              Performance is improved because large memory requests are not immediately allocated and cleared. The allocation and clearing process would require the operating system to claim all physical pages necessary to map the entire allocation and then clear (set to 0) the entire allocation. This process would consume substantial processor bandwidth, may thrash the level 1 (L1) and level 2 (L2) data caches, and may steal working-set pages away from other processes. All other processes would then page fault numerous times as they reclaim their working sets.

              Reliability is improved because al...