
Client-side Data Culling

IP.com Disclosure Number: IPCOM000028940D
Original Publication Date: 2004-Jun-08
Included in the Prior Art Database: 2004-Jun-08
Document File: 1 page(s) / 28K

Publishing Venue

IBM

Abstract

The problem addressed is that large amounts of data are often displayed in table format as part of markup served to clients over the Internet. Furthermore, due to the volume of data and the limited display real estate of monitors and mobile display units, this table data is "paged", i.e., broken into a series of pages. Paging splits the data effectively, but it can introduce latency as subsequent pages are requested from the server, typically when the user clicks a "previous" or "next" link or button. A user will also often toggle between a series of pages in order to analyze grouped data. As an example, imagine searching through a list of used cars on the cars.com website. A user might order a result set by price, spanning several pages, then toggle back and forth between two or more pages looking at cars that fall within their desired price range. Every time they page forward or backward, they drive another request to the server and redundantly re-render table data. Furthermore, the server may also redundantly pull the data from a data store to regenerate the view.



This article proposes to cache (store on the client) and cull (remove from the client) paged data displayed in table format in order to expedite paging and improve the user experience.

Initial Data Request

When a table of data is displayed, it will be broken up into a series of pages, each consisting of between 0 and n result-set rows. For the initial request for the first page of data, this invention proposes the following flow:

1. Request initial page of table data from server
2. Render table data in appropriate markup
3. Drive discrete request to server for subsequent page of data
4. Cache returned data on client in "next cache"
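The four steps above might look like the following browser-side sketch. All names here (fetchPage, renderRows, nextCache) are illustrative stand-ins, not identifiers from the disclosure, and the server request is stubbed out:

```javascript
const nextCache = new Map();          // page number -> cached rows

async function fetchPage(n) {
  // Stand-in for a real XMLHttpRequest/fetch to the server;
  // returns two fake rows per page for illustration.
  return [`row ${n * 2}`, `row ${n * 2 + 1}`];
}

function renderRows(rows) {
  // In a browser this would update the table's DOM;
  // here we just build and return the row markup.
  return rows.map(r => `<tr><td>${r}</td></tr>`).join("");
}

async function loadInitialPage() {
  const rows = await fetchPage(0);    // 1. request initial page of table data
  const markup = renderRows(rows);    // 2. render table data as markup
  const next = await fetchPage(1);    // 3. discrete request for the subsequent page
  nextCache.set(1, next);             // 4. cache returned data in the "next cache"
  return markup;
}
```

The key point of the sketch is step 3: the second page is fetched in the background, after the first page has already been rendered, so the prefetch adds no visible latency.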

Subsequent Data Request

When the user then requests the next page, it already exists on the client. The following flow would apply:

1. Check "next cache" for data
2. If not present, drive full request
3. If present, drive simple request to server to check data validity (might be invalid due to manipulation of data on server, session timeout, etc.)
4. If data is not valid, drive full request
5. If data is valid, move previous table data (current page) to "previous cache"
6. Update visible table rows with data from "next cache"
7. Drive discrete request to server for subsequent page of data
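The flow above can be sketched as follows. Again, every name (validatePage, goToNextPage, the cache variables) is a hypothetical stand-in, and both the server fetch and the validity check are stubbed:

```javascript
const nextCache = new Map();          // prefetched pages: page number -> rows
const prevCache = new Map();          // pages the user has already viewed
let currentPage = 0;
let currentRows = ["row 0", "row 1"];

async function fetchPage(n) {
  // Stand-in for a full server request.
  return [`row ${n * 2}`, `row ${n * 2 + 1}`];
}

async function validatePage(n) {
  // Lightweight round trip asking whether the cached page is still
  // valid (data changed on server, session timeout, etc.).
  // Always "valid" in this stub.
  return true;
}

async function goToNextPage() {
  const target = currentPage + 1;
  let rows = nextCache.get(target);              // 1. check "next cache"
  if (!rows || !(await validatePage(target))) {  // 2-4. missing or stale?
    rows = await fetchPage(target);              //      fall back to a full request
  }
  prevCache.set(currentPage, currentRows);       // 5. demote current page to "previous cache"
  currentPage = target;
  currentRows = rows;                            // 6. update visible table rows
  nextCache.delete(target);
  nextCache.set(target + 1,
                await fetchPage(target + 1));    // 7. prefetch the next page
  return rows;
}
```

On the cache-hit path only the cheap validity check stands between the user's click and the new rows; the full fetch happens only on a miss or a stale entry.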

The benefit of this flow is that a large percentage of the time, the user will be presented with a subsequent page of data discretely and immediately. Furthermore, in this likely case, the entire document did not refresh, thus improving the user's...