
Efficient Wave-Based Web Content Distribution Method

IP.com Disclosure Number: IPCOM000015091D
Original Publication Date: 2002-Apr-26
Included in the Prior Art Database: 2003-Jun-20
Document File: 2 page(s) / 84K

Publishing Venue

IBM

Abstract

Disclosed is a mechanism for scalably and efficiently disseminating web content in a content distribution system. Typically, content is published to a staging server, which then becomes the source for all the web servers and caches in the content distribution system; clearly, this solution is not scalable when there are hundreds of nodes in the system. The mechanism described below solves this problem in a scalable manner. Another motivation for solving this problem is that a set of clients may be separated from the staging server by a low-bandwidth Wide Area Network (WAN). Upon receipt of a control message, all of these clients would try to pull data from the server across the WAN simultaneously, which significantly degrades the performance of the content distribution system. The mechanism outlined here achieves scalability by staggering the dissemination of content into multiple waves, such that web servers and caches in the first wave pull content directly from the staging server, nodes in the second wave pull content from a node in the previous wave, and so on.

Efficient Wave-Based Web Content Distribution Method

Disclosed is a mechanism for scalably and efficiently disseminating web content in a content distribution system. Typically, content is published to a staging server, which then becomes the source for all the web servers and caches in the content distribution system; clearly, this solution is not scalable when there are hundreds of nodes in the system. The mechanism described below solves this problem in a scalable manner. Another motivation for solving this problem is that a set of clients may be separated from the staging server by a low-bandwidth Wide Area Network (WAN). Upon receipt of a control message, all of these clients would try to pull data from the server across the WAN simultaneously, which significantly degrades the performance of the content distribution system. The mechanism outlined here achieves scalability by staggering the dissemination of content into multiple waves, such that web servers and caches in the first wave pull content directly from the staging server, nodes in the second wave pull content from a node in the previous wave, and so on.
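
The staggered pull pattern can be pictured as a simple source-selection rule. The sketch below is a minimal illustration under assumed names (the staging server host, the node names, and the three-wave layout are all hypothetical; the disclosure does not prescribe any particular data model): a wave-1 node pulls from the staging server, and a node in a later wave pulls from some node in the previous wave.

    import random

    # Illustrative staging server and wave assignment; each wave is typically
    # much larger than the previous one.
    STAGING_SERVER = "staging.example.com"
    WAVES = {
        1: ["ws-01", "ws-02"],
        2: ["ws-10", "ws-11", "ws-12", "ws-13"],
        3: [f"cache-{i:02d}" for i in range(1, 21)],
    }

    def pull_source(node: str) -> str:
        """Return the server a given node should pull published content from.

        Wave-1 nodes go directly to the staging server; a node in any later
        wave picks a node from the previous wave (the disclosure also allows
        falling back to the staging server or to any earlier wave).
        """
        for wave, members in WAVES.items():
            if node in members:
                return STAGING_SERVER if wave == 1 else random.choice(WAVES[wave - 1])
        raise ValueError(f"unknown node: {node}")

    print(pull_source("ws-01"))     # -> staging.example.com
    print(pull_source("cache-05"))  # -> one of the wave-2 servers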

In a content distribution system, web content is disseminated by sending clients control messages that contain a list of files to be pulled from the data servers. Upon receipt of a control message, a client pulls the listed data from the staging server. Content published onto a staging server in a content distribution (CD) system may have to be disseminated to hundreds of web servers and caches. In such cases, the staging server may be overwhelmed if all web servers and caches make requests to it simultaneously.
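
As a concrete illustration of this exchange, the sketch below models a control message as a record carrying the file list and the server to pull from, together with a client handler that fetches each listed file. The field names, the HTTP transport, and the document-root layout are assumptions made for illustration; the disclosure does not specify a message format.

    import os
    from dataclasses import dataclass
    from typing import List
    from urllib.request import urlopen

    @dataclass
    class ControlMessage:
        """Hypothetical control message: which files to fetch and from where."""
        source: str        # staging server, or a node in an earlier wave
        files: List[str]   # paths of the newly published files

    def handle_control_message(msg: ControlMessage, docroot: str = "/var/www") -> None:
        """On receipt of a control message, pull each listed file from the
        source over HTTP and store it under the local document root."""
        for path in msg.files:
            with urlopen(f"http://{msg.source}{path}") as resp:
                data = resp.read()
            dest = os.path.join(docroot, path.lstrip("/"))
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            with open(dest, "wb") as out:
                out.write(data)

    # Example: a first-wave web server pulling two newly published files from
    # the staging server (host name and paths are illustrative).
    handle_control_message(
        ControlMessage(source="staging.example.com",
                       files=["/news/index.html", "/news/logo.gif"]))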

In order to ensure scalability, CD clients are partitioned such that each client belongs to one of three waves: clients in the first wave pull data from the staging server, clients in the second wave pull data from clients in the first wave or from the staging server, and so on. This ensures that no single server is flooded with requests at any given point in time.

The above mechanism is illustrated in the figure below (note that the dissemination of control messages is not shown in the figure).

[Figure: content publisher, staging server, and three waves of web servers and caches; the image is not reproduced in the extracted text]

In this figure, the content publisher writes new content onto the staging server. Control messages are then sent to the web servers and caches in the first wave, which pull the content from the staging server. After the nodes in the first wave have pulled the content, control messages are sent to the nodes in the second wave, which pull the content from nodes in the first wave or from the staging server. Finally, notifications are sent to the nodes in the third wave, which then pull the content from nodes in the first or second wave. Typically, the number of nodes in each wave is much greater than the number of nodes in the previous wave, so the number of servers to which requests can be made increases with each wave. Scalability is thus achieved by appropriately increasing the number of nodes in each wave.

In the case of multiple clients in an enterprise that are separated from the staging server by a low-bandwidth WAN, the number of requests made to the staging server must be minimized to increase the rate of dissemination. This is achieved by configuring one node in the enterprise to be in the first (or second) wave and the remaining nodes to be in the second (or third) wave, such that all of them pull from the single node in the previous wave. Thus, all but one of the transfers result only in local traffic within the enterprise rather than WAN traffic.
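
One way to picture the control side of this staged dissemination, including the per-enterprise gateway configuration, is the loop sketched below. It reuses the same illustrative wave layout as the earlier sketch; the notification transport, the completion check, and all host names are assumptions rather than part of the disclosure.

    import random
    import time
    from typing import Dict, List

    STAGING_SERVER = "staging.example.com"

    # Hypothetical wave assignment. The enterprise behind the low-bandwidth
    # WAN contributes one node ("branch-gw") to wave 2; its remaining machines
    # sit in wave 3 and are pinned to pull only from that gateway, so a single
    # transfer crosses the WAN link.
    WAVES: Dict[int, List[str]] = {
        1: ["ws-01", "ws-02"],
        2: ["ws-10", "ws-11", "branch-gw"],
        3: ["cache-01", "cache-02", "branch-ws-01", "branch-ws-02"],
    }
    PINNED_SOURCE = {"branch-ws-01": "branch-gw", "branch-ws-02": "branch-gw"}

    def notify(node: str, source: str, files: List[str]) -> None:
        """Placeholder for sending a control message (the transport is not
        specified in the disclosure); here we just log what would be sent."""
        print(f"control message -> {node}: pull {files} from {source}")

    def wave_complete(wave: int) -> bool:
        """Placeholder completion check; a real system would track client acks."""
        time.sleep(0.1)
        return True

    def disseminate(files: List[str]) -> None:
        for wave in sorted(WAVES):
            for node in WAVES[wave]:
                if wave == 1:
                    source = STAGING_SERVER
                else:
                    # Enterprise nodes are pinned to their local gateway;
                    # everyone else picks some node from the previous wave.
                    source = PINNED_SOURCE.get(node) or random.choice(WAVES[wave - 1])
                notify(node, source, files)
            # Do not start the next wave until the current one has finished pulling.
            while not wave_complete(wave):
                time.sleep(1)

    disseminate(["/news/index.html", "/news/logo.gif"])

Holding back each wave until the previous one has finished pulling is what keeps the number of simultaneous requests seen by the staging server bounded by the size of the first wave, while the enterprise pinning keeps all but one of the branch transfers on the local network.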