Quality-of-Search concept for communication-related search operations Disclosure Number: IPCOM000015528D
Original Publication Date: 2002-Jan-26
Included in the Prior Art Database: 2003-Jun-20

Internet routers and other network equipment, such as firewalls, perform several searches for each incoming packet, using information extracted from the packet header or payload, against data structures such as routing tables, firewall rules, access control lists, routing policies, and load-balancing rules. Although these searches typically involve similar search types (e.g., exact matches, prefix matches, range matches), their performance requirements can differ significantly. The update requirements (for inserting information into and deleting it from the search data structures) can also differ between the various searches, and even between parts of the same data structure. For example, certain rules of a firewall application will never be removed (static rules), whereas others may be entered only for the duration of a connection (dynamic rules).

Several of the search algorithms used for these applications can trade performance parameters off against each other. For applications involving static data structures that are rarely modified, it is more efficient to spend additional time on update operations in order to minimize storage requirements through improved compression. For highly dynamic data structures, better update performance can be achieved by compromising on storage requirements, applying a less efficient compression that allows faster updates. Entries in a routing table that are expected to be "hit" more often during the search process can be inserted into the data structure in a way that improves lookup performance, for example by placing the entry in a separate cache or closer to the root in the case of a tree search. Such trade-offs can also be applied to other parameters, such as memory bandwidth and memory fragmentation (memory management).
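The placement trade-off described above can be illustrated with a minimal sketch. The class and its interface are illustrative assumptions, not part of the disclosure: an ordered, first-match-wins rule list in which rules flagged as frequently hit are placed at the front, so the average number of rules examined per packet is reduced.

```python
class Firewall:
    """Toy first-match-wins rule list (an illustrative sketch).

    Rules flagged as ``hot`` (expected to be hit often) are inserted at
    the front of the list, analogous to placing a frequently hit entry
    closer to the root of a search tree or into a separate cache.
    """

    def __init__(self):
        self.rules = []  # list of (match_fn, action); scanned in order

    def add_rule(self, match_fn, action, hot=False):
        # Hot rules go first, so frequent traffic terminates the scan early.
        if hot:
            self.rules.insert(0, (match_fn, action))
        else:
            self.rules.append((match_fn, action))

    def check(self, packet):
        # First matching rule wins; unmatched packets fall to the default.
        for match, action in self.rules:
            if match(packet):
                return action
        return "drop"  # default policy
```

For example, a per-connection (dynamic) rule covering a high-traffic flow would be added with `hot=True`, while rarely matched static rules stay at the back of the list.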
Until now, however, there has been no means of providing the search application with the information that would allow it to make the described trade-offs. The presented idea is a concept of specifying so-called Quality-of-Search parameters to the search function (e.g., a hardware-based search engine) that performs the required searches. These Quality-of-Search parameters include search performance parameters (e.g., latency, search rates) and update performance parameters (e.g., update rates). Furthermore, they can provide additional information related to the entire search structure or to individual rules or routing-table entries: for example, whether these are static or dynamic in nature (i.e., the probability that a given rule will be modified in the near future). Other possible Quality-of-Search parameters relate to storage efficiency and memory-manager operation.
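One way such a Quality-of-Search descriptor might be passed to a search engine is sketched below. All field and class names here are hypothetical, chosen only to mirror the parameter categories named above (search performance, update performance, static/dynamic nature); the disclosure does not define a concrete interface.

```python
from dataclasses import dataclass
from enum import Enum


class Volatility(Enum):
    STATIC = "static"    # unlikely to be removed (e.g., base firewall rules)
    DYNAMIC = "dynamic"  # per-connection; inserted and deleted frequently


@dataclass
class QoSParams:
    """Hypothetical Quality-of-Search descriptor (illustrative fields)."""
    max_lookup_latency_us: float = 10.0   # search performance target
    min_update_rate_per_s: int = 100      # update performance target
    volatility: Volatility = Volatility.STATIC
    hit_probability: float = 0.0          # expected lookup frequency


class SearchEngine:
    """Toy search function accepting per-entry Quality-of-Search hints."""

    def __init__(self):
        self.entries = {}

    def insert(self, key, value, qos: QoSParams):
        # A real engine would choose compression level, memory placement,
        # and position in the search structure based on qos; this sketch
        # merely records the hint alongside the entry.
        self.entries[key] = (value, qos)
```

An entry for a short-lived, high-traffic flow would then be inserted with `QoSParams(volatility=Volatility.DYNAMIC, hit_probability=0.8)`, signalling the engine to favor fast updates and fast lookup over compact storage.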