Extending the reach and accuracy of TCP/IP Search Discovery
Original Publication Date: 2001-May-03
Included in the Prior Art Database: 2003-Jun-17
Search Discovery is a database (DB) technology that helps users configure connections to DB servers available in a LAN. It gathers a list of the available DB servers, which can then be individually queried using "Directed Discovery" to further resolve them and ultimately present a list of databases to the user. The first step, in which the various DB servers are "discovered", is crucial to the whole process: it must discover, accurately and consistently, all of the participating DB servers in the LAN segment.

The current implementation of Search Discovery has two problems. First, it does not discover all of the participating DB servers in the LAN. Second, its discovery is not consistent, identifying different DB servers each time the search request is made on the same LAN segment. The invention solves these problems in the following ways.

The first problem solved was that of consistently identifying the participating DB servers in response to a search request. The existing methodology issues the broadcast request (a datagram using the User Datagram Protocol, UDP), sets the receive socket, and then loops for a user-specified wait time, constantly checking for messages (responses) in the message queue and processing them until the wait time expires. This creates a tight loop, and some responses (datagrams) are missed between the context switching and the message processing, producing the inconsistent results. The solution eliminates the root cause of missed responses by eliminating all message processing, and any unnecessary context switching, during the user-specified wait time. To achieve this, and in contrast to the current methodology, after the broadcast request is issued and the receive socket is set (non-blocking), the process goes to sleep for a time equal to the user-specified wait time. After waking up, it processes the messages from the message queue. Because no time is spent on message processing or the attendant context switching during the sleep period, no messages are missed for the duration of the sleep time (equal to the user-specified wait time). This, together with the increased receive message queue size (below), ensures consistent reach.

The second problem solved was that of ensuring discovery of all participating DB servers in the LAN. Because no messages are processed during the sleep period, all responses must be held until the process wakes up. This is done by allocating an adequately sized receive buffer (message queue). The buffer size is calculated as the size of each response (the datagram has a fixed size) multiplied by the maximum number of possible responses. The maximum number of responses equals the maximum number of participating DB servers on the network segment, since each participating DB server sends exactly one response to the broadcast request. The maximum number of participating DB servers on a network segment equals the maximum of 255 IP addresses on that segment, reduced by one because one of those addresses is used by the initiator of the broadcast request. By allowing for responses from the maximum number of participating DB servers on the LAN, the reach of Search Discovery is maximized.
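The following is a minimal sketch, in C, of the approach described above, using standard BSD-style UDP sockets. The port number (DISCOVERY_PORT), datagram size (RESPONSE_SIZE), and request payload are illustrative placeholders, not values taken from the original implementation; the sketch only shows the "size the receive buffer, broadcast, sleep, then drain" sequence.

/*
 * Sketch of the "broadcast, sleep, then drain" discovery sequence.
 * DISCOVERY_PORT, RESPONSE_SIZE and the request payload are
 * hypothetical placeholders for illustration only.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#define DISCOVERY_PORT 5000   /* hypothetical discovery port              */
#define RESPONSE_SIZE  512    /* hypothetical fixed response datagram size */
#define MAX_SERVERS    254    /* 255 addresses on the segment, minus the
                                 initiator of the broadcast request        */

int discover(unsigned int wait_seconds)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0)
        return -1;

    /* Allow broadcast and make the receive socket non-blocking. */
    int on = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &on, sizeof(on));
    fcntl(sock, F_SETFL, O_NONBLOCK);

    /* Size the receive buffer for the worst case: one fixed-size
       response from every possible DB server on the segment. */
    int rcvbuf = RESPONSE_SIZE * MAX_SERVERS;
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));

    /* Issue the broadcast discovery request. */
    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(DISCOVERY_PORT);
    dest.sin_addr.s_addr = htonl(INADDR_BROADCAST);
    const char request[] = "DISCOVER";
    sendto(sock, request, sizeof(request), 0,
           (struct sockaddr *)&dest, sizeof(dest));

    /* Sleep for the user-specified wait time instead of polling; the
       responses accumulate in the kernel receive buffer meanwhile. */
    sleep(wait_seconds);

    /* After waking up, drain every queued response. */
    int count = 0;
    char buf[RESPONSE_SIZE];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    while (recvfrom(sock, buf, sizeof(buf), 0,
                    (struct sockaddr *)&from, &fromlen) > 0) {
        printf("DB server responded: %s\n", inet_ntoa(from.sin_addr));
        count++;
        fromlen = sizeof(from);
    }

    close(sock);
    return count;   /* number of DB servers discovered */
}

Because the process does no work at all between the broadcast and the wake-up, every response that arrives within the wait time is held in the enlarged kernel receive buffer and is still available when the drain loop runs, which is what yields the consistent and complete results described above.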