SHARED RTO TIMER MECHANISM FOR TCP
Original Publication Date: 2000-May-01
Included in the Prior Art Database: 2002-Sep-17
TCP detects packet loss by using a dynamically adjustable retransmission timeout (RTO) timer. The RTO is an estimate of how long it will take to receive an acknowledgment for a transmitted segment given current network conditions. For peak TCP performance, an accurate determination of the correct RTO is important. An RTO that is too short will be affected by small variations in delay: it will time out prematurely and cause unnecessary segment retransmissions. An RTO that is too large will cause a TCP implementation to wait longer than necessary before detecting segment loss, which reduces throughput.