The simplest approach is to pick a fairly small fixed delay, such as one second, and stick with it. The problem is that this can congest your network with useless traffic when there is a problem on the LAN or on the remote machine, and the added traffic may only make the problem worse.
A better technique, described with source code in "UNIX Network Programming" by W. Richard Stevens (see 1.6 Where can I get source code for the book [book title]?), is to use an adaptive timeout with exponential backoff. This technique keeps statistics on how long messages take to reach a host and adjusts the timeout accordingly. It also doubles the timeout each time it expires, so as not to flood the network with useless datagrams. Richard has kindly posted the source code for the book on the web; see his home page at http://www.kohala.com/~rstevens.
The performance of TCP depends on its ability to estimate the mean round-trip time on a connection. The best way to think of the problem is to imagine a sequence of round-trip measurements arriving over time. TCP uses the history of measurements to estimate the current round-trip delay, and derives a retransmission timeout from that estimate. Because the round-trip delay varies over time, TCP weights recent measurements more heavily than older ones. However, because individual measurements can fluctuate wildly from the norm when congestion occurs, TCP cannot ignore the history of measurements completely.