Dynamic Address Resolution Caching and Efficiency Issues (Page 1 of 2)

Dynamic address resolution removes the restrictions that we saw in our look at direct mapping, and allows us to easily associate layer two and layer three addresses of any size or structure. The only problem with it is that each address resolution requires us to send an extra message that would not be needed in direct mapping. Worse yet, since we don't know the layer two identity of the recipient, we must use a broadcast message (or at least a multicast), which means that many devices on the local network must expend resources to examine the frame and check which IP address is being resolved.

Sure, sending one extra message may not seem like that big a deal, and the frame doesn't have to be very large, since it contains only a network layer address and some control information. However, when we have to do this for every hop of every datagram transmission, the overhead really adds up. For this reason, while basic dynamic address resolution as described in the previous topic is simple and functional, it's usually not enough. We must add some intelligence to the implementation of address resolution to reduce the performance impact of continual resolutions.

Consider that most devices on a local network send to only a small handful of other physical devices, and tend to do so over and over again. This is a phenomenon known as locality of reference, and is observed in a variety of different areas in the field of computing. If you send a request to an Internet Web site from your office PC, it will need to go first to your company network's local router, so you will need to resolve the router's layer two address. If you later click a link on that site, that request will also need to go to the router. In fact, almost everything you do off your local network probably goes first to that same router (commonly called a default gateway). Having to do a fresh resolution each time is, well, stupid. It would be like having to look up the phone number of your best friend every time you want to call to say hello. (Reminds me of that sketch on Saturday Night Live where the guy had no short-term memory... but I digress.)

To avoid being accused of making address resolution protocols that are, well, stupid, designers always include a caching mechanism. After a device's network layer address is resolved to a data link layer address, the link between the two is kept in the memory of the device for a period of time. When it needs the layer two address the next time, the device just does a quick lookup in its cache. This means that instead of doing a broadcast for every datagram, we only do one for a whole sequence of datagrams.

Caching is by far the most important performance-enhancing tool in dynamic resolution. It transforms what would otherwise be a very wasteful process into one that most of the time is no less efficient than direct mapping. It does, however, add complexity. The cache table entries must be maintained, and there is also the problem that the information in the table may become stale over time: what happens if we change the network layer address or the data link layer address of a device? For this reason, cache entries must be set to expire periodically. The topic on caching in the TCP/IP ARP protocol shows some of the particulars of how these issues are handled.
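To make the caching idea concrete, here is a minimal sketch in Python of the kind of table a resolver might keep: a cache miss triggers the expensive broadcast resolution once, subsequent lookups are answered from memory, and entries expire after a fixed lifetime so stale mappings eventually get re-resolved. All names here (ResolutionCache, broadcast_resolve, the 60-second lifetime) are illustrative assumptions for this guide, not details of any real ARP implementation; the actual behavior is covered in the topic on ARP caching.

import time

# Illustrative sketch only: a tiny layer three -> layer two resolution cache
# with expiring entries. The names and the 60-second lifetime are assumptions
# for demonstration, not taken from a real ARP implementation.

CACHE_LIFETIME = 60.0  # seconds before an entry is considered stale


class ResolutionCache:
    def __init__(self, broadcast_resolve):
        # broadcast_resolve is a caller-supplied function that performs the
        # expensive broadcast resolution (a hypothetical placeholder here).
        self._broadcast_resolve = broadcast_resolve
        self._entries = {}  # layer three address -> (layer two address, timestamp)

    def lookup(self, l3_address):
        """Return the layer two address for l3_address, resolving if needed."""
        entry = self._entries.get(l3_address)
        if entry is not None:
            l2_address, stored_at = entry
            if time.monotonic() - stored_at < CACHE_LIFETIME:
                # Fresh entry: answer from the cache, no broadcast needed.
                return l2_address
            # Stale entry: discard it and fall through to a fresh resolution.
            del self._entries[l3_address]

        # Cache miss (or expired entry): broadcast once, then remember the
        # answer for the whole sequence of datagrams that follows.
        l2_address = self._broadcast_resolve(l3_address)
        self._entries[l3_address] = (l2_address, time.monotonic())
        return l2_address


# Example usage with a stand-in resolver that just fabricates an address.
if __name__ == "__main__":
    def fake_broadcast_resolve(l3_address):
        print(f"broadcasting resolution request for {l3_address}")
        return "00-11-22-33-44-55"  # pretend hardware address

    cache = ResolutionCache(fake_broadcast_resolve)
    cache.lookup("10.0.0.1")   # triggers the (simulated) broadcast
    cache.lookup("10.0.0.1")   # answered from the cache, no broadcast

The design choice to expire entries by age, rather than keep them forever, is what handles the "stale information" problem described above: if a device's network layer or data link layer address changes, the old mapping simply ages out and the next lookup resolves it afresh.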