|
NFS Overview, History, Versions and Standards (Page 2 of 3) Overview of NFS Architecture and General Operation NFS follows the classical TCP/IP client/server model of operation. A hard disk or a directory on a storage device of a particular computer can be set up by an administrator as a shared resource. This resource can then be accessed by client computers, which mount the shared drive or directory, causing it to appear like a local directory on the client machine. Some computers may act as only servers or only clients, while others may be both: sharing some of their own resources and accessing resources provided by others. NFS uses an architecture that includes three main components that define its operation. The External Data Representation (XDR) standard defines how data is represented in exchanges between clients and servers. The Remote Procedure Call (RPC) protocol is used as a method of calling procedures on remote machines. Then, a set of NFS procedures and operations works using RPC to carry out various requests. The separate Mount protocol is used to mount resources as mentioned above. One of the most important design goals of NFS was performance. Obviously, even if you set up a file on a distant machine as if it were local, the actual read and write operations have to travel across a network. Usually this takes more time than simply sending data within a computer, so the protocol itself needed to be as lean and mean as possible. This decision led to some interesting decisions, such as the use of the unreliable User Datagram Protocol (UDP) for transport in TCP/IP, instead of the reliable TCP like most file transfer protocols do. This in turn has interesting implications on how the protocol works as a whole. Another key design goal for NFS was simplicity (which of course is related to performance). 
NFS servers are said to be stateless, which means that the protocol is designed so that servers do not need to keep track of which files have been opened by which clients. This allows requests to be made independently of each other, and allows a server to gracefully deal with events such as crashes without the need for complex recovery procedures. The protocol is also designed so that if requests are lost or duplicated, file corruption will not occur.
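The statelessness and duplicate-safety described above can be sketched as follows. This is a hypothetical toy model, not real NFS code: the point is that each read request carries all the context the server needs (a file handle, an offset, and a byte count), so the server keeps no record of open files, and a request that is retried after a lost UDP datagram simply yields the same result.

```python
# Toy table mapping file handles to file contents (stands in for the
# server's real file system; the handle value 0x2A is arbitrary).
FILES = {0x2A: b"The quick brown fox jumps over the lazy dog"}

def nfs_read(file_handle, offset, count):
    # Stateless: no open/close bookkeeping. The handle, offset, and
    # count fully identify the data, so this operation is idempotent
    # and a client can safely retransmit it after a timeout.
    data = FILES[file_handle]
    return data[offset:offset + count]

first = nfs_read(0x2A, 4, 5)   # b"quick"
retry = nfs_read(0x2A, 4, 5)   # duplicated request: identical result
assert first == retry
```

Contrast this with a stateful design, where the server would track an "open file" cursor per client: a crash or a duplicated request could then leave client and server disagreeing about the cursor position, which is exactly the complexity NFS avoids.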
The TCP/IP Guide (http://www.TCPIPGuide.com) Version 3.0 - Version Date: September 20, 2005 © Copyright 2001-2005 Charles M. Kozierok. All Rights Reserved.