NFS Admin and Security
Steve Nuchia, Sravani Motati, Ashish Katyarmal

NFS Overview
[Diagram: server and client each hold a filesystem tree (/, home, usr, bin, ...); the server exports a subtree and the client mounts it into its own tree.]
NFS: Export Subtree
Windows: Share Folder
NFS: Mount a remote filesystem
Windows: map a shared drive
In Windows you get a new drive letter. In Unix, the imported file tree can be mounted as a subtree anywhere in your own tree.
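As a sketch, with a hypothetical server neon exporting /export/home (server name, export path, and mount point are all made up for illustration):

```
mkdir /mnt/home
mount -t nfs neon:/export/home /mnt/home
```

The mounted tree then appears under /mnt/home like any local directory.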
Three main configuration files: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny.
Only /etc/exports is needed for NFS to work,
but to make the sharing secure we need the other two.
directory machine1(option11,option12) machine2(option21,option22)
The directory you want to share.
All directories under it within the same filesystem will be shared as well.
machine1 and machine2
Client machines that will have access to the directory. The machines may be listed by IP address or by DNS name.
The option list for each machine describes what kind of access that machine will have.
ro: the directory is shared read-only (the default).
rw: the client machine will have read and write access to the directory.
no_root_squash: root on the client machine will have the same level of access to the files on the system as root on the server.
This has serious security implications, but it may be needed when administrative work on the client machine involves the exported directories.
If only part of a volume is exported, a routine called subtree checking verifies that a file requested by the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check (the no_subtree_check option) will speed up transfers.
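Putting these options together, an /etc/exports entry might look like this (path and address are illustrative):

```
# Entire volume exported read-write; subtree checking disabled for speed
/export  192.168.0.1(rw,no_subtree_check)
```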
A Version 2 NFS server will tell a client machine that a file write is complete even though the filesystem may not have synced it to disk.
The default behavior may therefore cause file corruption if the server reboots.
The sync option forces the filesystem to sync to disk every time NFS completes a write operation.
Version 3 NFS has a commit operation that the client can call, which actually results in a disk sync on the server end.
Suppose we have two client machines, slave1 and slave2, with IP addresses 192.168.0.1 and 192.168.0.2, respectively. To share software binaries and home directories with these machines, the entries in /etc/exports will look like:
/usr/local 192.168.0.1(ro) 192.168.0.2(ro)
/home 192.168.0.1(rw) 192.168.0.2(rw)
Whole networks can be listed instead of individual hosts, e.g. /usr/local 192.168.0.0/255.255.255.0(ro)
These simplifications could pose a security risk if there are machines in the netgroup or local network that cannot be completely trusted.
These files specify which computers on the network may use services on this machine.
Each line is an entry listing a service and a set of machines.
When the server gets a request from a client, it first checks /etc/hosts.allow and then /etc/hosts.deny.
In addition to controlling access to services handled by inetd, these files can also control access to NFS by restricting connections to its daemons.
The first daemon to restrict access to is the portmapper. Restricting it is the best first defense, but it is not enough if an intruder knows how to find the other daemons directly.
Restricting the portmapper will also restrict NIS, but NFS and NIS are usually restricted in similar ways anyway.
It is a good idea to explicitly deny access to hosts that should not have access.
Adding the entry ALL:ALL to /etc/hosts.deny causes any service that consults these files to deny access to all hosts unless explicitly allowed (this is the more secure behavior).
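A minimal /etc/hosts.deny implementing this default-deny policy:

```
# /etc/hosts.deny -- deny every service to every host that is not
# explicitly listed in /etc/hosts.allow
ALL: ALL
```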
A typical entry in /etc/hosts.allow looks like:
service: host [or network/netmask] , host [or network/netmask]
Here host is the IP address of a potential client
(it may be possible in some versions to use the DNS name of the host, but this is strongly deprecated).
Suppose we want to allow access to slave1.foo.com and slave2.foo.com, and that the IP addresses of these machines are 192.168.0.1 and 192.168.0.2, respectively. We could add the following entry to /etc/hosts.allow:
portmap: 192.168.0.1, 192.168.0.2
For recent versions of nfs-utils we would also add the following entries:
lockd: 192.168.0.1, 192.168.0.2
rquotad: 192.168.0.1, 192.168.0.2
mountd: 192.168.0.1, 192.168.0.2
statd: 192.168.0.1, 192.168.0.2
The NFS server is now configured; before starting it we need to check the following:
1) Are the appropriate packages installed?
This consists mainly of a new kernel and a new version of the nfs-utils package
2) Is TCP/IP networking functioning correctly?
If telnet and FTP are working, then chances are TCP networking is fine.
NFS can be started simply by rebooting the machine; the startup scripts should detect the /etc/exports file and start the daemons.
To check this, query the portmapper with the command rpcinfo -p to find out what services it is providing.
It should look something like:
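A representative rpcinfo -p listing (the versions, protocols, and port numbers shown here are illustrative and will vary from system to system):

```
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100003    2   udp   2049  nfs
    100005    1   udp    635  mountd
    100021    1   udp   1026  nlockmgr
    100024    1   udp   1024  status
```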
If we do not see at least a line that says "portmapper", a line that says "nfs", and a line that says "mountd", then we need to backtrack and try again to start up the daemons.
If we see these services listed, then the server should be ready to set up NFS clients to access files.
If this does not work, or if we cannot reboot the machine, then we can start the daemons by hand to run the NFS services.
Portmap or rpc.portmap
NFS serving is taken care of by five daemons: rpc.nfsd, which does most of the work; rpc.lockd and rpc.statd, which handle file locking; rpc.mountd, which handles the initial mount requests; and rpc.rquotad, which handles user file quotas on exported volumes.
The daemons are all part of the nfs-utils package, and may be either in the /sbin directory or the /usr/sbin directory.
If the distribution does not include them in the startup scripts, then we need to add them, configured to start in the following order:
rpc.statd, rpc.lockd (if necessary), rpc.rquotad
If we change the /etc/exports file, the changes may not take effect immediately. We should run the command exportfs -ra to force nfsd to re-read /etc/exports. If the exportfs command is not available, we can instead send nfsd a HUP signal (kill -HUP).
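For example (the nfsd process name and the use of pidof are illustrative; both vary between distributions):

```
# After editing /etc/exports:
exportfs -ra
# Fallback when exportfs is unavailable:
kill -HUP $(pidof rpc.nfsd)
```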
#%defaultvfs jfs nfs
#nfs 2 /sbin/helpers/nfsmnthelp none remote
Delete the # signs.
$ cat /proc/filesystems
If nfs is missing, compile your own kernel with NFS enabled, or type insmod nfs to load the module if it exists.
* For old kernel versions, try to mount a local directory; if the mount fails with the error message "fs type nfs not supported by kernel", then build a new kernel with NFS enabled.
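The /proc/filesystems check above can be scripted; a minimal sketch (loading the module requires root and is only hinted at in a comment):

```shell
# Report whether the running kernel lists nfs as a known filesystem type.
# (If it does not, either build a kernel with NFS enabled or load the
# module, e.g. "modprobe nfs" as root.)
if grep -qw nfs /proc/filesystems; then
    echo "nfs: supported by the running kernel"
else
    echo "nfs: not listed in /proc/filesystems"
fi
```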
Start the NFS Daemons at System Startup:
Start the NFS Daemons individually:
Start all the NFS Daemons:
*To start the nfsd and rpc.mountd daemons, the /etc/exports file must exist.
showmount -e ServerName
# mount -t nfs -o options nfs_volume mount_point
root@helium>> mount -t nfs -o nosuid,hard,intr neon:/usr/local /usr/local
Edit /etc/fstab on client “helium” with the entry:
# volume mount point type options dump fsckorder
neon:/usr/local /usr/local nfs nosuid,hard,intr 0 0
-p Print the list of mounted file systems
-a Mount all the file systems described in /etc/fstab
-n Mount without making an entry in /etc/mtab
-v Display a message for each file system mounted
-t Specify a file system type
-r Mount the specified file system read-only
-o Specify a comma-separated list of file system options
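Several of these flags can be combined; for example (illustrative):

```
# Verbosely mount every nfs entry listed in /etc/fstab
mount -a -t nfs -v
```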
If a program resident on the remote filesystem is setuid and it is run on the client, it will have the privileges associated with that userid (perhaps root) on the client machine. Usually allowing this to happen is a bad idea. Setting the nosuid option prevents it.
Some older kernels rely on the idea that root can write everywhere. Setuid programs that behave this way can potentially be used to change the apparent uid on NFS servers that do uid mapping, so this behavior has to be disabled.
/usr/sbin/automount [ -m ] [ -n ] [ -T ] [ -v ] [ -D name=value ] [ -f MasterFile ] [ -M MountDirectory ] [ -tl Duration ] [ -tm Interval ] [ -tw Interval ] Directory MapName ... [ -MountOption [ ,MountOption ] ... ]
Its advantages include:
Solving the UID and trust problems