NFS/Kerberos

From CSCWiki
 
Our user-data is stored in /users on [[Machine_List#psilodump|psilodump]] on an iSCSI volume exported to [[Machine_List#aspartame|aspartame]], which exports /users/ via NFS. Plans to add a layer of LVM abstraction to support regular snapshot backups of /users/ are in place, but not yet fully implemented. All of our systems NFS mount /users, and most of them do so using [[Kerberos]] for authentication.
__NOTOC__
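The Kerberos-secured export described above can be sketched as a hypothetical /etc/exports entry on aspartame. The hostname pattern and option set here are illustrative assumptions, not our live configuration:

```
# /etc/exports sketch on aspartame (illustrative; not the actual config)
# sec=krb5p:krb5i:krb5 offers Kerberos with privacy, integrity, or
# authentication-only, in order of preference; a client without a keytab
# cannot mount at all with only these flavours listed.
/users  *.csclub.uwaterloo.ca(rw,sec=krb5p:krb5i:krb5,no_subtree_check)
```

After editing the file, running exportfs -ra on the server re-reads the export table without restarting the NFS service.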
 
 
We have also explored additional methods for replicating user-data, including AFS, Coda, and DRBD, but have found all to be unusable or problematic.
   
 
= NFS =
   
NFSv3 has been in long-standing use by the CSC, as well as by almost everyone else on the planet. NFSv4 mounts of /users to CSCF are currently in the works. Unfortunately, NFS has a number of problems. Clients become desperately unhappy when disconnected from the NFS server. Also, prior to NFSv4 there was no way to cache on the client side, resulting in poor performance with large files.
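On the client side, pinning the protocol version is just a mount option; a hypothetical /etc/fstab line (server path and option set assumed) might read:

```
# /etc/fstab sketch (illustrative): vers=3 forces NFSv3, sec=krb5 requests
# Kerberos authentication over RPCSEC_GSS; hard makes I/O block rather than
# error out while the server is unreachable.
aspartame:/users  /users  nfs  vers=3,sec=krb5,hard  0  0
```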
 
On November 8, 2007, we experienced a major NFS failure. An analysis of the logs indicated that the fault was likely caused by NFSv4-specific code. As a result, we have returned to mounting with NFSv3.
In November 2015, we made another attempt at mounting with NFSv4 in the office. This was a huge time suck and failed sporadically. As a result, we have returned to mounting with NFSv3. NFSv4 ACLs/mapping seem to be the culprit. NFSv4, '''just not ready'''.
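For anyone retrying this, NFSv4's owner/group mapping is driven by /etc/idmapd.conf on both client and server. The sketch below is an assumption about what a matching configuration would look like; the Domain value is the critical part, since a client/server mismatch makes every file appear owned by the Nobody-User:

```
# /etc/idmapd.conf sketch (Domain value is an assumption)
[General]
Domain = csclub.uwaterloo.ca

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
```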
== Troubleshooting ==
* If NFS refuses to mount, with a message similar to "Incorrect mount option was specified", ensure that the "nfs-common" service is running. This is required for Kerberos authentication with NFS.
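A quick client-side sanity check (unit and daemon names assumed to match Debian's nfs-common packaging):

```
# Confirm the helpers shipped in nfs-common are up; rpc.gssd in particular
# negotiates the Kerberos context for sec=krb5* mounts.
systemctl status nfs-common
ps aux | grep '[r]pc.gssd'   # the bracket trick stops grep matching itself
```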
= ZFS =
On March 15, 2008, we transitioned to ZFS. This move has since been reversed; details are preserved in [http://wiki.csclub.uwaterloo.ca/User-data?oldid=2331 a previous revision of this page].
[[Category:Services]]
[[Category:Software]]

Latest revision as of 01:20, 4 December 2015
