NFS/Kerberos

Our user data is stored in /export/users on artificial-flavours, on a software RAID 1 array of two 400 GB SATA disks. We export /users via both NFSv3 and NFSv4, and all of our systems mount /users over NFSv4. We have also explored other methods for replicating user data, described below.
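
For reference, here is a minimal sketch of what the server-side export looks like, assuming a Linux NFS server; the client pattern and options are placeholders rather than our actual configuration:

  # /etc/exports on the NFS server (illustrative values)
  # fsid=0 marks /export as the NFSv4 pseudo-root, so v4 clients
  # see /export/users as server:/users
  /export        *.example.com(rw,sync,fsid=0,no_subtree_check)
  /export/users  *.example.com(rw,sync,no_subtree_check)

After editing /etc/exports, run exportfs -ra to re-export the list.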

NFS

NFSv3 has been in long-standing use by the CSC, as well as by almost everyone else on the planet. NFSv4 mounts of /users to CSCF are currently in the works. Unfortunately, NFS has a number of problems. Clients become desperately unhappy when disconnected from the NFS server. Also, prior to NFSv4 there was no client-side caching, resulting in poor performance with large files such as virtual machine hard drive images (note: caching has yet to be implemented at the CSC).
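
The two mount styles look roughly like this on a client, assuming the export sketched above; the server name and options are placeholders:

  # /etc/fstab entries (illustrative; use one or the other)
  # NFSv3: the client names the full export path
  nfs-server:/export/users  /users  nfs   rw,hard,intr,vers=3  0  0
  # NFSv4: the client mounts relative to the pseudo-root
  nfs-server:/users         /users  nfs4  rw,hard,intr         0  0

The hard option is what makes clients hang rather than error out when the server disappears; soft would return errors instead, at the risk of silently corrupting writes in flight.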

Coda

Coda is a network filesystem that the CSC has explored several times. Developed at Carnegie Mellon, Coda provides advantages over NFS such as read-write replication and disconnected operation. Unfortunately, Coda is a one-way ticket to madness for our users and systems administrators.

The Coda documentation is unfortunately quite archaic, so it is best to visit the wiki, as most information regarding Coda server setup is incorrect in some way. The volume creation and other scripts are very touchy bash scripts, so be sure to double-check everything you feed in, or terrible, mysterious things will happen. The Coda client, luckily, seems to be sane, though it would be wise to look at the manpage for cfs and learn how to work with the ACLs.
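
A short sketch of working with the ACLs through cfs, assuming a mounted Coda realm; the directory and username are placeholders:

  # list the ACL on a directory (cfs la is the short form)
  cfs listacl /coda/users/jdoe
  # grant a user the full rights string: read, lookup, insert,
  # delete, write, lock, administer (the same letters AFS uses)
  cfs setacl /coda/users/jdoe jdoe rlidwka

Note that rights attach to directories, not individual files, which is part of why they mesh poorly with our setup (see below).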

Coda unfortunately seems ill-suited to the CSC environment, as the manual conflict resolution and directory-based ACLs don't integrate well with our existing way of doing things. Coda itself is also quite temperamental and not very stable.

External Links

  • CMU Coda Site [1]
  • Coda Wiki [2]
  • A discussion on coda/afs acls [3]

DRBD

In our traditional NFS setup, the filesystems are protected only by RAID 1 on the NFS server. If the NFS server (caffeine) goes down, all the CSC systems become desperately unhappy. A proposed way to solve this problem is with network-replicated block devices, which are traditionally used in clustering systems. DRBD is a network block device system we used for a little while (until it exploded). It is much like RAID 1 over the network. Unfortunately, it only supports exporting the device to the nodes that are actually replicating it, not to arbitrary clients.
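
A minimal sketch of a two-node DRBD resource, assuming DRBD 8; the hostnames, backing partitions, and addresses are placeholders:

  # /etc/drbd.conf (illustrative)
  resource users {
      protocol C;                  # fully synchronous replication
      on alpha {
          device    /dev/drbd0;
          disk      /dev/sda7;     # local backing partition
          address   10.0.0.1:7788;
          meta-disk internal;
      }
      on beta {
          device    /dev/drbd0;
          disk      /dev/sda7;
          address   10.0.0.2:7788;
          meta-disk internal;
      }
  }

After drbdadm up users on both nodes, one node is promoted with drbdadm primary users, a filesystem is created on /dev/drbd0, and that filesystem is exported over NFS as usual. Only the primary may use the device, which is exactly the limitation noted above.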

GNBD

GNBD (Global Network Block Device, part of the Red Hat Cluster Suite) is similar to DRBD but also supports exporting the device to plain clients. It requires integration with cluster fencing, so that a misbehaving node can be cut off before it corrupts the shared device.
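
A rough sketch of a GNBD export and import, assuming the Red Hat Cluster Suite tools; the device and names are placeholders:

  # on the server: start the daemon and export a block device
  gnbd_serv
  gnbd_export -d /dev/sdb1 -e users
  # on a client: load the module and import the server's exports;
  # the device then appears as /dev/gnbd/users
  modprobe gnbd
  gnbd_import -i gnbd-server

Fencing matters here because multiple machines can reach the same block device: a hung node must be fenced off from the device before the cluster lets anyone else recover its state.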