NFS/Kerberos
Our user-data is stored in /export/users on artificial-flavours, in a software RAID 1 array on two 400 GB SATA disks. We export /users via NFSv3 and NFSv4, and all of our systems mount /users over NFSv3.
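As a rough sketch, the server-side export and a client-side NFSv3 mount might look something like the following; the network range and mount options here are illustrative, not our actual configuration:

<pre>
# /etc/exports on artificial-flavours (illustrative network range and options)
/export/users   192.168.0.0/24(rw,sync,no_subtree_check)

# NFSv3 mount from a client (illustrative)
mount -t nfs -o vers=3 artificial-flavours:/export/users /users
</pre>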
We are now planning to run Solaris 10 as the disk server's operating system, with [http://en.wikipedia.org/wiki/ZFS ZFS] as the file system, and exporting via NFSv4. See [[Solaris 10]] for current information.
We have also explored additional methods for replicating user-data, including AFS, Coda, and DRBD, but have found all to be unusable or problematic.
NFS
NFSv3 has been in long-standing use by the CSC, as well as by almost everyone else on the planet. NFSv4 mounts of /users to CSCF are currently in the works. Unfortunately, NFS has a number of problems. Clients become desperately unhappy when disconnected from the NFS server. Also, prior to NFSv4 there was no way to cache on the client side, resulting in poor performance with large files such as virtual machine hard drives (note: caching has yet to be implemented in the CSC).
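For reference, client mounts of the two flavours might look something like the fstab lines below; the paths and options are illustrative rather than our exact configuration:

<pre>
# Illustrative /etc/fstab entries on a client -- not our exact paths or options.
# NFSv3 mount (what all of our systems currently use):
artificial-flavours:/export/users  /users  nfs   vers=3,hard,intr  0  0
# NFSv4 mount (the style being worked on; NFSv4 paths are relative to the
# server's pseudo-root, so the export path may differ):
artificial-flavours:/users         /users  nfs4  hard,intr         0  0
</pre>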
On November 8, 2007, we experienced a major NFS failure. An analysis of the logs indicated that the fault was likely caused by NFSv4-specific code. As a result, we have returned to mounting with NFSv3.
ZFS
We plan to use ZFS in mirror mode. We also plan to implement rolling snapshots, which can be accessed via /users/$USER/.zfs/snapshot/.
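A sketch of what the planned mirror and snapshot setup could look like on Solaris 10, assuming one ZFS dataset per user; the pool name, device names, snapshot name, and username below are hypothetical:

<pre>
# Create a two-disk mirrored pool (hypothetical device names)
zpool create users mirror c1t2d0 c1t3d0

# One dataset per user, so each user sees their own .zfs directory
zfs create users/exampleuser

# A "rolling" snapshot, taken recursively across all user datasets
zfs snapshot -r users@2008-01-25

# Snapshots then appear under each user's home directory
ls /users/exampleuser/.zfs/snapshot/
</pre>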