NFS/Kerberos
Ginseng will be our new user-data server, running Solaris 10 and exporting data via ZFS and NFSv4.
Our user-data is stored in /export/users on artificial-flavours in a RAID 1 software array running on two 400 GB SATA disks. We export /users via NFSv3 and NFSv4. All of our systems mount /users via NFSv3.
We have also explored additional methods for replicating user-data, including AFS, Coda, and DRBD, but have found all to be unusable or problematic.
NFS
NFSv3 has been in long-standing use by the CSC, as well as by almost everyone else on the planet. NFSv4 mounts of /users to CSCF are currently in the works. Unfortunately, NFS has a number of problems. Clients become desperately unhappy when disconnected from the NFS server, and prior to NFSv4 there was no way to cache on the client side, which resulted in poor performance with large files.
On November 8, 2007, we experienced a major NFS failure. An analysis of the logs indicated that the fault was likely caused by NFSv4-specific code. As a result, we have returned to mounting with NFSv3.
ZFS
Overview
Each user directory is stored in a separate ZFS file system.
To create a user directory:
zfs create users/$USER
To delete a user directory:
zfs destroy users/$USER
To move/rename a user directory:
zfs rename users/$USER_OLD users/$USER_NEW
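For example, provisioning a home directory for a hypothetical new member (the username ctdalek is illustrative only) might look like:
# create the per-user file system and confirm it is mounted under /users
zfs create users/ctdalek
zfs list users/ctdalek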
NFS (server-side)
To disable atime, devices, and setuid:
zfs set atime=off users
zfs set devices=off users
zfs set setuid=off users
To export over NFS using host-based access-control:
zfs set sharenfs="sec=sys,rw=$ACCESS_LIST,nosuid" users
where ACCESS_LIST may be a colon-separated list of any of the following:
- hostname (e.g. glucose-fructose.csclub.uwaterloo.ca)
- netgroup (e.g. in LDAP)
- domain name suffix (e.g. .csclub.uwaterloo.ca)
- network (e.g. @129.97.134.0/24)
A minus sign (-) may prefix one of the above to indicate that access is to be denied. 'man share_nfs' has full details.
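For example, a hypothetical access list combining a domain suffix, a network, and a denied host (all names illustrative) would look like:
zfs set sharenfs="sec=sys,rw=.csclub.uwaterloo.ca:@129.97.134.0/24:-untrusted.csclub.uwaterloo.ca,nosuid" users
zfs get sharenfs users    # verify the resulting share options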
NFS (client-side)
You should install the autofs package. Then edit /etc/auto.master and append the following:
/users /etc/auto.users
Create /etc/auto.users with content:
* -fstype=nfs4,soft,intr,nosuid,nodev disk:/users/&
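After editing the maps, reload autofs and check that a home directory automounts on access (the init command varies by distribution, and the path is illustrative):
/etc/init.d/autofs reload
ls /users/ctdalek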
In order to support NFSv4 ACLs with getfacl/setfacl, you should apply the NFSv4 ACL patch. You can also compile the nfs4_getfacl/nfs4_setfacl utils.
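Once the utilities are built, a quick sanity check over the NFSv4 mount (path illustrative) is:
nfs4_getfacl /users/ctdalek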
Quota
To query quota:
zfs get quota users/$USER
To set quota:
zfs set quota=$SIZE users/$USER
where $SIZE could be 2.5G, 100M, etc.
To set no quota:
zfs set quota=none users/$USER
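To get a quick overview of usage against quota for all user file systems:
zfs list -o name,used,avail,quota -r users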
Snapshots
Snapshots can be accessed from /users/$USER/.zfs/snapshot/.
To create a snapshot:
zfs snapshot users/$USER@$SNAPSHOT
To delete a snapshot:
zfs destroy users/$USER@$SNAPSHOT
To rename a snapshot:
zfs rename users/$USER@$SNAPSHOT_OLD users/$USER@$SNAPSHOT_NEW
To list snapshots:
zfs list -t snapshot -r users
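To restore a single file, copy it out of the snapshot directory; to revert an entire home directory to its most recent snapshot, use rollback (user, snapshot, and file names below are illustrative):
cp /users/ctdalek/.zfs/snapshot/nightly/thesis.tex /users/ctdalek/
zfs rollback users/ctdalek@nightly
Note that rollback discards all changes made since the snapshot was taken.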
Miscellaneous
You should occasionally scrub (error-check) the zpool:
zpool scrub users
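For example, a root crontab entry that scrubs the pool early every Sunday (the schedule is only a suggestion) could be:
0 4 * * 0 /usr/sbin/zpool scrub users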
For small files, ZFS+NFS performance really sucks. You can work around this by disabling the ZIL (zil_disable), though this weakens synchronous-write guarantees over NFS:
echo 'set zfs:zil_disable=1' >> /etc/system
To see zpool status and statistics:
zpool status -v users
zpool iostat -v users
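If your Solaris 10 release is recent enough to support it, the pool also keeps a log of administrative commands, which is handy for reconstructing what changed:
zpool history users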