NFS/Kerberos

From CSCWiki

Revision as of 04:35, 27 March 2008

Our user-data is stored in /users on ginseng in a RAID 1 mirror running on two 400 GB SATA disks. Ginseng runs Solaris 10 and uses ZFS as the filesystem for /users. All of our systems NFSv4 mount /users.

We have also explored additional methods for replicating user-data, including AFS, Coda, and DRBD, but have found all to be unusable or problematic.

NFS

NFSv3 has been in long-standing use by the CSC, as well as by almost everyone else on the planet. NFSv4 mounts of /users to CSCF are currently in the works. Unfortunately, NFS has a number of problems: clients hang badly when disconnected from the NFS server, and prior to NFSv4 there was no client-side caching, resulting in poor performance with large files.

On November 8, 2007, we experienced a major NFS failure. An analysis of the logs indicated that the fault was likely caused by NFSv4-specific code. As a result, we have returned to mounting with NFSv3.
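
For reference, an NFSv3 client mount of /users might look like the following sketch (the mount options here are illustrative assumptions, not our actual client configuration):

```shell
# Hypothetical one-off NFSv3 mount of /users from ginseng;
# the option set shown is an example, not our recorded config.
mount -t nfs -o vers=3,soft,intr,nosuid,nodev ginseng:/users /users
```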

ZFS

On March 15, 2008, we transitioned to ZFS.

Overview

Each user directory is stored in a separate zfs file system.

To create a user directory:

zfs create users/$USER

To delete a user directory:

zfs destroy users/$USER

To move/rename a user directory:

zfs rename users/$USER_OLD users/$USER_NEW
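
Putting the above together, provisioning a new member might look like this sketch (the username, quota size, and chown step are illustrative assumptions):

```shell
# Create the member's filesystem, set a quota, and hand over ownership.
# "ctdalek" and the 2G quota are hypothetical examples.
zfs create users/ctdalek
zfs set quota=2G users/ctdalek
chown ctdalek:ctdalek /users/ctdalek
```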

NFS (server-side)

To disable atime, devices, and setuid:

zfs set atime=off users
zfs set devices=off users
zfs set setuid=off users

To export over NFS using host-based access-control:

zfs set sharenfs="sec=sys,rw=$ACCESS_LIST,nosuid" users

where ACCESS_LIST may be a colon-separated list of any of the following:

  • hostname (e.g. glucose-fructose.csclub.uwaterloo.ca)
  • netgroup (e.g. in LDAP)
  • domain name suffix (e.g. .csclub.uwaterloo.ca)
  • network (e.g. @129.97.134.0/24)

A minus sign (-) may prefix one of the above to indicate that access is to be denied. 'man share_nfs' has full details.

To make umask work sanely with ACLs:

zfs set aclmode=passthrough users

To make ACL inheritance work sanely:

zfs set aclinherit=passthrough users

Here's what we set this to currently:

rw=acesulfame-potassium:artificial-flavours:ascorbic-acid:caffeine:caramel-colour:citric-acid:dextroamphetamine-saccharate:\
  glucose-fructose:natural-flavours:ozone:perpugilliam:phosphoric-acid:potassium-citrate:sodium-citrate:taurine,root=caffeine

The NFSv4 domain is auto-detected by default, although to be safe, you can explicitly set it in /etc/default/nfs:

NFSMAPID_DOMAIN=csclub.uwaterloo.ca

NFS (client-side)

You should install the autofs package. Then edit /etc/auto.master and append the following:

/users    /etc/auto.users

Create /etc/auto.users with content:

* -fstype=nfs4,soft,intr,nosuid,nodev disk:/users/&

In order to support NFSv4 ACLs with getfacl/setfacl, you should apply the NFSv4 ACL patch. Alternatively, you can compile the nfs4_getfacl/nfs4_setfacl utilities.
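
Once the utilities are built, inspecting and editing ACLs from a client might look like the following sketch (the path and the principal in the ACE are hypothetical examples):

```shell
# Show the NFSv4 ACL on a home directory (path is an example).
nfs4_getfacl /users/ctdalek

# Add an ACE granting read access; ACEs take the form
# type:flags:principal:permissions (the principal here is made up).
nfs4_setfacl -a A::calum@csclub.uwaterloo.ca:R /users/ctdalek
```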

Quota

To query quota:

zfs get quota users/$USER

To set quota:

zfs set quota=$SIZE users/$USER

where $SIZE could be 2.5G, 100M, etc.

To set no quota:

zfs set quota=none users/$USER

Snapshots

Snapshots can be accessed from /users/$USER/.zfs/snapshot/.

To create a snapshot:

zfs snapshot users/$USER@$SNAPSHOT

To delete a snapshot:

zfs destroy users/$USER@$SNAPSHOT

To rename a snapshot:

zfs rename users/$USER@$SNAPSHOT_OLD users/$USER@$SNAPSHOT_NEW

To list snapshots:

zfs list -t snapshot -r users
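
Because snapshots are exposed read-only under .zfs/snapshot, restoring a single file is just a copy. A sketch (the snapshot name and file path are hypothetical):

```shell
# Pull one file back out of an earlier snapshot;
# "before-upgrade" and the paths are examples only.
cp /users/ctdalek/.zfs/snapshot/before-upgrade/thesis.tex \
   /users/ctdalek/thesis.tex
```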

Miscellaneous

You should occasionally scrub (error-check) the zpool:

zpool scrub users

For small files, ZFS+NFS performance really sucks. You can work around this by setting zil_disable, which turns off the ZFS intent log (at the cost of synchronous write guarantees):

echo 'set zfs:zil_disable=1' >> /etc/system

To see zpool status and statistics:

zpool status -v users
zpool iostat -v users