New NetApp
At some point in 2017, CSCF and MFCF donated us their FASXXXX NetApp filers. These filers are to replace the FAS3000 filers currently in use.
Additionally, since we were approaching maximum disk capacity, the Math Endowment Fund funded a new 24x2TB disk shelf to go with the new filers.
NetApp Support + Documentation
As the filers were decommissioned by both CSCF and MFCF, there is no vendor support for them.
Official NetApp documentation is available at https://csclub.uwaterloo.ca/~syscom/netapp-docs/.
We previously had access to full information about the NetApp filers on the NetApp support site; unfortunately, that access stopped working at some point. The information provided includes the license keys. We have a copy of the license keys for one of the filers (FS00) but not the other. Someone should ask CSCF or MFCF if they have this information recorded somewhere.
Physical Installation
Both of the NetApp filers are installed in the MC 3015 machine room. One filer and two disk shelves are located in rack E. The other filer was installed in rack F.
For simplicity, we decided to only use one of the filers. We haven’t decided yet what to do with the other filer.
Networking
FS00 is connected via two 1Gbps links to mc-rt-3015-mso-a using LACP. Therefore, traffic should be balanced between the two connections. If one of the connections goes down, the NetApp will continue to function with just the one connection.
Power
It is important that we keep the NetApp filer + disk shelves running as long as possible. At the time of installation, the UPS in rack E (mc-3015-e1-ups1) was dedicated to critical services (networking, network file shares and web hosting).
Configuration
You can SSH into the NetApp from dextrose by running ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oCiphers=+3des-cbc root@fs00.csclub.uwaterloo.ca.
If you need information about the NetApp, run sysconfig -a on the NetApp.
Modifying /etc on the NetApp
The easiest way to change configuration on the NetApp is to mount its system directory on a different machine (only aspartame or dextrose are allowed to mount it).
mkdir /mnt/fs00
mount -t nfs -o vers=3,sec=sys fs00.csclub.uwaterloo.ca:/vol/vol0 /mnt/fs00
The NetApp system directory is currently mounted on dextrose, at /mnt/fs00.
Networking
The NetApp is configured in VLAN 530 (CSC Storage).
Here is the networking configuration in etc/rc:
# create lacp link
ifgrp create lacp csc_storage -b ip e0a e0b
ifconfig csc_storage inet 172.19.168.35 netmask 255.255.255.224 mtusize 1500
ifconfig csc_storage inet6 fd74:6b6a:8eca:4903:c5c::35 prefixlen 64
route add default 172.19.168.33 1
route add inet6 default fd74:6b6a:8eca:4903::1 1
routed on
options dns.domainname csclub.uwaterloo.ca
options dns.enable on
options nis.enable off
savecore
The CSC DNS servers are configured in etc/hosts:
nameserver 2620:101:f000:4901:c5c::4
nameserver 2620:101:f000:7300:c5c::20
nameserver 129.97.134.4
nameserver 129.97.18.20
nameserver 129.97.2.1
nameserver 129.97.2.2
TODO: The NetApp has a dedicated management port. We should take advantage of this and connect that directly to a machine which only the Systems Committee can access. Configuring this port should disable SSH via the non-management ports (this may need additional configuration).
Disks
There are two disk shelves connected to the FS00 NetApp.
- 14x136GB 10,000 RPM FibreChannel disks
  - This shelf was left over from our old NetApp system and was originally used for testing.
  - (ztseguin) I can’t remember, but I don’t think all disks are present.
- DS4243: 24x2TB 7,200 RPM SATA disks
  - Funded by the Math Endowment Fund (MEF)
  - Purchased from Enterasource in Winter 2018
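If you need to confirm which disks are actually present and assigned (see the note above), the filer can report this itself. A quick sketch using standard Data ONTAP 7-mode commands, run on the NetApp:
# show every disk along with its shelf, bay and ownership
disk show -v
# show the RAID layout: which disks belong to which aggregate,
# plus spares and failed disks
sysconfig -r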
Aggregates
All aggregates are configured with RAID-DP.
Note: any other aggregate on the NetApp is for testing only.
aggr0
NetApp system aggregate. Disks assigned to this aggregate are located on the old disk shelf.
Volumes:
- vol0
aggr_users
Aggregate dedicated to user home directories.
Volumes:
- users
aggr_misc
Aggregate for miscellaneous purposes.
Volumes:
- music
- backup
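For reference, this is roughly how aggregates like the ones above are inspected and created under Data ONTAP 7-mode. The aggregate name and disk count below are hypothetical, for illustration only:
# list existing aggregates, their RAID type and member disks
aggr status -v
# hypothetical example: create a new RAID-DP aggregate from 16 spare disks
aggr create aggr_example -t raid_dp 16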
Volumes
Note: any other volume on the NetApp is for testing only.
vol0
NetApp system volume.
users
For user home directories. Each user is given a quota of 12GB.
Snapshots:
- 12 hourly, 4 nightly and 2 weekly
music
For music.
Snapshots:
- 2 nightly and 16 weekly
backup
For backups of LDAP and Kerberos.
Snapshots:
- 2 nightly and 16 weekly
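For reference, a flexible volume like the ones above is listed and created roughly as follows under 7-mode. The volume name and size are hypothetical:
# list all volumes and their states
vol status
# hypothetical example: create a 100GB flexible volume on aggr_misc
vol create vol_example aggr_misc 100g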
Exporting Volumes
In general, sec=sys should only be exported to MC VLAN 530 (172.19.168.32/27, fd74:6b6a:8eca:4903::/64). This VLAN is only connected to trusted machines (NetApp, CSC servers in the MC 3015 or DC 3558 machine rooms).
All other machines should be given sec=krb5p permissions only.
The NetApp exports are stored in /etc/exports. If you update the exports, they can be reloaded by running exportfs -r on the NetApp.
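As an illustration of the policy above, an export entry might look like the following. This is a hypothetical example in 7-mode exports syntax, not a copy of our actual exports file:
# hypothetical: sec=sys restricted to the storage VLAN, krb5p for everyone else
/vol/users -sec=sys,rw=172.19.168.32/27,root=172.19.168.32/27,sec=krb5p,rw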
Quotas
Quotas are configured on the NetApp, in /etc/quotas.
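For illustration, the 12GB per-user quota on the users volume would be expressed with an entry along these lines (standard 7-mode quota syntax; the actual entry on our filer may differ):
# target  type             disk  files
*         user@/vol/users  12G   -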
After updating the quotas, the NetApp must be instructed to reload them:
# this will work for most quota changes
quota resize <volume>
# however, some changes might need a full re-initialization of quotas
# note: while re-initializing, quotas will not be enforced.
quota off <volume>
quota on <volume>
Quota Reports
Users can view their current usage + quota by running quota -s on any machine.
The Systems Committee can run a report of everyone’s usage by running quota report on the NetApp.
Snapshots
Most volumes have snapshots enabled. Snapshots only use space when files contained within them change (as it’s copy-on-write).
Snapshots are available in a special directory called .snapshot. This directory is available everywhere and will not show up in a directory listing (except at the volume root).
Current schedules can be viewed by running snap sched <volume>.
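For example, the schedule on the users volume (2 weekly, 4 nightly, 12 hourly) corresponds to the following; in 7-mode, snap sched takes the counts in the order weekly, nightly, hourly:
# view the current schedule for the users volume
snap sched users
# set 2 weekly, 4 nightly and 12 hourly snapshots
snap sched users 2 4 12
# list the snapshots currently on the volume
snap list users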
inodes
The number of inodes can be increased with the command:
maxfiles $VOLUME $NEW_VALUE
It is not possible to decrease the number of inodes.
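Before raising the limit, it may help to check current inode usage. Both of these are standard 7-mode commands; the volume name is just an example:
# show inode usage and limits for all volumes
df -i
# show the current maximum for a single volume (no new value = read-only)
maxfiles users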