NetApp
As of 2013, the CSC has a NetApp FAS3000-series filer capable of hosting network shares. It was donated to us by CSCF. It is also pretty old.
Documentation
All the manuals are hosted in ~sysadmin/netapp-docs/
Relevant docs for storage modification are: smg.pdf, sysadmin.pdf
iSCSI documentation is in ontop/bsag.pdf
Background
While the NetApp supports both NFS and CIFS, neither of these export options provides the versatility we want from a network fileshare. Instead, we have configured the NetApp to export iSCSI block devices, which are mounted on aspartame. aspartame therefore replaces ginseng as the primary fileserver in the CSC.
Access
Configuration mechanisms are accessible via SSH or the serial interface, but only through aspartame. The NetApp is not visible on 134net at all.
The NetApp's private IP is 10.15.134.130 (see the configuration steps below).
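For example, to reach the filer from aspartame (credentials are in /users/sysadmin, as noted below):
ssh root@10.15.134.130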
Configuration
Should aspartame get totally hosed, or should things stay stable long enough that all the sysadmins who set this up have graduated, here is how to set up iSCSI on the NetApp and aspartame.
NetApp Configuration
This section describes how to create a volume on the NetApp and export it as an iSCSI target. For further NetApp configuration instructions, refer to the NetApp documentation.
1. Log in to the NetApp. You'll need either access to the physical serial console or to ssh as root to psilodump's private IP (10.15.134.130). Credentials are stored in /users/sysadmin.
2. To get information on the available disks, run the command:
aggr status -r
This command returns three lists: active aggregates with their assigned disks, spare disks, and disks managed by the partner filer. An aggregate is roughly equivalent to an LVM volume group: it is a collection of physical disks, possibly spanning multiple disk shelves and with various RAID levels applied, which may host one or more logical volumes. Do not proceed if there are fewer than three spare disks of each type available. Refer to the NetApp documentation to add more disks or to release disks from existing aggregates.
3. Choose a list of disks for your new aggregate. The available space will be approximately 2/3 of the total disk space.
4. Create the aggregate as follows:
aggr create aggrN -t raid_dp -d [disk-list]
where [disk-list] is a list of the form AA:BB CC:DD ... containing the identifiers for the disks you wish to use to create the aggregate.
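For example, to build a four-disk aggregate (the disk identifiers here are made up; use real ones from the aggr status -r output):
aggr create aggr3 -t raid_dp -d 0a:16 0a:17 0a:18 0a:19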
5. Retrieve the aggregate information. You will need to know the available space for the next step.
aggr show_space aggrN
6. Create a volume in the aggregate:
vol create volNfoo -s volume aggrN XXXK
where XXX is the total available space in aggrN. You may need to choose a slightly smaller number due to hidden size constraints and rounding.
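For example, if aggr show_space reported roughly 2 TB free (the volume name and size here are illustrative):
vol create vol3users -s volume aggr3 1950000000K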
7. Disable snapshotting and access time update. Neither will be needed for exporting an iSCSI LUN.
vol options volNfoo no_atime_update on
vol options volNfoo nosnap on
snap reserve volNfoo 0
8. Enable iSCSI and configure default authentication.
options iscsi.enable on
iscsi nodename iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
iscsi security default -s CHAP -p yoursecurepassword -n psilodump
where yoursecurepassword is more secure. For iSCSI hosts, the target will be on node iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca with username psilodump and password yoursecurepassword.
9. Create a LUN on your volume:
lun create -s XXXK -t linux /vol/volNfoo/lun0
where XXXK is the amount of available space on the volume, as shown by the command df.
10. Create an iSCSI initiator group and add all of your hosts to it:
igroup create -i -t linux volNfoo_group
igroup add volNfoo_group iqn.1993-08.org.debian:01:123456789
igroup add volNfoo_group iqn.1993-08.org.debian:01:981287231
...
Hosts with the node identifiers given to the igroup add commands will be able to access the iSCSI LUN you created above once it is mapped in the next step.
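On a Debian host running open-iscsi, the initiator name to pass to igroup add can be read from /etc/iscsi/initiatorname.iscsi:
cat /etc/iscsi/initiatorname.iscsi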
11. Map the LUN to the iSCSI initiator group:
lun map /vol/volNfoo/lun0 volNfoo_group
You're done! Any host in the initiator group should now be able to access the LUN you've created as a block device.
aspartame Configuration
Install open-iscsi:
apt-get install open-iscsi
Edit /etc/iscsi/iscsid.conf:
node.startup = manual
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = username
discovery.sendtargets.auth.password = password
node.session.auth.authmethod = CHAP
node.session.auth.username = username
node.session.auth.password = password
Start open-iscsi service:
service open-iscsi start
Scan for iSCSI devices from the NetApp:
iscsiadm --mode discovery --type st --portal psilodump
This should dump out a ton of information, for example:
[fe80::XXXX:XXXX:XXXX:XXXX]:3260,2001 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
[fe80::XXXX:XXXX:XXXX:XXXX]:3260,2000 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
[fe80::XXXX:XXXX:XXXX:XXXX]:3260,2002 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
[fe80::XXXX:XXXX:XXXX:XXXX]:3260,1000 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
10.15.134.131:3260,2002 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
129.97.134.131:3260,2001 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
10.15.134.130:3260,2000 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
129.97.134.130:3260,1000 iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca
The .130 IPs correspond to one filer, and the .131 IPs correspond to the other filer. Currently we are only using one of the filers (psilodump).
This also populates the /etc/iscsi/nodes/iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca directory with all possible ways to access the NetApp. For testing purposes (i.e. node.startup = manual), this is okay.
Test to see if you can get the iSCSI device to show up correctly:
iscsiadm --mode node --targetname "iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca" --portal 10.15.134.130:3260 --login
This should produce output similar to:
Logging in to [iface: default, target: iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca, portal: 10.15.134.130,3260]
Login to [iface: default, target: iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca, portal: 10.15.134.130,3260]: successful
Check /dev/disk/by-path/ip* to ensure new disks show up:
# ls -l /dev/disk/by-path/ip*
/dev/disk/by-path/ip-10.15.134.130:3260-iscsi-iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca-lun-0 -> ../../sda
/dev/disk/by-path/ip-10.15.134.130:3260-iscsi-iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca-lun-0-part1 -> ../../sda1
/dev/disk/by-path/ip-10.15.134.130:3260-iscsi-iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca-lun-1 -> ../../sdb
/dev/disk/by-path/ip-10.15.134.130:3260-iscsi-iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca-lun-1-part1 -> ../../sdb1
If this fails, check all your configuration again.
If this succeeds, you are now ready to try autoconnecting the iSCSI device.
Delete all extraneous entries from /etc/iscsi/nodes/iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca. This prevents the startup script from (a) hanging, and (b) being very upset. All that should be left is the interface you intend to connect through:
# ls -l /etc/iscsi/nodes/iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca/
10.15.134.130,3260,2000
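A sketch of the cleanup (the entry names come from the discovery output above; double-check what you are deleting):
cd /etc/iscsi/nodes/iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca/
# remove every portal entry except the one to keep, e.g.:
rm -r 129.97.134.130,3260,1000 10.15.134.131,3260,2002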
Edit /etc/iscsi/iscsid.conf:
node.startup = automatic
For the init.d script to work correctly (i.e. properly mount things), we need to add a sleep to give the device time to settle. Edit /etc/init.d/open-iscsi, at roughly line 127, to add a "sleep 1":
...
# Now let's mount
sleep 1
log_daemon_msg "Mounting network filesystems"
MOUNT_RESULT=1
if mount -a -O _netdev >/dev/null 2>&1; then
	MOUNT_RESULT=0
	break
fi
log_end_msg $MOUNT_RESULT
...
Now we can restart the service:
service open-iscsi restart
Now you can configure partitions and mountpoints.
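For the automatic mount to work, the filesystem also needs an /etc/fstab entry with the _netdev option (which is what the mount -a -O _netdev call above picks up). A minimal sketch, assuming the by-path device and mountpoint used elsewhere on this page:
/dev/disk/by-path/ip-10.15.134.130:3260-iscsi-iqn.1992-08.com.netapp:psilodump.csclub.uwaterloo.ca-lun-0-part1 /mnt/iscsi ext4 _netdev 0 0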
Other notes
Transferring old files from ginseng
Method A
- On ginseng, use parted to set up the attached iSCSI drive with an ext4 primary partition (setting up a partition larger than 2TB requires care and a GPT)
- Compiled star in /root on ginseng
- Transferred files with the following Makefile (assuming the original user directories are in /export/users and the destination volume is mounted at /mnt/iscsi; run with make -j8):
foo := $(wildcard /export/users/*)
bar := $(patsubst /export/users/%,/mnt/iscsi/%,$(foo))

all: $(bar)

/mnt/iscsi/%: /export/users/%
	# echo $@ $<
	~/star-1.5.2/star/OBJ/x86_64-linux-cc/star \
		-copy -p -acl artype=exustar \
		-C /export/users $(notdir $<) /mnt/iscsi
Method B
- On ginseng, authenticate with iSCSI target (psilodump.csclub.uwaterloo.ca lun0).
- Unmount /dev/mapper/vg0-users
- Copy users filesystem directly to iSCSI target:
dd if=/dev/mapper/vg0-users of=/path/to/psilodump:lun0 bs=8M
- Resize users filesystem on destination partition to fit:
resize2fs /path/to/psilodump:lun0
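Note that resize2fs will typically refuse to run until the filesystem has been checked, so you may first need:
e2fsck -f /path/to/psilodump:lun0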
Exporting Kerberized NFS from Debian Squeeze
The default kernel in Debian squeeze (stable at the time of writing, 2.6.32) does not support the crypto suites needed to export Kerberized NFS to newer kernels. You MUST upgrade the kernel, nfs-common, and nfs-kernel-server packages to AT LEAST the squeeze-backports versions.
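A sketch of the upgrade, assuming the squeeze-backports archive (the sources.list line and the exact kernel package name are from memory and worth double-checking):
# in /etc/apt/sources.list:
deb http://backports.debian.org/debian-backports squeeze-backports main
# then:
apt-get update
apt-get -t squeeze-backports install linux-image-amd64 nfs-common nfs-kernel-server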
iSCSI block device mount optimizations
tmyklebu made some changes to /sys/block/sda/queue. The following is now in /etc/rc.local on aspartame:
echo 2048 > /sys/block/sda/queue/read_ahead_kb
echo 32768 > /sys/block/sda/queue/max_sectors_kb
echo 4096 > /sys/block/sda/queue/nr_requests
echo noop > /sys/block/sda/queue/scheduler
We should increase the iSCSI settings node.session.queue_depth and node.session.cmds_max during the next maintenance window.
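Both settings live in /etc/iscsi/iscsid.conf (the open-iscsi defaults are 32 and 128 respectively); the values below are illustrative, not tested:
node.session.queue_depth = 128
node.session.cmds_max = 1024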
Disk information
- shelf 1
- 14x??? 10,000RPM FibreChannel disks
- Currently set to standalone filer+shelf, not set up
- shelf 2
- 14x??? 10,000RPM FibreChannel disks
- Currently assigned to phlogiston, not set up (phlogiston is off)
- shelf 3
- 14x500GB 7,200RPM ATA disks
- Currently assigned to psilodump
- shelf 4
- 14x500GB 7,200RPM ATA disks
- Currently assigned to psilodump
Aggregates
- aggr0
- Root aggregate volume, in RAID-DP
- aggr1
- Music aggregate volume, in RAID-DP
- aggr2
- Users aggregate volume, in RAID-DP
Volumes
- /vol/vol0
- Root volume.
- /vol/vol1music
- Music volume. This volume is not accessible via NFS or CIFS. It contains only the iSCSI LUN /vol/vol1music/lun0.
- /vol/vol2users
- Users volume. This volume is not accessible via NFS or CIFS. It contains only the iSCSI LUN /vol/vol2users/lun0.
Commands
- aggr status -r aggr<num> - Shows aggregate status
- disk show -v - Shows disks, and which filer owns them (currently all owned by psilodump)
- storage - Storage-related things
- disk assign - Assigns orphaned disks to a filer
- vol - Volume stuff
Terminology
- RAID-DP - NetApp's double-parity RAID, roughly equivalent to RAID 6