From CSCWiki
Revision as of 01:41, 4 February 2013 by Sharvey (talk | contribs)

As of 2013, the CSC has a NetApp FAS3000-series filer capable of hosting network shares. It was donated to us by CSCF. It is also pretty old.


All the manuals are hosted in ~sysadmin/netapp-docs/

Relevant docs for storage modification are: smg.pdf, sysadmin.pdf

iSCSI documentation is in ontop/bsag.pdf


While the NetApp supports both NFS and CIFS, neither export option provides the versatility or the control we want from a network fileshare. Instead, we have configured the NetApp to export iSCSI block devices, which are mounted on aspartame. aspartame therefore replaces ginseng as the CSC's primary fileserver.


Configuration mechanisms are accessible via SSH or the serial interface, but only through aspartame. The NetApp is not visible on 134net at all.

Private IP TBD.


Should aspartame get totally hosed, or should things stay stable long enough that all the sysadmin folk from this era have graduated, here is how to set up iSCSI on the NetApp and aspartame.

NetApp Configuration

1. On the NetApp:

  • start the iSCSI service and set up a client user/pass (m4burns: please fill this part out at some point)
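Pending those details, here is a hedged sketch of what the NetApp side likely looks like under Data ONTAP 7-mode; the LUN path, igroup name, initiator IQN, and CHAP credentials below are placeholders, not our actual values:

```
iscsi start                                           # enable the iSCSI service on the filer
lun create -s 500g -t linux /vol/vol2users/lun0       # carve out a LUN to export (size is an example)
igroup create -i -t linux ig_aspartame <initiator-iqn>  # initiator group containing aspartame's IQN
lun map /vol/vol2users/lun0 ig_aspartame 0            # expose the LUN to that igroup as LUN 0
iscsi security add -i <initiator-iqn> -s CHAP -p <password> -n <user>  # per-initiator CHAP credentials
```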

aspartame Configuration

Install open-iscsi:

apt-get install open-iscsi

Edit /etc/iscsi/iscsid.conf:

node.startup = manual
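If the NetApp is configured to require CHAP, the matching client credentials also go in /etc/iscsi/iscsid.conf. These key names are standard open-iscsi settings; the values are placeholders:

```
node.session.auth.authmethod = CHAP
node.session.auth.username = <user>
node.session.auth.password = <password>
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = <user>
discovery.sendtargets.auth.password = <password>
```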

Start open-iscsi service:

service open-iscsi start

Scan for iSCSI devices from the NetApp:

iscsiadm --mode discovery --type sendtargets --portal psilodump

This should dump out a ton of information, one line per discovered portal/target pair, of the form:

   <portal-ip>:3260,<tpgt> <target-iqn>

The .130 IPs correspond to one filer, and the .131 IPs correspond to the other filer. Currently we are only using one of the filers (psilodump).

This also populates the /etc/iscsi/nodes/ directory with all possible ways to access the NetApp. For testing purposes (i.e. node.startup = manual), this is okay.

Test to see if you can get the iSCSI device to show up correctly:

iscsiadm --mode node --targetname "<target-iqn>" --portal <portal-ip> --login

This should produce output similar to:

Logging in to [iface: default, target: <target-iqn>, portal: <portal-ip>,3260]
Login to [iface: default, target: <target-iqn>, portal: <portal-ip>,3260]: successful

Check /dev/disk/by-path/ip* to ensure new disks show up:

# ls -l /dev/disk/by-path/ip*
   /dev/disk/by-path/ip-<portal-ip>:3260-iscsi-<target-iqn>-lun-0 -> ../../sda
   /dev/disk/by-path/ip-<portal-ip>:3260-iscsi-<target-iqn>-lun-0-part1 -> ../../sda1
   /dev/disk/by-path/ip-<portal-ip>:3260-iscsi-<target-iqn>-lun-1 -> ../../sdb
   /dev/disk/by-path/ip-<portal-ip>:3260-iscsi-<target-iqn>-lun-1-part1 -> ../../sdb1

If this fails, check all your configuration again.

If this succeeds, you are now ready to try autoconnecting the iSCSI device.

Delete all extraneous entries from /etc/iscsi/nodes/ . This prevents the startup script from (a) hanging, and (b) being very upset. All that is left should be the interface you intend to connect through:

# ls -l /etc/iscsi/nodes/<target-iqn>/<portal-ip>,3260,2000
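Alternatively, the same cleanup can be done through iscsiadm itself rather than deleting directories by hand; the target and portal below are placeholders for whichever entries you want gone:

```
iscsiadm --mode node --targetname "<unwanted-target-iqn>" --portal <unwanted-portal-ip> --op delete
```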

Edit /etc/iscsi/iscsid.conf:

node.startup = automatic

For the init.d script to work correctly (i.e., properly mount things), we need to add a sleep to allow the device to settle. Edit /etc/init.d/open-iscsi, roughly around line 127, to add a "sleep 1":

       # Now let's mount
       sleep 1
       log_daemon_msg "Mounting network filesystems"
       if mount -a -O _netdev >/dev/null 2>&1; then
       log_end_msg $MOUNT_RESULT

Now we can restart the service:

service open-iscsi restart

Now you can configure partitions and mountpoints.
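As a sketch of that last step (the device name /dev/sdb and mountpoint /mnt/iscsi are placeholders; a GPT label is used because MBR cannot address partitions past 2TB, and the _netdev option makes mount wait for the network):

```
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/iscsi
echo '/dev/sdb1 /mnt/iscsi ext4 _netdev 0 0' >> /etc/fstab
mount /mnt/iscsi
```

In /etc/fstab it is safer to reference the disk by its stable /dev/disk/by-path/ name rather than /dev/sdb1, since sd* names can change across reboots.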

Other notes

Set up the NetApp filesystem and transfer the old files from ginseng:

  • on ginseng, use parted to set up the mounted iSCSI drive as an ext4 primary partition (setting up a partition of size >2TB requires care, since it needs a GPT rather than an MBR label)
  • install star as root on ginseng
  • transfer the files with the following Makefile (run with make -j8):
foo := $(wildcard /export/users/*)
bar := $(patsubst /export/users/%,/mnt/iscsi/%,$(foo))
all: $(bar)
/mnt/iscsi/%: /export/users/%
	# echo $@ $<
	~/star-1.5.2/star/OBJ/x86_64-linux-cc/star \
	    -copy -p artype=exustar \
	    -C /export/users $(notdir $<) /mnt/iscsi

Disk information

  • shelf 1
    • 14x??? 10,000RPM FibreChannel disks
    • Currently set to standalone filer+shelf, not set up
  • shelf 2
    • 14x??? 10,000RPM FibreChannel disks
    • Currently assigned to phlogiston, not set up (phlogiston is off)
  • shelf 3
    • 14x500GB 7,200RPM ATA disks
    • Currently assigned to psilodump
  • shelf 4
    • 14x500GB 7,200RPM ATA disks
    • Currently assigned to psilodump


  • aggr0
    • Root aggregate volume, in RAID-DP
  • aggr1
    • Music aggregate volume, in RAID-DP
  • aggr2
    • Users aggregate volume, in RAID-DP


  • /vol/vol0
    • Root volume
  • /vol/vol1music
    • Music volume
  • /vol/vol2users
    • Users volume
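For reference, aggregates and flexible volumes like these would have been created with something along the lines of the following 7-mode commands (disk counts and sizes are illustrative, not our actual values):

```
aggr create aggr2 -t raid_dp 14    # 14-disk double-parity aggregate
vol create vol2users aggr2 2t      # 2TB flexible volume on that aggregate
```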


Storage related things:

aggr status -r aggr<num>
  Shows aggregate status
disk show -v
  Shows disks, and which filer they are owned by (currently all by psilodump)
disk assign
  Assigns orphaned disks to a filer

Volume stuffs:
  (see the volume list above)


  • RAID-DP - NetApp's double-parity RAID, roughly equivalent to RAID 6