<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csclub.uwaterloo.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ztseguin</id>
	<title>CSCWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csclub.uwaterloo.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ztseguin"/>
	<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/Special:Contributions/Ztseguin"/>
	<updated>2026-04-23T17:14:17Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5233</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5233"/>
		<updated>2024-03-16T23:00:45Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Add instructions for the new IPAM system&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through its [https://ipam.private.uwaterloo.ca IP Address Management system]. IST has published some information on the [https://uwaterloo.atlassian.net/wiki/spaces/ISTKB/pages/43401052394/IP+Address+Management IST Knowledge Base].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* API account located in the standard syscom place&lt;br /&gt;
&lt;br /&gt;
=== Managing Records ===&lt;br /&gt;
There are two primary types of records that are maintained: Hosts and Aliases.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: Use the v4 and v6 toggles in the top left to switch between IPv4 and IPv6 networks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Add a new host ====&lt;br /&gt;
&lt;br /&gt;
# Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
# Click on IPAM -&amp;gt; Networks&lt;br /&gt;
# Locate the appropriate network for the server&lt;br /&gt;
# Click on the IP address that you want to register&lt;br /&gt;
# Set the appropriate information&lt;br /&gt;
## Set the &amp;quot;MAC&amp;quot; address of the machine (&#039;&#039;note: CSC networks don&#039;t use the IST DHCP system, so this is effectively ignored&#039;&#039;)&lt;br /&gt;
## Under &amp;quot;IPAM to DNS replication&amp;quot;&lt;br /&gt;
### Domain: Click the grey button next to the text box and change &amp;quot;Inherit&amp;quot; to &amp;quot;Set&amp;quot;. Then select the &amp;quot;csclub.uwaterloo.ca&amp;quot; domain (or other as appropriate)&lt;br /&gt;
### Shortname: The machine&#039;s name (e.g., caffeine)&lt;br /&gt;
## At the bottom&lt;br /&gt;
### Add &amp;quot;systems-committee@csclub.uwaterloo.ca&amp;quot; as a Technical Contact&lt;br /&gt;
### Select the appropriate Pol8 Classification (usually Public)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Add any aliases for the host (these will be created as CNAME records)&lt;br /&gt;
# Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Repeat the instructions for the IPv6 entry; however, you may need to click the &amp;quot;+&amp;quot; to add the IP address on the network.&lt;br /&gt;
&lt;br /&gt;
==== Add/remove an alias to an existing host ====&lt;br /&gt;
&lt;br /&gt;
* Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
* Click on IPAM -&amp;gt; Networks&lt;br /&gt;
* Locate the appropriate network for the server&lt;br /&gt;
* Click on the IP address associated with the &#039;&#039;&#039;destination&#039;&#039;&#039; server (e.g., caffeine)&lt;br /&gt;
* If you get sent to a blank list, click the &amp;quot;Address&amp;quot; object in the breadcrumb&lt;br /&gt;
* Click &amp;quot;Edit&amp;quot; under the ALIASES section on the screen&lt;br /&gt;
* Click &amp;quot;Next&amp;quot; twice&lt;br /&gt;
* Add or remove the alias to the list&lt;br /&gt;
* Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts some authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the IPAM records.&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Updating records ===&lt;br /&gt;
If you manually update a record in the dns1 container (somewhere in /etc/bind), make sure you also update the serial number in the SOA record of the corresponding zone. Then, run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt;.&lt;br /&gt;
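A minimal sketch of that workflow, assuming the common YYYYMMDDNN serial convention (both the convention and the zone file location are assumptions, not confirmed details of the dns1 setup):

```shell
#!/bin/bash
# Helper to bump a YYYYMMDDNN-style SOA serial (an assumed convention).
# next_serial OLD [TODAY] prints the bumped serial.
next_serial() {
    local old=$1 today=${2:-$(date +%Y%m%d)}
    if [ "${old%??}" = "$today" ]; then
        # Same day: increment the two-digit revision (10# avoids octal parsing).
        printf '%s%02d\n' "$today" "$((10#${old#"$today"} + 1))"
    else
        # New day: start at revision 01.
        printf '%s01\n' "$today"
    fi
}

next_serial 2024031601 20240316   # same day -> 2024031602
next_serial 2024031601 20240317   # new day  -> 2024031701

# After editing the zone file (somewhere under /etc/bind) and writing
# the new serial into the SOA record, reload named:
#   rndc reload
```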
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5033</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5033"/>
		<updated>2023-07-01T23:39:41Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Adding a new project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;quot;don&#039;t count&amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script of the same name originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync which has configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
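For example, a hypothetical invocation (the project name, rsync host and module below are placeholders, not real configuration):

```shell
#!/bin/bash
# Placeholder values for the three positional parameters:
local_dir=/mirror/root/someproject   # where the project lands locally
rsync_host=rsync.example.org         # upstream rsync server
rsync_dir=someproject                # module/path on that server

# The real call would be something like:
#   ~mirror/bin/csc-sync-standard "$local_dir" "$rsync_host" "$rsync_dir"
echo "csc-sync-standard $local_dir $rsync_host $rsync_dir"
```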
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Spring 2023, it is generated by Hugo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/deploy.sh&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run every minute.&lt;br /&gt;
&lt;br /&gt;
The script first runs &amp;lt;code&amp;gt;synctask2project&amp;lt;/code&amp;gt;, which pulls project synchronization status from Merlin (using merlin&#039;s socket), combines sub-projects (for example, &amp;lt;code&amp;gt;racket&amp;lt;/code&amp;gt; is a combination of two merlin tasks, &amp;lt;code&amp;gt;plt-bundles&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;racket-installers&amp;lt;/code&amp;gt;) and reads the size of each project using &amp;lt;code&amp;gt;zfs list -Hp&amp;lt;/code&amp;gt;. This Python script then writes a JSON file to &amp;lt;code&amp;gt;data/sync.json&amp;lt;/code&amp;gt;. Hugo reads the JSON file and generates the HTML table from it. The table is also generated separately into &amp;lt;code&amp;gt;public/project_table/index.html&amp;lt;/code&amp;gt;, which can be fetched by htmx (a JS library used on the index page) to live-reload the sync status. Finally, Hugo&#039;s generated output is copied to the mirror root to be served by nginx.&lt;br /&gt;
&lt;br /&gt;
Project information is located at &amp;lt;code&amp;gt;synctask2project/config.toml&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOT&#039;&#039;&#039; the config.toml in the root folder! That&#039;s the config for Hugo). Its format is as follows:&lt;br /&gt;
&amp;lt;pre class=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
merlin_sock = &amp;quot;/path/to/merlin/socket&amp;quot;&lt;br /&gt;
zfs_pools = [&amp;quot;mirror_zfs_pool1&amp;quot;, &amp;quot;mirror_zfs_pool2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[project_name]&lt;br /&gt;
# This is supposed to be the short version shown on the website&lt;br /&gt;
# Mandatory field&lt;br /&gt;
site = &amp;quot;project.site&amp;quot;&lt;br /&gt;
# The full URL&lt;br /&gt;
# Mandatory field&lt;br /&gt;
url = &amp;quot;https://full.project.site&amp;quot;&lt;br /&gt;
# Set if we are the upstream, or the project is archived; sync errors and last sync time are not shown&lt;br /&gt;
# Optional. Default: false&lt;br /&gt;
upstream = true&lt;br /&gt;
# If this project contains multiple merlin sync tasks, list them here&lt;br /&gt;
# Optional. Default: project_name&lt;br /&gt;
merlin-tasks = [&amp;quot;task1&amp;quot;, &amp;quot;task2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# define more projects below...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mirror-index also supports news. When adding new projects or making modifications, create a markdown file in &amp;lt;code&amp;gt;mirror-index/content/news/&amp;lt;/code&amp;gt; to tell the user what was changed. It should be picked up by Hugo automatically on next generation.&lt;br /&gt;
&lt;br /&gt;
On first setup, run &amp;lt;code&amp;gt;setup.sh&amp;lt;/code&amp;gt;. When doing development (like changing the Sass or static files), run &amp;lt;code&amp;gt;build.sh&amp;lt;/code&amp;gt; to build assets.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to bound the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Making changes ===&lt;br /&gt;
Everything in &amp;lt;code&amp;gt;~mirror&amp;lt;/code&amp;gt; is managed by git (a monorepo containing all sub-projects, such as Merlin and mirror-index). To make changes, switch to the mirror user and commit with &amp;lt;code&amp;gt;--author &amp;quot;FirstName LastName &amp;lt;email@csc&amp;gt;&amp;quot;&amp;lt;/code&amp;gt; to record who made the change. Then run &amp;lt;code&amp;gt;git push&amp;lt;/code&amp;gt; to push the changes. The remote uses the HTTPS URL, so just enter your CSC credentials.&lt;br /&gt;
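A runnable sketch of the commit convention (a temp repo stands in for ~mirror, and the author name and email are placeholders):

```shell
#!/bin/bash
set -eu
# Temp repo standing in for ~mirror; in practice you would be the
# mirror user working inside ~mirror itself.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name mirror            # the committer stays "mirror"
git config user.email mirror@localhost
echo demo > file
git add file
# --author records who actually made the change:
git commit -q --author "FirstName LastName <email@csc>" -m "Describe the change"
git log -1 --format='%an'              # prints the recorded author name
```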
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide their own sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys. &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate (&#039;&#039;&#039;NOTE: This machine is currently unavailable&#039;&#039;&#039;)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin-config.ini&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin-go&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/log/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/log-$PROTOCOL/$PROJECT_NAME-*.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;) (&#039;&#039;&#039;NOTE: The backup machine is currently unavailable, so this step is not currently needed&#039;&#039;&#039;)&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
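Steps 2-4 above hinge on the symlink being relative; a runnable sketch (a temp directory stands in for /mirror/root, the project name is a placeholder, and the zfs/chown commands are shown only as comments):

```shell
#!/bin/bash
set -eu
PROJECT_NAME=someproject             # placeholder

# On the real host you would first run:
#   zfs create "cscmirror0/$PROJECT_NAME"
#   chown mirror:mirror "/mirror/root/.cscmirror0/$PROJECT_NAME"
root=$(mktemp -d)                    # stands in for /mirror/root
mkdir -p "$root/.cscmirror0/$PROJECT_NAME"

# The symlink must be relative so it still resolves inside a chroot:
cd "$root"
ln -s ".cscmirror0/$PROJECT_NAME" "$PROJECT_NAME"

readlink "$PROJECT_NAME"             # the relative target
ls "$PROJECT_NAME" >/dev/null        # resolves through the symlink
```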
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update the rsync daemon&lt;br /&gt;
#* Edit &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;, adding a new entry for the new name (keep the old name too). Restart with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations for the project to ensure the new URLs are used&lt;br /&gt;
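The symlink shuffle in steps 4-5 can be sketched like this (the zfs rename, merlin and rsync steps are omitted; a temp directory stands in for /mirror/root and the names are placeholders):

```shell
#!/bin/bash
set -eu
root=$(mktemp -d)                    # stands in for /mirror/root
mkdir -p "$root/.cscmirror0/newname" # dataset already renamed by zfs rename
cd "$root"
ln -s .cscmirror0/oldname oldname    # leftover symlink from before the rename

rm oldname                           # step 4: drop the stale symlink
ln -s .cscmirror0/newname newname    #         create the new one
ln -s newname oldname                # step 5: keep the old name working

readlink oldname                     # old name now points at the new one
ls oldname >/dev/null                # old path still resolves
```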
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived does the monitoring and selects the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date, nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5028</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5028"/>
		<updated>2023-06-19T20:57:22Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Rename project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;quot;don&#039;t count&amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script of the same name originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync which has configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and identifies the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to compute the total size (which includes hidden projects).&lt;br /&gt;
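The symlink-to-dataset step can be illustrated as follows; the "debian" example and the dot-stripping rule are assumptions based on the layout described above, not the script's actual code:

```shell
#!/bin/bash
set -eu
# In the real script: target=$(readlink /mirror/root/$project)
target=".cscmirror0/debian"   # hypothetical symlink target

# Dropping the leading dot yields the pool/dataset name:
dataset=${target#.}
echo "$dataset"               # prints: cscmirror0/debian

# The script then queries the dataset size with something like:
#   zfs get -H -o value used "$dataset"
```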
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root that the webserver serves from. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; specifies display information for each directory. All directories are listed by default, whether or not they appear in this list; only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is fairly straightforward: simply name the directory and provide a site (the display name in the &amp;quot;Project Site&amp;quot; column) and URL. One caveat: YAML does not allow tabs for whitespace, so please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
The HTML index file itself is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you can&#039;t figure out how it works, consult the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to bound the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
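For reference, each mirrored project is exposed as an rsync module; a hypothetical module entry (the module name and comment here are made up - consult the actual /etc/rsyncd.conf for real entries) looks like:

```
# Hypothetical module entry in /etc/rsyncd.conf
[debian]
    path = /mirror/root/debian
    comment = Debian GNU/Linux mirror
    read only = yes
```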
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys. &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: This machine is currently unavailable)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
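Taken together, the command-line portion of the steps above looks roughly like the following session (a sketch only; $PROJECT_NAME is a placeholder, and the merlin, zfssync.yml, index and rsyncd config edits remain manual):

```
# Run as root on potassium-benzoate; $PROJECT_NAME is a placeholder
PROJECT_NAME=example

zfs create cscmirror0/$PROJECT_NAME
chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME

# The symlink must be relative so it still resolves inside the chroot
cd /mirror/root
ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME

# After editing the merlin config, restart it to kick off the initial sync
systemctl restart merlin
tail -f ~mirror/merlin/logs/$PROJECT_NAME
```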
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update the rsync daemon&lt;br /&gt;
#* Edit &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;, adding a new entry for the new name (keep the old name too). Restart with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations for the project to ensure the new URLs are used&lt;br /&gt;
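The rename steps above can be sketched as a session like this (OLD_NAME and NEW_NAME are placeholders; the edits to merlin-config.ini, /etc/rsyncd.conf and config.toml are still done by hand):

```
zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME
systemctl reload merlin-go.service

cd /mirror/root
rm OLD_NAME                          # remove the old symlink
ln -s .cscmirror0/NEW_NAME NEW_NAME  # relative symlink to the renamed dataset
ln -s NEW_NAME OLD_NAME              # compatibility symlink for existing users

systemctl restart rsync              # after adding the new module name to /etc/rsyncd.conf
```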
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the services and selects the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
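A minimal sketch of how such a check is typically expressed in keepalived's configuration (the script path, interface, router ID and instance name here are illustrative, not our actual config):

```
# Illustrative keepalived fragment: drop priority by 20 if nginx stops answering
vrrp_script chk_nginx {
    script "/usr/bin/curl -sf -o /dev/null http://localhost/"
    interval 5
    weight -20
}

vrrp_instance VI_MIRROR {
    state BACKUP
    interface eth0          # illustrative
    virtual_router_id 51    # illustrative
    priority 100            # 90 on mirror-dc
    virtual_ipaddress {
        129.97.134.71
    }
    track_script {
        chk_nginx
    }
}
```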
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably neither up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5027</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5027"/>
		<updated>2023-06-19T20:56:09Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Rename project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script of the same name originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync which has configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
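For example, a hypothetical invocation (the upstream host and module shown here are made up):

```
csc-sync-standard ubuntu archive.ubuntu.com ubuntu
# local_dir  = ubuntu             -> syncs into /mirror/root/ubuntu
# rsync_host = archive.ubuntu.com -> upstream rsync server
# rsync_dir  = ubuntu             -> module/path on the upstream server
```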
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the project&#039;s symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (the total includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which the web server serves. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section specifies metadata for each directory. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for whitespace, so please indent with two spaces to stay consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to minimize [https://en.wikipedia.org/wiki/Globbing Globbing] resources):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys. &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: This machine is currently unavailable)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations for the project to ensure the new URLs are used&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the services and selects the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably neither up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4888</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4888"/>
		<updated>2022-10-04T00:32:49Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Update storage information for new zpool&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPDATE&#039;&#039;&#039;: merlin.py and the sync scripts have been rewritten in Go. The current status can be found using &amp;lt;code&amp;gt;systemctl status merlin-go.service&amp;lt;/code&amp;gt; or by going to &amp;lt;code&amp;gt;/home/mirror/merlin/cmd/arthur&amp;lt;/code&amp;gt; and running &amp;lt;code&amp;gt;./arthur status&amp;lt;/code&amp;gt;. To force sync a project, execute &amp;lt;code&amp;gt;./arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (synch frequency, location, etc.) is configured in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the project&#039;s symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (the total includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which the web server serves. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section specifies metadata for each directory. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for whitespace, so please indent with two spaces to stay consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU and memory resources a session may use (e.g. to limit the cost of [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
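&lt;br /&gt;
Each mirrored project that is exported over rsync needs a module entry in &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;. A minimal sketch of such an entry (the module name and comment here are made up for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[example]&lt;br /&gt;
path = /mirror/root/example&lt;br /&gt;
comment = Example project&lt;br /&gt;
read only = yes&amp;lt;/pre&amp;gt;&lt;br /&gt;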
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys: &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: this machine is currently unavailable)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
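&lt;br /&gt;
For illustration, steps 2 to 4 above might look like the following for a hypothetical project named &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create cscmirror0/example&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror0/example&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror0/example example&amp;lt;/pre&amp;gt;&lt;br /&gt;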
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles the monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (also checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
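&lt;br /&gt;
In keepalived terms, each of these checks is a &amp;lt;code&amp;gt;vrrp_script&amp;lt;/code&amp;gt; with a negative &amp;lt;code&amp;gt;weight&amp;lt;/code&amp;gt;. A sketch of what the nginx check could look like (the script path and interval here are assumptions, not the actual configuration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    script &amp;quot;/usr/local/bin/check-nginx&amp;quot;&lt;br /&gt;
    interval 5&lt;br /&gt;
    weight -20&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
With these weights, losing nginx drops potassium-benzoate from 100 to 80, below mirror-dc&#039;s 90, so mirror-dc takes over the VRRP addresses.&lt;br /&gt;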
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap roles.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin kicks off a custom script to sync the zfs dataset to the other node. These scripts live in &amp;lt;code&amp;gt;/usr/local/bin&amp;lt;/code&amp;gt; and in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4887</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4887"/>
		<updated>2022-10-04T00:30:32Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Updated adding a new project for new pool configuration&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. Each array has 8 drives (7 for cscmirror3), configured as raidz2, plus an additional spare drive that can be swapped in if a disk fails.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;UPDATE&#039;&#039;&#039;: merlin.py and the sync scripts have been rewritten in Go. The current status can be found using &amp;lt;code&amp;gt;systemctl status merlin-go.service&amp;lt;/code&amp;gt; or by going to &amp;lt;code&amp;gt;/home/mirror/merlin/cmd/arthur&amp;lt;/code&amp;gt; and running &amp;lt;code&amp;gt;./arthur status&amp;lt;/code&amp;gt;. To force a sync of a project, execute &amp;lt;code&amp;gt;./arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
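&lt;br /&gt;
For example, a hypothetical invocation of &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt; (the host and module names here are made up):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard example rsync.example.org example&amp;lt;/pre&amp;gt;&lt;br /&gt;
This would sync the remote &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; rsync module into the local &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; directory.&lt;br /&gt;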
&lt;br /&gt;
=== HTTP(S) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the project&#039;s symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (the total includes hidden projects).&lt;br /&gt;
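&lt;br /&gt;
The symlink-to-dataset resolution can be sketched in shell as follows (using a hypothetical &amp;lt;code&amp;gt;debian&amp;lt;/code&amp;gt; project):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;target=$(readlink /mirror/root/debian)   # e.g. .cscmirror1/debian&lt;br /&gt;
dataset=${target#.}                      # strip the leading dot: cscmirror1/debian&lt;br /&gt;
zfs get -H -o value used &amp;quot;$dataset&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;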
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root from which the webserver serves. It is defined here so that it is easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; lists the directories that will be omitted from the generated index page (by default, every folder is included).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section provides display information for each directory. All directories are listed by default, whether or not they appear in this list; only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for indentation, so please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU and memory resources a session may use (e.g. to limit the cost of [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys: &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: this machine is currently unavailable)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles the monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (also checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap roles.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin kicks off a custom script to sync the zfs dataset to the other node. These scripts live in &amp;lt;code&amp;gt;/usr/local/bin&amp;lt;/code&amp;gt; and in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4853</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4853"/>
		<updated>2022-08-05T02:09:43Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Removed protection from &amp;quot;Mirror&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. Each array has 8 drives (7 for cscmirror3), configured as raidz2, plus an additional spare drive that can be swapped in if a disk fails.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(S) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the project&#039;s symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (the total includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root from which the webserver serves. It is defined here so that it is easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; lists the directories that will be omitted from the generated index page (by default, every folder is included).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section provides display information for each directory. All directories are listed by default, whether or not they appear in this list; only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for indentation, so please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU and memory resources a session may use (e.g. to limit the cost of [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2,3}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc. &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
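&lt;br /&gt;
The filesystem steps above can be sketched as a single shell session (&#039;&#039;the project name and pool number are examples only&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;PROJECT_NAME=example        # hypothetical project&lt;br /&gt;
zfs create cscmirror1/$PROJECT_NAME&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror1/$PROJECT_NAME&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror1/$PROJECT_NAME $PROJECT_NAME   # relative, so it still resolves inside the chroot&amp;lt;/pre&amp;gt;&lt;br /&gt;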
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors both nodes and selects the active one.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
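&lt;br /&gt;
These checks can be expressed in &amp;lt;code&amp;gt;keepalived.conf&amp;lt;/code&amp;gt; roughly as follows (&#039;&#039;a sketch only - the script names and paths are illustrative, not the actual configuration&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    script &amp;quot;/usr/local/bin/chk-nginx&amp;quot;   # e.g. a curl against localhost&lt;br /&gt;
    weight -20&lt;br /&gt;
}&lt;br /&gt;
# similar blocks: chk_proftpd (weight -5), chk_rsync (weight -15)&lt;br /&gt;
&lt;br /&gt;
vrrp_instance mirror {&lt;br /&gt;
    priority 100   # 90 on mirror-dc&lt;br /&gt;
    track_script {&lt;br /&gt;
        chk_nginx&lt;br /&gt;
        chk_proftpd&lt;br /&gt;
        chk_rsync&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;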
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are likely neither up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Two-Factor_Authentication&amp;diff=4533</id>
		<title>Two-Factor Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Two-Factor_Authentication&amp;diff=4533"/>
		<updated>2021-09-15T04:38:33Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Update on-campus range to include private addresses&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CSC currently uses [https://uwaterloo.ca/2fa DUO 2FA] for off-campus SSH access to the general-use machines. This makes it easy to sign up new members remotely, who almost certainly already have the DUO app installed.&lt;br /&gt;
&lt;br /&gt;
== For members ==&lt;br /&gt;
If you are on campus, you may SSH into any general-use machine via:&lt;br /&gt;
* public key authentication&lt;br /&gt;
* password&lt;br /&gt;
* GSSAPI (Kerberos ticket)&lt;br /&gt;
If you are using a student CS machine as a jump host, or are using the campus VPN, this also counts as being on campus.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
If you are off campus, you may SSH into any general-use machine via:&lt;br /&gt;
* public key authentication&lt;br /&gt;
* password &amp;lt;b&amp;gt;and&amp;lt;/b&amp;gt; DUO&lt;br /&gt;
Note that you may not SSH remotely into a CSC machine using only your password. After you enter your password, you should see a prompt from DUO.&lt;br /&gt;
&lt;br /&gt;
== For syscom ==&lt;br /&gt;
We are using the [https://duo.com/docs/duounix pam_duo] module to contact the DUO server.&lt;br /&gt;
&lt;br /&gt;
=== Off-campus case ===&lt;br /&gt;
This is the relevant portion of /etc/ssh/sshd_config:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# For pam_duo&lt;br /&gt;
UsePAM yes&lt;br /&gt;
&lt;br /&gt;
# DUO should be passed the IP address, not the hostname&lt;br /&gt;
UseDNS no&lt;br /&gt;
&lt;br /&gt;
# public key authentication with authorized_keys&lt;br /&gt;
PubkeyAuthentication yes&lt;br /&gt;
&lt;br /&gt;
# password authentication, *not* via PAM&lt;br /&gt;
PasswordAuthentication yes&lt;br /&gt;
PermitEmptyPasswords no&lt;br /&gt;
KerberosAuthentication yes&lt;br /&gt;
&lt;br /&gt;
# for PAM conversations&lt;br /&gt;
ChallengeResponseAuthentication yes&lt;br /&gt;
&lt;br /&gt;
# off-campus access&lt;br /&gt;
AuthenticationMethods publickey password,keyboard-interactive&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The last line says that users may authenticate via publickey, &amp;lt;b&amp;gt;or&amp;lt;/b&amp;gt; with a password and DUO (keyboard-interactive basically means &amp;quot;use PAM&amp;quot;).&lt;br /&gt;
Note that sshd is &amp;lt;b&amp;gt;not&amp;lt;/b&amp;gt; using PAM to verify the user&#039;s password; it is contacting the Kerberos server directly instead (we set KerberosAuthentication to &#039;yes&#039;). Once it has verified the user&#039;s&lt;br /&gt;
password, it runs the &#039;auth&#039; sections in /etc/pam.d/sshd, which we have set to:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auth  [success=1 default=ignore] pam_duo.so&lt;br /&gt;
auth  requisite pam_deny.so&lt;br /&gt;
auth  required pam_permit.so&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that we are &amp;lt;b&amp;gt;not&amp;lt;/b&amp;gt; including the common-auth file (which is the default). This is because at this stage, the user&#039;s password has already been verified, so DUO is the last step.&lt;br /&gt;
&lt;br /&gt;
For account, session and password, sshd will still consult PAM, meaning that the user will still be prompted to change their password if +needchange was set (which we want).&lt;br /&gt;
&lt;br /&gt;
=== On-campus case ===&lt;br /&gt;
In /etc/ssh/sshd_config, we also have:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# On-campus&lt;br /&gt;
Match Address 129.97.0.0/16,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,2620:101:f000::/47,fd74:6b6a:8eca::/47&lt;br /&gt;
    AuthenticationMethods publickey password gssapi-with-mic&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The public IP prefixes are those for AS12093 (University of Waterloo); the private (RFC 1918 and ULA) ranges cover campus-internal addresses. If someone is on-campus, then they may use just a password, or a Kerberos ticket (GSSAPI).&lt;br /&gt;
&lt;br /&gt;
== Helpful Links ==&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/services/two-factor-authentication&lt;br /&gt;
* https://duo.com/docs/duounix&lt;br /&gt;
* https://manpages.debian.org/stable/openssh-server/sshd_config.5.en.html&lt;br /&gt;
* http://www.linux-pam.org/Linux-PAM-html/sag-configuration-file.html&lt;br /&gt;
* https://cern-cert.github.io/pam_2fa/&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4523</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4523"/>
		<updated>2021-08-17T22:19:23Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Adding a new project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional drive that can be swapped in (in the event of a disk failure).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the three pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is maintained in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
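&lt;br /&gt;
For example, a typical invocation might look like this (&#039;&#039;the hostname and paths are illustrative only&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard /mirror/root/example rsync.example.org example/&amp;lt;/pre&amp;gt;&lt;br /&gt;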
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where $dataset is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (which includes hidden projects).&lt;br /&gt;
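&lt;br /&gt;
The dataset lookup can be approximated in shell as follows (&#039;&#039;a sketch, not the actual script&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /mirror/root&lt;br /&gt;
for link in *; do&lt;br /&gt;
    target=$(readlink &amp;quot;$link&amp;quot;) || continue   # e.g. .cscmirror1/example&lt;br /&gt;
    dataset=${target#.}                        # strip leading dot: cscmirror1/example&lt;br /&gt;
    zfs get -H -o value used &amp;quot;$dataset&amp;quot;&lt;br /&gt;
done&amp;lt;/pre&amp;gt;&lt;br /&gt;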
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root from which nginx serves. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section specifies metadata for each directory. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat is that YAML does not allow tabs for whitespace; please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to minimize [https://en.wikipedia.org/wiki/Globbing Globbing] resources):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2,3}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (run &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors both nodes and selects the active one.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are likely neither up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4522</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4522"/>
		<updated>2021-08-17T22:19:14Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Adding a new project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional drive that can be swapped in (in the event of a disk failure).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the three pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is maintained in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is now generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where $dataset is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (which includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root from which nginx serves. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; section specifies metadata for each directory. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat is that YAML does not allow tabs for whitespace; please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to minimize [https://en.wikipedia.org/wiki/Globbing Globbing] resources):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (&amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
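&lt;br /&gt;
As a sketch, steps 2-4 above look like the following (the project name &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; and the choice of &amp;lt;code&amp;gt;cscmirror1&amp;lt;/code&amp;gt; are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# pick the pool with the most free space&lt;br /&gt;
zfs list -o name,avail cscmirror1 cscmirror2 cscmirror3&lt;br /&gt;
&lt;br /&gt;
zfs create cscmirror1/example&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror1/example&lt;br /&gt;
&lt;br /&gt;
# the symlink must be relative so it still resolves inside the chroot&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror1/example example&amp;lt;/pre&amp;gt;&lt;br /&gt;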
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
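&lt;br /&gt;
These weights correspond to &amp;lt;code&amp;gt;vrrp_script&amp;lt;/code&amp;gt; blocks in &amp;lt;code&amp;gt;keepalived.conf&amp;lt;/code&amp;gt;. A minimal sketch of the nginx check (the script path and check URL here are illustrative, not necessarily what is deployed):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    # illustrative check command; the deployed one may differ&lt;br /&gt;
    script &amp;quot;/usr/bin/curl -sf -o /dev/null http://localhost/&amp;quot;&lt;br /&gt;
    interval 5&lt;br /&gt;
    weight -20   # subtracted from the node&#039;s priority on failure&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;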
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin kicks off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4521</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4521"/>
		<updated>2021-08-17T22:18:54Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Adding a new project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional spare drive that can be swapped in should a disk fail.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of these pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only members of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts is located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;; it currently includes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
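&lt;br /&gt;
For example, a hypothetical invocation (the upstream host and module below are placeholders, not real mirror settings) would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# csc-sync-standard local_dir rsync_host rsync_dir&lt;br /&gt;
~mirror/bin/csc-sync-standard example rsync.example.org example-module&amp;lt;/pre&amp;gt;&lt;br /&gt;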
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (which includes hidden projects).&lt;br /&gt;
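&lt;br /&gt;
The per-project size lookup amounts to something like this (a sketch; &amp;lt;code&amp;gt;apache&amp;lt;/code&amp;gt; is just an example project):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# resolve the symlink to find the dataset, e.g. .cscmirror1/apache becomes cscmirror1/apache&lt;br /&gt;
dataset=$(readlink /mirror/root/apache | sed &#039;s|^\.||&#039;)&lt;br /&gt;
zfs get -H -o value used &amp;quot;$dataset&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;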
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root served by the webserver. It is defined here so that it is easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; lists the directories that will not appear in the generated index page (by default, all folders are included).&lt;br /&gt;
&lt;br /&gt;
Next, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; provides per-directory information. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for whitespace, so please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU and memory resources used per session (e.g. to limit the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2,3}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (&amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin kicks off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4520</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4520"/>
		<updated>2021-08-17T22:18:38Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Index */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional spare drive that can be swapped in should a disk fail.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of these pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only members of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts is located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;; it currently includes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines the size of each project using &amp;lt;code&amp;gt;zfs get -H -o value used $dataset&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;$dataset&amp;lt;/code&amp;gt; is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (which includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory to be scanned; this will probably always be the mirror root served by the webserver. It is defined here so that it is easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; lists the directories that will not appear in the generated index page (by default, all folders are included).&lt;br /&gt;
&lt;br /&gt;
Next, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; provides per-directory information. All directories are listed by default, whether or not they appear in this list - only those under &amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; are ignored. The format is straightforward: name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and a URL. One caveat: YAML does not allow tabs for whitespace, so please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
Finally, the HTML index file is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU and memory resources used per session (e.g. to limit the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our own custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (&amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4519</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4519"/>
		<updated>2021-08-17T22:17:23Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Index */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional drive that can be swapped in (in the event of a disk failure).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the three pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts is located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. It currently includes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
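&lt;br /&gt;
For example, a hypothetical invocation of &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt; (the project name and upstream host below are illustrative only, not a real configuration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard /mirror/root/someproject rsync.example.org someproject&amp;lt;/pre&amp;gt;&lt;br /&gt;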
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run hourly. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script iterates over all folders in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and determines each project&#039;s size using `zfs get -H -o value used $dataset`, where $dataset is derived from the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;. The sizes of all folders are added together to calculate the total size (the total includes hidden projects).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
duflags: --human-readable --max-depth=1&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which the web server serves files. &amp;lt;code&amp;gt;duflags&amp;lt;/code&amp;gt; specifies the flags to be passed to &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; specifies the list of directories to be listed. The format is fairly straightforward: simply name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat here is that YAML does not allow tabs for whitespace; please indent with two spaces to remain consistent with the existing file format. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
The HTML index file itself is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used per session (e.g. to limit the cost of [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and refuse the checksum and delete options in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (run &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
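&lt;br /&gt;
As a concrete sketch of the storage steps above (assuming a hypothetical project named &amp;lt;code&amp;gt;someproject&amp;lt;/code&amp;gt; placed on &amp;lt;code&amp;gt;cscmirror1&amp;lt;/code&amp;gt;; the name is illustrative only):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create cscmirror1/someproject&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror1/someproject&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror1/someproject someproject&amp;lt;/pre&amp;gt;&lt;br /&gt;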
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the nodes and selects the active one.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
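&lt;br /&gt;
A minimal sketch of one such check as it might appear in keepalived&#039;s configuration (the script command and interval are assumptions, not the actual config):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    script &amp;quot;curl -sf http://localhost/&amp;quot;&lt;br /&gt;
    interval 5&lt;br /&gt;
    weight -20   # subtracted from the node&#039;s priority on failure&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;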
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4518</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4518"/>
		<updated>2021-08-17T22:14:22Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional drive that can be swapped in (in the event of a disk failure).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the three pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts is located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. It currently includes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
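&lt;br /&gt;
For example, a hypothetical invocation of &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt; (the project name and upstream host below are illustrative only, not a real configuration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard /mirror/root/someproject rsync.example.org someproject&amp;lt;/pre&amp;gt;&lt;br /&gt;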
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run at 5:40am on the 14th and 28th of each month. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This spawns an instance of &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;, which computes the size of each directory. This list is then sorted alphabetically by directory name and returned to the Python script. If any errors occur during this process, the script conservatively chooses to exit rather than risk generating an incorrect index file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
duflags: --human-readable --max-depth=1&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which the web server serves files. &amp;lt;code&amp;gt;duflags&amp;lt;/code&amp;gt; specifies the flags to be passed to &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; specifies the list of directories to be listed. The format is fairly straightforward: simply name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat here is that YAML does not allow tabs for whitespace; please indent with two spaces to remain consistent with the existing file format. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
The HTML index file itself is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used per session (e.g. to limit the cost of [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and refuse the checksum and delete options in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with the least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (run &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
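&lt;br /&gt;
As a concrete sketch of the storage steps above (assuming a hypothetical project named &amp;lt;code&amp;gt;someproject&amp;lt;/code&amp;gt; placed on &amp;lt;code&amp;gt;cscmirror1&amp;lt;/code&amp;gt;; the name is illustrative only):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;zfs create cscmirror1/someproject&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror1/someproject&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror1/someproject someproject&amp;lt;/pre&amp;gt;&lt;br /&gt;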
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the nodes and selects the active one.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4517</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4517"/>
		<updated>2021-08-17T22:14:14Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on one of three zfs zpools. There are 8 drives per array (7 for cscmirror3), configured as raidz2, and there is an additional drive that can be swapped in (in the event of a disk failure).&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror3&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the three pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
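For example, for a hypothetical project &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; stored on &amp;lt;code&amp;gt;cscmirror1&amp;lt;/code&amp;gt;, the symlink target must be relative (an absolute target would break under the chrooted services):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cd /mirror/root&lt;br /&gt;
ln -s .cscmirror1/example example&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;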
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
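For example, a hypothetical invocation of &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt; for a project named &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; (the upstream host and path below are illustrative, not a real mirror source) would look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard example rsync.example.org example&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
i.e. sync the remote rsync directory &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt; from &amp;lt;code&amp;gt;rsync.example.org&amp;lt;/code&amp;gt; into the local directory &amp;lt;code&amp;gt;example&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;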
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
Since Winter 2010, it has been generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run at 5:40am on the 14th and 28th of each month. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This runs an instance of &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;, which computes the size of each directory. The list is then sorted alphabetically by directory name and returned to the Python script. If any errors occur during this process, the script conservatively chooses to exit rather than risk generating an incorrect index file.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
duflags: --human-readable --max-depth=1&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which Apache serves. &amp;lt;code&amp;gt;duflags&amp;lt;/code&amp;gt; specifies the flags to be passed to &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not appear in the generated index page (by default, all directories are included).&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; specifies the list of directories to be listed. The format is fairly straightforward: simply name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat here is that YAML does not allow tabs for whitespace. Indent with two spaces to remain consistent with the existing file format, please. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
The HTML index file itself is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you really can&#039;t figure out how it works, look up the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g. to limit the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide their own sync scripts; however, we generally use our custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (&amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
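The rsync entry for a new project is a module definition in &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;; a minimal sketch (the module name and comment here are illustrative) looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[example]&lt;br /&gt;
    path = /mirror/root/example&lt;br /&gt;
    comment = Example project&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;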
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the services and selects the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (the higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
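These checks correspond to &amp;lt;code&amp;gt;vrrp_script&amp;lt;/code&amp;gt; blocks in the keepalived configuration. As a rough sketch only (the script names and paths here are illustrative, not the production config):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    script &amp;quot;/usr/local/bin/chk_nginx&amp;quot;  # the curl check&lt;br /&gt;
    interval 5&lt;br /&gt;
    weight -20  # subtracted from the priority while the check fails&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
vrrp_instance mirror {&lt;br /&gt;
    priority 100  # 90 on mirror-dc&lt;br /&gt;
    track_script {&lt;br /&gt;
        chk_nginx&lt;br /&gt;
    }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;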
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date, nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=How_to_IRC&amp;diff=4506</id>
		<title>How to IRC</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=How_to_IRC&amp;diff=4506"/>
		<updated>2021-07-11T00:04:15Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Replace references to taurine with neotame, as taurine caught fire 2019 (as well as some unnecessary references to caffeine)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Chatting with us =&lt;br /&gt;
&lt;br /&gt;
== The Lounge ==&lt;br /&gt;
We have a web UI for IRC at [https://chat.csclub.uwaterloo.ca The Lounge].&lt;br /&gt;
[[File:The_lounge_screenshot.png|alt=The Lounge screenshot of #csc channel|400px|top|left|thumbnail|A screen capture of the #csc channel, as seen from The Lounge web client]]&lt;br /&gt;
If you are a first-time IRC user, this is by far the easiest way to join.&lt;br /&gt;
The steps are (roughly):&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Login using your CSC credentials. &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Follow the prompts to join the Libera server (make sure TLS is enabled). &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Type &amp;lt;code&amp;gt;/join #csc&amp;lt;/code&amp;gt; &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt; Say hi! &amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are a syscom member, you will need to join the #csc-syscom channel, which requires nick (nickname) registration. To do this, run the following commands in The Lounge (or any IRC client):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/nick my_nickname&lt;br /&gt;
/msg NickServ REGISTER my_password my_email@example.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Fill in your nickname, password and email as appropriate. You will receive an email from Libera asking you to verify your email address.&lt;br /&gt;
Once you have done so, go back to The Lounge, go the Libera window, click on the three-dots button in the top right corner, and click &amp;quot;Edit this network&amp;quot;. There, you can specify the nick and password which you just created.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Mattermost ==&lt;br /&gt;
We self-host [https://mattermost.csclub.uwaterloo.ca/ Mattermost] now, it&#039;s easier to use if you&#039;re not familiar with IRC. If you know how to use IRC, we&#039;re #csc on libera.chat and there should be a #csc bridge that relays messages between #csc and our Mattermost instance.&lt;br /&gt;
&lt;br /&gt;
Both #csc on libera.chat ([https://web.libera.chat/#csc Libera Webchat] or via IRC clients e.g. irssi, weechat) and ~csc on Mattermost are official channels to interact with CSC members!&lt;br /&gt;
&lt;br /&gt;
= Mattermost Setup =&lt;br /&gt;
&lt;br /&gt;
[[File:Mattermost-phone-sample.jpg|alt=Mattermost android screenshot of #csc channel|200px|top|left|thumbnail|A screen capture of the #csc channel, as seen from Mattermost Android client]]&lt;br /&gt;
[[File:Mattermost-csc-sample.png|alt=Mattermost #csc screen capture, including a conversation between members of the channel|200px|top|right|thumbnail|A screen capture of the #csc channel, as seen from Mattermost desktop]]&lt;br /&gt;
&lt;br /&gt;
Make an account at [https://mattermost.csclub.uwaterloo.ca/signup_email our self-hosted Mattermost]. For your username, you can put your Quest ID (i.e. your CSC username); you can separately set your full name as it will appear in Mattermost.&lt;br /&gt;
&lt;br /&gt;
The benefit of Mattermost over Slack and family is that Slack stores all your information on Slack&#039;s servers, wherever they are in the US. They do this so they can sell your data back to you (e.g. not allowing you to see old messages), but Slack is also closed-source even though it was derived from IRC. Mattermost is open-source and hosted on CSC servers.&lt;br /&gt;
&lt;br /&gt;
For iOS users, Mattermost&#039;s mobile app is also a superior option if you wish to receive push notifications as it supports Apple&#039;s native push via iCloud/APN.&lt;br /&gt;
&lt;br /&gt;
= IRC Setup =&lt;br /&gt;
&lt;br /&gt;
[[File:Glowing-bear-screencap.png|alt=glowing-bear screen capture of #csc IRC channel|right|thumbnail|450px|A screen capture of the #csc IRC channel, as seen from glowing-bear client]]&lt;br /&gt;
&lt;br /&gt;
[[File:Weechat-Android-screenshot.png|alt=Weechat Android screen capture of #csc IRC channel|right|thumbnail|A screen capture of the #csc IRC channel, as seen from Weechat Android client]]&lt;br /&gt;
&lt;br /&gt;
This method will establish a persistent IRC session that you can connect to with different clients. A weechat server program running on a CSClub server will remain connected to IRC networks at all times, and simply connecting to your weechat server program will give you all the chat history upon connection.&lt;br /&gt;
&lt;br /&gt;
To set up your weechat server program:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log in to a CS Club general-use server, such as neotame.csclub.uwaterloo.ca, and run `weechat` in such a way that it will keep running after you log out&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Replace ctdalek with your username&lt;br /&gt;
&lt;br /&gt;
  $ ssh ctdalek@neotame.csclub.uwaterloo.ca&lt;br /&gt;
  $ screen -U weechat&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;WeeChat&amp;quot; window should have opened up. Type the following commands into this window, replacing [yourpassword] with a password of your choice and [yourport] with a number in the range of [28100-28400]:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; /set relay.network.password [yourpassword]&lt;br /&gt;
  &amp;gt; /relay add weechat [yourport]&lt;br /&gt;
  &amp;gt; /save&lt;br /&gt;
&lt;br /&gt;
Once you have entered in all these commands, you don&#039;t need your terminal anymore. You can close your ssh window!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Your personal WeeChat server is set up. Now connect to it using a pretty client:&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[http://www.glowing-bear.org/ glowing-bear] is a free and open source web-based weechat client. It works well as a desktop client, and on iOS. To connect using glowing-bear, fill in &amp;quot;Connection Settings&amp;quot; with `neotame.csclub.uwaterloo.ca`, `[yourport]`, and `[yourpassword]`. Make sure to use the http version of the website with this guide! HTTPS only works if you set up encryption. That&#039;s not covered here.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended&#039;&#039;&#039;: [https://play.google.com/store/apps/details?id=com.ubergeek42.WeechatAndroid Weechat Android] is a free and open source Android weechat client. It gives notifications when you receive a direct message or your name is mentioned in one of the channels you are in. To connect using Weechat Android, fill in Settings &amp;gt; Connection with `neotame.csclub.uwaterloo.ca`, `[yourport]`, and `[yourpassword]`.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Join the #csc IRC channel&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In your weechat client (e.g. glowing-bear or Weechat Android), go to the &#039;weechat&#039; buffer and type:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; /server add libera irc.libera.chat/6697 -ssl -autoconnect&lt;br /&gt;
  &amp;gt; /set irc.server.libera.autojoin &amp;quot;#csc&amp;quot;&lt;br /&gt;
  &amp;gt; /save&lt;br /&gt;
  &amp;gt; /connect libera&lt;br /&gt;
&lt;br /&gt;
You&#039;re now connected to the Libera IRC network, over an SSL connection no less, so you&#039;re super sneaky as well. Way to go.&lt;br /&gt;
&lt;br /&gt;
Now, to join the CSC channel!&lt;br /&gt;
&lt;br /&gt;
In your client, you&#039;ll now have two buffers that you can switch to. One is called &amp;quot;weechat&amp;quot; and the other is &amp;quot;libera&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Switch to the &amp;quot;libera&amp;quot; buffer and type:&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; /join #csc&lt;br /&gt;
&lt;br /&gt;
Congratulations you win!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Know some IRC commands&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Welcome to the channel! Go ahead and say something, like&lt;br /&gt;
&lt;br /&gt;
  &amp;gt; Good morning ctdalek http://www.total-knowledge.com/~ilya/mips/ugt.html&lt;br /&gt;
&lt;br /&gt;
If you want to privately message someone, use &lt;br /&gt;
  &amp;gt; /q [nick] [optional message] &lt;br /&gt;
which will open a new tab with that person. For example `/q pj2melan ping pong`.&lt;br /&gt;
&lt;br /&gt;
If you want to join another channel, use &lt;br /&gt;
  &amp;gt; /join [channel]&lt;br /&gt;
For example `/join #csc`.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&#039;&#039;Note about CSClub server restarts:&#039;&#039; If neotame (or whichever server you&#039;re running the weechat program on) is restarted for any reason (we&#039;ll email you if it is), make sure to run `screen -U weechat` again to start your server. You won&#039;t have to reconfigure weechat (step 2), though.&lt;br /&gt;
&lt;br /&gt;
== Securing Glowing Bear - SSL/TLS Setup ==&lt;br /&gt;
&lt;br /&gt;
With the default setup, when you log in to your weechat relay using a client such as glowing-bear or Weechat Android &#039;&#039;your password is sent in the clear&#039;&#039;. If you believe this to be a bad thing, follow these steps to enable SSL encryption between you and your weechat relay running on neotame.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Log in to neotame.csclub.uwaterloo.ca to generate an SSL certificate:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ssh neotame.csclub.uwaterloo.ca&lt;br /&gt;
$ mkdir ~/.weechat/ssl&lt;br /&gt;
$ cd ~/.weechat/ssl&lt;br /&gt;
$ openssl req -nodes -newkey rsa:4096 -keyout relay.pem -x509 -days 365 -out relay.pem # Fill in the fields as it asks&lt;br /&gt;
$ exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Tell weechat to use the new certificate you generated, and add a new relay with a different password (since your old password was likely compromised):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;In your weechat client (glowing-bear, or Weechat Android), run&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;gt; /set relay.network.password [newpassword]&lt;br /&gt;
&amp;gt; /relay sslcertkey&lt;br /&gt;
&amp;gt; /relay del weechat&lt;br /&gt;
&amp;gt; /relay add ssl.weechat [yourport]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;li&amp;gt;Tell your client to connect to your relay using SSL:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For glowing-bear, refresh and simply check the &amp;quot;Encryption. Check settings for help.&amp;quot; checkbox when logging in with your new password.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;For Weechat Android, in Settings &amp;gt; Connection, change Connection type to WeeChat SSL and change your Relay password.&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Enjoy fully encrypted communication!&lt;br /&gt;
&lt;br /&gt;
You might have warnings about untrusted certificates, but since you made the certificate yourself you can trust yourself and add required security exceptions.&lt;br /&gt;
&lt;br /&gt;
== Quick SSH-based Setup ==&lt;br /&gt;
&lt;br /&gt;
1. Open up an IRC client, e.g. irssi. Launch irssi in a screen session, which you&lt;br /&gt;
can return to later.&lt;br /&gt;
&lt;br /&gt;
  $ ssh neotame.csclub.uwaterloo.ca&lt;br /&gt;
  $ screen -U irssi&lt;br /&gt;
&lt;br /&gt;
2. In irssi, connect to the libera network and join our channel. &lt;br /&gt;
&lt;br /&gt;
  /server add -auto -net libera -ssl -ssl_verify irc.libera.chat 6697&lt;br /&gt;
  /save&lt;br /&gt;
  /connect libera&lt;br /&gt;
  /join #csc&lt;br /&gt;
&lt;br /&gt;
3. Please set your nickname to your Quest ID so we know who you are.  &lt;br /&gt;
&lt;br /&gt;
  /nick $YOUR_QUEST_ID&lt;br /&gt;
  /save&lt;br /&gt;
&lt;br /&gt;
You can register your nickname on the libera network by messaging NickServ.&lt;br /&gt;
&lt;br /&gt;
  /msg NickServ REGISTER password email &lt;br /&gt;
&lt;br /&gt;
4. Close your screen session, which you can return to later.&lt;br /&gt;
&lt;br /&gt;
  CTRL-A CTRL-D&lt;br /&gt;
&lt;br /&gt;
5. Return to your screen session. You will have remained connected to the channel. &lt;br /&gt;
&lt;br /&gt;
  $ ssh neotame.csclub.uwaterloo.ca -t &amp;quot;screen -Urd&amp;quot;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=New_Member_Guide&amp;diff=4354</id>
		<title>New Member Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=New_Member_Guide&amp;diff=4354"/>
		<updated>2021-01-31T00:51:20Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Hello, and welcome to the Computer Science Club! Thanks for joining. The office staff who signed you up should have told you about this stuff, but just as a refresher, here it is again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Office ==&lt;br /&gt;
&lt;br /&gt;
* Our office is MC 3036/3037 (we occupy both rooms) and we&#039;re across the hall from (but distinct from) the Mathsoc office.&lt;br /&gt;
&lt;br /&gt;
* Our club doesn&#039;t have weekly meetings or anything like that. If the door is open, we are open (even if it&#039;s 3 in the morning on Sunday). Feel free to drop in and say hi!&lt;br /&gt;
&lt;br /&gt;
* The office closes when the last office staff leaves the room, and the office opens when somebody with a key comes by. If you&#039;re interested in becoming office staff, look out for the termly office staff training event or ask around the office.&lt;br /&gt;
&lt;br /&gt;
* We have staplers by the door farthest from Mathsoc. Even if you&#039;re not a member, you&#039;re allowed to use them. You don&#039;t even have to ask (and in fact, we&#039;d prefer if you didn&#039;t. Office regulars spend a good amount of time telling people that yes, they can use the staplers).&lt;br /&gt;
&lt;br /&gt;
* We sell pop, chips, chocolate bars and other snacks. Prices are on the fridge door. Payment goes in the red cup in the fridge.&lt;br /&gt;
&lt;br /&gt;
== Events ==&lt;br /&gt;
We hold a different set of events every term, but the same types of events come up again and again. Watch out for emails about:&lt;br /&gt;
* Industry tech talks. In the past, we&#039;ve gotten folks from various tech companies to talk about algorithms, database design decisions and other things.&lt;br /&gt;
&lt;br /&gt;
* UNIX 10X tutorials. Don&#039;t know how to use the commandline? Come out and learn with us. Know how to use the commandline? Come out and help us answer questions.&lt;br /&gt;
&lt;br /&gt;
* Member talks. Do you have a burning desire to talk about AVL trees? No? Well, if you want to talk about a computer sciencey topic that&#039;s close to your heart, send an email to exec at csclub.uwaterloo.ca with a talk abstract (a paragraph we can put on a poster to describe your talk) and we&#039;ll see if we can make something happen.&lt;br /&gt;
&lt;br /&gt;
* Code parties. We eat food, talk and write code. Code parties happen several times a term.&lt;br /&gt;
&lt;br /&gt;
== Machines ==&lt;br /&gt;
As a member of the club, you have access to our machines, both [[Machine_List#Servers|servers in the machine room down the hall]] and [[Machine_List#Office Terminals|desktops in our physical office]]. Keep in mind that your username is your quest userid (e.g. j7smith) and your password starts out as the one you set when you joined the club for the first time.&lt;br /&gt;
&lt;br /&gt;
* As a member you must abide by the [https://csclub.uwaterloo.ca/services/machine_usage machine usage policy].&lt;br /&gt;
&lt;br /&gt;
* Your files are accessible on all of our machines.&lt;br /&gt;
&lt;br /&gt;
* Keep in mind that the machines are shared among all of our members. Play nice. For example, &amp;lt;nowiki&amp;gt;caffeine&amp;lt;/nowiki&amp;gt; is our web server. You are strongly advised not to run long, intensive jobs on it. Something like that is a better fit for &amp;lt;nowiki&amp;gt;hfcs&amp;lt;/nowiki&amp;gt; or &amp;lt;nowiki&amp;gt;corn-syrup&amp;lt;/nowiki&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
* Use SSH to access the machines in the server room.&lt;br /&gt;
** If you don&#039;t know how to use the commandline, you can wait for our approximately termly UNIX 101 event, google for &amp;quot;how to use the command line&amp;quot;, or ask around the office.&lt;br /&gt;
** If you happen to be using Windows, you can use an SSH client such as [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html PuTTY].&lt;br /&gt;
** If you have a Mac or you run Linux, you already have the &amp;lt;nowiki&amp;gt;ssh&amp;lt;/nowiki&amp;gt; command installed. If your userid is &amp;lt;nowiki&amp;gt;j7smith&amp;lt;/nowiki&amp;gt; and you want to use &amp;lt;nowiki&amp;gt;corn-syrup&amp;lt;/nowiki&amp;gt;, just open up a terminal window and type the following. You will be asked for your CSC password.&lt;br /&gt;
&lt;br /&gt;
 ssh j7smith@corn-syrup.csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
* Our office terminals are turned off, rebooted and otherwise reset somewhat frequently.&lt;br /&gt;
&lt;br /&gt;
* If you forget your password, come by the office with your watcard and some other form of ID. Regular office staff can&#039;t reset your password for you, but if there&#039;s someone on our Systems Committee hanging around, they can do this for you.&lt;br /&gt;
&lt;br /&gt;
* If you would like to change your password, log on to any of our machines and type &amp;lt;nowiki&amp;gt;kpasswd&amp;lt;/nowiki&amp;gt; in a terminal. You will be prompted for your old password and be asked to type in your new password twice (just to make sure you didn&#039;t make a typo).&lt;br /&gt;
&lt;br /&gt;
* We have a MySQL daemon running, but only on our web server &amp;lt;nowiki&amp;gt;caffeine&amp;lt;/nowiki&amp;gt;. Check out [[MySQL|this page]] if you would like a database.&lt;br /&gt;
&lt;br /&gt;
* For technical questions (including package installation requests), send an email to our systems committee, syscom at csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
== Web Hosting ==&lt;br /&gt;
You get web space with your CSC membership. Your website is visible at [http://csclub.uwaterloo.ca/~j7smith] (where j7smith is replaced with your own userid, of course).&lt;br /&gt;
&lt;br /&gt;
See [[Web Hosting]] for more information.&lt;br /&gt;
&lt;br /&gt;
== IRC ==&lt;br /&gt;
We have an IRC (internet relay chat) channel. Come hang out with us in #csc on freenode. If you are unfamiliar with IRC, you may want to read [[How to IRC|this guide]].&lt;br /&gt;
&lt;br /&gt;
== Mail ==&lt;br /&gt;
* see the [[Mail]] page.&lt;br /&gt;
* The CSC gets a lot of requests to distribute [[Industry Opportunities]] to our members. We have a special opt-in mailing list for the people that want to hear about such things.&lt;br /&gt;
* We have a low-volume general mailing list which we use to send out information about upcoming events.&lt;br /&gt;
&lt;br /&gt;
== Library ==&lt;br /&gt;
There are books on the shelves lining the office. Feel free to drop by and read them.&lt;br /&gt;
&lt;br /&gt;
Someone who knows more about the library checkout system than jy2wong should write something here.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=4334</id>
		<title>SSL</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=4334"/>
		<updated>2020-08-29T04:03:55Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== GlobalSign ==&lt;br /&gt;
&lt;br /&gt;
The CSC currently has an SSL certificate from GlobalSign for *.csclub.uwaterloo.ca, provided at no cost to us through IST.  GlobalSign can take a long time to respond to certificate signing requests (CSRs) for wildcard certs, so our CSR really needs to be handed off to IST at least 2 weeks in advance. You can submit it sooner: the new expiry date will be the old expiry date + 1 year (+ the 30-day bonus). Having an invalid cert for any length of time leads to terrible breakage, followed by terrible workarounds and prolonged problems.&lt;br /&gt;
&lt;br /&gt;
When the certificate is due to expire in a month or two, syscom should (but apparently doesn&#039;t always) get an email notification, which will include a renewal link. Otherwise, use the [https://uwaterloo.ca/information-systems-technology/about/organizational-structure/information-security-services/certificate-authority/globalsign-signed-x5093-certificates/self-service-globalsign-ssl-certificates IST-CA self service system]. Please keep a copy of the key, CSR and (once issued) certificate in &amp;lt;tt&amp;gt;/home/sysadmin/certs&amp;lt;/tt&amp;gt;. The OpenSSL examples linked there are good for generating a 2048-bit RSA key and a corresponding CSR. It&#039;s probably a good idea to generate a new private key each time (it&#039;s not much effort anyway). Just make sure your CSR is for &amp;lt;tt&amp;gt;*.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
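For reference, a minimal OpenSSL invocation for the key and CSR might look like the sketch below. The subject fields shown are illustrative assumptions, not values confirmed by IST; check the IST-CA instructions for what they actually expect.

```shell
# Generate a new 2048-bit RSA key and a CSR for the wildcard cert.
# Subject fields here are illustrative; confirm them against the IST-CA docs.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout csclub-wildcard.key \
    -out csclub-wildcard.csr \
    -subj "/C=CA/ST=Ontario/L=Waterloo/O=University of Waterloo/CN=*.csclub.uwaterloo.ca"
# Sanity-check the CSR before handing it off:
openssl req -in csclub-wildcard.csr -noout -subject
```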
At the self-service portal, these options worked in 2013. If you need IST assistance, [mailto:ist-ca@uwaterloo.ca ist-ca@uwaterloo.ca] is the email address you should contact.&lt;br /&gt;
  Products: OrganizationSSL&lt;br /&gt;
  SSL Certificate Type: Wildcard SSL Certificate&lt;br /&gt;
  Validity Period: 1 year&lt;br /&gt;
  Are you switching from a Competitor? No, I am not switching&lt;br /&gt;
  Are you renewing this Certificate? Yes (paste current certificate)&lt;br /&gt;
  30-day bonus: Yes (why not?)&lt;br /&gt;
  Add specific Subject Alternative Names (SANs): No (*.csclub.uwaterloo.ca automatically adds csclub.uwaterloo.ca as a SAN)&lt;br /&gt;
  Enter Certificate Signing Request (CSR): Yes (paste CSR)&lt;br /&gt;
  Contact Information:&lt;br /&gt;
    First Name: Computer Science Club&lt;br /&gt;
    Last Name: Systems Committee&lt;br /&gt;
    Telephone: +1 519 888 4567 x33870&lt;br /&gt;
    Email Address: syscom@csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
== Certificate Location ==&lt;br /&gt;
&lt;br /&gt;
Keep a copy of newly generated certificates in /home/sysadmin/certs on the NFS server (currently [[Machine_List#aspartame|aspartame]]).&lt;br /&gt;
&lt;br /&gt;
Below is a list of places where you&#039;ll need to put the new certificate to keep our services running. The private key (if applicable) should be kept next to the certificate with the extension .key.&lt;br /&gt;
&lt;br /&gt;
* caffeine:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* coffee:/etc/ssl/private/csclub.uwaterloo.ca (for PostgreSQL and MariaDB)&lt;br /&gt;
* mail:/etc/ssl/private/csclub-wildcard.crt (for Apache, Postfix and Dovecot)&lt;br /&gt;
* rt:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* potassium-benzoate:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* auth1:/etc/ssl/private/csclub-wildcard.crt (for slapd)&lt;br /&gt;
* auth2:/etc/ssl/private/csclub-wildcard.crt (for slapd)&lt;br /&gt;
* logstash:/etc/ssl/private/csclub-wildcard.crt (for nginx) [temporarily down 2020]&lt;br /&gt;
* mattermost:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* load-balancer-0(1|2):/etc/ssl/private/csclub.uwaterloo.ca (for haproxy) [temporarily down 2020]&lt;br /&gt;
&lt;br /&gt;
Some services (e.g. Dovecot, Postfix) prefer to have the certificate chain in one file. Concatenate the appropriate intermediate root to the end of the certificate and store this as csclub-wildcard-chain.crt.&lt;br /&gt;
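The concatenation step can be sketched as below, with stand-in file contents so it runs anywhere; on a real host the inputs are the issued certificate and the GlobalSign intermediate PEM.

```shell
# Illustrative stand-ins so this sketch is self-contained; on a real host
# these files are the issued certificate and the GlobalSign intermediate.
printf -- '-----LEAF CERTIFICATE-----\n' > csclub-wildcard.crt
printf -- '-----INTERMEDIATE CA-----\n' > intermediate.crt
# Order matters: the server certificate first, then the intermediate.
cat csclub-wildcard.crt intermediate.crt > csclub-wildcard-chain.crt
```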
&lt;br /&gt;
== letsencrypt ==&lt;br /&gt;
&lt;br /&gt;
We support letsencrypt for our virtual hosts with custom domains. We use &amp;lt;tt&amp;gt;certbot&amp;lt;/tt&amp;gt; from the Debian repositories with a configuration file at &amp;lt;tt&amp;gt;/etc/letsencrypt/cli.ini&amp;lt;/tt&amp;gt;, and a systemd timer to handle renewals.&lt;br /&gt;
&lt;br /&gt;
The setup for a new domain is:&lt;br /&gt;
&lt;br /&gt;
# Become &amp;lt;tt&amp;gt;certbot&amp;lt;/tt&amp;gt; on caffeine with &amp;lt;tt&amp;gt;sudo -u certbot bash&amp;lt;/tt&amp;gt; or similar.&lt;br /&gt;
# Run &amp;lt;tt&amp;gt;certbot certonly -c /etc/letsencrypt/cli.ini -d DOMAIN --logs-dir /tmp&amp;lt;/tt&amp;gt;. The logs directory is only needed for troubleshooting.&lt;br /&gt;
# Set up the Apache site configuration using the example below (Apache config lives in &amp;lt;tt&amp;gt;/etc/apache2&amp;lt;/tt&amp;gt;). Note the permanent redirect to https.&lt;br /&gt;
# Make sure to commit your changes when you&#039;re done.&lt;br /&gt;
# Reloading apache config is &amp;lt;tt&amp;gt;sudo systemctl reload apache2&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
     ServerName example.com&lt;br /&gt;
     ServerAlias *.example.com&lt;br /&gt;
     ServerAdmin example@csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
     #DocumentRoot /users/example/www/&lt;br /&gt;
     Redirect permanent / https://example.com/&lt;br /&gt;
 &lt;br /&gt;
     ErrorLog /var/log/apache2/example-error.log&lt;br /&gt;
     CustomLog /var/log/apache2/example-access.log combined&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;VirtualHost csclub:443&amp;gt;&lt;br /&gt;
     SSLEngine on&lt;br /&gt;
     SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem&lt;br /&gt;
     SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem&lt;br /&gt;
     SSLStrictSNIVHostCheck on&lt;br /&gt;
 &lt;br /&gt;
     ServerName example.com&lt;br /&gt;
     ServerAlias *.example.com&lt;br /&gt;
     ServerAdmin example@csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
     DocumentRoot /users/example/www&lt;br /&gt;
 &lt;br /&gt;
     ErrorLog /var/log/apache2/example-error.log&lt;br /&gt;
     CustomLog /var/log/apache2/example-access.log combined&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=4323</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=4323"/>
		<updated>2020-01-24T03:32:55Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through [https://nsbuild.uwaterloo.ca Infoblox].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* jxpryde&lt;br /&gt;
* mtrberzi&lt;br /&gt;
* API account located in the standard syscom place&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts some authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the Infoblox records.&lt;br /&gt;
* Any zone added to Designate DNS service on CSC Cloud&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Infoblox ===&lt;br /&gt;
&lt;br /&gt;
The main DNS zone for the club (csclub.uwaterloo.ca) is managed using the University&#039;s Infoblox system.&lt;br /&gt;
&lt;br /&gt;
To add a new record:&lt;br /&gt;
&lt;br /&gt;
# Visit [https://nsbuild.uwaterloo.ca Infoblox]&lt;br /&gt;
# Locate the desired network&lt;br /&gt;
# Find a free IP address (ping and reverse DNS it to make sure it&#039;s unused) &lt;br /&gt;
# Click add host (+)&lt;br /&gt;
# Set the zone to csclub.uwaterloo.ca&lt;br /&gt;
# Set the name&lt;br /&gt;
# Add the IPv4 address, if it is not set&lt;br /&gt;
# Add the IPv6 address, typically in the format of (2620:101:f000:$SUBNET:c5c::$LAST_OCTET_OF_V4_ADDRESS)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Set Pol8 Classification to &amp;quot;Public&amp;quot;&lt;br /&gt;
# Set Primary OU to &amp;quot;CS&amp;quot;&lt;br /&gt;
# Set Technical Contact to &amp;quot;syscom@csclub.uwaterloo.ca&amp;quot;&lt;br /&gt;
# Click &amp;quot;Save &amp;amp; Close&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The CSC DNS servers will update within 10 minutes with the new information.&lt;br /&gt;
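The IPv6 addressing convention above can be sketched as follows: the decimal digits of the IPv4 last octet are reused literally as the final group. The subnet ID 4901 and the sample address are taken from this page&#039;s own examples.

```shell
# Derive the conventional IPv6 address from an IPv4 address, per the pattern
# 2620:101:f000:$SUBNET:c5c::$LAST_OCTET_OF_V4_ADDRESS described above.
SUBNET=4901                 # subnet ID from this page's examples
IPV4=129.97.134.4
LAST_OCTET=${IPV4##*.}      # strip everything up to the final dot
echo "2620:101:f000:${SUBNET}:c5c::${LAST_OCTET}"
```

For example, 129.97.134.4 maps to 2620:101:f000:4901:c5c::4, matching ext-dns1 above.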
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=4322</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=4322"/>
		<updated>2020-01-24T03:31:54Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through [https://nsbuild.uwaterloo.ca Infoblox].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* jxpryde&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts some authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the Infoblox records.&lt;br /&gt;
* Any zone added to Designate DNS service on CSC Cloud&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Infoblox ===&lt;br /&gt;
&lt;br /&gt;
The main DNS zone for the club (csclub.uwaterloo.ca) is managed using the University&#039;s Infoblox system.&lt;br /&gt;
&lt;br /&gt;
To add a new record:&lt;br /&gt;
&lt;br /&gt;
# Visit [https://nsbuild.uwaterloo.ca Infoblox]&lt;br /&gt;
# Locate the desired network&lt;br /&gt;
# Find a free IP address (ping and reverse DNS it to make sure it&#039;s unused) &lt;br /&gt;
# Click add host (+)&lt;br /&gt;
# Set the zone to csclub.uwaterloo.ca&lt;br /&gt;
# Set the name&lt;br /&gt;
# Add the IPv4 address, if it is not set&lt;br /&gt;
# Add the IPv6 address, typically in the format of (2620:101:f000:$SUBNET:c5c::$LAST_OCTET_OF_V4_ADDRESS)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Set Pol8 Classification to &amp;quot;Public&amp;quot;&lt;br /&gt;
# Set Primary OU to &amp;quot;CS&amp;quot;&lt;br /&gt;
# Set Technical Contact to &amp;quot;syscom@csclub.uwaterloo.ca&amp;quot;&lt;br /&gt;
# Click &amp;quot;Save &amp;amp; Close&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The CSC DNS servers will update within 10 minutes with the new information.&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=New_CSC_Machine&amp;diff=4301</id>
		<title>New CSC Machine</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=New_CSC_Machine&amp;diff=4301"/>
		<updated>2019-09-15T19:48:00Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* apt */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Booting =&lt;br /&gt;
&lt;br /&gt;
* Put the TFTP image in place (if this dist-arch pair has been installed before, you may skip this).&lt;br /&gt;
e.g. extract http://mirror.csclub.uwaterloo.ca/ubuntu/dists/oneiric/main/installer-amd64/current/images/netboot/netboot.tar.gz to caffeine:/srv/tftp/oneiric-amd64&lt;br /&gt;
&lt;br /&gt;
* Force network boot in the BIOS. This may be called &amp;quot;Legacy LAN&amp;quot; or other such cryptic things. If this doesn&#039;t work, boot from CD or USB instead.&lt;br /&gt;
&lt;br /&gt;
It is preferred to use the &amp;quot;alternate&amp;quot; Ubuntu installer image, based on debian-installer, instead of the Ubiquity installer. This installer supports software RAID and LVM out of the box, and will generally make your life easier. If installing Debian, this is the usual installer, so don&#039;t sweat it.&lt;br /&gt;
&lt;br /&gt;
= Installing =&lt;br /&gt;
&lt;br /&gt;
== debian-installer ==&lt;br /&gt;
&lt;br /&gt;
At least in expert mode, you can choose a custom mirror (top of the countries list) and give the path for mirror directly. This will make installation super-fast compared to installing from anywhere else.&lt;br /&gt;
&lt;br /&gt;
Please install to LVM volumes, as this is our standard configuration on all machines where possible. It allows more flexible partitioning across available volumes. Since GRUB 2, even /boot may be on LVM; this is the preferred configuration for simplicity, except when legacy partitioning setups make this inconvenient.&lt;br /&gt;
&lt;br /&gt;
You may enable unattended upgrades, but do not enable Canonical&#039;s remote management service or any such nonsense. This is mostly a straightforward Debian/Ubuntu install.&lt;br /&gt;
&lt;br /&gt;
== Ubiquity ==&lt;br /&gt;
&lt;br /&gt;
Ubiquity is the Ubuntu GUI installer. For it to have lvm support, run:&lt;br /&gt;
 apt-get install lvm2&lt;br /&gt;
&lt;br /&gt;
If you still can&#039;t see the partitions (even if lvscan sees them, but no devices exist), run &amp;lt;tt&amp;gt;vgscan&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;vgchange -ay&amp;lt;/tt&amp;gt; as root. Now the partitioner should be able to see them. We prefer to use LVM for partitions. Since GRUB 2, even /boot may be on LVM; this is the preferred configuration for simplicity, except when legacy partitioning setups make this inconvenient.&lt;br /&gt;
&lt;br /&gt;
After installing with Ubiquity, you must also add LVM support to the newly installed system, and in particular its initramfs.&lt;br /&gt;
&lt;br /&gt;
 mount /dev/vg0/root /mnt&lt;br /&gt;
 mount /dev/sda1 /mnt/boot&lt;br /&gt;
 chroot /mnt&lt;br /&gt;
 apt-get install lvm2&lt;br /&gt;
&lt;br /&gt;
You should see &amp;lt;tt&amp;gt;update-initramfs&amp;lt;/tt&amp;gt; regenerate the initramfs. Reboot.&lt;br /&gt;
&lt;br /&gt;
= After Installing =&lt;br /&gt;
&lt;br /&gt;
Add the machine&#039;s name to ~git/public/hosts.git, and run the ansible playbook (https://git.uwaterloo.ca/csc/playbooks/blob/master/update-hosts.yml) to distribute the updated hosts file to all machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== apt ==&lt;br /&gt;
&lt;br /&gt;
If you did not during installation, change all references in &amp;lt;tt&amp;gt;/etc/apt/sources.list&amp;lt;/tt&amp;gt; to use &amp;lt;tt&amp;gt;mirror&amp;lt;/tt&amp;gt; instead of the usual mirrors.&lt;br /&gt;
&lt;br /&gt;
Also add support for the CSC packages. Add the following to &amp;lt;tt&amp;gt;/etc/apt/sources.list.d/csclub.list&amp;lt;/tt&amp;gt; (or copy from another host):&lt;br /&gt;
&lt;br /&gt;
 deb http://debian.csclub.uwaterloo.ca/ &amp;lt;distribution&amp;gt; main contrib non-free&lt;br /&gt;
 deb-src http://debian.csclub.uwaterloo.ca/ &amp;lt;distribution&amp;gt; main contrib non-free&lt;br /&gt;
&lt;br /&gt;
You&#039;ll also need the CSC archive signing key (if &amp;lt;tt&amp;gt;curl&amp;lt;/tt&amp;gt; is not installed, install it).&lt;br /&gt;
 curl -s http://debian.csclub.uwaterloo.ca/csclub.asc | apt-key add -&lt;br /&gt;
&lt;br /&gt;
You should now run &amp;lt;tt&amp;gt;apt-get update&amp;lt;/tt&amp;gt; to reflect these changes.&lt;br /&gt;
&lt;br /&gt;
Next, install &amp;lt;tt&amp;gt;inapt&amp;lt;/tt&amp;gt; (it is in the CSC Debian archive). If it hasn&#039;t previously been built for the current platform, clone and build it (TODO: describe how to do this).&lt;br /&gt;
&lt;br /&gt;
Clone &amp;lt;tt&amp;gt;~git/public/packages.git&amp;lt;/tt&amp;gt;, update it if necessary (notably updating &amp;lt;tt&amp;gt;nodes.ia&amp;lt;/tt&amp;gt; to reflect the distribution and role of the machine), then run:&lt;br /&gt;
 inapt *.ia&lt;br /&gt;
&lt;br /&gt;
(Due to a bug, if a warning is thrown, this will segfault. Until fixed, just temporarily remove whatever packages it complains about from the list.)&lt;br /&gt;
&lt;br /&gt;
Warning: this will take a long time due to the large number of packages being installed. Some of the steps below can be started once their relevant packages are installed, even while the remaining packages are still installing.&lt;br /&gt;
&lt;br /&gt;
For unattended upgrades in the future, install the &amp;lt;tt&amp;gt;unattended-upgrades&amp;lt;/tt&amp;gt; package and copy &amp;lt;tt&amp;gt;/etc/apt/apt.conf&amp;lt;/tt&amp;gt; from another host.&lt;br /&gt;
&lt;br /&gt;
== network ==&lt;br /&gt;
&lt;br /&gt;
Note that inapt currently uninstalls NetworkManager, which Ubuntu uses by default to configure the network. Once this completes, open &amp;lt;tt&amp;gt;/etc/network/interfaces&amp;lt;/tt&amp;gt; and set up a static networking configuration (otherwise, networking will not come back up on reboot). It should look something like this (NOTE: csc-storage is only for servers in the machine room):&lt;br /&gt;
&lt;br /&gt;
 # This file describes the network interfaces available on your system&lt;br /&gt;
 # and how to activate them. For more information, see interfaces(5).&lt;br /&gt;
 &lt;br /&gt;
 # The loopback network interface&lt;br /&gt;
 auto lo&lt;br /&gt;
 iface lo inet loopback&lt;br /&gt;
 &lt;br /&gt;
 # The primary network interface&lt;br /&gt;
 auto eth0&lt;br /&gt;
 iface eth0 inet static&lt;br /&gt;
         address 129.97.134.xxx&lt;br /&gt;
         netmask 255.255.255.0&lt;br /&gt;
         gateway 129.97.134.1&lt;br /&gt;
 &lt;br /&gt;
 iface eth0 inet6 static&lt;br /&gt;
         address 2620:101:f000:4901:c5c:XXXX&lt;br /&gt;
         netmask 64&lt;br /&gt;
         gateway 2620:101:f000:4901::1&lt;br /&gt;
  &lt;br /&gt;
  # csc-storage&lt;br /&gt;
  auto eth0.530&lt;br /&gt;
  iface eth0.530 inet static&lt;br /&gt;
         address 172.19.168.xxx&lt;br /&gt;
         netmask 255.255.255.224&lt;br /&gt;
         vlan-raw-device eth0&lt;br /&gt;
  &lt;br /&gt;
  iface eth0.530 inet6 static&lt;br /&gt;
         address fd74:6b6a:8eca:4903:c5c::xx&lt;br /&gt;
         netmask 64&lt;br /&gt;
&lt;br /&gt;
== Keys ==&lt;br /&gt;
&lt;br /&gt;
If this is a reinstall of an existing host, copy back the SSH host keys and &amp;lt;tt&amp;gt;/etc/krb5.keytab&amp;lt;/tt&amp;gt; from its former incarnation. Otherwise, create a new Kerberos principal and copy the keytab over, as follows (run from the host in question):&lt;br /&gt;
 kadmin -p sysadmin/admin   # or any other admin principal; the password for this one is the usual root password&lt;br /&gt;
 addprinc -randkey host/[hostname].csclub.uwaterloo.ca&lt;br /&gt;
 ktadd host/[hostname].csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
This will generate a new principal (you can skip this step if one already exists) and add it to the local Kerberos keytab.&lt;br /&gt;
&lt;br /&gt;
Also copy &amp;lt;tt&amp;gt;/etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&amp;lt;/tt&amp;gt; from another host, as many of our services use a certificate issued by this CA.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
=== General ===&lt;br /&gt;
&lt;br /&gt;
The following config files are needed to work in the CSC environment (examples given below for an office terminal; perhaps refer to another host if preferred).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;/etc/nsswitch.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 # /etc/nsswitch.conf&lt;br /&gt;
 #&lt;br /&gt;
 # Example configuration of GNU Name Service Switch functionality.&lt;br /&gt;
 # If you have the `glibc-doc-reference&#039; and `info&#039; packages installed, try:&lt;br /&gt;
 # `info libc &amp;quot;Name Service Switch&amp;quot;&#039; for information about this file.&lt;br /&gt;
 &lt;br /&gt;
 passwd:         files ldap&lt;br /&gt;
 group:          files ldap&lt;br /&gt;
 shadow:         files ldap&lt;br /&gt;
 sudoers:        files ldap&lt;br /&gt;
 &lt;br /&gt;
 hosts:          files dns&lt;br /&gt;
 networks:       files&lt;br /&gt;
 &lt;br /&gt;
 protocols:      db files&lt;br /&gt;
 services:       db files&lt;br /&gt;
 ethers:         db files&lt;br /&gt;
 rpc:            db files&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;/etc/ldap/ldap.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 # $OpenLDAP: pkg/ldap/libraries/libldap/ldap.conf,v 1.9 2000/09/04 19:57:01 kurt Exp $&lt;br /&gt;
 #&lt;br /&gt;
 # LDAP Defaults&lt;br /&gt;
 #&lt;br /&gt;
 &lt;br /&gt;
 # See ldap.conf(5) for details&lt;br /&gt;
 # This file should be world readable but not world writable.&lt;br /&gt;
 &lt;br /&gt;
 BASE   dc=csclub, dc=uwaterloo, dc=ca&lt;br /&gt;
 URI     ldap://ldap1.csclub.uwaterloo.ca ldap://ldap2.csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
 SIZELIMIT      0&lt;br /&gt;
 &lt;br /&gt;
 TLS_CACERT      /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
 TLS_CACERTFILE /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
 &lt;br /&gt;
 SUDOERS_BASE    ou=SUDOers,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
&lt;br /&gt;
Also make &amp;lt;tt&amp;gt;/etc/sudo-ldap.conf&amp;lt;/tt&amp;gt; a symlink to the above.&lt;br /&gt;
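A sketch of that symlink step, using a scratch directory so it is safe to run anywhere; on a real host the files live in /etc.

```shell
# Demonstrate the symlink in a scratch directory; on a real host this is:
#   ln -s /etc/ldap/ldap.conf /etc/sudo-ldap.conf
DIR=$(mktemp -d)
touch "$DIR/ldap.conf"
ln -s "$DIR/ldap.conf" "$DIR/sudo-ldap.conf"
readlink "$DIR/sudo-ldap.conf"
```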
&lt;br /&gt;
&amp;lt;tt&amp;gt;/etc/nslcd.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 # /etc/nslcd.conf&lt;br /&gt;
 # nslcd configuration file. See nslcd.conf(5)&lt;br /&gt;
 # for details.&lt;br /&gt;
 &lt;br /&gt;
 # The user and group nslcd should run as.&lt;br /&gt;
 uid nslcd&lt;br /&gt;
 gid nslcd&lt;br /&gt;
 &lt;br /&gt;
 # The location at which the LDAP server(s) should be reachable.&lt;br /&gt;
 uri ldap://ldap1.csclub.uwaterloo.ca&lt;br /&gt;
 uri ldap://ldap2.csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
 # The search base that will be used for all queries.&lt;br /&gt;
 base dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
 &lt;br /&gt;
 # use the uniqueMember attribute for group membership&lt;br /&gt;
 # (not applicable on Debian squeeze)&lt;br /&gt;
 map group member uniqueMember&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;/etc/krb5.conf&amp;lt;/tt&amp;gt;&lt;br /&gt;
 [libdefaults]&lt;br /&gt;
         default_realm = CSCLUB.UWATERLOO.CA&lt;br /&gt;
         forwardable = true&lt;br /&gt;
         proxiable = true&lt;br /&gt;
         dns_lookup_kdc = false&lt;br /&gt;
         dns_lookup_realm = false&lt;br /&gt;
 &lt;br /&gt;
 [realms]&lt;br /&gt;
         CSCLUB.UWATERLOO.CA = {&lt;br /&gt;
                 kdc = kdc1.csclub.uwaterloo.ca&lt;br /&gt;
                 kdc = kdc2.csclub.uwaterloo.ca&lt;br /&gt;
                 admin_server = kadmin.csclub.uwaterloo.ca&lt;br /&gt;
         }&lt;br /&gt;
 (rest omitted for brevity)&lt;br /&gt;
&lt;br /&gt;
Update: &amp;lt;tt&amp;gt;allow_weak_crypto&amp;lt;/tt&amp;gt; is basically a no-op in recent Kerberos versions, but this is not a problem: any Linux kernel with version &amp;gt;= 2.6.38.2 can use any cipher available to the kernel to grab tickets from the KDC for the purpose of NFS sec=krb5. Notably, this means you can use ciphersuites less craptastic than des-cbc-crc (the only one that worked prior to this kernel revision) for NFS sec=krb5 mounts. Therefore, &amp;lt;tt&amp;gt;allow_weak_crypto&amp;lt;/tt&amp;gt; has been removed from /etc/krb5.conf on all our machines.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the lines &amp;lt;tt&amp;gt;dns_lookup_kdc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;dns_lookup_realm&amp;lt;/tt&amp;gt; have been added - they are needed to stop the KDC from throwing its arms in the air and giving up if IST&#039;s DNS servers ever explode - an event that has happened in the recent past far more often than I&#039;d like it to.&lt;br /&gt;
&lt;br /&gt;
Notably, &amp;lt;tt&amp;gt;allow_weak_crypto&amp;lt;/tt&amp;gt; is currently needed to mount &amp;lt;tt&amp;gt;/users&amp;lt;/tt&amp;gt; (&amp;lt;tt&amp;gt;/music&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/scratch&amp;lt;/tt&amp;gt; are sec=sys and thus will always mount, even when krb5 is down and/or broken). Otherwise, you will get a mysterious &amp;quot;permission denied&amp;quot; error (even though the server claims to have authenticated the mount successfully).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;/etc/pam.d/common-account&amp;lt;/tt&amp;gt;&lt;br /&gt;
 #&lt;br /&gt;
 # /etc/pam.d/common-account - authorization settings common to all services&lt;br /&gt;
 #&lt;br /&gt;
 &lt;br /&gt;
 # here are the per-package modules (the &amp;quot;Primary&amp;quot; block)&lt;br /&gt;
 account        [success=1 new_authtok_reqd=done default=ignore]        pam_unix.so &lt;br /&gt;
 # here&#039;s the fallback if no module succeeds&lt;br /&gt;
 account        requisite                       pam_deny.so&lt;br /&gt;
 # prime the stack with a positive return value if there isn&#039;t one already;&lt;br /&gt;
 # this avoids us returning an error just because nothing sets a success code&lt;br /&gt;
 # since the modules above will each just jump around&lt;br /&gt;
 account        required                        pam_permit.so&lt;br /&gt;
 # and here are more per-package modules (the &amp;quot;Additional&amp;quot; block)&lt;br /&gt;
 account        required                        pam_krb5.so minimum_uid=10000&lt;br /&gt;
 # end of pam-auth-update config&lt;br /&gt;
 &lt;br /&gt;
 # Make sure the user is up to date. System accounts and syscom are exempt.&lt;br /&gt;
 account [success=2 default=ignore]     pam_succeed_if.so quiet uid &amp;lt; 10000&lt;br /&gt;
 account [success=1 default=ignore]     pam_succeed_if.so quiet user ingroup syscom&lt;br /&gt;
 account required        pam_csc.so&lt;br /&gt;
&lt;br /&gt;
This file is notably different on syscom-only hosts. Look at an existing syscom-only host to see the difference.&lt;br /&gt;
&lt;br /&gt;
Alter &amp;lt;tt&amp;gt;/etc/default/nfs-common&amp;lt;/tt&amp;gt; to enable &amp;lt;tt&amp;gt;statd&amp;lt;/tt&amp;gt;, and more importantly &amp;lt;tt&amp;gt;gssd&amp;lt;/tt&amp;gt; (needed for Kerberos NFS mounts). Start both daemons manually for now.&lt;br /&gt;
&lt;br /&gt;
Add &amp;lt;tt&amp;gt;/users&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/music&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/scratch&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt; (as appropriate for the machine&#039;s role), make their mount points and mount them. Note that &amp;lt;tt&amp;gt;/music&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/scratch&amp;lt;/tt&amp;gt; are sec=sys whereas &amp;lt;tt&amp;gt;/users&amp;lt;/tt&amp;gt; is sec=krb5 (exceptions are granted on a case-by-case basis for servers only; office terminals are always sec=krb5 for security reasons).&lt;br /&gt;
&lt;br /&gt;
To allow single sign-on as &amp;lt;tt&amp;gt;root&amp;lt;/tt&amp;gt; (primarily useful for pushing files to all machines simultaneously), put the following in &amp;lt;tt&amp;gt;/root/.k5login&amp;lt;/tt&amp;gt;:&lt;br /&gt;
 sysadmin/admin@CSCLUB.UWATERLOO.CA&lt;br /&gt;
&lt;br /&gt;
Also copy the following files from another CSC host:&lt;br /&gt;
* &amp;lt;tt&amp;gt;/etc/ssh/ssh_config&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/etc/ssh/sshd_config&amp;lt;/tt&amp;gt; (for single sign-on)&lt;br /&gt;
* &amp;lt;tt&amp;gt;/etc/ssh/ssh_known_hosts&amp;lt;/tt&amp;gt; (to remove hostkey warnings within our network)&lt;br /&gt;
* &amp;lt;tt&amp;gt;/etc/hosts&amp;lt;/tt&amp;gt; (for host tab completion and emergency name resolution)&lt;br /&gt;
* &amp;lt;tt&amp;gt;/etc/resolv.conf&amp;lt;/tt&amp;gt; (to use IST&#039;s nameservers and search csclub/uwaterloo domains. Only required if you are not using &amp;lt;tt&amp;gt;/etc/network/interfaces&amp;lt;/tt&amp;gt; to configure DNS)&lt;br /&gt;
&lt;br /&gt;
=== Display Manager ===&lt;br /&gt;
&lt;br /&gt;
LightDM (with unity-greeter) is the current display manager of choice for CSC office terminals. Copy &amp;lt;tt&amp;gt;/etc/lightdm/lightdm.conf&amp;lt;/tt&amp;gt; from another CSC machine to configure it properly. If kdm or another display manager gets installed, please ensure that you continue to choose LightDM as the default display manager.&lt;br /&gt;
&lt;br /&gt;
Please leave AccountsService enabled, as LightDM and certain parts of the GNOME packages work better when it is available.&lt;br /&gt;
&lt;br /&gt;
The Unity greeter configuration is now in gsettings. We currently have a novelty wallpaper configured. To configure this, copy &amp;lt;tt&amp;gt;/usr/local/share/backgrounds/tarkin.png&amp;lt;/tt&amp;gt; from another machine and run:&lt;br /&gt;
&lt;br /&gt;
 sudo -u lightdm dbus-launch gsettings set com.canonical.unity-greeter background /usr/local/share/backgrounds/tarkin.png&lt;br /&gt;
&lt;br /&gt;
=== User-Defined Session ===&lt;br /&gt;
&lt;br /&gt;
For some reason, Ubuntu does not install a session file for a session that just launches whatever is in the user&#039;s ~/.xsession. To fix this, put the following into &amp;lt;tt&amp;gt;/usr/share/xsessions/xsession.desktop&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
 [Desktop Entry]&lt;br /&gt;
 Name=User-defined session&lt;br /&gt;
 Exec=/etc/X11/Xsession&lt;br /&gt;
&lt;br /&gt;
=== Audio ===&lt;br /&gt;
&lt;br /&gt;
On an office terminal, copy &amp;lt;tt&amp;gt;/etc/pulse/default.pa&amp;lt;/tt&amp;gt; from another office terminal.&lt;br /&gt;
&lt;br /&gt;
If this is to be the machine that actually plays audio (currently &amp;lt;tt&amp;gt;nullsleep&amp;lt;/tt&amp;gt;), the setup is slightly more complicated. You&#039;ll need to set up MPD and PulseAudio to receive connections, and store the PulseAudio cookie in &amp;lt;tt&amp;gt;~audio&amp;lt;/tt&amp;gt;, with appropriate permissions so that only the &amp;lt;tt&amp;gt;audio&amp;lt;/tt&amp;gt; group can access it. If this is a new audio machine, you&#039;ll also need to change &amp;lt;tt&amp;gt;default.pa&amp;lt;/tt&amp;gt; on all office terminals to point to it.&lt;br /&gt;
&lt;br /&gt;
=== Tweaks ===&lt;br /&gt;
&lt;br /&gt;
On Ubuntu precise, even when &amp;lt;tt&amp;gt;gnome-keyring&amp;lt;/tt&amp;gt; is uninstalled, it leaves a config file behind that causes error messages. Remove &amp;lt;tt&amp;gt;/etc/pkcs11/modules/gnome-keyring-module&amp;lt;/tt&amp;gt; to fix this.&lt;br /&gt;
&lt;br /&gt;
On Ubuntu saucy or newer, edit &amp;lt;tt&amp;gt;/etc/sysctl.d/10-magic-sysrq&amp;lt;/tt&amp;gt; and change the value to 244.&lt;br /&gt;
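The resulting sysctl drop-in can be sketched as below, written to a scratch file here; the real destination is the /etc/sysctl.d file named above.

```shell
# Write the magic-sysrq setting to a scratch file; on a real host this line
# goes in the /etc/sysctl.d file mentioned above and takes effect after
# running sysctl --system or rebooting (244 is the value the wiki asks for).
F=$(mktemp)
echo 'kernel.sysrq = 244' > "$F"
cat "$F"
```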
&lt;br /&gt;
== Records ==&lt;br /&gt;
&lt;br /&gt;
You should already have created the host in the University IPAM system; if not, please do so now.&lt;br /&gt;
&lt;br /&gt;
Please also add the host to the [[Machine List]] here on the Wiki, and to &amp;lt;tt&amp;gt;/users/syscom/csc-machines&amp;lt;/tt&amp;gt; (and &amp;lt;tt&amp;gt;csc-office-machines&amp;lt;/tt&amp;gt;, if applicable).&lt;br /&gt;
&lt;br /&gt;
== Munin (System Monitoring) ==&lt;br /&gt;
&lt;br /&gt;
If the new machine is not a container, you probably want to have it participate in the Munin cluster. Run &amp;lt;tt&amp;gt;apt-get install munin-node&amp;lt;/tt&amp;gt; to install the monitoring client, then&lt;br /&gt;
edit the file /etc/munin/munin-node.conf. Look for a line that says &amp;lt;tt&amp;gt;allow ^127\.0\.0\.1$&amp;lt;/tt&amp;gt; and add the following on a new line immediately below it:&lt;br /&gt;
&amp;lt;tt&amp;gt;allow ^129\.97\.134\.51$&amp;lt;/tt&amp;gt; (this is the IP address of munin.csclub). Save the file, then run &amp;lt;tt&amp;gt;/etc/init.d/munin-node restart&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;update-rc.d munin-node defaults&amp;lt;/tt&amp;gt;.&lt;br /&gt;
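The munin-node.conf edit above can also be scripted. A minimal sketch, operating on a scratch copy of the file rather than the real /etc/munin/munin-node.conf:

```shell
# Append munin.csclub's allow line immediately below the localhost one.
# We work on a scratch copy; point conf at the real file in production.
conf=$(mktemp)
printf '%s\n' 'allow ^127\.0\.0\.1$' > "$conf"
sed -i '/^allow .*127/a allow ^129\\.97\\.134\\.51$' "$conf"
cat "$conf"
```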
&lt;br /&gt;
Then, ssh into munin.csclub and edit the file /etc/munin/munin.conf and add the following lines to the end:&lt;br /&gt;
 [NEW-MACHINE-NAME.csclub]&lt;br /&gt;
 address 129.97.134.###&lt;br /&gt;
 use_node_name yes&lt;br /&gt;
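The host stanza can be appended with a heredoc. A sketch against a scratch file (in production the target is /etc/munin/munin.conf on munin.csclub; the final octet 200 below is a made-up placeholder for the new machine's real address, and current Munin versions spell the directive &amp;lt;tt&amp;gt;address&amp;lt;/tt&amp;gt;):

```shell
# Append a host stanza to a scratch munin.conf; in production this would
# be /etc/munin/munin.conf on munin.csclub. The IP's last octet is a
# placeholder -- substitute the new machine's real address.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
[NEW-MACHINE-NAME.csclub]
    address 129.97.134.200
    use_node_name yes
EOF
cat "$conf"
```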
&lt;br /&gt;
= New Distribution =&lt;br /&gt;
&lt;br /&gt;
If you&#039;re adding a new distribution, there are a couple of steps you&#039;ll need to take to update the CSClub Debian repository on [[Machine_List#sodium_benzoate|sodium-benzoate/mirror]].&lt;br /&gt;
&lt;br /&gt;
The steps to add a new Debian release (jessie in the examples) are as follows; modify as necessary:&lt;br /&gt;
&lt;br /&gt;
=== Step 0: Create a GPG key ===&lt;br /&gt;
&lt;br /&gt;
Use &amp;quot;gpg --gen-key&amp;quot; to generate one. Skip this step if you already have a key.&lt;br /&gt;
&lt;br /&gt;
=== Step 1: Add to Uploaders ===&lt;br /&gt;
&lt;br /&gt;
The /srv/debian/conf/uploaders file on mirror contains the list of people who can upload. Add your GPG key ID to this file; use &amp;quot;gpg --list-secret-keys&amp;quot; to find the key ID. You also need to import your key into the mirror&#039;s gpg homedir as follows:&lt;br /&gt;
&lt;br /&gt;
 gpg --export $KEYID | sudo env GNUPGHOME=/srv/debian/gpg gpg --import&lt;br /&gt;
&lt;br /&gt;
You only need to do this step once.&lt;br /&gt;
&lt;br /&gt;
=== Step 2: Add Distro ===&lt;br /&gt;
&lt;br /&gt;
Add a new section to /srv/debian/conf/distributions:&lt;br /&gt;
&lt;br /&gt;
 Origin: CSC&lt;br /&gt;
 Label: Debian&lt;br /&gt;
 Codename: &#039;&#039;&#039;jessie&#039;&#039;&#039;&lt;br /&gt;
 Architectures: alpha amd64 i386 mips mipsel sparc powerpc armel source&lt;br /&gt;
 Components: main contrib non-free&lt;br /&gt;
 Uploaders: uploaders&lt;br /&gt;
 Update: dell chrome&lt;br /&gt;
 SignWith: yes&lt;br /&gt;
 Log: jessie.log&lt;br /&gt;
  --changes notifier&lt;br /&gt;
&lt;br /&gt;
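A stanza like the above can be appended non-interactively. A sketch using a scratch file in place of the real /srv/debian/conf/distributions:

```shell
# Append the new release's stanza to a scratch copy of
# conf/distributions; adjust the path for the real repository.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
Origin: CSC
Label: Debian
Codename: jessie
Architectures: alpha amd64 i386 mips mipsel sparc powerpc armel source
Components: main contrib non-free
Uploaders: uploaders
Update: dell chrome
SignWith: yes
Log: jessie.log
 --changes notifier
EOF
grep '^Codename:' "$conf"
```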
And update the &#039;&#039;&#039;Allow&#039;&#039;&#039; line in /srv/debian/conf/incoming:&lt;br /&gt;
&lt;br /&gt;
 Allow: &#039;&#039;&#039;jessie&amp;gt;jessie&#039;&#039;&#039; oldstable&amp;gt;squeeze stable&amp;gt;wheezy lucid&amp;gt;lucid maverick&amp;gt;maverick oneiric&amp;gt;oneiric precise&amp;gt;precise quantal&amp;gt;quantal&lt;br /&gt;
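The Allow line edit can likewise be scripted. A sketch on a scratch copy of /srv/debian/conf/incoming (with a shortened Allow list for illustration):

```shell
# Prepend jessie>jessie to the Allow line of a scratch incoming file.
conf=$(mktemp)
printf '%s\n' 'Allow: oldstable>squeeze stable>wheezy' > "$conf"
sed -i 's/^Allow: /Allow: jessie>jessie /' "$conf"
cat "$conf"
```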
&lt;br /&gt;
=== Step 3: Update from Sources ===&lt;br /&gt;
&lt;br /&gt;
Run:&lt;br /&gt;
&lt;br /&gt;
 sudo env GNUPGHOME=/srv/debian/gpg rrr-update&lt;br /&gt;
&lt;br /&gt;
If all went well you should see the new distribution listed at http://debian.csclub.uwaterloo.ca/dists/&lt;br /&gt;
&lt;br /&gt;
=== Step 4: CSC Packages ===&lt;br /&gt;
&lt;br /&gt;
Now that we&#039;ve got our new distribution set up, we need to build and upload our packages: namely ceo, libpam-csc &amp;amp; inapt. Using libpam-csc as an example:&lt;br /&gt;
&lt;br /&gt;
Get the package:&lt;br /&gt;
&lt;br /&gt;
 git clone ~git/public/libpam-csc.git&lt;br /&gt;
 cd libpam-csc&lt;br /&gt;
&lt;br /&gt;
Update change log:&lt;br /&gt;
&lt;br /&gt;
 EMAIL=[you]@csclub.uwaterloo.ca NAME=&amp;quot;Your Name&amp;quot; dch -i&lt;br /&gt;
&lt;br /&gt;
Update as necessary, e.g.:&lt;br /&gt;
&lt;br /&gt;
 libpam-csc (1.10&#039;&#039;&#039;jessie0&#039;&#039;&#039;) &#039;&#039;&#039;jessie&#039;&#039;&#039;; urgency=low&lt;br /&gt;
 &lt;br /&gt;
   * Packaging for jessie.&lt;br /&gt;
 &lt;br /&gt;
  -- Your Name &amp;lt;[you]@csclub.uwaterloo.ca&amp;gt;  Thu, 10 Oct 2013 22:08:48 -0400&lt;br /&gt;
&lt;br /&gt;
Build! (You may need to install various build dependencies; debuild will complain about any that are missing.)&lt;br /&gt;
&lt;br /&gt;
 debuild -k&#039;&#039;&#039;YOURKEYID&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Yay, it built! Now let&#039;s upload it to the repo. The build process creates a PACKAGE.changes file in the parent directory (replace PACKAGE with the actual package name).&lt;br /&gt;
&lt;br /&gt;
 dupload libpam-csc_1.10jessie0_amd64.changes&lt;br /&gt;
&lt;br /&gt;
Finally, log into mirror and type &amp;quot;sudo rrr-incoming&amp;quot;. This is supposed to run automatically every few minutes, but it is always faster to run it manually.&lt;br /&gt;
&lt;br /&gt;
And you&#039;re done. Just repeat the previous steps for the other CSC packages.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4299</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4299"/>
		<updated>2019-09-04T22:49:39Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use [http://www.openldap.org/ OpenLDAP] for directory services. Our primary LDAP server is [[Machine_List#auth1|auth1]] and our secondary LDAP server is [[Machine_List#auth2|auth2]].&lt;br /&gt;
&lt;br /&gt;
=== ehashman&#039;s Guide to Setting up OpenLDAP on Debian ===&lt;br /&gt;
&lt;br /&gt;
Welcome to my nightmare.&lt;br /&gt;
&lt;br /&gt;
==== What is LDAP? ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;LDAP:&#039;&#039;&#039; Lightweight Directory Access Protocol&lt;br /&gt;
&lt;br /&gt;
An open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. — [https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol Wikipedia: LDAP]&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
In this case, &amp;amp;quot;directory&amp;amp;quot; refers to the user directory, like on an old-school Rolodex. Many groups use LDAP to maintain their user directory, including the University (the &amp;amp;quot;WatIAM&amp;amp;quot; identity management system), the Computer Science Club, and even the UW Amateur Radio Club.&lt;br /&gt;
&lt;br /&gt;
This is a guide documenting how to set up LDAP on a Debian Linux system.&lt;br /&gt;
&lt;br /&gt;
==== First steps ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Ensure that openldap is installed on the machine:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# apt-get install slapd ldap-utils&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Debian will do a lot of magic and set up a skeleton LDAP server and get it running. We need to configure that further.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let&#039;s set up logging before we forget. Create the following files in &amp;lt;code&amp;gt;/var/log&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# mkdir /var/log/ldap&lt;br /&gt;
# touch /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set ownership correctly:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown openldap:openldap /var/log/ldap&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up rsyslog to dump the LDAP logs into &amp;lt;code&amp;gt;/var/log/ldap.log&amp;lt;/code&amp;gt; by adding the following lines:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/rsyslog.conf&lt;br /&gt;
...&lt;br /&gt;
# Grab ldap logs, don&#039;t duplicate in syslog&lt;br /&gt;
local4.*                        /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up log rotation for these by creating the file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/logrotate.d.ldap &amp;lt;code&amp;gt;/etc/logrotate.d/ldap&amp;lt;/code&amp;gt;] with the following contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;/var/log/ldap/*log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 1000&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
    create 0640 openldap adm&lt;br /&gt;
    postrotate&lt;br /&gt;
        if [ -f /var/run/slapd/slapd.pid ]; then&lt;br /&gt;
            /etc/init.d/slapd restart &amp;amp;gt;/dev/null 2&amp;amp;gt;&amp;amp;amp;1&lt;br /&gt;
        fi&lt;br /&gt;
    endscript&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/var/log/ldap.log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 24&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;As of OpenLDAP 2.4, it doesn&#039;t actually create a config file for us. Apparently, this is a &amp;amp;quot;feature&amp;amp;quot;: LDAP maintainers think we should want to set this up via dynamic queries. We don&#039;t, so the first thing we need is our [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/slapd.conf &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;] file.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Building &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt; from scratch =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Get a copy to work with:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# scp uid@auth1.csclub.uwaterloo.ca:/etc/ldap/slapd.conf /etc/ldap/  ## you need CSC root for this&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You&#039;ll want to comment out the TLS lines, and anything referring to Kerberos and access for now. You&#039;ll also want to comment out lines specifically referring to syscom and office staff.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Make sure you remove the reference to &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; as an index, as we&#039;re going to remove this field.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You&#039;ll also need to generate a root password for the LDAP to bootstrap auth, like so:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slappasswd&lt;br /&gt;
New password: &lt;br /&gt;
Re-enter new password:&lt;br /&gt;
{SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Add this line below &amp;lt;code&amp;gt;rootdn&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;rootpw          {SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we want to edit all instances of &amp;amp;quot;csclub&amp;amp;quot; to be &amp;amp;quot;wics&amp;amp;quot; instead, e.g.:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;suffix     &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
rootdn     &amp;amp;quot;cn=root,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, we need to grab all the relevant schemas:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;scp -r uid@auth1.csclub.uwaterloo.ca:/etc/ldap/schema/ /tmp/schemas&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use the include directives to help you find the ones you need. I noticed we were missing &amp;lt;code&amp;gt;sudo.schema&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;rfc2307bis.schema&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open up the [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/csc.schema &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;] for editing; we&#039;re not using it verbatim. Remove the attributes &amp;lt;code&amp;gt;studentid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; and the objectclass &amp;lt;code&amp;gt;club&amp;lt;/code&amp;gt;. Also make sure you change the OID so we don&#039;t clash with the CSC. Because we didn&#039;t want to go through the process of requesting a [http://pen.iana.org/pen/PenApplication.page PEN number], we chose arbitrarily to use 26338, which belongs to IWICS Inc.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to can the auto-generated config files, so do that:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Also nuke the auto-generated database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm /var/lib/ldap/__db.*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Configure the database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# cp /usr/share/slapd/DB_CONFIG /var/lib/ldap/&lt;br /&gt;
# chown openldap:openldap /var/lib/ldap/DB_CONFIG &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we can generate the new configuration files:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And ensure that the permissions are all set correctly, lest this break something:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If at this point you get a nasty error, such as&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;5657d4db hdb_db_open: database &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;: db_open(/var/lib/ldap/id2entry.bdb) failed: No such file or directory (2).&lt;br /&gt;
5657d4db backend_startup_one (type=hdb, suffix=&amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;): bi_db_open failed! (2)&lt;br /&gt;
slap_startup failed (test would succeed using the -u switch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Just try restarting slapd, and see if that fixes the problem:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd stop&lt;br /&gt;
# service slapd start&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Congratulations! Your LDAP service is now configured and running.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting TLS Up and Running ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now that we have our LDAP service, we&#039;ll want to be able to serve encrypted traffic. This is especially important for any remote access, since binding to LDAP (i.e. sending it a password for auth) occurs over plaintext, and we don&#039;t want to leak our admin password.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Our first step is to copy our SSL certificates into the correct places. Public ones go into &amp;lt;code&amp;gt;/etc/ssl/certs/&amp;lt;/code&amp;gt; and private ones go into &amp;lt;code&amp;gt;/etc/ssl/private/&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Since the LDAP daemon needs to be able to read our private cert, we need to grant LDAP access to the private folder:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chgrp openldap /etc/ssl/private &lt;br /&gt;
# chmod g+x /etc/ssl/private&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, uncomment the TLS-related settings in &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;. These are &amp;lt;code&amp;gt;TLSCertificateFile&amp;lt;/code&amp;gt; (the public cert), &amp;lt;code&amp;gt;TLSCertificateKeyFile&amp;lt;/code&amp;gt; (the private key), &amp;lt;code&amp;gt;TLSCACertificateFile&amp;lt;/code&amp;gt; (the intermediate CA cert), and &amp;lt;code&amp;gt;TLSVerifyClient&amp;lt;/code&amp;gt; (set to &amp;amp;quot;allow&amp;amp;quot;).&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# enable TLS connections&lt;br /&gt;
TLSCertificateFile      /etc/ssl/certs/wics-wildcard.crt&lt;br /&gt;
TLSCertificateKeyFile   /etc/ssl/private/wics-wildcard.key&lt;br /&gt;
&lt;br /&gt;
# enable TLS client authentication&lt;br /&gt;
TLSCACertificateFile    /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
TLSVerifyClient         allow&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Update all your LDAP settings:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&lt;br /&gt;
# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&lt;br /&gt;
# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And last, ensure that LDAP will actually serve &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt; by modifying the init script variables in &amp;lt;code&amp;gt;/etc/default/&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/default/slapd&lt;br /&gt;
...&lt;br /&gt;
SLAPD_SERVICES=&amp;amp;quot;ldap:/// ldapi:/// ldaps:///&amp;amp;quot;&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now you can restart the LDAP server:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd restart&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And assuming this is successful, test to ensure LDAP is serving on port 636 for &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# netstat -ntaup&lt;br /&gt;
Active Internet connections (servers and established)&lt;br /&gt;
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name&lt;br /&gt;
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      22847/slapd     &lt;br /&gt;
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      22847/slapd &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Populating the Database ====&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ll need to start adding objects to the database. While we&#039;ll want to mostly do this programmatically, there are a few entries we&#039;ll need to bootstrap.&lt;br /&gt;
&lt;br /&gt;
===== Root Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Start by creating a file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/tree.ldif &amp;lt;code&amp;gt;tree.ldif&amp;lt;/code&amp;gt;] to create a few necessary &amp;amp;quot;roots&amp;amp;quot; in our LDAP tree, with the contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now attempt an LDAP add, using the password you set earlier:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f tree.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Test that everything turned out okay, by performing a query of the entire database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -h localhost&lt;br /&gt;
# extended LDIF&lt;br /&gt;
#&lt;br /&gt;
# LDAPv3&lt;br /&gt;
# base &amp;amp;lt;dc=wics,dc=uwaterloo,dc=ca&amp;amp;gt; (default) with scope subtree&lt;br /&gt;
# filter: (objectclass=*)&lt;br /&gt;
# requesting: ALL&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
# wics.uwaterloo.ca&lt;br /&gt;
dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
# People, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
# Group, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&lt;br /&gt;
&lt;br /&gt;
# search result&lt;br /&gt;
search: 2&lt;br /&gt;
result: 0 Success&lt;br /&gt;
&lt;br /&gt;
# numResponses: 4&lt;br /&gt;
# numEntries: 3&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
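Before running ldapadd it can be worth sanity-checking an LDIF file. A small sketch that stages tree.ldif and counts its entries (three &amp;lt;code&amp;gt;dn:&amp;lt;/code&amp;gt; lines are expected):

```shell
# Stage tree.ldif and count its entries before feeding it to ldapadd.
ldif=$(mktemp)
cat > "$ldif" <<'EOF'
dn: dc=wics,dc=uwaterloo,dc=ca
objectClass: dcObject
objectClass: organization
o: Women in Computer Science
dc: wics

dn: ou=People,dc=wics,dc=uwaterloo,dc=ca
objectClass: organizationalUnit
ou: People

dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca
objectClass: organizationalUnit
ou: Group
EOF
echo "entries: $(grep -c '^dn:' "$ldif")"   # prints "entries: 3"
```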
&lt;br /&gt;
===== Users and Groups =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, add users to track the current GID and UID. This will save us from querying the entire database every time we make a new user or group. Create this file, [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/nextxid.ldif &amp;lt;code&amp;gt;nextxid.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
cn: nextuid&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
homeDirectory: /dev/null&lt;br /&gt;
&lt;br /&gt;
dn: cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: group&lt;br /&gt;
objectClass: posixGroup&lt;br /&gt;
objectClass: top&lt;br /&gt;
gidNumber: 10000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;You&#039;ll see here that our first GID is 10000 and our first UID is 20000.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them, like you did with the roots of the tree:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f nextxid.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
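The nextuid/nextgid entries exist so that account-creation tooling can read the next free ID and then increment it. The read-and-bump idea, sketched against a scratch LDIF rather than a live LDAP modify:

```shell
# Illustrate the nextuid allocation idea on a scratch LDIF; real tooling
# would do an LDAP search plus modify instead.
ldif=$(mktemp)
cat > "$ldif" <<'EOF'
dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca
uidNumber: 20000
EOF
next=$(awk '/^uidNumber:/ {print $2}' "$ldif")
echo "allocating uid $next"
sed -i "s/^uidNumber: .*/uidNumber: $((next + 1))/" "$ldif"
grep '^uidNumber:' "$ldif"
```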
&lt;br /&gt;
===== Special &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to add a sudoers OU with a defaults object for default sudo settings, plus entries for syscom, such that members of the syscom group can use sudo on all hosts, and for termcom, whose members can use sudo only on the office terminals. Call this one [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/sudoers.ldif &amp;lt;code&amp;gt;sudoers.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: SUDOers&lt;br /&gt;
&lt;br /&gt;
dn: cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: defaults&lt;br /&gt;
sudoOption: !lecture&lt;br /&gt;
sudoOption: env_reset&lt;br /&gt;
sudoOption: listpw=never&lt;br /&gt;
sudoOption: mailto=&amp;amp;quot;wics-sys@lists.uwaterloo.ca&amp;amp;quot;&lt;br /&gt;
sudoOption: shell_noargs&lt;br /&gt;
&lt;br /&gt;
dn: cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %syscom&lt;br /&gt;
sudoUser: %syscom&lt;br /&gt;
sudoHost: ALL&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&lt;br /&gt;
&lt;br /&gt;
dn: cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %termcom&lt;br /&gt;
sudoUser: %termcom&lt;br /&gt;
sudoHost: honk&lt;br /&gt;
sudoHost: hiss&lt;br /&gt;
sudoHost: gosling&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f sudoers.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Last, add some special local groups via [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/local-groups.ldif &amp;lt;code&amp;gt;local-groups.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f local-groups.ldif&amp;lt;/pre&amp;gt;&lt;br /&gt;
The local groups are special because they are usually present on all systems, but we want to be able to add users to them at the LDAP level. For instance, the audio group controls access to sound equipment, and the adm group controls log read access.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;That&#039;s all the entries we have to add manually! Now we can use software for the rest. See [[weo|&amp;lt;code&amp;gt;weo&amp;lt;/code&amp;gt;]] for more details.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Querying LDAP ===&lt;br /&gt;
&lt;br /&gt;
There are many tools available for issuing LDAP queries. Queries should be issued to &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. The search base you almost certainly want is &amp;lt;tt&amp;gt;dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;. Read access is available without authentication; [[Kerberos]] is used to authenticate commands which require it.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca uid=ctdalek&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;-x&amp;lt;/tt&amp;gt; option causes &amp;lt;tt&amp;gt;ldapsearch&amp;lt;/tt&amp;gt; to switch to simple authentication rather than trying to authenticate via SASL (which will fail if you do not have a Kerberos ticket).&lt;br /&gt;
&lt;br /&gt;
The University LDAP server (uwldap.uwaterloo.ca) can also be queried like this. Again, use &amp;quot;simple authentication&amp;quot; as read access is available (from on campus) without authentication. SASL authentication will fail without additional parameters.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h uwldap.uwaterloo.ca -b dc=uwaterloo,dc=ca &amp;quot;cn=Prabhakar Ragde&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Replication ===&lt;br /&gt;
&lt;br /&gt;
While &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth1|auth1]]) is the LDAP master, an up-to-date replica is available on &amp;lt;tt&amp;gt;ldap2.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth2|auth2]]).&lt;br /&gt;
&lt;br /&gt;
In order to replicate changes from the master, the slave maintains an authenticated connection to the master which provides it with full read access to all changes.&lt;br /&gt;
&lt;br /&gt;
Specifically, &amp;lt;tt&amp;gt;/etc/systemd/system/k5start-slapd.service&amp;lt;/tt&amp;gt; maintains an active Kerberos ticket for &amp;lt;tt&amp;gt;ldap/auth2.csclub.uwaterloo.ca@CSCLUB.UWATERLOO.CA&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/var/run/slapd/krb5cc&amp;lt;/tt&amp;gt;. This is then used to authenticate the slave to the server, who maps this principal to &amp;lt;tt&amp;gt;cn=ldap-slave,dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;, which in turn has full read privileges.&lt;br /&gt;
&lt;br /&gt;
In the event of master failure, all hosts should fail LDAP reads seamlessly over to the slave.&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing a user&#039;s username ==&lt;br /&gt;
&lt;br /&gt;
Only a member of the Systems Committee can change a user&#039;s username. &#039;&#039;&#039;At all times, a user&#039;s username must match the user&#039;s username in WatIAM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
All changes to an account MUST be done in person so that identity can be confirmed. If a member cannot attend in person, then an alternate method of identity verification may be chosen by the Systems Administrator.&lt;br /&gt;
&lt;br /&gt;
# Edit entries in LDAP (&amp;lt;code&amp;gt;ldapvi -Y GSSAPI&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Find and replace the user&#039;s old username with the new one (&amp;lt;code&amp;gt;%s/$OLD/$NEW/g&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Change the user&#039;s Kerberos principal (on auth1, &amp;lt;code&amp;gt;renprinc $OLD $NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Move the user&#039;s home directory (on aspartame, &amp;lt;code&amp;gt;mv /users/$OLD /users/$NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Change the user&#039;s csc-general (and csc-industry, if subscribed) email address for &amp;lt;code&amp;gt;$OLD@csclub.uwaterloo.ca&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;$NEW@csclub.uwaterloo.ca&amp;lt;/code&amp;gt;&lt;br /&gt;
#* https://mailman.csclub.uwaterloo.ca/admin/csc-general&lt;br /&gt;
# If the user has vhosts on caffeine, update them to point to their new username&lt;br /&gt;
&lt;br /&gt;
If the user&#039;s account has been around for a while, and they request it, forward email from their old username to their new one.&lt;br /&gt;
&lt;br /&gt;
# Edit &amp;lt;code&amp;gt;/etc/aliases&amp;lt;/code&amp;gt; on mail. &amp;lt;code&amp;gt;$OLD: $NEW&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;newaliases&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4298</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4298"/>
		<updated>2019-09-04T22:40:52Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Changing a user&amp;#039;s username */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use [http://www.openldap.org/ OpenLDAP] for directory services. Our primary LDAP server is [[Machine_List#auth1|auth1]] and our secondary LDAP server is [[Machine_List#auth2|auth2]].&lt;br /&gt;
&lt;br /&gt;
=== ehashman&#039;s Guide to Setting up OpenLDAP on Debian ===&lt;br /&gt;
&lt;br /&gt;
Welcome to my nightmare.&lt;br /&gt;
&lt;br /&gt;
==== What is LDAP? ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;LDAP:&#039;&#039;&#039; Lightweight Directory Access Protocol&lt;br /&gt;
&lt;br /&gt;
An open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. — [https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol Wikipedia: LDAP]&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
In this case, &amp;amp;quot;directory&amp;amp;quot; refers to the user directory, like on an old-school Rolodex. Many groups use LDAP to maintain their user directory, including the University (the &amp;amp;quot;WatIAM&amp;amp;quot; identity management system), the Computer Science Club, and even the UW Amateur Radio Club.&lt;br /&gt;
&lt;br /&gt;
This is a guide documenting how to set up LDAP on a Debian Linux system.&lt;br /&gt;
&lt;br /&gt;
==== First steps ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Ensure that openldap is installed on the machine:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# apt-get install slapd ldap-utils&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Debian will do a lot of magic and set up a skeleton LDAP server and get it running. We need to configure that further.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let&#039;s set up logging before we forget. Create the log directory and log file in &amp;lt;code&amp;gt;/var/log&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# mkdir /var/log/ldap&lt;br /&gt;
# touch /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set ownership correctly:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown openldap:openldap /var/log/ldap&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up rsyslog to dump the LDAP logs into &amp;lt;code&amp;gt;/var/log/ldap.log&amp;lt;/code&amp;gt; by adding the following lines:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/rsyslog.conf&lt;br /&gt;
...&lt;br /&gt;
# Grab ldap logs, don&#039;t duplicate in syslog&lt;br /&gt;
local4.*                        /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up log rotation for these by creating the file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/logrotate.d.ldap &amp;lt;code&amp;gt;/etc/logrotate.d/ldap&amp;lt;/code&amp;gt;] with the following contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;/var/log/ldap/*log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 1000&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
    create 0640 openldap adm&lt;br /&gt;
    postrotate&lt;br /&gt;
        if [ -f /var/run/slapd/slapd.pid ]; then&lt;br /&gt;
            /etc/init.d/slapd restart &amp;amp;gt;/dev/null 2&amp;amp;gt;&amp;amp;amp;1&lt;br /&gt;
        fi&lt;br /&gt;
    endscript&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/var/log/ldap.log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 24&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;As of OpenLDAP 2.4, it doesn&#039;t actually create a config file for us. Apparently, this is a &amp;amp;quot;feature&amp;amp;quot;: LDAP maintainers think we should want to set this up via dynamic queries. We don&#039;t, so the first thing we need is our [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/slapd.conf &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;] file.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Building &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt; from scratch =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Get a copy to work with:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# scp uid@auth1.csclub.uwaterloo.ca:/etc/ldap/slapd.conf /etc/ldap/  ## you need CSC root for this&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You&#039;ll want to comment out the TLS lines, and anything referring to Kerberos and access for now. You&#039;ll also want to comment out lines specifically referring to syscom and office staff.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Make sure you remove the reference to &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; as an index, as we&#039;re going to remove this field.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You&#039;ll also need to generate a root password for the LDAP to bootstrap auth, like so:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slappasswd&lt;br /&gt;
New password: &lt;br /&gt;
Re-enter new password:&lt;br /&gt;
{SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Add this line below &amp;lt;code&amp;gt;rootdn&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;rootpw          {SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we want to edit all instances of &amp;amp;quot;csclub&amp;amp;quot; to be &amp;amp;quot;wics&amp;amp;quot; instead, e.g.:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;suffix     &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
rootdn     &amp;amp;quot;cn=root,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, we need to grab all the relevant schemas:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;scp -r uid@auth1.csclub.uwaterloo.ca:/etc/ldap/schema/ /tmp/schemas&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use the include directives to help you find the ones you need. I noticed we were missing &amp;lt;code&amp;gt;sudo.schema&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;rfc2307bis.schema&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open up the [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/csc.schema &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;] for editing; we&#039;re not using it verbatim. Remove the attributes &amp;lt;code&amp;gt;studentid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; and the objectclass &amp;lt;code&amp;gt;club&amp;lt;/code&amp;gt;. Also make sure you change the OID so we don&#039;t clash with the CSC. Because we didn&#039;t want to go through the process of requesting a [http://pen.iana.org/pen/PenApplication.page PEN number], we chose arbitrarily to use 26338, which belongs to IWICS Inc.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to can the auto-generated config files, so do that:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/ldap/slapd.d/*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Also nuke the auto-generated database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm /var/lib/ldap/__db.*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Configure the database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# cp /usr/share/slapd/DB_CONFIG /var/lib/ldap/&lt;br /&gt;
# chown openldap:openldap /var/lib/ldap/DB_CONFIG &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we can generate the new configuration files:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And ensure that the permissions are all set correctly, lest this break something:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If at this point you get a nasty error, such as&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;5657d4db hdb_db_open: database &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;: db_open(/var/lib/ldap/id2entry.bdb) failed: No such file or directory (2).&lt;br /&gt;
5657d4db backend_startup_one (type=hdb, suffix=&amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;): bi_db_open failed! (2)&lt;br /&gt;
slap_startup failed (test would succeed using the -u switch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Just try restarting slapd, and see if that fixes the problem:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd stop&lt;br /&gt;
# service slapd start&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Congratulations! Your LDAP service is now configured and running.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting TLS Up and Running ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now that we have our LDAP service, we&#039;ll want to be able to serve encrypted traffic. This is especially important for any remote access, since binding to LDAP (i.e. sending it a password for auth) occurs over plaintext, and we don&#039;t want to leak our admin password.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Our first step is to copy our SSL certificates into the correct places. Public ones go into &amp;lt;code&amp;gt;/etc/ssl/certs/&amp;lt;/code&amp;gt; and private ones go into &amp;lt;code&amp;gt;/etc/ssl/private/&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Since the LDAP daemon needs to be able to read our private cert, we need to grant LDAP access to the private folder:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chgrp openldap /etc/ssl/private &lt;br /&gt;
# chmod g+x /etc/ssl/private&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, uncomment the TLS-related settings in &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;. These are &amp;lt;code&amp;gt;TLSCertificateFile&amp;lt;/code&amp;gt; (the public cert), &amp;lt;code&amp;gt;TLSCertificateKeyFile&amp;lt;/code&amp;gt; (the private key), &amp;lt;code&amp;gt;TLSCACertificateFile&amp;lt;/code&amp;gt; (the intermediate CA cert), and &amp;lt;code&amp;gt;TLSVerifyClient&amp;lt;/code&amp;gt; (set to &amp;amp;quot;allow&amp;amp;quot;).&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# enable TLS connections&lt;br /&gt;
TLSCertificateFile      /etc/ssl/certs/wics-wildcard.crt&lt;br /&gt;
TLSCertificateKeyFile   /etc/ssl/private/wics-wildcard.key&lt;br /&gt;
&lt;br /&gt;
# enable TLS client authentication&lt;br /&gt;
TLSCACertificateFile    /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
TLSVerifyClient         allow&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Update all your LDAP settings:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/ldap/slapd.d/*&lt;br /&gt;
# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&lt;br /&gt;
# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And last, ensure that LDAP will actually serve &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt; by modifying the init script variables in &amp;lt;code&amp;gt;/etc/default/&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/default/slapd&lt;br /&gt;
...&lt;br /&gt;
SLAPD_SERVICES=&amp;amp;quot;ldap:/// ldapi:/// ldaps:///&amp;amp;quot;&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now you can restart the LDAP server:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd restart&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And assuming this is successful, test to ensure LDAP is serving on port 636 for &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# netstat -ntaup&lt;br /&gt;
Active Internet connections (servers and established)&lt;br /&gt;
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name&lt;br /&gt;
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      22847/slapd     &lt;br /&gt;
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      22847/slapd &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Populating the Database ====&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ll need to start adding objects to the database. While we&#039;ll want to mostly do this programmatically, there are a few entries we&#039;ll need to bootstrap.&lt;br /&gt;
&lt;br /&gt;
===== Root Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Start by creating a file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/tree.ldif &amp;lt;code&amp;gt;tree.ldif&amp;lt;/code&amp;gt;] to create a few necessary &amp;amp;quot;roots&amp;amp;quot; in our LDAP tree, with the contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now attempt an LDAP add, using the password you set earlier:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f tree.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Test that everything turned out okay, by performing a query of the entire database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -h localhost&lt;br /&gt;
# extended LDIF&lt;br /&gt;
#&lt;br /&gt;
# LDAPv3&lt;br /&gt;
# base &amp;amp;lt;dc=wics,dc=uwaterloo,dc=ca&amp;amp;gt; (default) with scope subtree&lt;br /&gt;
# filter: (objectclass=*)&lt;br /&gt;
# requesting: ALL&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
# wics.uwaterloo.ca&lt;br /&gt;
dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
# People, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
# Group, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&lt;br /&gt;
&lt;br /&gt;
# search result&lt;br /&gt;
search: 2&lt;br /&gt;
result: 0 Success&lt;br /&gt;
&lt;br /&gt;
# numResponses: 4&lt;br /&gt;
# numEntries: 3&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Users and Groups =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, add placeholder entries to track the next available UID and GID. This will save us from querying the entire database every time we make a new user or group. Create this file, [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/nextxid.ldif &amp;lt;code&amp;gt;nextxid.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
cn: nextuid&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
homeDirectory: /dev/null&lt;br /&gt;
&lt;br /&gt;
dn: cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: group&lt;br /&gt;
objectClass: posixGroup&lt;br /&gt;
objectClass: top&lt;br /&gt;
gidNumber: 10000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;You&#039;ll see here that our first GID is 10000 and our first UID is 20000.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them, like you did with the roots of the tree:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f nextxid.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Special &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to add a sudoers OU with a defaults object for default sudo settings. We also need entries for syscom, such that members of the syscom group can use sudo on all hosts, and for termcom, whose members can use sudo on only the office terminals. Call this one [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/sudoers.ldif &amp;lt;code&amp;gt;sudoers.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: SUDOers&lt;br /&gt;
&lt;br /&gt;
dn: cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: defaults&lt;br /&gt;
sudoOption: !lecture&lt;br /&gt;
sudoOption: env_reset&lt;br /&gt;
sudoOption: listpw=never&lt;br /&gt;
sudoOption: mailto=&amp;amp;quot;wics-sys@lists.uwaterloo.ca&amp;amp;quot;&lt;br /&gt;
sudoOption: shell_noargs&lt;br /&gt;
&lt;br /&gt;
dn: cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %syscom&lt;br /&gt;
sudoUser: %syscom&lt;br /&gt;
sudoHost: ALL&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&lt;br /&gt;
&lt;br /&gt;
dn: cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %termcom&lt;br /&gt;
sudoUser: %termcom&lt;br /&gt;
sudoHost: honk&lt;br /&gt;
sudoHost: hiss&lt;br /&gt;
sudoHost: gosling&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f sudoers.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Last, add some special local groups via [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/local-groups.ldif &amp;lt;code&amp;gt;local-groups.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f local-groups.ldif&amp;lt;/pre&amp;gt;&lt;br /&gt;
The local groups are special because they are usually present on all systems, but we want to be able to add users to them at the LDAP level. For instance, the audio group controls access to sound equipment, and the adm group controls log read access.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;That&#039;s all the entries we have to add manually! Now we can use software for the rest. See [[weo|&amp;lt;code&amp;gt;weo&amp;lt;/code&amp;gt;]] for more details.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
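For reference, one entry in a hypothetical local-groups.ldif might look like the following; the group name and gidNumber are illustrative (Debian conventionally uses GID 29 for audio), not copied from the real file:

```
dn: cn=audio,ou=Group,dc=wics,dc=uwaterloo,dc=ca
objectClass: posixGroup
objectClass: top
cn: audio
gidNumber: 29
```

Keeping the LDAP gidNumber identical to the local system GID is what lets LDAP membership and local membership coexist for the same group.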
&lt;br /&gt;
&lt;br /&gt;
=== Querying LDAP ===&lt;br /&gt;
&lt;br /&gt;
There are many tools available for issuing LDAP queries. Queries should be issued to &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. The search base you almost certainly want is &amp;lt;tt&amp;gt;dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;. Read access is available without authentication; [[Kerberos]] is used to authenticate commands which require it.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca uid=ctdalek&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;-x&amp;lt;/tt&amp;gt; option causes &amp;lt;tt&amp;gt;ldapsearch&amp;lt;/tt&amp;gt; to switch to simple authentication rather than trying to authenticate via SASL (which will fail if you do not have a Kerberos ticket).&lt;br /&gt;
&lt;br /&gt;
The University LDAP server (uwldap.uwaterloo.ca) can also be queried like this. Again, use &amp;quot;simple authentication&amp;quot; as read access is available (from on campus) without authentication. SASL authentication will fail without additional parameters.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h uwldap.uwaterloo.ca -b dc=uwaterloo,dc=ca &amp;quot;cn=Prabhakar Ragde&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Replication ===&lt;br /&gt;
&lt;br /&gt;
While &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth1|auth1]]) is the LDAP master, an up-to-date replica is available on &amp;lt;tt&amp;gt;ldap2.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth2|auth2]]).&lt;br /&gt;
&lt;br /&gt;
In order to replicate changes from the master, the slave maintains an authenticated connection to the master which provides it with full read access to all changes.&lt;br /&gt;
&lt;br /&gt;
Specifically, &amp;lt;tt&amp;gt;/etc/systemd/system/k5start-slapd.service&amp;lt;/tt&amp;gt; maintains an active Kerberos ticket for &amp;lt;tt&amp;gt;ldap/auth2.csclub.uwaterloo.ca@CSCLUB.UWATERLOO.CA&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/var/run/slapd/krb5cc&amp;lt;/tt&amp;gt;. This is then used to authenticate the slave to the master, which maps this principal to &amp;lt;tt&amp;gt;cn=ldap-slave,dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;, which in turn has full read privileges.&lt;br /&gt;
&lt;br /&gt;
In the event of master failure, all hosts should fail LDAP reads seamlessly over to the slave.&lt;br /&gt;
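On the client side, failover typically just means listing both servers in the LDAP client configuration, which tries URIs left to right. A minimal sketch, assuming a stock /etc/ldap/ldap.conf (verify against the actual client configs):

```
# /etc/ldap/ldap.conf (sketch): clients try URIs in order,
# so reads fall back to ldap2 if ldap1 is unreachable
URI     ldaps://ldap1.csclub.uwaterloo.ca ldaps://ldap2.csclub.uwaterloo.ca
BASE    dc=csclub,dc=uwaterloo,dc=ca
```

Note this only covers reads; writes still require the master, since the slave is a read-only replica.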
&lt;br /&gt;
[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing a user&#039;s username ==&lt;br /&gt;
&lt;br /&gt;
Only a member of the Systems Committee can change a user&#039;s username. &#039;&#039;&#039;At all times, a user&#039;s username must match the user&#039;s username in WatIAM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
All changes to an account MUST be done in person so that identity can be confirmed. If a member cannot attend in person, then an alternate method of identity verification may be chosen by the Systems Administrator.&lt;br /&gt;
&lt;br /&gt;
# Edit entries in LDAP (&amp;lt;code&amp;gt;ldapvi -Y GSSAPI&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Find and replace the user&#039;s old username with the new one&lt;br /&gt;
# Change the user&#039;s Kerberos principal (on auth1, &amp;lt;code&amp;gt;renprinc $OLD $NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Move the user&#039;s home directory (on aspartame, &amp;lt;code&amp;gt;mv /users/$OLD /users/$NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Change the user&#039;s csc-general (and csc-industry, if subscribed) email address from &amp;lt;code&amp;gt;$OLD@csclub.uwaterloo.ca&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;$NEW@csclub.uwaterloo.ca&amp;lt;/code&amp;gt;&lt;br /&gt;
#* https://mailman.csclub.uwaterloo.ca/admin/csc-general&lt;br /&gt;
# If the user has vhosts on caffeine, update them to point to their new username&lt;br /&gt;
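The vhost update is a quick find-and-replace over the vhost configs. A minimal sketch; the config location, file layout, and usernames here are assumptions for illustration, not the actual setup on caffeine:

```shell
# Hypothetical sketch -- the vhost config directory is an assumption,
# not the actual layout on caffeine; adjust to match reality.
rename_vhost_user() {
    # Usage: rename_vhost_user OLD NEW VHOST_DIR
    old=$1; new=$2; dir=$3
    # Find configs still referencing the old home directory...
    grep -rl "/users/$old" "$dir" | while read -r f; do
        # ...and point them at the new one (GNU sed in-place edit).
        sed -i "s|/users/$old|/users/$new|g" "$f"
    done
}

# e.g. rename_vhost_user jsmith jdoe /etc/apache2/sites-available
```

Review the matches with plain grep before editing in place, since vhost configs may reference the username in more places than the DocumentRoot.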
&lt;br /&gt;
If the user&#039;s account has been around for a while, and they request it, forward email from their old username to their new one.&lt;br /&gt;
&lt;br /&gt;
# Edit &amp;lt;code&amp;gt;/etc/aliases&amp;lt;/code&amp;gt; on mail, adding the line &amp;lt;code&amp;gt;$OLD: $NEW&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;newaliases&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4297</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4297"/>
		<updated>2019-09-04T22:40:30Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Changing a user&amp;#039;s username */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use [http://www.openldap.org/ OpenLDAP] for directory services. Our primary LDAP server is [[Machine_List#auth1|auth1]] and our secondary LDAP server is [[Machine_List#auth2|auth2]].&lt;br /&gt;
&lt;br /&gt;
=== ehashman&#039;s Guide to Setting up OpenLDAP on Debian ===&lt;br /&gt;
&lt;br /&gt;
Welcome to my nightmare.&lt;br /&gt;
&lt;br /&gt;
==== What is LDAP? ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;LDAP:&#039;&#039;&#039; Lightweight Directory Access Protocol&lt;br /&gt;
&lt;br /&gt;
An open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. — [https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol Wikipedia: LDAP]&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
In this case, &amp;amp;quot;directory&amp;amp;quot; refers to the user directory, like on an old-school Rolodex. Many groups use LDAP to maintain their user directory, including the University (the &amp;amp;quot;WatIAM&amp;amp;quot; identity management system), the Computer Science Club, and even the UW Amateur Radio Club.&lt;br /&gt;
&lt;br /&gt;
This is a guide documenting how to set up LDAP on a Debian Linux system.&lt;br /&gt;
&lt;br /&gt;
==== First steps ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Ensure that openldap is installed on the machine:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# apt-get install slapd ldap-utils&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Debian will do a lot of magic and set up a skeleton LDAP server and get it running. We need to configure that further.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let&#039;s set up logging before we forget. Create the log directory and log file in &amp;lt;code&amp;gt;/var/log&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# mkdir /var/log/ldap&lt;br /&gt;
# touch /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set ownership correctly:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown openldap:openldap /var/log/ldap&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up rsyslog to dump the LDAP logs into &amp;lt;code&amp;gt;/var/log/ldap.log&amp;lt;/code&amp;gt; by adding the following lines:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/rsyslog.conf&lt;br /&gt;
...&lt;br /&gt;
# Grab ldap logs, don&#039;t duplicate in syslog&lt;br /&gt;
local4.*                        /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up log rotation for these by creating the file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/logrotate.d.ldap &amp;lt;code&amp;gt;/etc/logrotate.d/ldap&amp;lt;/code&amp;gt;] with the following contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;/var/log/ldap/*log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 1000&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
    create 0640 openldap adm&lt;br /&gt;
    postrotate&lt;br /&gt;
        if [ -f /var/run/slapd/slapd.pid ]; then&lt;br /&gt;
            /etc/init.d/slapd restart &amp;amp;gt;/dev/null 2&amp;amp;gt;&amp;amp;amp;1&lt;br /&gt;
        fi&lt;br /&gt;
    endscript&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/var/log/ldap.log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 24&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;As of OpenLDAP 2.4, it doesn&#039;t actually create a config file for us. Apparently, this is a &amp;amp;quot;feature&amp;amp;quot;: LDAP maintainers think we should want to set this up via dynamic queries. We don&#039;t, so the first thing we need is our [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/slapd.conf &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;] file.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Building &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt; from scratch =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Get a copy to work with:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# scp uid@auth1.csclub.uwaterloo.ca:/etc/ldap/slapd.conf /etc/ldap/  ## you need CSC root for this&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You&#039;ll want to comment out the TLS lines, and anything referring to Kerberos and access for now. You&#039;ll also want to comment out lines specifically referring to syscom and office staff.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Make sure you remove the reference to &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; as an index, as we&#039;re going to remove this field.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You&#039;ll also need to generate a root password for the LDAP to bootstrap auth, like so:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slappasswd&lt;br /&gt;
New password: &lt;br /&gt;
Re-enter new password:&lt;br /&gt;
{SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Add this line below &amp;lt;code&amp;gt;rootdn&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;rootpw          {SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we want to edit all instances of &amp;amp;quot;csclub&amp;amp;quot; to be &amp;amp;quot;wics&amp;amp;quot; instead, e.g.:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;suffix     &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
rootdn     &amp;amp;quot;cn=root,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, we need to grab all the relevant schemas:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;scp -r uid@auth1.csclub.uwaterloo.ca:/etc/ldap/schema/ /tmp/schemas&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use the include directives to help you find the ones you need. I noticed we were missing &amp;lt;code&amp;gt;sudo.schema&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;rfc2307bis.schema&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open up the [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/csc.schema &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;] for editing; we&#039;re not using it verbatim. Remove the attributes &amp;lt;code&amp;gt;studentid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; and the objectclass &amp;lt;code&amp;gt;club&amp;lt;/code&amp;gt;. Also make sure you change the OID so we don&#039;t clash with the CSC. Because we didn&#039;t want to go through the process of requesting a [http://pen.iana.org/pen/PenApplication.page PEN number], we chose arbitrarily to use 26338, which belongs to IWICS Inc.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to can the auto-generated config files, so do that:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Also nuke the auto-generated database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm /var/lib/ldap/__db.*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Configure the database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# cp /usr/share/slapd/DB_CONFIG /var/lib/ldap/&lt;br /&gt;
# chown openldap:openldap /var/lib/ldap/DB_CONFIG &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we can generate the new configuration files:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And ensure that the permissions are all set correctly, lest this break something:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If at this point you get a nasty error, such as&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;5657d4db hdb_db_open: database &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;: db_open(/var/lib/ldap/id2entry.bdb) failed: No such file or directory (2).&lt;br /&gt;
5657d4db backend_startup_one (type=hdb, suffix=&amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;): bi_db_open failed! (2)&lt;br /&gt;
slap_startup failed (test would succeed using the -u switch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Just try restarting slapd, and see if that fixes the problem:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd stop&lt;br /&gt;
# service slapd start&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Congratulations! Your LDAP service is now configured and running.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting TLS Up and Running ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now that we have our LDAP service, we&#039;ll want to be able to serve encrypted traffic. This is especially important for any remote access, since binding to LDAP (i.e. sending it a password for auth) occurs over plaintext, and we don&#039;t want to leak our admin password.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Our first step is to copy our SSL certificates into the correct places. Public ones go into &amp;lt;code&amp;gt;/etc/ssl/certs/&amp;lt;/code&amp;gt; and private ones go into &amp;lt;code&amp;gt;/etc/ssl/private/&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Since the LDAP daemon needs to be able to read our private key, we need to grant the &amp;lt;code&amp;gt;openldap&amp;lt;/code&amp;gt; group access to the private folder:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chgrp openldap /etc/ssl/private &lt;br /&gt;
# chmod g+x /etc/ssl/private&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, uncomment the TLS-related settings in &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;. These are &amp;lt;code&amp;gt;TLSCertificateFile&amp;lt;/code&amp;gt; (the public cert), &amp;lt;code&amp;gt;TLSCertificateKeyFile&amp;lt;/code&amp;gt; (the private key), &amp;lt;code&amp;gt;TLSCACertificateFile&amp;lt;/code&amp;gt; (the intermediate CA cert), and &amp;lt;code&amp;gt;TLSVerifyClient&amp;lt;/code&amp;gt; (set to &amp;amp;quot;allow&amp;amp;quot;).&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# enable TLS connections&lt;br /&gt;
TLSCertificateFile      /etc/ssl/certs/wics-wildcard.crt&lt;br /&gt;
TLSCertificateKeyFile   /etc/ssl/private/wics-wildcard.key&lt;br /&gt;
&lt;br /&gt;
# enable TLS client authentication&lt;br /&gt;
TLSCACertificateFile    /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
TLSVerifyClient         allow&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Update all your LDAP settings:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&lt;br /&gt;
# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&lt;br /&gt;
# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And last, ensure that LDAP will actually serve &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt; by modifying the init script variables in &amp;lt;code&amp;gt;/etc/default/&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/default/slapd&lt;br /&gt;
...&lt;br /&gt;
SLAPD_SERVICES=&amp;amp;quot;ldap:/// ldapi:/// ldaps:///&amp;amp;quot;&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now you can restart the LDAP server:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd restart&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And assuming this is successful, test to ensure LDAP is serving on port 636 for &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# netstat -ntaup&lt;br /&gt;
Active Internet connections (servers and established)&lt;br /&gt;
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name&lt;br /&gt;
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      22847/slapd     &lt;br /&gt;
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      22847/slapd &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
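&lt;br /&gt;
An open port doesn&#039;t prove the handshake works, so it&#039;s also worth making one query over TLS. A minimal check, run on the server itself (&amp;lt;code&amp;gt;LDAPTLS_REQCERT=never&amp;lt;/code&amp;gt; only skips certificate verification against localhost for this test; query the real hostname in production):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# LDAPTLS_REQCERT=never ldapsearch -x -H ldaps://localhost -s base -b &amp;amp;quot;&amp;amp;quot; namingContexts&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A healthy server should return the root DSE with &amp;lt;code&amp;gt;namingContexts: dc=wics,dc=uwaterloo,dc=ca&amp;lt;/code&amp;gt;.&lt;br /&gt;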
&lt;br /&gt;
==== Populating the Database ====&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ll need to start adding objects to the database. While we&#039;ll want to mostly do this programmatically, there are a few entries we&#039;ll need to bootstrap.&lt;br /&gt;
&lt;br /&gt;
===== Root Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Start by writing a file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/tree.ldif &amp;lt;code&amp;gt;tree.ldif&amp;lt;/code&amp;gt;] that creates a few necessary &amp;amp;quot;roots&amp;amp;quot; in our LDAP tree, with the contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now attempt an LDAP add, using the password you set earlier:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f tree.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Test that everything turned out okay by querying the entire database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -h localhost&lt;br /&gt;
# extended LDIF&lt;br /&gt;
#&lt;br /&gt;
# LDAPv3&lt;br /&gt;
# base &amp;amp;lt;dc=wics,dc=uwaterloo,dc=ca&amp;amp;gt; (default) with scope subtree&lt;br /&gt;
# filter: (objectclass=*)&lt;br /&gt;
# requesting: ALL&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
# wics.uwaterloo.ca&lt;br /&gt;
dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
# People, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
# Group, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&lt;br /&gt;
&lt;br /&gt;
# search result&lt;br /&gt;
search: 2&lt;br /&gt;
result: 0 Success&lt;br /&gt;
&lt;br /&gt;
# numResponses: 4&lt;br /&gt;
# numEntries: 3&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Users and Groups =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, add placeholder entries to track the next available UID and GID. This saves us from querying the entire database every time we make a new user or group. Create this file, [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/nextxid.ldif &amp;lt;code&amp;gt;nextxid.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
cn: nextuid&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
homeDirectory: /dev/null&lt;br /&gt;
&lt;br /&gt;
dn: cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: group&lt;br /&gt;
objectClass: posixGroup&lt;br /&gt;
objectClass: top&lt;br /&gt;
gidNumber: 10000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;You&#039;ll see here that our first GID is 10000 and our first UID is 20000.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them, like you did with the roots of the tree:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f nextxid.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
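&lt;br /&gt;
The standard way for tooling to consume these counters is an atomic compare-and-swap: delete the current value and add the incremented one in a single modify, which fails harmlessly if another client raced ahead. Whether our account scripts do exactly this is up to them; the LDIF below, fed to &amp;lt;code&amp;gt;ldapmodify&amp;lt;/code&amp;gt;, is just the pattern (values illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
changetype: modify&lt;br /&gt;
delete: uidNumber&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
-&lt;br /&gt;
add: uidNumber&lt;br /&gt;
uidNumber: 20001&amp;lt;/pre&amp;gt;&lt;br /&gt;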
&lt;br /&gt;
===== Special &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need a SUDOers OU with a defaults object for default sudo settings, plus entries for syscom, whose members can use sudo on all hosts, and for termcom, whose members can use sudo only on the office terminals. Call this one [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/sudoers.ldif &amp;lt;code&amp;gt;sudoers.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: SUDOers&lt;br /&gt;
&lt;br /&gt;
dn: cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: defaults&lt;br /&gt;
sudoOption: !lecture&lt;br /&gt;
sudoOption: env_reset&lt;br /&gt;
sudoOption: listpw=never&lt;br /&gt;
sudoOption: mailto=&amp;amp;quot;wics-sys@lists.uwaterloo.ca&amp;amp;quot;&lt;br /&gt;
sudoOption: shell_noargs&lt;br /&gt;
&lt;br /&gt;
dn: cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %syscom&lt;br /&gt;
sudoUser: %syscom&lt;br /&gt;
sudoHost: ALL&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&lt;br /&gt;
&lt;br /&gt;
dn: cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %termcom&lt;br /&gt;
sudoUser: %termcom&lt;br /&gt;
sudoHost: honk&lt;br /&gt;
sudoHost: hiss&lt;br /&gt;
sudoHost: gosling&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f sudoers.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Last, add some special local groups via [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/local-groups.ldif &amp;lt;code&amp;gt;local-groups.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f local-groups.ldif&amp;lt;/pre&amp;gt;&lt;br /&gt;
The local groups are special because they are usually present on all systems, but we want to be able to add users to them at the LDAP level. For instance, the audio group controls access to sound equipment, and the adm group controls read access to logs.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;That&#039;s all the entries we have to add manually! Now we can use software for the rest. See [[weo|&amp;lt;code&amp;gt;weo&amp;lt;/code&amp;gt;]] for more details.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
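&lt;br /&gt;
To confirm the sudo entries landed where &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt;&#039;s LDAP plugin will look for them, you can query them back (assuming the default read ACL still applies, no bind is needed):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -b ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca sudoUser=%syscom&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This should return the &amp;lt;code&amp;gt;cn=%syscom&amp;lt;/code&amp;gt; role with &amp;lt;code&amp;gt;sudoHost: ALL&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;sudoCommand: ALL&amp;lt;/code&amp;gt;.&lt;br /&gt;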
&lt;br /&gt;
&lt;br /&gt;
=== Querying LDAP ===&lt;br /&gt;
&lt;br /&gt;
There are many tools available for issuing LDAP queries. Queries should be issued to &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. The search base you almost certainly want is &amp;lt;tt&amp;gt;dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;. Read access is available without authentication; [[Kerberos]] is used to authenticate commands which require it.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca uid=ctdalek&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;-x&amp;lt;/tt&amp;gt; option causes &amp;lt;tt&amp;gt;ldapsearch&amp;lt;/tt&amp;gt; to switch to simple authentication rather than trying to authenticate via SASL (which will fail if you do not have a Kerberos ticket).&lt;br /&gt;
&lt;br /&gt;
The University LDAP server (uwldap.uwaterloo.ca) can also be queried like this. Again, use &amp;quot;simple authentication&amp;quot; as read access is available (from on campus) without authentication. SASL authentication will fail without additional parameters.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h uwldap.uwaterloo.ca -b dc=uwaterloo,dc=ca &amp;quot;cn=Prabhakar Ragde&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Replication ===&lt;br /&gt;
&lt;br /&gt;
While &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth1|auth1]]) is the LDAP master, an up-to-date replica is available on &amp;lt;tt&amp;gt;ldap2.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth2|auth2]]).&lt;br /&gt;
&lt;br /&gt;
In order to replicate changes from the master, the slave maintains an authenticated connection to the master which provides it with full read access to all changes.&lt;br /&gt;
&lt;br /&gt;
Specifically, &amp;lt;tt&amp;gt;/etc/systemd/system/k5start-slapd.service&amp;lt;/tt&amp;gt; maintains an active Kerberos ticket for &amp;lt;tt&amp;gt;ldap/auth2.csclub.uwaterloo.ca@CSCLUB.UWATERLOO.CA&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/var/run/slapd/krb5cc&amp;lt;/tt&amp;gt;. This ticket is then used to authenticate the slave to the master, which maps the principal to &amp;lt;tt&amp;gt;cn=ldap-slave,dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;, which in turn has full read privileges.&lt;br /&gt;
&lt;br /&gt;
In the event of master failure, all hosts should fail over seamlessly to the slave for LDAP reads.&lt;br /&gt;
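&lt;br /&gt;
To check that the replica is current, compare the database&#039;s &amp;lt;tt&amp;gt;contextCSN&amp;lt;/tt&amp;gt; on both servers; the values match once the slave has caught up:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca -s base contextCSN&lt;br /&gt;
ldapsearch -x -h ldap2.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca -s base contextCSN&amp;lt;/pre&amp;gt;&lt;br /&gt;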
&lt;br /&gt;
[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing a user&#039;s username ==&lt;br /&gt;
&lt;br /&gt;
Only a member of the Systems Committee can change a user&#039;s username. &#039;&#039;&#039;At all times, a user&#039;s username must match the user&#039;s username in WatIAM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
All changes to an account MUST be done in person so that identity can be confirmed. If a member cannot attend in person, then an alternate method of identity verification may be chosen by the Systems Administrator.&lt;br /&gt;
&lt;br /&gt;
# Edit entries in LDAP (&amp;lt;code&amp;gt;ldapvi -Y GSSAPI&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Find and replace the user&#039;s old username with the new one&lt;br /&gt;
# Change the user&#039;s Kerberos principal (on auth1, &amp;lt;code&amp;gt;renprinc $OLD $NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Move the user&#039;s home directory (on aspartame, &amp;lt;code&amp;gt;mv /users/$OLD /users/$NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Change the user&#039;s csc-general (and csc-industry, if subscribed) email address from $OLD@csclub.uwaterloo.ca to $NEW@csclub.uwaterloo.ca&lt;br /&gt;
#* https://mailman.csclub.uwaterloo.ca/admin/csc-general&lt;br /&gt;
# If the user has vhosts on caffeine, update them to point to their new username&lt;br /&gt;
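&lt;br /&gt;
The Kerberos and home-directory steps above are one command each; as a sketch, with &amp;lt;code&amp;gt;$OLD&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;$NEW&amp;lt;/code&amp;gt; set appropriately:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# renprinc $OLD $NEW            ## on auth1&lt;br /&gt;
# mv /users/$OLD /users/$NEW    ## on aspartame&amp;lt;/pre&amp;gt;&lt;br /&gt;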
&lt;br /&gt;
If the user&#039;s account has been around for a while, and they request it, forward email from their old username to their new one.&lt;br /&gt;
&lt;br /&gt;
# Edit &amp;lt;code&amp;gt;/etc/aliases&amp;lt;/code&amp;gt; on mail, adding the line &amp;lt;code&amp;gt;$OLD: $NEW&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;newaliases&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4296</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4296"/>
		<updated>2019-09-04T22:39:21Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use [http://www.openldap.org/ OpenLDAP] for directory services. Our primary LDAP server is [[Machine_List#auth1|auth1]] and our secondary LDAP server is [[Machine_List#auth2|auth2]].&lt;br /&gt;
&lt;br /&gt;
=== ehashman&#039;s Guide to Setting up OpenLDAP on Debian ===&lt;br /&gt;
&lt;br /&gt;
Welcome to my nightmare.&lt;br /&gt;
&lt;br /&gt;
==== What is LDAP? ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;LDAP:&#039;&#039;&#039; Lightweight Directory Access Protocol&lt;br /&gt;
&lt;br /&gt;
An open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. — [https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol Wikipedia: LDAP]&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
In this case, &amp;amp;quot;directory&amp;amp;quot; refers to the user directory, like on an old-school Rolodex. Many groups use LDAP to maintain their user directory, including the University (the &amp;amp;quot;WatIAM&amp;amp;quot; identity management system), the Computer Science Club, and even the UW Amateur Radio Club.&lt;br /&gt;
&lt;br /&gt;
This is a guide documenting how to set up LDAP on a Debian Linux system.&lt;br /&gt;
&lt;br /&gt;
==== First steps ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Ensure that openldap is installed on the machine:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# apt-get install slapd ldap-utils&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Debian will do a lot of magic and set up a skeleton LDAP server and get it running. We need to configure that further.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let&#039;s set up logging before we forget. Create the log directory and file in &amp;lt;code&amp;gt;/var/log&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# mkdir /var/log/ldap&lt;br /&gt;
# touch /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set ownership correctly:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown openldap:openldap /var/log/ldap&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up rsyslog to dump the LDAP logs into &amp;lt;code&amp;gt;/var/log/ldap.log&amp;lt;/code&amp;gt; by adding the following lines:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/rsyslog.conf&lt;br /&gt;
...&lt;br /&gt;
# Grab ldap logs, don&#039;t duplicate in syslog&lt;br /&gt;
local4.*                        /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up log rotation for these by creating the file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/logrotate.d.ldap &amp;lt;code&amp;gt;/etc/logrotate.d/ldap&amp;lt;/code&amp;gt;] with the following contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;/var/log/ldap/*log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 1000&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
    create 0640 openldap adm&lt;br /&gt;
    postrotate&lt;br /&gt;
        if [ -f /var/run/slapd/slapd.pid ]; then&lt;br /&gt;
            /etc/init.d/slapd restart &amp;amp;gt;/dev/null 2&amp;amp;gt;&amp;amp;amp;1&lt;br /&gt;
        fi&lt;br /&gt;
    endscript&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/var/log/ldap.log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 24&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;As of OpenLDAP 2.4, installation doesn&#039;t actually create a config file for us. Apparently, this is a &amp;amp;quot;feature&amp;amp;quot;: the OpenLDAP maintainers expect configuration to be managed dynamically through the &amp;lt;code&amp;gt;cn=config&amp;lt;/code&amp;gt; backend. We don&#039;t want that, so the first thing we need is our [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/slapd.conf &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;] file.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Building &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt; from scratch =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Get a copy to work with:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# scp uid@auth1.csclub.uwaterloo.ca:/etc/ldap/slapd.conf /etc/ldap/  ## you need CSC root for this&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You&#039;ll want to comment out the TLS lines, and anything referring to Kerberos and access for now. You&#039;ll also want to comment out lines specifically referring to syscom and office staff.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Make sure you remove the reference to &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; as an index, as we&#039;re going to remove this field.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You&#039;ll also need to generate a root password for the LDAP to bootstrap auth, like so:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slappasswd&lt;br /&gt;
New password: &lt;br /&gt;
Re-enter new password:&lt;br /&gt;
{SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Add this line below &amp;lt;code&amp;gt;rootdn&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;rootpw          {SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we want to edit all instances of &amp;amp;quot;csclub&amp;amp;quot; to be &amp;amp;quot;wics&amp;amp;quot; instead, e.g.:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;suffix     &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
rootdn     &amp;amp;quot;cn=root,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, we need to grab all the relevant schemas:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;scp -r uid@auth1.csclub.uwaterloo.ca:/etc/ldap/schema/ /tmp/schemas&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use the include directives to help you find the ones you need. I noticed we were missing &amp;lt;code&amp;gt;sudo.schema&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;rfc2307bis.schema&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open up the [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/csc.schema &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;] for editing; we&#039;re not using it verbatim. Remove the attributes &amp;lt;code&amp;gt;studentid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; and the objectclass &amp;lt;code&amp;gt;club&amp;lt;/code&amp;gt;. Also make sure you change the OID so we don&#039;t clash with the CSC. Because we didn&#039;t want to go through the process of requesting a [http://pen.iana.org/pen/PenApplication.page PEN number], we chose arbitrarily to use 26338, which belongs to IWICS Inc.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to can the auto-generated config files, so do that:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Also nuke the auto-generated database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm /var/lib/ldap/__db.*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Configure the database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# cp /usr/share/slapd/DB_CONFIG /var/lib/ldap/&lt;br /&gt;
# chown openldap:openldap /var/lib/ldap/DB_CONFIG &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we can generate the new configuration files:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And ensure that the permissions are all set correctly, lest this break something:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If at this point you get a nasty error, such as&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;5657d4db hdb_db_open: database &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;: db_open(/var/lib/ldap/id2entry.bdb) failed: No such file or directory (2).&lt;br /&gt;
5657d4db backend_startup_one (type=hdb, suffix=&amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;): bi_db_open failed! (2)&lt;br /&gt;
slap_startup failed (test would succeed using the -u switch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Just try restarting slapd, and see if that fixes the problem:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd stop&lt;br /&gt;
# service slapd start&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Congratulations! Your LDAP service is now configured and running.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting TLS Up and Running ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now that we have our LDAP service, we&#039;ll want to be able to serve encrypted traffic. This is especially important for any remote access, since binding to LDAP (i.e. sending it a password for auth) occurs over plaintext, and we don&#039;t want to leak our admin password.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Our first step is to copy our SSL certificates into the correct places. Public ones go into &amp;lt;code&amp;gt;/etc/ssl/certs/&amp;lt;/code&amp;gt; and private ones go into &amp;lt;code&amp;gt;/etc/ssl/private/&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Since the LDAP daemon needs to be able to read our private cert, we need to grant LDAP access to the private folder:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chgrp openldap /etc/ssl/private &lt;br /&gt;
# chmod g+x /etc/ssl/private&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, uncomment the TLS-related settings in &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;. These are &amp;lt;code&amp;gt;TLSCertificateFile&amp;lt;/code&amp;gt; (the public cert), &amp;lt;code&amp;gt;TLSCertificateKeyFile&amp;lt;/code&amp;gt; (the private key), &amp;lt;code&amp;gt;TLSCACertificateFile&amp;lt;/code&amp;gt; (the intermediate CA cert), and &amp;lt;code&amp;gt;TLSVerifyClient&amp;lt;/code&amp;gt; (set to &amp;amp;quot;allow&amp;amp;quot;).&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# enable TLS connections&lt;br /&gt;
TLSCertificateFile      /etc/ssl/certs/wics-wildcard.crt&lt;br /&gt;
TLSCertificateKeyFile   /etc/ssl/private/wics-wildcard.key&lt;br /&gt;
&lt;br /&gt;
# enable TLS client authentication&lt;br /&gt;
TLSCACertificateFile    /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
TLSVerifyClient         allow&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Update all your LDAP settings:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&lt;br /&gt;
# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&lt;br /&gt;
# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And last, ensure that LDAP will actually serve &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt; by modifying the init script variables in &amp;lt;code&amp;gt;/etc/default/&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/default/slapd&lt;br /&gt;
...&lt;br /&gt;
SLAPD_SERVICES=&amp;amp;quot;ldap:/// ldapi:/// ldaps:///&amp;amp;quot;&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now you can restart the LDAP server:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd restart&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And assuming this is successful, test to ensure LDAP is serving on port 636 for &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# netstat -ntaup&lt;br /&gt;
Active Internet connections (servers and established)&lt;br /&gt;
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name&lt;br /&gt;
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      22847/slapd     &lt;br /&gt;
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      22847/slapd &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Populating the Database ====&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ll need to start adding objects to the database. While we&#039;ll mostly want to do this programmatically, there are a few entries we&#039;ll need to bootstrap.&lt;br /&gt;
&lt;br /&gt;
===== Root Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Start by creating a file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/tree.ldif &amp;lt;code&amp;gt;tree.ldif&amp;lt;/code&amp;gt;] to create a few necessary &amp;amp;quot;roots&amp;amp;quot; in our LDAP tree, with the contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now attempt an LDAP add, using the password you set earlier:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f tree.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Test that everything turned out okay by querying the entire database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -h localhost&lt;br /&gt;
# extended LDIF&lt;br /&gt;
#&lt;br /&gt;
# LDAPv3&lt;br /&gt;
# base &amp;amp;lt;dc=wics,dc=uwaterloo,dc=ca&amp;amp;gt; (default) with scope subtree&lt;br /&gt;
# filter: (objectclass=*)&lt;br /&gt;
# requesting: ALL&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
# wics.uwaterloo.ca&lt;br /&gt;
dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
# People, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
# Group, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&lt;br /&gt;
&lt;br /&gt;
# search result&lt;br /&gt;
search: 2&lt;br /&gt;
result: 0 Success&lt;br /&gt;
&lt;br /&gt;
# numResponses: 4&lt;br /&gt;
# numEntries: 3&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Users and Groups =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, add placeholder entries to track the next available UID and GID. This saves us from querying the entire database every time we make a new user or group. Create this file, [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/nextxid.ldif &amp;lt;code&amp;gt;nextxid.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
cn: nextuid&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
homeDirectory: /dev/null&lt;br /&gt;
&lt;br /&gt;
dn: cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: group&lt;br /&gt;
objectClass: posixGroup&lt;br /&gt;
objectClass: top&lt;br /&gt;
gidNumber: 10000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;You&#039;ll see here that our first GID is 10000 and our first UID is 20000.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them, like you did with the roots of the tree:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f nextxid.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
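Account-creation tooling can then claim the current value and bump the counter with an LDAP modify along these lines (an illustrative sketch; the real allocation logic lives in the account-management software):

```
# bump nextuid after handing out 20000 (the delete-then-add pair is applied atomically)
dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca
changetype: modify
delete: uidNumber
uidNumber: 20000
-
add: uidNumber
uidNumber: 20001
```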
&lt;br /&gt;
===== Special &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to add a sudoers OU with a defaults object for default sudo settings. We also need entries for syscom, such that members of the syscom group can use sudo on all hosts, and for termcom, whose members can use sudo on only the office terminals. Call this one [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/sudoers.ldif &amp;lt;code&amp;gt;sudoers.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: SUDOers&lt;br /&gt;
&lt;br /&gt;
dn: cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: defaults&lt;br /&gt;
sudoOption: !lecture&lt;br /&gt;
sudoOption: env_reset&lt;br /&gt;
sudoOption: listpw=never&lt;br /&gt;
sudoOption: mailto=&amp;amp;quot;wics-sys@lists.uwaterloo.ca&amp;amp;quot;&lt;br /&gt;
sudoOption: shell_noargs&lt;br /&gt;
&lt;br /&gt;
dn: cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %syscom&lt;br /&gt;
sudoUser: %syscom&lt;br /&gt;
sudoHost: ALL&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&lt;br /&gt;
&lt;br /&gt;
dn: cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %termcom&lt;br /&gt;
sudoUser: %termcom&lt;br /&gt;
sudoHost: honk&lt;br /&gt;
sudoHost: hiss&lt;br /&gt;
sudoHost: gosling&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f sudoers.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Last, add some special local groups via [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/local-groups.ldif &amp;lt;code&amp;gt;local-groups.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f local-groups.ldif&amp;lt;/pre&amp;gt;&lt;br /&gt;
The local groups are special because they are usually present locally on all systems, but we want to be able to add users to them at the LDAP level. For instance, the audio group controls access to sound equipment, and the adm group controls log read access.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;That&#039;s all the entries we have to add manually! Now we can use software for the rest. See [[weo|&amp;lt;code&amp;gt;weo&amp;lt;/code&amp;gt;]] for more details.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
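The linked file isn't reproduced here, but a local-group entry in local-groups.ldif presumably looks something like the following (a hypothetical sketch with a made-up gidNumber; see the linked file for the real contents):

```
dn: cn=audio,ou=Group,dc=wics,dc=uwaterloo,dc=ca
objectClass: top
objectClass: posixGroup
cn: audio
gidNumber: 10001
```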
&lt;br /&gt;
&lt;br /&gt;
=== Querying LDAP ===&lt;br /&gt;
&lt;br /&gt;
There are many tools available for issuing LDAP queries. Queries should be issued to &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. The search base you almost certainly want is &amp;lt;tt&amp;gt;dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;. Read access is available without authentication; [[Kerberos]] is used to authenticate commands which require it.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca uid=ctdalek&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;-x&amp;lt;/tt&amp;gt; option causes &amp;lt;tt&amp;gt;ldapsearch&amp;lt;/tt&amp;gt; to switch to simple authentication rather than trying to authenticate via SASL (which will fail if you do not have a Kerberos ticket).&lt;br /&gt;
&lt;br /&gt;
The University LDAP server (uwldap.uwaterloo.ca) can also be queried like this. Again, use &amp;quot;simple authentication&amp;quot; as read access is available (from on campus) without authentication. SASL authentication will fail without additional parameters.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h uwldap.uwaterloo.ca -b dc=uwaterloo,dc=ca &amp;quot;cn=Prabhakar Ragde&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Replication ===&lt;br /&gt;
&lt;br /&gt;
While &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth1|auth1]]) is the LDAP master, an up-to-date replica is available on &amp;lt;tt&amp;gt;ldap2.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth2|auth2]]).&lt;br /&gt;
&lt;br /&gt;
In order to replicate changes from the master, the slave maintains an authenticated connection to the master which provides it with full read access to all changes.&lt;br /&gt;
&lt;br /&gt;
Specifically, &amp;lt;tt&amp;gt;/etc/systemd/system/k5start-slapd.service&amp;lt;/tt&amp;gt; maintains an active Kerberos ticket for &amp;lt;tt&amp;gt;ldap/auth2.csclub.uwaterloo.ca@CSCLUB.UWATERLOO.CA&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/var/run/slapd/krb5cc&amp;lt;/tt&amp;gt;. This is then used to authenticate the slave to the server, who maps this principal to &amp;lt;tt&amp;gt;cn=ldap-slave,dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;, which in turn has full read privileges.&lt;br /&gt;
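Such a k5start unit might look roughly like the following (a sketch for illustration only, not the actual unit file on auth2):

```
[Unit]
Description=Maintain Kerberos ticket cache for slapd replication
Before=slapd.service

[Service]
# -U: take the client principal from the keytab; -K 10: recheck every 10 minutes;
# -o: make the cache readable by the openldap user
ExecStart=/usr/bin/k5start -f /etc/krb5.keytab -U -K 10 -o openldap -k /var/run/slapd/krb5cc
Restart=always

[Install]
WantedBy=multi-user.target
```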
&lt;br /&gt;
In the event of master failure, all hosts should fail LDAP reads seamlessly over to the slave.&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing a user&#039;s username ==&lt;br /&gt;
&lt;br /&gt;
Only a member of the Systems Committee can change a user&#039;s username. &#039;&#039;&#039;At all times, a user&#039;s username must match the user&#039;s username in WatIAM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
All changes to an account MUST be done in person so that identity can be confirmed. If a member cannot attend in person, then an alternate method of identity verification may be chosen by the Systems Administrator.&lt;br /&gt;
&lt;br /&gt;
# Edit entries in LDAP (&amp;lt;code&amp;gt;ldapvi -Y GSSAPI&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Find and replace the user&#039;s old username with the new one&lt;br /&gt;
# Change the user&#039;s Kerberos principal (on auth1, &amp;lt;code&amp;gt;renprinc $OLD $NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Move the user&#039;s home directory (on aspartame)&lt;br /&gt;
# Change the user&#039;s csc-general (and csc-industry, if subscribed) email address from $OLD@csclub.uwaterloo.ca to $NEW@csclub.uwaterloo.ca&lt;br /&gt;
#* https://mailman.csclub.uwaterloo.ca/admin/csc-general&lt;br /&gt;
# If the user has vhosts on caffeine, update them to point to their new username&lt;br /&gt;
&lt;br /&gt;
If the user&#039;s account has been around for a while, and they request it, forward email from their old username to their new one.&lt;br /&gt;
&lt;br /&gt;
# Edit &amp;lt;code&amp;gt;/etc/aliases&amp;lt;/code&amp;gt; on mail. &amp;lt;code&amp;gt;$OLD: $NEW&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;newaliases&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=4294</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=4294"/>
		<updated>2019-08-22T04:56:30Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: taurine caught fire&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d *need* to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a large amount of other services.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently a virtual machine hosted on [[#biloba|biloba]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [[Apache]]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
* mail was migrated to [[#mail|mail]]&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no bitcoin mining).&lt;br /&gt;
&lt;br /&gt;
Most people use either taurine and clones or (high-fructose-)corn-syrup. hfcs is probably our beefiest machine at the moment, if you want to do some heavy computation. Again, if you have a long-running, computationally intensive job, it&#039;s good to [https://en.wikipedia.org/wiki/Nice_(Unix) nice] your process, and possibly let syscom know too.&lt;br /&gt;
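For instance, a minimal sketch (the stand-in command is just for illustration; substitute your real workload):

```shell
# run a CPU-heavy job at the lowest scheduling priority (niceness 19);
# sha256sum here stands in for a real long-running computation
nice -n 19 sha256sum /dev/null
```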
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
* has IPMI on corn-syrup-ipmi.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is our more powerful version of corn-syrup. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;taurine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, this machine is currently unavailable (likely permanently). More details will be shared when available.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
* [[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21,22,53,80,81,443,8000,8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sucrose&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sucrose is a [[#taurine|taurine]] clone donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
bit-shifter is an office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel(R) Core(TM)2 Quad CPU    Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
gwem is an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin is an office terminal. It was upgraded in Spring 2014 after an unidentified failure.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours is an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;nullsleep&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep runs MPD for playing music. Control of MPD is available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminals.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
It is named after Gordon Strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Core2 Quad Q8200 @ 2.33GHz&lt;br /&gt;
* 4 GB RAM&lt;br /&gt;
* nVidia GeForce 8600 GTS&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/strombola.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems may only be accessible to members of the [[Systems Committee]] for a variety of reasons; the most common of which being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
== &#039;&#039;aspartame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
aspartame is a taurine clone donated by CSCF. It is currently our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which has been temporarily moved to [[#dextrose|dextrose]]. The LXC files are still present but must not be started, or else the two copies of auth1 will collide.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;dextrose&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
dextrose is a [[#taurine|taurine]] clone donated by CSCF. It currently hosts [[#mathnews|the mathNEWS server]], [[#auth1|auth1]], [[#rt|rt]] and [[#munin|munin]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 72GB drives in RAID1 (LVM dextrose)&lt;br /&gt;
* 2 1TB drives in RAID1 (LVM dextrose2)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;auth1&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Container on [[#dextrose|dextrose]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[LDAP]] master&lt;br /&gt;
* [[Kerberos]] master&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;coffee&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on [[#ginkgo|ginkgo]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Database#MySQL|MySQL]]&lt;br /&gt;
* [[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cobalamin&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics. Will act as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
* http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]]&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc with non-free.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
&lt;br /&gt;
* For some reason, the keyboard is terrible; try to avoid having to use it. It&#039;s doable, but painful. IPMI works now, so we rarely need to ask for physical access anyway.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;auth2&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[LDAP]] slave&lt;br /&gt;
* [[Kerberos]] slave&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Virtualization#Linux_Container|Linux containers]]; see [[#caffeine|caffeine]], [[#mail|mail]], [[#munin|munin]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;mail&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#biloba|biloba]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;psilodump&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, host disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;phlogiston&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, host disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sodium-benzoate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-benzoate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 1 x Intel Xeon E5-2630 (8 cores, 2.40 GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the four campus internet links; on-campus traffic is not limited.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Talks]] mirror&lt;br /&gt;
* [[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;munin&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#dextrose|dextrose]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;yerba-mate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad-core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
* mattermost (under development)&lt;br /&gt;
* shibboleth (under development)&lt;br /&gt;
&lt;br /&gt;
Also used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to Syscom only access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;guayusa&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad-core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
Currently in use for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
* logstash (testing of logstash)&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* block1.cloud&lt;br /&gt;
* object1.cloud&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;ginkgo&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (a 50GbE SFP+ card was also included, which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;biloba&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;fs00&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
fs00 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;fs01&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
fs01 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;goto80&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a small ARM machine we picked up in order to have similar hardware to the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. Located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use the &amp;lt;code&amp;gt;-march=armv5te -mtune=arm926ej-s&amp;lt;/code&amp;gt; options with GCC.&lt;br /&gt;
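For example, a cross-compile invocation for this board might look like the following (the toolchain prefix and file names are illustrative, not from our setup):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;arm-none-eabi-gcc -march=armv5te -mtune=arm926ej-s -O2 -c kernel.c -o kernel.o&amp;lt;/pre&amp;gt;&lt;br /&gt;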
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;binaerpilot&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is currently attached to corn-syrup in the machine room and even more currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 @ 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;anamanaguchi&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 @ 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;digital cutter&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;mathnews&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
[[#dextrose|dextrose]] hosts a container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;glomag&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;Lisp machine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;ginseng&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the NetApp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29 RAID 10] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;calum&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
The server from back before recorded memory.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;paza&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233 MHz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;romana&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that has been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1 compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts only supporting &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sodium-citrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot, you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1 on the boot server. When the O2 boots, hit F5 at the boot menu and type &amp;lt;code&amp;gt;bootp():&amp;lt;/code&amp;gt;.&lt;br /&gt;
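The kernel setting above can be applied with sysctl, avoiding a shell redirect (run as root on the machine serving the net boot):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;sysctl -w net.ipv4.ip_no_pmtu_disc=1&amp;lt;/pre&amp;gt;&lt;br /&gt;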
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;acesulfame-potassium&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;skynet&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
* [http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
* [http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
* [http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;freebsd&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;rainbowdragoneyes&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. It was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;denardo&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power machine on/off&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;artificial-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-citrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-nitrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems which are x86/x86-64 and Linux/Debian. After multiple unsuccessful attempts to boot a modern Linux kernel and possible hardware instability, it was determined to be non-cost-effective and non-effort-effective to put more work into running this machine. The system was reclaimed by MFCF where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
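As a sketch, a standard MIB-II query against one of the UPSs might look like this (the community string here is a placeholder, not our real one):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;snmpget -v1 -c public mc-3015-e7-ups-1.cs.uwaterloo.ca sysDescr.0&amp;lt;/pre&amp;gt;&lt;br /&gt;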
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Potassium-Benzoate_Drives&amp;diff=4288</id>
		<title>Potassium-Benzoate Drives</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Potassium-Benzoate_Drives&amp;diff=4288"/>
		<updated>2019-05-21T04:25:21Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;To make it easier to find a drive in Potassium-Benzoate, drives have their serial numbers labelled on the drive trays. Therefore, this table no longer needs to be maintained.&lt;br /&gt;
&lt;br /&gt;
Right now we only have the 4TB WD Gold drives acquired from a MEF proposal in the machine and they should all be covered under warranty until 06/10/2022.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=New_NetApp&amp;diff=4281</id>
		<title>New NetApp</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=New_NetApp&amp;diff=4281"/>
		<updated>2019-03-10T17:21:54Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* music */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;At some point in 2017, CSCF and MFCF donated us their FAS&#039;&#039;&#039;XXXX&#039;&#039;&#039; NetApp filers. These filers are to replace the FAS3000 filers currently in use.&lt;br /&gt;
&lt;br /&gt;
Additionally, since we were approaching maximum disk capacity, the Math Endowment Fund funded a new 24x2TB disk shelf to go with the new filers.&lt;br /&gt;
&lt;br /&gt;
== NetApp Support + Documentation ==&lt;br /&gt;
&lt;br /&gt;
As the filers were decommissioned by both CSCF and MFCF, there is no support for the filers.&lt;br /&gt;
&lt;br /&gt;
Official NetApp documentation is available at https://csclub.uwaterloo.ca/~syscom/netapp-docs/.&lt;br /&gt;
&lt;br /&gt;
At one point, we had access to full information about the NetApp filers on the NetApp support site. At some point, unfortunately, that stopped working. The information provided includes the license keys. We have a copy of the license keys for one of the filers (FS00) but not the other. &#039;&#039;Someone should ask CSCF or MFCF if they have this information recorded somewhere&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Physical Installation ==&lt;br /&gt;
&lt;br /&gt;
Both of the NetApp filers are installed in the MC 3015 machine room. One filer and two disk shelves are located in rack E. The other filer was installed in rack F.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For simplicity, we decided to only use one of the filers. We haven’t decided yet what to do with the other.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== Networking ===&lt;br /&gt;
&lt;br /&gt;
FS00 is connected via two 1 Gbps links to mc-rt-3015-mso-a using LACP, so traffic should be balanced between the two connections. If one of the connections goes down, the NetApp will continue to function with just the one connection.&lt;br /&gt;
&lt;br /&gt;
=== Power ===&lt;br /&gt;
&lt;br /&gt;
It is important that we keep the NetApp filer + disk shelves running as long as possible. At the time of installation, the UPS in rack E (mc-3015-e1-ups1) was dedicated for critical services (networking, network file shares and web hosting).&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
You can SSH into the NetApp from dextrose by running &amp;lt;code&amp;gt;ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oCiphers=+3des-cbc root@fs00.csclub.uwaterloo.ca&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you need information about the NetApp, run &amp;lt;code&amp;gt;sysconfig -a&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
&lt;br /&gt;
=== Modifying &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt; on the NetApp ===&lt;br /&gt;
&lt;br /&gt;
The easiest way to change configuration on the NetApp is to mount its system directory on a different machine (only aspartame or dextrose are allowed to mount it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre lang=&amp;quot;sh&amp;quot;&amp;gt;mkdir /mnt/fs00&lt;br /&gt;
mount -t nfs -o vers=3,sec=sys fs00.csclub.uwaterloo.ca:/vol/vol0 /mnt/fs00&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The NetApp system directory is currently mounted on dextrose, at /mnt/fs00.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Networking ===&lt;br /&gt;
&lt;br /&gt;
The NetApp is configured in VLAN 530 (CSC Storage).&lt;br /&gt;
&lt;br /&gt;
Here is the networking configuration in &amp;lt;code&amp;gt;etc/rc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# create lacp link&lt;br /&gt;
ifgrp create lacp csc_storage -b ip e0a e0b&lt;br /&gt;
ifconfig csc_storage inet 172.19.168.35 netmask 255.255.255.224 mtusize 1500&lt;br /&gt;
ifconfig csc_storage inet6 fd74:6b6a:8eca:4903:c5c::35 prefixlen 64&lt;br /&gt;
route add default 172.19.168.33 1&lt;br /&gt;
route add inet6 default fd74:6b6a:8eca:4903::1 1&lt;br /&gt;
routed on&lt;br /&gt;
options dns.domainname csclub.uwaterloo.ca&lt;br /&gt;
options dns.enable on&lt;br /&gt;
options nis.enable off&lt;br /&gt;
savecore&amp;lt;/pre&amp;gt;&lt;br /&gt;
The CSC DNS servers are configured in &amp;lt;code&amp;gt;etc/hosts&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;nameserver 2620:101:f000:4901:c5c::4&lt;br /&gt;
nameserver 2620:101:f000:7300:c5c::20&lt;br /&gt;
nameserver 129.97.134.4&lt;br /&gt;
nameserver 129.97.18.20&lt;br /&gt;
nameserver 129.97.2.1&lt;br /&gt;
nameserver 129.97.2.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;TODO&#039;&#039;&#039;: The NetApp has a dedicated management port. We should take advantage of this and connect that directly to a machine which only the Systems Committee can access. Configuring this port should disable SSH via the non-management ports (this may need additional configuration).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== Disks ===&lt;br /&gt;
&lt;br /&gt;
There are two disk shelves connected to the FS00 NetApp.&lt;br /&gt;
&lt;br /&gt;
# 14x136GB 10 000RPM FibreChannel disks&lt;br /&gt;
#* This was unused from our old NetApp system and was originally used for testing.&lt;br /&gt;
#* (ztseguin) I can’t remember, but I don’t think all disks are present.&lt;br /&gt;
# DS4243: 24x2TB 7 200RPM SATA disks&lt;br /&gt;
#* Funded by the Math Endowment Fund (MEF)&lt;br /&gt;
#* Purchased from Enterasource in Winter 2018&lt;br /&gt;
&lt;br /&gt;
=== Aggregates ===&lt;br /&gt;
&lt;br /&gt;
All aggregates are configured with RAID-DP.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: any other aggregate on the NetApp is for testing only.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr0&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
NetApp system aggregate. Disks assigned to this aggregate are located on the old disk shelf.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;vol0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr_users&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
Aggregate dedicated to user home directories.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;users&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr_misc&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
Aggregate for miscellaneous purposes.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;music&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;backup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Volumes ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: any other volume on the NetApp is for testing only.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;vol0&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
NetApp system volume.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;users&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For user home directories. Each user is given a quota of 12GB.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 12 hourly, 4 nightly and 2 weekly&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;music&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For music.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 2 nightly and 16 weekly&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;backup&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For backups of LDAP, Kerberos.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 2 nightly and 16 weekly&lt;br /&gt;
&lt;br /&gt;
=== Exporting Volumes ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;In general, &amp;lt;code&amp;gt;sec=sys&amp;lt;/code&amp;gt; should only be exported to MC VLAN 530 (172.19.168.32/27, fd74:6b6a:8eca:4903::/64). This VLAN is only connected to trusted machines (NetApp, CSC servers in the MC 3015 or DC 3558 machine rooms).&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;All other machines should be given &amp;lt;code&amp;gt;sec=krb5p&amp;lt;/code&amp;gt; permissions only.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The NetApp exports are stored in &amp;lt;code&amp;gt;/etc/exports&amp;lt;/code&amp;gt;. If you update the exports, they can be reloaded by running &amp;lt;code&amp;gt;exportfs -r&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
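A hypothetical &amp;lt;code&amp;gt;/etc/exports&amp;lt;/code&amp;gt; entry following the policy above might look like this (Data ONTAP 7-mode syntax; the options shown are illustrative, not our live config):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;/vol/users -sec=sys,rw=172.19.168.32/27,root=172.19.168.32/27&amp;lt;/pre&amp;gt;&lt;br /&gt;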
&lt;br /&gt;
=== Quotas ===&lt;br /&gt;
&lt;br /&gt;
Quotas are configured on the NetApp, in &amp;lt;code&amp;gt;/etc/quotas&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
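For illustration, a default per-user quota entry in &amp;lt;code&amp;gt;/etc/quotas&amp;lt;/code&amp;gt; might look like this (7-mode quota file syntax; treat the fields as a sketch, not our live config):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;* user@/vol/users 12G&amp;lt;/pre&amp;gt;&lt;br /&gt;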
After updating the quotas, the NetApp must be instructed to reload them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre lang=&amp;quot;bash&amp;quot;&amp;gt;# this will work for most quota changes&lt;br /&gt;
quota resize &amp;amp;lt;volume&amp;amp;gt;&lt;br /&gt;
&lt;br /&gt;
# however, some changes might need a full re-initialization of quotas&lt;br /&gt;
#   note: while re-initializing, quotas will not be enforced.&lt;br /&gt;
quota off &amp;amp;lt;volume&amp;amp;gt;&lt;br /&gt;
quota on &amp;amp;lt;volume&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quota Reports ====&lt;br /&gt;
&lt;br /&gt;
Users can view their current usage + quota by running &amp;lt;code&amp;gt;quota -s&amp;lt;/code&amp;gt; on any machine.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee can run a report of everyone’s usage by running &amp;lt;code&amp;gt;quota report&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots ===&lt;br /&gt;
&lt;br /&gt;
Most volumes have snapshots enabled. Snapshots only use space when files contained within them change (as they are copy-on-write).&lt;br /&gt;
&lt;br /&gt;
Snapshots are available in a special directory called &amp;lt;code&amp;gt;.snapshot&amp;lt;/code&amp;gt;. This directory is available everywhere and will not show up in a directory listing (except at the volume root).&lt;br /&gt;
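For example, an older revision of a file can be restored by copying it out of a snapshot (the snapshot name and file paths here are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;cp ~/.snapshot/hourly.0/thesis.tex ~/thesis.tex.restored&amp;lt;/pre&amp;gt;&lt;br /&gt;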
&lt;br /&gt;
Current schedules can be viewed by running &amp;lt;code&amp;gt;snap sched &amp;amp;lt;volume&amp;amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== inodes ===&lt;br /&gt;
&lt;br /&gt;
The number of inodes can be increased with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;maxfiles $VOLUME $NEW_VALUE&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is not possible to decrease the number of inodes.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=New_NetApp&amp;diff=4280</id>
		<title>New NetApp</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=New_NetApp&amp;diff=4280"/>
		<updated>2019-03-10T17:21:32Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Add backup volume&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;At some point in 2017, CSCF and MFCF donated us their FAS&#039;&#039;&#039;XXXX&#039;&#039;&#039; NetApp filers. These filers are to replace the FAS3000 filers currently in use.&lt;br /&gt;
&lt;br /&gt;
Additionally, since we were approaching maximum disk capacity, the Math Endowment Fund funded a new 24x2TB disk shelf to go with the new filers.&lt;br /&gt;
&lt;br /&gt;
== NetApp Support + Documentation ==&lt;br /&gt;
&lt;br /&gt;
As the filers were decommissioned by both CSCF and MFCF, there is no support for the filers.&lt;br /&gt;
&lt;br /&gt;
Official NetApp documentation is available at https://csclub.uwaterloo.ca/~syscom/netapp-docs/.&lt;br /&gt;
&lt;br /&gt;
At one point, we had access to full information about the NetApp filers on the NetApp support site. At some point, unfortunately, that stopped working. The information provided includes the license keys. We have a copy of the license keys for one of the filers (FS00) but not the other. &#039;&#039;Someone should ask CSCF or MFCF if they have this information recorded somewhere&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Physical Installation ==&lt;br /&gt;
&lt;br /&gt;
Both of the NetApp filers are installed in the MC 3015 machine room. One filer and two disk shelves are located in rack E. The other filer was installed in rack F.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;For simplicity, we decided to only use one of the filers. We haven’t decided yet what to do with the other.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== Networking ===&lt;br /&gt;
&lt;br /&gt;
FS00 is connected via two 1 Gbps links to mc-rt-3015-mso-a using LACP, so traffic should be balanced between the two connections. If one of the connections goes down, the NetApp will continue to function with just the one connection.&lt;br /&gt;
&lt;br /&gt;
=== Power ===&lt;br /&gt;
&lt;br /&gt;
It is important that we keep the NetApp filer + disk shelves running as long as possible. At the time of installation, the UPS in rack E (mc-3015-e1-ups1) was dedicated for critical services (networking, network file shares and web hosting).&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
You can SSH into the NetApp from dextrose by running &amp;lt;code&amp;gt;ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oCiphers=+3des-cbc root@fs00.csclub.uwaterloo.ca&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you need information about the NetApp, run &amp;lt;code&amp;gt;sysconfig -a&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
&lt;br /&gt;
=== Modifying &amp;lt;code&amp;gt;/etc&amp;lt;/code&amp;gt; on the NetApp ===&lt;br /&gt;
&lt;br /&gt;
The easiest way to change configuration on the NetApp is to mount its system directory on a different machine (only aspartame or dextrose are allowed to mount it).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre lang=&amp;quot;sh&amp;quot;&amp;gt;mkdir /mnt/fs00&lt;br /&gt;
mount -t nfs -o vers=3,sec=sys fs00.csclub.uwaterloo.ca:/vol/vol0 /mnt/fs00&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The NetApp system directory is currently mounted on dextrose, at /mnt/fs00.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
=== Networking ===&lt;br /&gt;
&lt;br /&gt;
The NetApp is configured in VLAN 530 (CSC Storage).&lt;br /&gt;
&lt;br /&gt;
Here is the networking configuration in &amp;lt;code&amp;gt;etc/rc&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# create lacp link&lt;br /&gt;
ifgrp create lacp csc_storage -b ip e0a e0b&lt;br /&gt;
ifconfig csc_storage inet 172.19.168.35 netmask 255.255.255.224 mtusize 1500&lt;br /&gt;
ifconfig csc_storage inet6 fd74:6b6a:8eca:4903:c5c::35 prefixlen 64&lt;br /&gt;
route add default 172.19.168.33 1&lt;br /&gt;
route add inet6 default fd74:6b6a:8eca:4903::1 1&lt;br /&gt;
routed on&lt;br /&gt;
options dns.domainname csclub.uwaterloo.ca&lt;br /&gt;
options dns.enable on&lt;br /&gt;
options nis.enable off&lt;br /&gt;
savecore&amp;lt;/pre&amp;gt;&lt;br /&gt;
The CSC DNS servers are configured in &amp;lt;code&amp;gt;etc/hosts&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;nameserver 2620:101:f000:4901:c5c::4&lt;br /&gt;
nameserver 2620:101:f000:7300:c5c::20&lt;br /&gt;
nameserver 129.97.134.4&lt;br /&gt;
nameserver 129.97.18.20&lt;br /&gt;
nameserver 129.97.2.1&lt;br /&gt;
nameserver 129.97.2.2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;TODO&#039;&#039;&#039;: The NetApp has a dedicated management port. We should take advantage of this and connect that directly to a machine which only the Systems Committee can access. Configuring this port should disable SSH via the non-management ports (this may need additional configuration).&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
=== Disks ===&lt;br /&gt;
&lt;br /&gt;
There are two disk shelves connected to the FS00 NetApp.&lt;br /&gt;
&lt;br /&gt;
# 14x136GB 10 000RPM FibreChannel disks&lt;br /&gt;
#* This was unused from our old NetApp system and was originally used for testing.&lt;br /&gt;
#* (ztseguin) I can’t remember, but I don’t think all disks are present.&lt;br /&gt;
# DS4243: 24x2TB 7 200RPM SATA disks&lt;br /&gt;
#* Funded by the Math Endowment Fund (MEF)&lt;br /&gt;
#* Purchased from Enterasource in Winter 2018&lt;br /&gt;
&lt;br /&gt;
=== Aggregates ===&lt;br /&gt;
&lt;br /&gt;
All aggregates are configured with RAID-DP.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: any other aggregate on the NetApp is for testing only.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr0&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
NetApp system aggregate. Disks assigned to this aggregate are located on the old disk shelf.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;vol0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr_users&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
Aggregate dedicated to user home directories.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;users&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;aggr_misc&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
Aggregate for miscellaneous purposes.&lt;br /&gt;
&lt;br /&gt;
Volumes:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;music&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;backup&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Volumes ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: any other volume on the NetApp is for testing only.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;vol0&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
NetApp system volume.&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;users&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For user home directories. Each user is given a quota of 12GB.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 12 hourly, 4 nightly and 2 weekly&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;music&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For music.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 2 nightly and 16 weekly&lt;br /&gt;
&lt;br /&gt;
==== &amp;lt;code&amp;gt;backup&amp;lt;/code&amp;gt; ====&lt;br /&gt;
&lt;br /&gt;
For backups of LDAP and Kerberos.&lt;br /&gt;
&lt;br /&gt;
Snapshots:&lt;br /&gt;
&lt;br /&gt;
* 2 nightly and 16 weekly&lt;br /&gt;
&lt;br /&gt;
=== Exporting Volumes ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;In general, &amp;lt;code&amp;gt;sec=sys&amp;lt;/code&amp;gt; should only be exported to MC VLAN 530 (172.19.168.32/27, fd74:6b6a:8eca:4903::/64). This VLAN is only connected to trusted machines (NetApp, CSC servers in the MC 3015 or DC 3558 machine rooms).&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;All other machines should be given &amp;lt;code&amp;gt;sec=krb5p&amp;lt;/code&amp;gt; permissions only.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The NetApp exports are stored in &amp;lt;code&amp;gt;/etc/exports&amp;lt;/code&amp;gt;. If you update the exports, they can be reloaded by running &amp;lt;code&amp;gt;exportfs -r&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
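&lt;br /&gt;
For illustration, a 7-mode &amp;lt;code&amp;gt;/etc/exports&amp;lt;/code&amp;gt; entry following the policy above might look like this (a sketch only, not our actual export list):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# sec=sys for the trusted VLAN, sec=krb5p for everything else&lt;br /&gt;
/vol/users  -sec=sys,rw=172.19.168.32/27,root=172.19.168.32/27,sec=krb5p,rw&amp;lt;/pre&amp;gt;&lt;br /&gt;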
&lt;br /&gt;
=== Quotas ===&lt;br /&gt;
&lt;br /&gt;
Quotas are configured on the NetApp, in &amp;lt;code&amp;gt;/etc/quotas&amp;lt;/code&amp;gt;.&lt;br /&gt;
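&lt;br /&gt;
The file uses the Data ONTAP 7-mode quota format: &amp;lt;code&amp;gt;target type@/vol/volume limit&amp;lt;/code&amp;gt;. A sketch of what entries might look like (the usernames and limits here are hypothetical, not our actual configuration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# default 12GB quota for every user on the users volume&lt;br /&gt;
*            user@/vol/users    12G&lt;br /&gt;
# per-user override (hypothetical)&lt;br /&gt;
exampleuser  user@/vol/users    20G&amp;lt;/pre&amp;gt;&lt;br /&gt;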
&lt;br /&gt;
After updating the quotas, the NetApp must be instructed to reload them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre lang=&amp;quot;bash&amp;quot;&amp;gt;# this will work for most quota changes&lt;br /&gt;
quota resize &amp;amp;lt;volume&amp;amp;gt;&lt;br /&gt;
&lt;br /&gt;
# however, some changes might need a full re-initialization of quotas&lt;br /&gt;
#   note: while re-initializing, quotas will not be enforced.&lt;br /&gt;
quota off &amp;amp;lt;volume&amp;amp;gt;&lt;br /&gt;
quota on &amp;amp;lt;volume&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Quota Reports ====&lt;br /&gt;
&lt;br /&gt;
Users can view their current usage and quota by running &amp;lt;code&amp;gt;quota -s&amp;lt;/code&amp;gt; on any machine.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee can run a report of everyone’s usage by running &amp;lt;code&amp;gt;quota report&amp;lt;/code&amp;gt; on the NetApp.&lt;br /&gt;
&lt;br /&gt;
=== Snapshots ===&lt;br /&gt;
&lt;br /&gt;
Most volumes have snapshots enabled. Snapshots only use space when the files contained within them change, as snapshots are copy-on-write.&lt;br /&gt;
&lt;br /&gt;
Snapshots are available in a special directory called &amp;lt;code&amp;gt;.snapshot&amp;lt;/code&amp;gt;. This directory is available everywhere and will not show up in a directory listing (except at the volume root).&lt;br /&gt;
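&lt;br /&gt;
Restoring a file is simply a copy out of &amp;lt;code&amp;gt;.snapshot&amp;lt;/code&amp;gt;. For example, on a client machine (snapshot names such as &amp;lt;code&amp;gt;hourly.0&amp;lt;/code&amp;gt; depend on the configured schedule, and the file name here is hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# list available snapshots of your home directory&lt;br /&gt;
ls ~/.snapshot&lt;br /&gt;
# copy a file back from the most recent hourly snapshot&lt;br /&gt;
cp ~/.snapshot/hourly.0/thesis.tex ~/thesis.tex&amp;lt;/pre&amp;gt;&lt;br /&gt;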
&lt;br /&gt;
Current schedules can be viewed by running &amp;lt;code&amp;gt;snap sched &amp;amp;lt;volume&amp;amp;gt;&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== inodes ===&lt;br /&gt;
&lt;br /&gt;
The number of inodes can be increased with the command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;maxfiles $VOLUME $NEW_VALUE&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is not possible to decrease the number of inodes.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4276</id>
		<title>LDAP</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=LDAP&amp;diff=4276"/>
		<updated>2019-02-04T23:17:44Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Add username change documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We use [http://www.openldap.org/ OpenLDAP] for directory services. Our primary LDAP server is [[Machine_List#auth1|auth1]] and our secondary LDAP server is [[Machine_List#auth2|auth2]].&lt;br /&gt;
&lt;br /&gt;
=== ehashman&#039;s Guide to Setting up OpenLDAP on Debian ===&lt;br /&gt;
&lt;br /&gt;
Welcome to my nightmare.&lt;br /&gt;
&lt;br /&gt;
==== What is LDAP? ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&#039;&#039;&#039;LDAP:&#039;&#039;&#039; Lightweight Directory Access Protocol&lt;br /&gt;
&lt;br /&gt;
An open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an Internet Protocol (IP) network. — [https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol Wikipedia: LDAP]&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
In this case, &amp;amp;quot;directory&amp;amp;quot; refers to the user directory, much like an old-school Rolodex. Many groups use LDAP to maintain their user directory, including the University (the &amp;amp;quot;WatIAM&amp;amp;quot; identity management system), the Computer Science Club, and even the UW Amateur Radio Club.&lt;br /&gt;
&lt;br /&gt;
This is a guide documenting how to set up LDAP on a Debian Linux system.&lt;br /&gt;
&lt;br /&gt;
==== First steps ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Ensure that openldap is installed on the machine:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# apt-get install slapd ldap-utils&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Debian will do a lot of magic and set up a skeleton LDAP server and get it running. We need to configure that further.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Let&#039;s set up logging before we forget. Create the following files in &amp;lt;code&amp;gt;/var/log&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# mkdir /var/log/ldap&lt;br /&gt;
# touch /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set ownership correctly:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown openldap:openldap /var/log/ldap&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up rsyslog to dump the LDAP logs into &amp;lt;code&amp;gt;/var/log/ldap.log&amp;lt;/code&amp;gt; by adding the following lines:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/rsyslog.conf&lt;br /&gt;
...&lt;br /&gt;
# Grab ldap logs, don&#039;t duplicate in syslog&lt;br /&gt;
local4.*                        /var/log/ldap.log&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Set up log rotation for these by creating the file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/logrotate.d.ldap &amp;lt;code&amp;gt;/etc/logrotate.d/ldap&amp;lt;/code&amp;gt;] with the following contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;/var/log/ldap/*log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 1000&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
    create 0640 openldap adm&lt;br /&gt;
    postrotate&lt;br /&gt;
        if [ -f /var/run/slapd/slapd.pid ]; then&lt;br /&gt;
            /etc/init.d/slapd restart &amp;amp;gt;/dev/null 2&amp;amp;gt;&amp;amp;amp;1&lt;br /&gt;
        fi&lt;br /&gt;
    endscript&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
/var/log/ldap.log {&lt;br /&gt;
    weekly&lt;br /&gt;
    missingok&lt;br /&gt;
    rotate 24&lt;br /&gt;
    compress&lt;br /&gt;
    delaycompress&lt;br /&gt;
    notifempty&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;As of OpenLDAP 2.4, it doesn&#039;t actually create a config file for us. Apparently, this is a &amp;amp;quot;feature&amp;amp;quot;: LDAP maintainers think we should want to set this up via dynamic queries. We don&#039;t, so the first thing we need is our [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/slapd.conf &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;] file.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Building &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt; from scratch =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Get a copy to work with:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# scp uid@auth1.csclub.uwaterloo.ca:/etc/ldap/slapd.conf /etc/ldap/  ## you need CSC root for this&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You&#039;ll want to comment out the TLS lines, and anything referring to Kerberos and access for now. You&#039;ll also want to comment out lines specifically referring to syscom and office staff.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Make sure you remove the reference to &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; as an index, as we&#039;re going to remove this field.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;You&#039;ll also need to generate a root password for the LDAP to bootstrap auth, like so:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slappasswd&lt;br /&gt;
New password: &lt;br /&gt;
Re-enter new password:&lt;br /&gt;
{SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Add this line below &amp;lt;code&amp;gt;rootdn&amp;lt;/code&amp;gt; in the &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;rootpw          {SSHA}longhash&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we want to edit all instances of &amp;amp;quot;csclub&amp;amp;quot; to be &amp;amp;quot;wics&amp;amp;quot; instead, e.g.:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;suffix     &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
rootdn     &amp;amp;quot;cn=root,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, we need to grab all the relevant schemas:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;scp -r uid@auth1.csclub.uwaterloo.ca:/etc/ldap/schema/ /tmp/schemas&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Use the include directives to help you find the ones you need. I noticed we were missing &amp;lt;code&amp;gt;sudo.schema&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;rfc2307bis.schema&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open up the [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/csc.schema &amp;lt;code&amp;gt;csc.schema&amp;lt;/code&amp;gt;] for editing; we&#039;re not using it verbatim. Remove the attributes &amp;lt;code&amp;gt;studentid&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;nonMemberTerm&amp;lt;/code&amp;gt; and the objectclass &amp;lt;code&amp;gt;club&amp;lt;/code&amp;gt;. Also make sure you change the OID so we don&#039;t clash with the CSC. Because we didn&#039;t want to go through the process of requesting a [http://pen.iana.org/pen/PenApplication.page PEN number], we chose arbitrarily to use 26338, which belongs to IWICS Inc.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to can the auto-generated config files, so do that:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Also nuke the auto-generated database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm /var/lib/ldap/__db.*&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Configure the database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# cp /usr/share/slapd/DB_CONFIG /var/lib/ldap/&lt;br /&gt;
# chown openldap:openldap /var/lib/ldap/DB_CONFIG &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now we can generate the new configuration files:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And ensure that the permissions are all set correctly, lest this break something:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;If at this point you get a nasty error, such as&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;5657d4db hdb_db_open: database &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;: db_open(/var/lib/ldap/id2entry.bdb) failed: No such file or directory (2).&lt;br /&gt;
5657d4db backend_startup_one (type=hdb, suffix=&amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;): bi_db_open failed! (2)&lt;br /&gt;
slap_startup failed (test would succeed using the -u switch)&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;Just try restarting slapd, and see if that fixes the problem:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd stop&lt;br /&gt;
# service slapd start&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Congratulations! Your LDAP service is now configured and running.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Getting TLS Up and Running ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now that we have our LDAP service, we&#039;ll want to be able to serve encrypted traffic. This is especially important for any remote access, since binding to LDAP (i.e. sending it a password for auth) occurs over plaintext, and we don&#039;t want to leak our admin password.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Our first step is to copy our SSL certificates into the correct places. Public ones go into &amp;lt;code&amp;gt;/etc/ssl/certs/&amp;lt;/code&amp;gt; and private ones go into &amp;lt;code&amp;gt;/etc/ssl/private/&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Since the LDAP daemon needs to be able to read our private cert, we need to grant LDAP access to the private folder:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# chgrp openldap /etc/ssl/private &lt;br /&gt;
# chmod g+x /etc/ssl/private&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, uncomment the TLS-related settings in &amp;lt;code&amp;gt;slapd.conf&amp;lt;/code&amp;gt;. These are &amp;lt;code&amp;gt;TLSCertificateFile&amp;lt;/code&amp;gt; (the public cert), &amp;lt;code&amp;gt;TLSCertificateKeyFile&amp;lt;/code&amp;gt; (the private key), &amp;lt;code&amp;gt;TLSCACertificateFile&amp;lt;/code&amp;gt; (the intermediate CA cert), and &amp;lt;code&amp;gt;TLSVerifyClient&amp;lt;/code&amp;gt; (set to &amp;amp;quot;allow&amp;amp;quot;).&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# enable TLS connections&lt;br /&gt;
TLSCertificateFile      /etc/ssl/certs/wics-wildcard.crt&lt;br /&gt;
TLSCertificateKeyFile   /etc/ssl/private/wics-wildcard.key&lt;br /&gt;
&lt;br /&gt;
# enable TLS client authentication&lt;br /&gt;
TLSCACertificateFile    /etc/ssl/certs/GlobalSign_Intermediate_Root_SHA256_G2.pem&lt;br /&gt;
TLSVerifyClient         allow&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Update all your LDAP settings:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# rm -rf /etc/openldap/slapd.d/*&lt;br /&gt;
# slaptest -f /etc/ldap/slapd.conf -F /etc/ldap/slapd.d/&lt;br /&gt;
# chown -R openldap:openldap /etc/ldap/slapd.d&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And last, ensure that LDAP will actually serve &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt; by modifying the init script variables in &amp;lt;code&amp;gt;/etc/default/&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# vim /etc/default/slapd&lt;br /&gt;
...&lt;br /&gt;
SLAPD_SERVICES=&amp;amp;quot;ldap:/// ldapi:/// ldaps:///&amp;amp;quot;&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now you can restart the LDAP server:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# service slapd restart&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;And assuming this is successful, test to ensure LDAP is serving on port 636 for &amp;lt;code&amp;gt;ldaps://&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# netstat -ntaup&lt;br /&gt;
Active Internet connections (servers and established)&lt;br /&gt;
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name&lt;br /&gt;
tcp        0      0 0.0.0.0:389             0.0.0.0:*               LISTEN      22847/slapd     &lt;br /&gt;
tcp        0      0 0.0.0.0:636             0.0.0.0:*               LISTEN      22847/slapd &amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Populating the Database ====&lt;br /&gt;
&lt;br /&gt;
Now you&#039;ll need to start adding objects to the database. While we&#039;ll mostly want to do this programmatically, there are a few entries we&#039;ll need to bootstrap.&lt;br /&gt;
&lt;br /&gt;
===== Root Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Start by creating a file [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/tree.ldif &amp;lt;code&amp;gt;tree.ldif&amp;lt;/code&amp;gt;] to create a few necessary &amp;amp;quot;roots&amp;amp;quot; in our LDAP tree, with the contents:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now attempt an LDAP add, using the password you set earlier:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f tree.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Test that everything turned out okay, by performing a query of the entire database:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapsearch -x -h localhost&lt;br /&gt;
# extended LDIF&lt;br /&gt;
#&lt;br /&gt;
# LDAPv3&lt;br /&gt;
# base &amp;amp;lt;dc=wics,dc=uwaterloo,dc=ca&amp;amp;gt; (default) with scope subtree&lt;br /&gt;
# filter: (objectclass=*)&lt;br /&gt;
# requesting: ALL&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
# wics.uwaterloo.ca&lt;br /&gt;
dn: dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: dcObject&lt;br /&gt;
objectClass: organization&lt;br /&gt;
o: Women in Computer Science&lt;br /&gt;
dc: wics&lt;br /&gt;
&lt;br /&gt;
# People, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: People&lt;br /&gt;
&lt;br /&gt;
# Group, wics.uwaterloo.ca&lt;br /&gt;
dn: ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: Group&lt;br /&gt;
&lt;br /&gt;
# search result&lt;br /&gt;
search: 2&lt;br /&gt;
result: 0 Success&lt;br /&gt;
&lt;br /&gt;
# numResponses: 4&lt;br /&gt;
# numEntries: 3&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Users and Groups =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Next, add users to track the current GID and UID. This will save us from querying the entire database every time we make a new user or group. Create this file, [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/nextxid.ldif &amp;lt;code&amp;gt;nextxid.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
cn: nextuid&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
homeDirectory: /dev/null&lt;br /&gt;
&lt;br /&gt;
dn: cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: group&lt;br /&gt;
objectClass: posixGroup&lt;br /&gt;
objectClass: top&lt;br /&gt;
gidNumber: 10000&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;p&amp;gt;You&#039;ll see here that our first GID is 10000 and our first UID is 20000.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them, like you did with the roots of the tree:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f nextxid.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;uid=nextuid,ou=People,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=nextgid,ou=Group,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== Special &amp;lt;code&amp;gt;sudo&amp;lt;/code&amp;gt; Entries =====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;We also need to add a sudoers OU with a defaults object for default sudo settings. We also need entries for syscom, such that members of the syscom group can use sudo on all hosts, and for termcom, whose members can use sudo on only the office terminals. Call this one [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/sudoers.ldif &amp;lt;code&amp;gt;sudoers.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;dn: ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: organizationalUnit&lt;br /&gt;
ou: SUDOers&lt;br /&gt;
&lt;br /&gt;
dn: cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: defaults&lt;br /&gt;
sudoOption: !lecture&lt;br /&gt;
sudoOption: env_reset&lt;br /&gt;
sudoOption: listpw=never&lt;br /&gt;
sudoOption: mailto=&amp;amp;quot;wics-sys@lists.uwaterloo.ca&amp;amp;quot;&lt;br /&gt;
sudoOption: shell_noargs&lt;br /&gt;
&lt;br /&gt;
dn: cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %syscom&lt;br /&gt;
sudoUser: %syscom&lt;br /&gt;
sudoHost: ALL&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&lt;br /&gt;
&lt;br /&gt;
dn: cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: sudoRole&lt;br /&gt;
cn: %termcom&lt;br /&gt;
sudoUser: %termcom&lt;br /&gt;
sudoHost: honk&lt;br /&gt;
sudoHost: hiss&lt;br /&gt;
sudoHost: gosling&lt;br /&gt;
sudoCommand: ALL&lt;br /&gt;
sudoRunAsUser: ALL&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Now add them:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f sudoers.ldif&lt;br /&gt;
Enter LDAP Password:&lt;br /&gt;
adding new entry &amp;amp;quot;ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=defaults,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%syscom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&lt;br /&gt;
&lt;br /&gt;
adding new entry &amp;amp;quot;cn=%termcom,ou=SUDOers,dc=wics,dc=uwaterloo,dc=ca&amp;amp;quot;&amp;lt;/pre&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;Last, add some special local groups via [https://git.uwaterloo.ca/wics/documentation/blob/master/ldap/local-groups.ldif &amp;lt;code&amp;gt;local-groups.ldif&amp;lt;/code&amp;gt;]:&amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;# ldapadd -cxWD cn=root,dc=wics,dc=uwaterloo,dc=ca -f local-groups.ldif&amp;lt;/pre&amp;gt;&lt;br /&gt;
The local groups are special because they usually are present on all systems, but we want to be able to add users to them at the LDAP level. For instance, the audio group controls access to sound equipment, and the adm group controls log read access.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&amp;lt;p&amp;gt;That&#039;s all the entries we have to add manually! Now we can use software for the rest. See [[weo|&amp;lt;code&amp;gt;weo&amp;lt;/code&amp;gt;]] for more details.&amp;lt;/p&amp;gt;&amp;lt;/li&amp;gt;&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Querying LDAP ===&lt;br /&gt;
&lt;br /&gt;
There are many tools available for issuing LDAP queries. Queries should be issued to &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. The search base you almost certainly want is &amp;lt;tt&amp;gt;dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;. Read access is available without authentication; [[Kerberos]] is used to authenticate commands which require it.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h ldap1.csclub.uwaterloo.ca -b dc=csclub,dc=uwaterloo,dc=ca uid=ctdalek&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;tt&amp;gt;-x&amp;lt;/tt&amp;gt; option causes &amp;lt;tt&amp;gt;ldapsearch&amp;lt;/tt&amp;gt; to switch to simple authentication rather than trying to authenticate via SASL (which will fail if you do not have a Kerberos ticket).&lt;br /&gt;
&lt;br /&gt;
The University LDAP server (uwldap.uwaterloo.ca) can be queried in the same way. Again, use &amp;quot;simple authentication&amp;quot;, as read access is available (from on campus) without authentication; SASL authentication will fail without additional parameters.&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
 ldapsearch -x -h uwldap.uwaterloo.ca -b dc=uwaterloo,dc=ca &amp;quot;cn=Prabhakar Ragde&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Replication ===&lt;br /&gt;
&lt;br /&gt;
While &amp;lt;tt&amp;gt;ldap1.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth1|auth1]]) is the LDAP master, an up-to-date replica is available on &amp;lt;tt&amp;gt;ldap2.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt; ([[Machine_List#auth2|auth2]]).&lt;br /&gt;
&lt;br /&gt;
In order to replicate changes from the master, the slave maintains an authenticated connection to the master which provides it with full read access to all changes.&lt;br /&gt;
&lt;br /&gt;
Specifically, &amp;lt;tt&amp;gt;/etc/systemd/system/k5start-slapd.service&amp;lt;/tt&amp;gt; maintains an active Kerberos ticket for &amp;lt;tt&amp;gt;ldap/auth2.csclub.uwaterloo.ca@CSCLUB.UWATERLOO.CA&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/var/run/slapd/krb5cc&amp;lt;/tt&amp;gt;. This is then used to authenticate the slave to the server, who maps this principal to &amp;lt;tt&amp;gt;cn=ldap-slave,dc=csclub,dc=uwaterloo,dc=ca&amp;lt;/tt&amp;gt;, which in turn has full read privileges.&lt;br /&gt;
&lt;br /&gt;
In the event of master failure, all hosts should fail LDAP reads seamlessly over to the slave.&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Changing a user&#039;s username ==&lt;br /&gt;
&lt;br /&gt;
Only a member of the Systems Committee can change a user&#039;s username. &#039;&#039;&#039;At all times, a user&#039;s username must match the user&#039;s username in WatIAM.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
# Edit entries in LDAP (&amp;lt;code&amp;gt;ldapvi -Y GSSAPI&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Find and replace the user&#039;s old username with the new one&lt;br /&gt;
# Change the user&#039;s Kerberos principal (on auth1, &amp;lt;code&amp;gt;renprinc $OLD $NEW&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Move the user&#039;s home directory (on aspartame)&lt;br /&gt;
# Change the user&#039;s csc-general (and csc-industry, if subscribed) email address from $OLD@csclub.uwaterloo.ca to $NEW@csclub.uwaterloo.ca&lt;br /&gt;
#* https://mailman.csclub.uwaterloo.ca/admin/csc-general&lt;br /&gt;
# If the user has vhosts on caffeine, update them to point to their new username&lt;br /&gt;
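&lt;br /&gt;
The first few steps above can be sketched as commands (the home directory path here is a guess for illustration only; check the actual layout on aspartame first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# on auth1: edit LDAP entries, then rename the Kerberos principal&lt;br /&gt;
ldapvi -Y GSSAPI        # find and replace $OLD with $NEW&lt;br /&gt;
renprinc $OLD $NEW&lt;br /&gt;
&lt;br /&gt;
# on aspartame: move the home directory (path is hypothetical)&lt;br /&gt;
mv /users/$OLD /users/$NEW&amp;lt;/pre&amp;gt;&lt;br /&gt;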
&lt;br /&gt;
If the user&#039;s account has been around for a while, and they request it, forward email from their old username to their new one.&lt;br /&gt;
&lt;br /&gt;
# Edit &amp;lt;code&amp;gt;/etc/aliases&amp;lt;/code&amp;gt; on mail. &amp;lt;code&amp;gt;$OLD: $NEW&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;newaliases&amp;lt;/code&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4264</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=4264"/>
		<updated>2018-12-10T00:06:59Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Added information about mirror-dc&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our mirrored projects are stored on one of two ZFS zpools. Each pool consists of 8 drives configured as raidz2, and there is an additional spare drive that can be swapped in in the event of a disk failure.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror1&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror2&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem under one of the two pools. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
&lt;br /&gt;
The synchronization process is run by a Python script called &amp;amp;quot;merlin&amp;amp;quot;, written by a2brenna. The script is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The list of repositories and their configuration (sync frequency, location, etc.) is defined in &amp;lt;code&amp;gt;merlin.py&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/arthur.py sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We run a dedicated SSHD instance on mirror.csclub.uwaterloo.ca:22. It is locked down with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
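&lt;br /&gt;
For example, a hypothetical invocation might look like the following (the host and directory names are illustrative, not a real upstream):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;csc-sync-standard /mirror/root/example rsync.example.org example/&amp;lt;/pre&amp;gt;&lt;br /&gt;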
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Winter 2010, it is generated by a Python script in &amp;lt;code&amp;gt;~mirror/mirror-index&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/make-index&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run at 5:40am on the 14th and 28th of each month. The script can be run manually when needed (for example, when the archive list is updated) by running:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;sudo -u mirror /home/mirror/mirror-index/make-index.py&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This spawns an instance of &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;, which computes the size of each directory. The resulting list is then sorted alphabetically by directory name and returned to the Python script. If any errors occur during this process, the script conservatively chooses to exit rather than risk generating an incorrect index file.&lt;br /&gt;
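&lt;br /&gt;
As a sketch, the size-gathering step (with a scratch directory standing in for the real docroot, and flags matching the &amp;lt;code&amp;gt;duflags&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;) amounts to:&lt;br /&gt;

```shell
# Hedged sketch of make-index's size-gathering step: run du over a
# stand-in docroot, then sort alphabetically by directory name.
mkdir -p /tmp/docroot-demo/apache /tmp/docroot-demo/archlinux
du --human-readable --max-depth=1 /tmp/docroot-demo | sort -k2
```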
&lt;br /&gt;
&amp;lt;code&amp;gt;make-index.py&amp;lt;/code&amp;gt; is configured by means of a [https://yaml.org YAML] file, &amp;lt;code&amp;gt;config.yaml&amp;lt;/code&amp;gt;, in the same directory. Its format is as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre class=&amp;quot;yaml&amp;quot;&amp;gt;docroot: /mirror/root&lt;br /&gt;
duflags: --human-readable --max-depth=1&lt;br /&gt;
output: /mirror/root/index.html&lt;br /&gt;
&lt;br /&gt;
exclude:&lt;br /&gt;
   - include&lt;br /&gt;
   - lost+found&lt;br /&gt;
   - pub&lt;br /&gt;
# (...)&lt;br /&gt;
&lt;br /&gt;
directories:&lt;br /&gt;
  apache:&lt;br /&gt;
    site: apache.org&lt;br /&gt;
    url: http://www.apache.org/&lt;br /&gt;
&lt;br /&gt;
  archlinux:&lt;br /&gt;
    site: archlinux.org&lt;br /&gt;
    url: http://www.archlinux.org/&lt;br /&gt;
&lt;br /&gt;
# (...)&amp;lt;/pre&amp;gt;&lt;br /&gt;
The docroot is the directory which is to be scanned; this will probably always be the mirror root from which the web server serves. &amp;lt;code&amp;gt;duflags&amp;lt;/code&amp;gt; specifies the flags to be passed to &amp;lt;code&amp;gt;du&amp;lt;/code&amp;gt;. This is here so that it&#039;s easy to find and alter. For instance, we could change &amp;lt;code&amp;gt;--human-readable&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;--si&amp;lt;/code&amp;gt; if we ever decided that, like hard disk manufacturers, we want sizes to appear larger than they are. &amp;lt;code&amp;gt;output&amp;lt;/code&amp;gt; defines the file to which the generated index will be written.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;exclude&amp;lt;/code&amp;gt; specifies the list of directories which will not be included in the generated index page (since, by default, all folders are included in the generated index page).&lt;br /&gt;
&lt;br /&gt;
Finally, &amp;lt;code&amp;gt;directories&amp;lt;/code&amp;gt; specifies the list of directories to be listed. The format is fairly straightforward: simply name the directory and provide a site (the display name in the &amp;amp;quot;Project Site&amp;amp;quot; column) and URL. One caveat here is that YAML does not allow tabs for whitespace. Please indent with two spaces to remain consistent with the existing file. Also note that the directory name is case-sensitive, as is always the case on Unix.&lt;br /&gt;
&lt;br /&gt;
The HTML index file itself is generated from &amp;lt;code&amp;gt;index.mako&amp;lt;/code&amp;gt;, a Mako template (which is mostly HTML anyhow). If you can&#039;t figure out how it works, consult the Mako documentation.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU/memory resources used per session (e.g. to limit the resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
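&lt;br /&gt;
Each mirrored project gets a module entry in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;. As a sketch, an entry for a hypothetical project might look like this (the module name and path are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;[example]&lt;br /&gt;
        path = /mirror/root/example&lt;br /&gt;
        comment = Example project&lt;br /&gt;
        read only = yes&amp;lt;/pre&amp;gt;&lt;br /&gt;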
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide their own sync scripts; however, we generally use our custom ones instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#* Find the pool with least current disk usage&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs create cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#* &amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror{1,2}/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror{1,2}/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-dc (run &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate)&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin.py&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/logs/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/logs/transfer.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index/config.yaml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
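&lt;br /&gt;
The relative-symlink requirement in the steps above can be demonstrated in isolation; here a scratch directory stands in for &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; and the project name is hypothetical:&lt;br /&gt;

```shell
# Hedged demo: a relative symlink target keeps resolving correctly even
# when the containing directory later becomes a chroot root, whereas an
# absolute target would break inside the chroot.
mkdir -p /tmp/root-demo/.cscmirror1/example
cd /tmp/root-demo
ln -sf .cscmirror1/example example
readlink example   # the stored target is relative, with no absolute prefix
```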
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived handles the monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
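&lt;br /&gt;
As a sketch, one of these checks corresponds to a keepalived track-script stanza along the following lines (the script path and interval are assumptions; only the weight comes from the numbers above):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;vrrp_script chk_nginx {&lt;br /&gt;
    script &amp;quot;/usr/local/bin/check-nginx&amp;quot;   # e.g. a curl against localhost&lt;br /&gt;
    interval 5&lt;br /&gt;
    weight -20&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;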
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably neither up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=4250</id>
		<title>SSL</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=4250"/>
		<updated>2018-09-26T20:34:33Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Add coffee as a place to install SSL cert&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== GlobalSign ==&lt;br /&gt;
&lt;br /&gt;
The CSC currently has an SSL Certificate from GlobalSign for *.csclub.uwaterloo.ca provided at no cost to us through IST. GlobalSign likes to take a long time to respond to certificate signing requests (CSR) for wildcard certs, so our CSR really needs to be handed off to IST at least 2 weeks in advance. You can do it sooner – the certificate expiry date will be the old expiry date + 1 year (+ a bonus). Having an invalid cert for any length of time leads to terrible breakage, followed by terrible workarounds and prolonged problems.&lt;br /&gt;
&lt;br /&gt;
When the certificate is due to expire in a month or two, syscom should (but apparently doesn&#039;t always) get an email notification. This will include a renewal link. Otherwise, use the [https://uwaterloo.ca/information-systems-technology/about/organizational-structure/information-security-services/certificate-authority/globalsign-signed-x5093-certificates/self-service-globalsign-ssl-certificates IST-CA self service system]. Please keep a copy of the key, CSR and (once issued) certificate in &amp;lt;tt&amp;gt;/home/sysadmin/certs&amp;lt;/tt&amp;gt;. The OpenSSL examples linked there are good for generating a 2048-bit RSA key and a corresponding CSR. It&#039;s probably a good idea to change the private key (as it&#039;s not that much effort anyways). Just make sure your CSR is for &amp;lt;tt&amp;gt;*.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;.&lt;br /&gt;
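&lt;br /&gt;
A minimal sketch of generating a fresh key and CSR with OpenSSL follows; the subject fields other than the CN are illustrative placeholders:&lt;br /&gt;

```shell
# Hedged sketch: generate a new 2048-bit RSA key and a matching CSR for
# *.csclub.uwaterloo.ca. Subject fields besides the CN are placeholders.
openssl genrsa -out csclub-wildcard.key 2048
openssl req -new -key csclub-wildcard.key -out csclub-wildcard.csr \
  -subj "/C=CA/O=Computer Science Club/CN=*.csclub.uwaterloo.ca"
openssl req -in csclub-wildcard.csr -noout -verify
```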
&lt;br /&gt;
At the self-service portal, these options worked in 2013. If you need IST assistance, [mailto:ist-ca@uwaterloo.ca ist-ca@uwaterloo.ca] is the email address you should contact.&lt;br /&gt;
  Products: OrganizationSSL&lt;br /&gt;
  SSL Certificate Type: Wildcard SSL Certificate&lt;br /&gt;
  Validity Period: 1 year&lt;br /&gt;
  Are you switching from a Competitor? No, I am not switching&lt;br /&gt;
  Are you renewing this Certificate? Yes (paste current certificate)&lt;br /&gt;
  30-day bonus: Yes (why not?)&lt;br /&gt;
  Add specific Subject Alternative Names (SANs): No (*.csclub.uwaterloo.ca automatically adds csclub.uwaterloo.ca as a SAN)&lt;br /&gt;
  Enter Certificate Signing Request (CSR): Yes (paste CSR)&lt;br /&gt;
  Contact Information:&lt;br /&gt;
    First Name: Computer Science Club&lt;br /&gt;
    Last Name: Systems Committee&lt;br /&gt;
    Telephone: +1 519 888 4567 x33870&lt;br /&gt;
    Email Address: syscom@csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
== Certificate Location ==&lt;br /&gt;
&lt;br /&gt;
Keep a copy of newly generated certificates in /home/sysadmin/certs on the NFS server (currently [[Machine_List#aspartame|aspartame]]).&lt;br /&gt;
&lt;br /&gt;
Below is a list of places where you&#039;ll need to put the new certificate to keep our services running. The private key (if applicable) should be kept next to the certificate with the extension .key.&lt;br /&gt;
&lt;br /&gt;
* caffeine:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* coffee:/etc/ssl/private/csclub.uwaterloo.ca (for PostgreSQL and MariaDB)&lt;br /&gt;
* mail:/etc/ssl/private/csclub-wildcard.crt (for Apache, Postfix and Dovecot)&lt;br /&gt;
* rt:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* potassium-benzoate:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* auth1:/etc/ssl/private/csclub-wildcard.crt (for slapd)&lt;br /&gt;
* auth2:/etc/ssl/private/csclub-wildcard.crt (for slapd)&lt;br /&gt;
* logstash:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* mattermost:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* load-balancer-0(1|2):/etc/ssl/private/csclub.uwaterloo.ca (for haproxy)&lt;br /&gt;
&lt;br /&gt;
Some services (e.g. Dovecot, Postfix) prefer to have the certificate chain in one file. Concatenate the appropriate intermediate certificate to the end of the certificate and store this as csclub-wildcard-chain.crt.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=4248</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=4248"/>
		<updated>2018-09-04T18:36:26Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Add notice that taurine and sucrose at temporarily unavailable.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d *need* to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a large amount of other services.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently a virtual machine hosted on [[#ginkgo|ginkgo]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [[Apache]]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
* mail was migrated to [[#mail|mail]]&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no bitcoin mining).&lt;br /&gt;
&lt;br /&gt;
Most people use either taurine and clones or (high-fructose-)corn-syrup. hfcs is probably our beefiest machine at the moment if you want to do some heavy computation. Again, if you have a long-running, computationally intensive job, it&#039;s good to [https://en.wikipedia.org/wiki/Nice_(Unix) nice] your process, and possibly let syscom know too.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
* Has IPMI on corn-syrup-ipmi.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is our more powerful version of corn-syrup. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;taurine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This machine is temporarily unavailable. (Sept. 4, 2018)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
* [[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21, 22, 53, 80, 81, 443, 8000 and 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sucrose&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;This machine is temporarily unavailable. (Sept. 4, 2018)&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
sucrose is a [[#taurine|taurine]] clone donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-citrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-nitrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
It is a Sun Fire E2900 from a decommissioned MFCF compute cluster, on loan for an extended period. It has a SPARC architecture and runs OpenBSD, unlike many of our other systems which are x86/x86-64 and Linux/Debian.&lt;br /&gt;
&lt;br /&gt;
It is available for general use. Due to an &amp;quot;interesting&amp;quot; SSH server configuration, Kerberos authentication is &#039;&#039;&#039;required&#039;&#039;&#039; to access this machine. This means that, from a CSC machine, you should run &#039;kinit -p&#039; to obtain credentials before SSH&#039;ing in. From a non-CSC machine, follow the instructions on [[Kerberos#Running_Kerberos_Locally|running Kerberos locally]].&lt;br /&gt;
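&lt;br /&gt;
From a CSC machine, a session might look like the following (the short hostname is assumed to resolve on the CSC network):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;kinit -p&lt;br /&gt;
ssh potassium-nitrate&amp;lt;/pre&amp;gt;&lt;br /&gt;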
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
bit-shifter is an office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel(R) Core(TM)2 Quad CPU    Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
gwem is an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin is an office terminal. It was upgraded in Spring 2014 after an unidentified failure (it previously used a [http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] motherboard).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours is an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (the DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;nullsleep&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep runs MPD for playing music. Control of MPD is available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminals.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
It is named after Gordon Strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Core2 Quad Q8200 @ 2.33GHz&lt;br /&gt;
* 4 GB RAM&lt;br /&gt;
* nVidia GeForce 8600 GTS&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/strombola.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems may only be accessible to members of the [[Systems Committee]] for a variety of reasons, the most common being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
== &#039;&#039;aspartame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
aspartame is a taurine clone donated by CSCF. It is currently our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which has been temporarily moved to [[#dextrose|dextrose]]. The LXC files are still present and must not be started, or else the two copies of auth1 will collide.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;dextrose&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
dextrose is a [[#taurine|taurine]] clone donated by CSCF. It currently hosts [[#mathnews|the mathNEWS server]], [[#auth1|auth1]], [[#mail|mail]], [[#rt|rt]] and [[#munin|munin]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 72GB drives in RAID1 (LVM dextrose)&lt;br /&gt;
* 2 1TB drives in RAID1 (LVM dextrose2)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;auth1&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Container on [[#dextrose|dextrose]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[LDAP]] master&lt;br /&gt;
* [[Kerberos]] master&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;coffee&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on [[#ginkgo|ginkgo]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Database#MySQL|MySQL]]&lt;br /&gt;
* [[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cobalamin&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics. Will act as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
* http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]]&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc that includes them.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
&lt;br /&gt;
* For some reason, the keyboard is terrible. Try to avoid having to use it; it&#039;s doable, but painful. IPMI works now, so we don&#039;t need to bug anyone for physical access anyway.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;auth2&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[LDAP]] slave&lt;br /&gt;
* [[Kerberos]] slave&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Virtualization#Linux_Container|Linux containers]]; see [[#caffeine|caffeine]], [[#mail|mail]], [[#munin|munin]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;mail&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#dextrose|dextrose]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;psilodump&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, host disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;phlogiston&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, host disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sodium-benzoate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;potassium-benzoate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 1 x Intel Xeon E5-2630 (8 cores, 2.40 GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the four campus internet links; on-campus traffic is not limited.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Talks]] mirror&lt;br /&gt;
* [[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;munin&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#dextrose|dextrose]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* [http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;yerba-mate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz dual-core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
* mattermost (under development)&lt;br /&gt;
* shibboleth (under development)&lt;br /&gt;
&lt;br /&gt;
Also used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to syscom-only access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;guayusa&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz dual-core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
Currently used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
* logstash (testing of logstash)&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* block1.cloud&lt;br /&gt;
* object1.cloud&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;ginkgo&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (a 50GbE SFP+ card was also included, which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;biloba&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;fs00&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
fs00 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;fs01&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
fs01 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;goto80&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a small ARM machine we picked up in order to have similar hardware to the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. Located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use the -march=armv5te -mtune=arm926ej-s options with GCC.&lt;br /&gt;
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
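A typical cross-compile invocation using those flags might look like the following (the arm-linux-gnueabi toolchain name is an assumption, not something this page specifies; substitute whatever cross-compiler you have installed):

```
# Hypothetical example: build a program for the TS-7800's ARM926EJ-S core.
# "arm-linux-gnueabi-gcc" is an assumed cross-compiler name.
arm-linux-gnueabi-gcc -march=armv5te -mtune=arm926ej-s -O2 -o hello hello.c
```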
&lt;br /&gt;
== &#039;&#039;binaerpilot&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is attached to corn-syrup in the machine room but is currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;anamanaguchi&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;digital cutter&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;mathnews&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
[[#dextrose|dextrose]] hosts a container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;glomag&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;Lisp machine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
See http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;ginseng&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the NetApp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [http://en.wikipedia.org/wiki/Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29 RAID 10] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;calum&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
The server from back before recorded memory.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;paza&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233 MHz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;romana&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that had been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts only supporting &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sodium-citrate&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1. When the O2 boots, hit F5 at the boot menu and type bootp():.&lt;br /&gt;
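The /proc setting above is a one-shot configuration step on the Linux host serving the boot image; it can be applied like so (requires root):

```
# Disable path-MTU discovery so the O2's bootp/tftp boot works
sysctl -w net.ipv4.ip_no_pmtu_disc=1
```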
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;acesulfame-potassium&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;skynet&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
* [http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
* [http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
* [http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;freebsd&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;rainbowdragoneyes&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. It was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;denardo&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power the machine on/off&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;artificial-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* [http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPSs. Their status can be monitored via [[SNMP]].&lt;br /&gt;
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4247</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4247"/>
		<updated>2018-08-18T19:50:23Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There are also two outages in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC will be down for Aug. 21-30, and services in DC will be down for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
* Take backups of important containers/machines (whole things or just config): auth1, mail, caffeine&lt;br /&gt;
&lt;br /&gt;
=== Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Copy the CSC website to caffeine-dr&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Shut down csclub.cloud components (they won&#039;t really work since not everything is redundant yet)&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame to all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
Note that the University&#039;s core network and external links will be operating with reduced redundancy.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
CSCF will provide some generator power for mirror in MC.&lt;br /&gt;
&lt;br /&gt;
CSCF is also setting up a second node in DC.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
(Note: Aug 18, 2018 - we might be able to power the netapp with generator power. If that&#039;s the case, then websites will be up during the outage)&lt;br /&gt;
&lt;br /&gt;
A copy of the CSC website will be hosted on caffeine-dr. All pages not found on the local machine (including member and club sites) will return a 503 Service Unavailable error page.&lt;br /&gt;
&lt;br /&gt;
Sample status page: [https://www-dr.csclub.uwaterloo.ca/test https://www-dr.csclub.uwaterloo.ca/test]&lt;br /&gt;
&lt;br /&gt;
The following IP addresses should be added to caffeine-dr during the outage to serve the error page for other CSC services:&lt;br /&gt;
&lt;br /&gt;
* caffeine: 129.97.134.17 / 2620:101:f000:4901:c5c::caff:e12e&lt;br /&gt;
* git: 129.97.134.49 / 2620:101:f000:4901:c5c:3eb::49&lt;br /&gt;
* wiki: 129.97.134.44 / 2620:101:f000:4901:c5c:3eb::44&lt;br /&gt;
* munin: 129.97.134.51 / 2620:101:f000:4901:c5c::51&lt;br /&gt;
* prometheus: 129.97.134.15 / 2620:101:f000:4901:c5c::15&lt;br /&gt;
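A sketch of how those addresses could be brought up on caffeine-dr for the outage (the interface name eth0 and the /24 and /64 prefix lengths are assumptions; these are one-shot configuration commands, to be removed after the outage):

```
# Assumed interface name and prefix lengths; substitute caffeine-dr's real ones
ip addr add 129.97.134.17/24 dev eth0                      # caffeine
ip addr add 2620:101:f000:4901:c5c::caff:e12e/64 dev eth0  # caffeine (v6)
ip addr add 129.97.134.49/24 dev eth0                      # git
ip addr add 2620:101:f000:4901:c5c:3eb::49/64 dev eth0     # git (v6)
```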
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage lasts more than a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc that depend on files in their home directories&lt;br /&gt;
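One way to audit that requirement ahead of time is to scan each user's .procmailrc for recipes that mention home-directory paths. This is a hypothetical helper, not existing CSC tooling, and the path patterns it looks for ($HOME, ~, /users/) are assumptions about how home directories are referenced:

```python
# Hypothetical audit helper: flag .procmailrc lines that reference a
# home-directory path, which would break while home dirs are offline.
import re

# Assumed home-directory patterns; adjust for the real filesystem layout.
HOME_PATTERN = re.compile(r"(\$HOME|~|/users/)")

def risky_lines(procmailrc_text):
    """Return (line_number, line) pairs that mention a home-directory path."""
    hits = []
    for n, line in enumerate(procmailrc_text.splitlines(), start=1):
        if line.lstrip().startswith("#"):
            continue  # comments can't break delivery
        if HOME_PATTERN.search(line):
            hits.append((n, line))
    return hits

sample = ":0\n* ^Subject:.*spam\n| $HOME/bin/filter.sh\n"
print(risky_lines(sample))  # flags the recipe that pipes through $HOME
```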
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
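The relevant field is the fourth timer (expire) in the zone's SOA record. All values below are illustrative, not the zone's real ones:

```
; Hypothetical SOA record -- the expire value must be longer than the MC
; outage so the PHY secondary keeps serving the zone after losing the master.
csclub.uwaterloo.ca. IN SOA ns1.csclub.uwaterloo.ca. hostmaster.csclub.uwaterloo.ca. (
        2018081901 ; serial
        3600       ; refresh
        900        ; retry
        1814400    ; expire -- 21 days, comfortably longer than Aug 21-30
        300        ; negative-caching TTL
)
```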
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4246</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4246"/>
		<updated>2018-08-05T04:18:57Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Before Sunday, August 19 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC will be down for Aug. 21-30, and services in DC will be down for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
* Take backups of important containers/machines (whole things or just config): auth1, mail, caffeine&lt;br /&gt;
&lt;br /&gt;
=== Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Copy the CSC website to caffeine-dr&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Shut down csclub.cloud components (they won&#039;t really work since not everything is redundant yet)&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame to all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
CSCF will provide some generator power for mirror in MC.&lt;br /&gt;
&lt;br /&gt;
CSCF is also setting up a second node in DC.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
A copy of the CSC website will be hosted on caffeine-dr. All pages not found on the local machine (including member and club sites) will return a 503 Service Unavailable error page.&lt;br /&gt;
&lt;br /&gt;
Sample status page: [https://www-dr.csclub.uwaterloo.ca/test https://www-dr.csclub.uwaterloo.ca/test]&lt;br /&gt;
&lt;br /&gt;
The following IP addresses should be added to caffeine-dr during the outage to serve the error page for other CSC services:&lt;br /&gt;
&lt;br /&gt;
* caffeine: 129.97.134.17 / 2620:101:f000:4901:c5c::caff:e12e&lt;br /&gt;
* git: 129.97.134.49 / 2620:101:f000:4901:c5c:3eb::49&lt;br /&gt;
* wiki: 129.97.134.44 / 2620:101:f000:4901:c5c:3eb::44&lt;br /&gt;
* munin: 129.97.134.51 / 2620:101:f000:4901:c5c::51&lt;br /&gt;
* prometheus: 129.97.134.15 / 2620:101:f000:4901:c5c::15&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage lasts more than a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc that depend on files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4245</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4245"/>
		<updated>2018-08-05T04:13:33Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: Update web hosting plan&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC will be down for Aug. 21-30, and services in DC will be down for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Copy the CSC website to caffeine-dr&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Shut down csclub.cloud components (they won&#039;t really work since not everything is redundant yet)&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame to all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
CSCF will provide some generator power for mirror in MC.&lt;br /&gt;
&lt;br /&gt;
CSCF is also setting up a second node in DC.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
A copy of the CSC website will be hosted on caffeine-dr. All pages not found on the local machine (including member and club sites) will return a 503 Service Unavailable error page.&lt;br /&gt;
&lt;br /&gt;
Sample status page: [https://www-dr.csclub.uwaterloo.ca/test https://www-dr.csclub.uwaterloo.ca/test]&lt;br /&gt;
&lt;br /&gt;
The following IP addresses should be added to caffeine-dr during the outage to serve the error page for other CSC services:&lt;br /&gt;
&lt;br /&gt;
* caffeine: 129.97.134.17 / 2620:101:f000:4901:c5c::caff:e12e&lt;br /&gt;
* git: 129.97.134.49 / 2620:101:f000:4901:c5c:3eb::49&lt;br /&gt;
* wiki: 129.97.134.44 / 2620:101:f000:4901:c5c:3eb::44&lt;br /&gt;
* munin: 129.97.134.51 / 2620:101:f000:4901:c5c::51&lt;br /&gt;
* prometheus: 129.97.134.15 / 2620:101:f000:4901:c5c::15&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage lasts more than a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc that depend on files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4244</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4244"/>
		<updated>2018-08-05T01:58:36Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC will be down for Aug. 21-30, and services in DC will be down for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Sunday, August 19 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Shut down csclub.cloud components (they won&#039;t really work since not everything is redundant yet)&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
CSCF will provide some generator power for mirror in MC.&lt;br /&gt;
&lt;br /&gt;
CSCF is also setting up a second node in DC.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
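The expiry value lives in the zone&#039;s SOA record (fields: serial, refresh, retry, expire, minimum). A minimal sketch of extracting it from a dig-style answer, assuming the usual +short field order (the record values below are made up for illustration, not the real zone data):&lt;br /&gt;

```shell
# Parse the expire field (6th value) from a dig +short SOA answer.
# In practice the string would come from: dig +short SOA csclub.uwaterloo.ca
# Field order: mname rname serial refresh retry expire minimum
soa="ns1.example.org. hostmaster.example.org. 2018082101 3600 600 1209600 300"
expire=$(echo "$soa" | awk '{print $6}')
echo "expire: ${expire}s (~$((expire / 86400)) days)"
```

If the expire value is shorter than the outage window, the secondary in PHY would discard the zone before the MC master comes back, so it should be raised beforehand.&lt;br /&gt;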
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4243</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4243"/>
		<updated>2018-08-02T22:51:03Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Mirror */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-30, and services in DC for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
CSCF will provide some generator power for mirror in MC.&lt;br /&gt;
&lt;br /&gt;
CSCF is also setting up a second node in DC.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4242</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4242"/>
		<updated>2018-08-02T22:50:19Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Thursday, August 30.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-30, and services in DC for two days during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4241</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4241"/>
		<updated>2018-07-19T21:39:40Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Impact */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Wednesday, August 29.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-29, and services in DC for one day during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4240</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4240"/>
		<updated>2018-07-19T21:39:30Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Wednesday, August 29.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-24, and services in DC for one day during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4239</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4239"/>
		<updated>2018-07-16T23:48:58Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-24, and services in DC for one day during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4238</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4238"/>
		<updated>2018-07-16T23:47:21Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Impact ==&lt;br /&gt;
&lt;br /&gt;
All services in MC are affected for Aug. 21-24, and services in DC for one day during that window.&lt;br /&gt;
&lt;br /&gt;
Services in PHY are not affected (PHY hosts only the redundant DNS and authentication services; there are no other services, general-use or otherwise, in PHY). PHY is also on a different network than MC and DC.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference scripts, programs, etc. in their .procmailrc that depend on files in their home directory&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be possible.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4237</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4237"/>
		<updated>2018-07-16T23:03:29Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
&lt;br /&gt;
Our network is announced from both MC and DC. No impact to networking is expected when MC goes offline.&lt;br /&gt;
&lt;br /&gt;
DHCP is hosted in MC (on caffeine). This is not strictly required as our servers use static IPs, but we can move it to DC so it&#039;s available.&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
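One way to serve the outage page with a 503 (a hypothetical fragment; this assumes an nginx front end and an example document root, neither of which is confirmed above):&lt;br /&gt;
&lt;br /&gt;
```nginx
# Answer every request with 503 and serve a static outage page as the
# error body. /var/www/outage is an assumed path.
server {
    listen 80 default_server;
    root /var/www/outage;
    error_page 503 /outage.html;
    location / {
        return 503;
    }
    location = /outage.html {
        internal;   # only reachable via the error_page internal redirect
    }
}
```
&lt;br /&gt;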
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
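The SOA expiry can be checked before the outage with dig (assuming dig is available; the MC node is down roughly four days, about 345,600 seconds, so the expiry must comfortably exceed that &amp;mdash; the common BIND default of 1209600, two weeks, would be fine):&lt;br /&gt;
&lt;br /&gt;
```shell
# Print the expiry field of the zone's SOA record. +short output is:
# mname rname serial refresh retry expire minimum, so expire is $6.
dig +short SOA csclub.uwaterloo.ca | awk '{print $6}'
```
&lt;br /&gt;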
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4236</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4236"/>
		<updated>2018-07-16T22:57:41Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* DNS */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, so we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4235</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4235"/>
		<updated>2018-07-16T22:55:31Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, and we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4234</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4234"/>
		<updated>2018-07-16T22:51:29Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Send notifications (and reminders) to csc-general&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, and we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4233</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4233"/>
		<updated>2018-07-16T22:47:53Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Monday, August 20 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
* Revoke access to home directories on aspartame from all machines&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, and we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4232</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4232"/>
		<updated>2018-07-16T22:46:57Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Before Monday, August 20 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
* Take backups of system passwords, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, and we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4231</id>
		<title>August 2018 Power Outage Plan</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=August_2018_Power_Outage_Plan&amp;diff=4231"/>
		<updated>2018-07-16T22:46:40Z</updated>

		<summary type="html">&lt;p&gt;Ztseguin: /* Timeline */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There is a planned power outage in MC from Tuesday, August 21 to Friday, August 24.&lt;br /&gt;
&lt;br /&gt;
There is also a one-day outage in DC, which will complicate keeping services up during the entire outage.&lt;br /&gt;
&lt;br /&gt;
== Timeline ==&lt;br /&gt;
&lt;br /&gt;
=== Before Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Complete plan for outage&lt;br /&gt;
* Move equipment to DC (if necessary)&lt;br /&gt;
* Take backups of LDAP and Kerberos, and download offsite&lt;br /&gt;
&lt;br /&gt;
=== Monday, August 20 ===&lt;br /&gt;
&lt;br /&gt;
* Shut down general-use computing services&lt;br /&gt;
* Transfer computing services to redundant / temporary systems&lt;br /&gt;
&lt;br /&gt;
=== Sometime during the outage window ===&lt;br /&gt;
&lt;br /&gt;
* Shut down DC systems before the building outage&lt;br /&gt;
&lt;br /&gt;
=== After the outage ===&lt;br /&gt;
&lt;br /&gt;
* Begin restoring normal services&lt;br /&gt;
&lt;br /&gt;
== Systems ==&lt;br /&gt;
&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&lt;br /&gt;
TODO. Syscom is currently working with CSCF to identify a plan for mirror.&lt;br /&gt;
&lt;br /&gt;
=== Website ===&lt;br /&gt;
&lt;br /&gt;
The CSC website is a static site, and will be straightforward to maintain during the outage.&lt;br /&gt;
&lt;br /&gt;
All user and club sites are hosted in home directories (which are unavailable), so we will display an outage page (with a 503 status code).&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&lt;br /&gt;
Since the outage is for a week, we need to maintain email services during the outage. An initial plan by ztseguin and jxpryde:&lt;br /&gt;
&lt;br /&gt;
* rsync users&#039; .forward, .procmailrc and .maildir to a local directory on mail, allowing mail to continue as expected&lt;br /&gt;
&lt;br /&gt;
However, this requires:&lt;br /&gt;
&lt;br /&gt;
* Users must not reference any scripts, programs, etc. in their .procmailrc files that point to files in their home directories&lt;br /&gt;
&lt;br /&gt;
=== Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
While the MC node is down, the PHY node can continue to answer authentication requests. However, updating membership and changing passwords will not be available.&lt;br /&gt;
&lt;br /&gt;
We may consider moving auth1 to DC for the outage.&lt;br /&gt;
&lt;br /&gt;
=== DNS ===&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s DNS service is located in both MC and PHY.&lt;br /&gt;
&lt;br /&gt;
We may consider moving the MC DNS node to DC, but this is not necessary to maintain services during the outage.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;NOTE: The MC node is the master node, and we will need to ensure that the SOA record contains a long enough expiry time so the PHY doesn&#039;t stop serving zones.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Additional Resources ==&lt;br /&gt;
&lt;br /&gt;
* https://uwaterloo.ca/information-systems-technology/news/important-significant-interruption-service-delivery-due&lt;br /&gt;
* https://istns.uwaterloo.ca/uwna/index.php?s=3561&lt;/div&gt;</summary>
		<author><name>Ztseguin</name></author>
	</entry>
</feed>