<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.csclub.uwaterloo.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Y266shen</id>
	<title>CSCWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.csclub.uwaterloo.ca/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Y266shen"/>
	<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/Special:Contributions/Y266shen"/>
	<updated>2026-05-14T07:34:58Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.5</generator>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Nextcloud&amp;diff=5478</id>
		<title>Nextcloud</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Nextcloud&amp;diff=5478"/>
		<updated>2025-11-21T19:21:10Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Migration and debian 13 upgrade&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Installation Details ==&lt;br /&gt;
=== Container setup ===&lt;br /&gt;
See https://wiki.csclub.uwaterloo.ca/Systemd-nspawn .&lt;br /&gt;
&lt;br /&gt;
=== Inside the container ===&lt;br /&gt;
Use &amp;lt;code&amp;gt;machinectl shell nextcloud&amp;lt;/code&amp;gt; to obtain a root shell inside the container.&lt;br /&gt;
&lt;br /&gt;
=== Network configuration ===&lt;br /&gt;
Add IPv4 and IPv6 address to &amp;lt;code&amp;gt;/etc/network/interfaces&amp;lt;/code&amp;gt; as usual.&lt;br /&gt;
&lt;br /&gt;
=== Install server software ===&lt;br /&gt;
Grab the essentials first.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;apt install apt-transport-https curl unzip&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We use PHP 8.4 from the upstream Debian trixie (13) repository.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;apt install nginx php8.4-fpm php8.4-curl php8.4-gd php8.4-mbstring php8.4-zip php8.4-mysql php8.4-bz2 php8.4-intl php8.4-redis php8.4-imagick ffmpeg php8.4-bcmath php8.4-ldap php8.4-apcu php8.4-xml php8.4-gmp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Setup Nginx ===&lt;br /&gt;
See full configuration at https://docs.nextcloud.com/server/latest/admin_manual/installation/nginx.html. Change the PHP upstream to this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;nginx&amp;quot;&amp;gt;upstream php-handler {&lt;br /&gt;
    server unix:/var/run/php/php8.4-fpm.sock;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Also change &amp;lt;code&amp;gt;root&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;/var/www/nextcloud&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We will use the Mozilla intermediate SSL configuration. See https://ssl-config.mozilla.org/. As for the SSL certificate, we will use our wildcard &amp;lt;code&amp;gt;csclub.uwaterloo.ca&amp;lt;/code&amp;gt; certificate. Copy it from xylitol.&lt;br /&gt;
&lt;br /&gt;
=== Database setup ===&lt;br /&gt;
We&#039;ll use the MariaDB instance on coffee. Create a database and a database user for Nextcloud there. Make sure it allows connections from the IP address of the Nextcloud container.&lt;br /&gt;
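&lt;br /&gt;
A sketch of the corresponding statements (the user name, password, and container address below are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;sql&amp;quot;&amp;gt;CREATE DATABASE nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;&lt;br /&gt;
CREATE USER &#039;nextcloud&#039;@&#039;CONTAINER_IP&#039; IDENTIFIED BY &#039;REDACTED&#039;;&lt;br /&gt;
GRANT ALL PRIVILEGES ON nextcloud.* TO &#039;nextcloud&#039;@&#039;CONTAINER_IP&#039;;&lt;br /&gt;
FLUSH PRIVILEGES;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;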
&lt;br /&gt;
=== Install Nextcloud ===&lt;br /&gt;
Download the zip from https://nextcloud.com/install/ (find the Archive version), extract it to &amp;lt;code&amp;gt;/var/www/nextcloud&amp;lt;/code&amp;gt;, and change the owner of the folder to &amp;lt;code&amp;gt;www-data:www-data&amp;lt;/code&amp;gt;.&lt;br /&gt;
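&lt;br /&gt;
For example (a sketch; grab the exact download link from the install page, the one below assumes the &amp;quot;latest&amp;quot; archive):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;cd /tmp&lt;br /&gt;
curl -LO https://download.nextcloud.com/server/releases/latest.zip&lt;br /&gt;
unzip latest.zip -d /var/www   # creates /var/www/nextcloud&lt;br /&gt;
chown -R www-data:www-data /var/www/nextcloud&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;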
&lt;br /&gt;
The DNS should be configured by now. Go to https://files.csclub.uwaterloo.ca; the installation page should be up. Fill in the details to finish the installation.&lt;br /&gt;
&lt;br /&gt;
=== Setup cron job ===&lt;br /&gt;
We use the &amp;lt;code&amp;gt;systemd&amp;lt;/code&amp;gt; approach. See https://docs.nextcloud.com/server/stable/admin_manual/configuration_server/background_jobs_configuration.html#systemd.&lt;br /&gt;
&lt;br /&gt;
Basically, set up a service and a timer, then enable the timer.&lt;br /&gt;
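&lt;br /&gt;
A minimal sketch of the two units (the unit names are our choice; the linked documentation has the canonical version):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;ini&amp;quot;&amp;gt;# /etc/systemd/system/nextcloudcron.service&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Nextcloud cron.php job&lt;br /&gt;
&lt;br /&gt;
[Service]&lt;br /&gt;
User=www-data&lt;br /&gt;
ExecStart=/usr/bin/php -f /var/www/nextcloud/cron.php&lt;br /&gt;
&lt;br /&gt;
# /etc/systemd/system/nextcloudcron.timer&lt;br /&gt;
[Unit]&lt;br /&gt;
Description=Run Nextcloud cron.php every 5 minutes&lt;br /&gt;
&lt;br /&gt;
[Timer]&lt;br /&gt;
OnBootSec=5min&lt;br /&gt;
OnUnitActiveSec=5min&lt;br /&gt;
Unit=nextcloudcron.service&lt;br /&gt;
&lt;br /&gt;
[Install]&lt;br /&gt;
WantedBy=timers.target&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then run &amp;lt;code&amp;gt;systemctl enable --now nextcloudcron.timer&amp;lt;/code&amp;gt;.&lt;br /&gt;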
&lt;br /&gt;
=== LDAP and OIDC setup ===&lt;br /&gt;
In our setup, OIDC will be used for SSO (Single Sign-On) only. User and group information will then be handled via the LDAP plugin. This ensures users can sign in with their WatIAM credentials (just like Quest and Learn), and that their group information is correctly assigned in Nextcloud.&lt;br /&gt;
&lt;br /&gt;
First, set up the LDAP plugin. Enable &amp;lt;code&amp;gt;LDAP/AD integration&amp;lt;/code&amp;gt; in Apps, then navigate to Settings-Administration-LDAP/AD integration (must be admin). Fill in the information as follows:&lt;br /&gt;
&lt;br /&gt;
* Server&lt;br /&gt;
** ldaps://ldap1.csclub.uwaterloo.ca 636&lt;br /&gt;
** User DN &amp;amp;amp;&amp;amp;amp; Password: blank&lt;br /&gt;
** Base DN: dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
* User&lt;br /&gt;
** LDAP Query: (&amp;amp;amp;(objectClass=member)(!(shadowExpire=1)))&lt;br /&gt;
* Login attributes&lt;br /&gt;
** LDAP Query: (&amp;amp;amp;(|(objectclass=member))(uid=%uid))&lt;br /&gt;
* Group&lt;br /&gt;
** LDAP Query: (&amp;amp;amp;(objectClass=posixGroup)(uniqueMember=*))&lt;br /&gt;
* Advanced&lt;br /&gt;
** Backup (Replica) Host: ldaps://ldap2.csclub.uwaterloo.ca&lt;br /&gt;
** Base User Tree: ou=People,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
** Base Group Tree: ou=Group,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
** Group-Member association: uniqueMember&lt;br /&gt;
** Special Attributes: mailLocalAddress&lt;br /&gt;
** Internal Username: uid&lt;br /&gt;
&lt;br /&gt;
If everything goes okay, CSC users should appear in Nextcloud&#039;s user list.&lt;br /&gt;
&lt;br /&gt;
For OIDC, we use the [https://github.com/nextcloud/user_oidc user_oidc] plugin maintained by Nextcloud. First, make sure OIDC won&#039;t create any user accounts (we use LDAP for that) by adding this to &amp;lt;code&amp;gt;config.php&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  &#039;user_oidc&#039; =&amp;gt; [&lt;br /&gt;
    &#039;auto_provision&#039; =&amp;gt; false,&lt;br /&gt;
  ]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, create a client in Keycloak (see https://github.com/pulsejet/nextcloud-oidc-login#usage-with-keycloak and [[Keycloak]]), navigate to the &amp;quot;OpenID Connect&amp;quot; tab on the admin panel, and add a registered provider. You should only need to fill in &amp;quot;Client ID&amp;quot;, &amp;quot;Client secret&amp;quot; and &amp;quot;Discovery endpoint&amp;quot;, taken from Keycloak.&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;Speed&#039;&#039; ===&lt;br /&gt;
&lt;br /&gt;
==== Memory caching ====&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;redis&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;APCu&amp;lt;/code&amp;gt; are used for caching. See https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html. Note that since it&#039;s a local setup, we use a UNIX socket to connect to Redis.&lt;br /&gt;
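&lt;br /&gt;
In &amp;lt;code&amp;gt;config.php&amp;lt;/code&amp;gt; this looks roughly like the following (the socket path assumes Debian&#039;s default redis-server packaging):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;php&amp;quot;&amp;gt;&#039;memcache.local&#039; =&amp;gt; &#039;\OC\Memcache\APCu&#039;,&lt;br /&gt;
&#039;memcache.locking&#039; =&amp;gt; &#039;\OC\Memcache\Redis&#039;,&lt;br /&gt;
&#039;redis&#039; =&amp;gt; [&lt;br /&gt;
  &#039;host&#039; =&amp;gt; &#039;/var/run/redis/redis-server.sock&#039;,&lt;br /&gt;
  &#039;port&#039; =&amp;gt; 0,&lt;br /&gt;
],&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;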
&lt;br /&gt;
&amp;lt;span id=&amp;quot;dedicated-push-notification-server&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
==== Dedicated push notification server ====&lt;br /&gt;
&lt;br /&gt;
There&#039;s a dedicated nextcloud client push notification server available, which should drastically reduce server load if a lot of people are using the Nextcloud client.&lt;br /&gt;
&lt;br /&gt;
See https://github.com/nextcloud/notify_push. To set it up:&lt;br /&gt;
&lt;br /&gt;
# Install &amp;amp;quot;Client Push&amp;amp;quot; app from Nextcloud app store&lt;br /&gt;
# Create and enable a systemd service&lt;br /&gt;
# Add reverse proxy configuration to nextcloud&#039;s Nginx config file&lt;br /&gt;
&lt;br /&gt;
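Step 3 can be sketched as follows (port 7867 is notify_push&#039;s default; see its README for the authoritative snippet):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;nginx&amp;quot;&amp;gt;location ^~ /push/ {&lt;br /&gt;
    proxy_pass http://127.0.0.1:7867/;&lt;br /&gt;
    proxy_http_version 1.1;&lt;br /&gt;
    proxy_set_header Upgrade $http_upgrade;&lt;br /&gt;
    proxy_set_header Connection &amp;quot;Upgrade&amp;quot;;&lt;br /&gt;
    proxy_set_header Host $host;&lt;br /&gt;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;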
You should test the setup using https://github.com/nextcloud/notify_push/tree/main/test_client. For those unfamiliar with Rust: just clone it and run &amp;lt;code&amp;gt;cargo build --release&amp;lt;/code&amp;gt;. You&#039;ll find the binary at &amp;lt;code&amp;gt;target/release&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span id=&amp;quot;miscellaneous&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
&lt;br /&gt;
* Email setup&lt;br /&gt;
** Just fill it in admin panel.&lt;br /&gt;
* Theme&lt;br /&gt;
** We use https://github.com/mwalbeck/nextcloud-breeze-dark for some KDE vibe. Oh, change the icon too.&lt;br /&gt;
&lt;br /&gt;
== Common Issues ==&lt;br /&gt;
&lt;br /&gt;
=== 409: Resource in conflict, when auto-uploading ===&lt;br /&gt;
Normally caused by Nextcloud not being able to create a folder for whatever reason; you can just manually create the folder.&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
No-longer-valid information under here.&lt;br /&gt;
&lt;br /&gt;
=== General Administration Notes (2022) ===&lt;br /&gt;
* To use the admin account, use https://files.csclub.uwaterloo.ca/login?direct=1&amp;amp;noredir=1 with the admin credentials found on the syscom machine.&lt;br /&gt;
&lt;br /&gt;
=== Storage setup (2022) ===&lt;br /&gt;
NFS mount. Add this to &amp;lt;code&amp;gt;/etc/fstab&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;fs00.csclub.uwaterloo.ca:/nextcloud /var/lib/machines/nextcloud/data nfs bg,vers=3,sec=sys,nosuid,nodev 0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;container-setup&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installing PHP (2022) ===&lt;br /&gt;
Nextcloud recommends PHP 8.0 (it has JIT support, performance go brrr), but Debian bullseye doesn&#039;t have it in the official repository, so add a third-party repository.&lt;br /&gt;
&lt;br /&gt;
Create &amp;lt;code&amp;gt;/etc/apt/sources.list.d/sury-php.list&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;deb https://packages.sury.org/php/ bullseye main&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And obtain the repository signing key.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;curl -o /etc/apt/trusted.gpg.d/sury-php.gpg https://packages.sury.org/php/apt.gpg&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Finally we can install server software packages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;apt install nginx php8.0-fpm php8.0-curl php8.0-dom php8.0-gd php8.0-mbstring php8.0-zip php8.0-mysql php8.0-bz2 php8.0-intl php8.0-redis php8.0-imagick ffmpeg php8.0-bcmath php8.0-ldap php8.0-apcu libmagickcore-6.q16-6-extra/stable&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;setup-nginx&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== OIDC setup (2022) ===&lt;br /&gt;
We use https://github.com/pulsejet/nextcloud-oidc-login for OIDC integration with Keycloak.&lt;br /&gt;
&lt;br /&gt;
First setup Keycloak. See https://github.com/pulsejet/nextcloud-oidc-login#usage-with-keycloak.&lt;br /&gt;
&lt;br /&gt;
Then install the plugin in Nextcloud and edit Nextcloud&#039;s config file at &amp;lt;code&amp;gt;/var/www/nextcloud/config/config.php&amp;lt;/code&amp;gt;. Here are some highlights.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;php&amp;quot;&amp;gt;&#039;oidc_login_client_id&#039; =&amp;gt; &#039;nextcloud&#039;,&lt;br /&gt;
&#039;oidc_login_client_secret&#039; =&amp;gt; &#039;REDACTED&#039;,&lt;br /&gt;
&#039;oidc_login_provider_url&#039; =&amp;gt; &#039;https://keycloak.csclub.uwaterloo.ca/auth/realms/csc&#039;,&lt;br /&gt;
&#039;oidc_login_end_session_redirect&#039; =&amp;gt; true,&lt;br /&gt;
&#039;oidc_login_logout_url&#039; =&amp;gt; &#039;https://files.csclub.uwaterloo.ca/apps/oidc_login/oidc&#039;,&lt;br /&gt;
&#039;oidc_login_auto_redirect&#039; =&amp;gt; true,&lt;br /&gt;
&#039;oidc_login_redir_fallback&#039; =&amp;gt; true,&lt;br /&gt;
&#039;oidc_login_attributes&#039; =&amp;gt;&lt;br /&gt;
array (&lt;br /&gt;
  &#039;id&#039; =&amp;gt; &#039;preferred_username&#039;,&lt;br /&gt;
  &#039;mail&#039; =&amp;gt; &#039;email&#039;,&lt;br /&gt;
  &#039;ldap_uid&#039; =&amp;gt; &#039;preferred_username&#039;,&lt;br /&gt;
),&lt;br /&gt;
&#039;oidc_login_webdav_enabled&#039; =&amp;gt; true,&lt;br /&gt;
&#039;oidc_login_disable_registration&#039; =&amp;gt; false,&lt;br /&gt;
&#039;oidc_login_proxy_ldap&#039; =&amp;gt; true,&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5476</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5476"/>
		<updated>2025-11-14T02:34:43Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Hardware Infrastructure (the bare metals) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Club Operation ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos |Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Systems Documentation ==&lt;br /&gt;
=== Introductions ===&lt;br /&gt;
Start here if you have no clue how a subsystem works.&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Intro to Authentication]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[Decommissioned Machines]]&lt;br /&gt;
* [[Filer]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[CUDA]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Matrix]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Proxmox]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[CodeyBot]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Immich]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5472</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5472"/>
		<updated>2025-11-11T15:10:37Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: minor typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 4 major parts:&lt;br /&gt;
; LDAP : directory service. stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
; Kerberos (krb5) : authentication service. stores passwords, provides authentication for user logins/inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
; pyceo : CSC&#039;s home-grown frontend for interacting with the user directory and passwords&lt;br /&gt;
; Keycloak : SSO support for web applications, allows you to use WatIAM/CSC OTP to log into web-based services&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based Active Directory system. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[LDAP]]&lt;br /&gt;
&lt;br /&gt;
LDAP itself is just a protocol. In practice, there are 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All servers in CSC, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create/delete an account for every member. LDAP is also responsible for tracking which terms a member has a valid membership for, and that&#039;s how we implement the general-use servers&#039; membership-based access control.&lt;br /&gt;
&lt;br /&gt;
In essence, it&#039;s just a database that keeps track of a bunch of unique entities (called Distinguished Name, or DN in LDAP world) and each entity can have a bunch of attributes associated with them. Here&#039;s the LDAP entry for our club mascot, C.T. Dalek:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dn: uid=ctdalek,ou=People,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
uid: ctdalek&lt;br /&gt;
homeDirectory: /users/ctdalek&lt;br /&gt;
cn: Calum T. Dalek&lt;br /&gt;
gecos: Calum T. Dalek,MC 3036,,,0&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
description: Prototypical Member Account&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: member&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: shadowAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: inetLocalMailRecipient&lt;br /&gt;
loginShell: /bin/false&lt;br /&gt;
userPassword: {SASL}ctdalek@CSCLUB.UWATERLOO.CA&lt;br /&gt;
term: f2016&lt;br /&gt;
term: f2017&lt;br /&gt;
term: f2018&lt;br /&gt;
givenName: Calum&lt;br /&gt;
sn: Dalek&lt;br /&gt;
mailLocalAddress: ctdalek@csclub.uwaterloo.ca&lt;br /&gt;
program: Alumni&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It&#039;s entirely possible to do all of this in a NoSQL database, but LDAP is a standard and almost all SSO-capable systems can speak it, so that&#039;s why we use it.&lt;br /&gt;
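&lt;br /&gt;
You can query entries like the one above with &amp;lt;code&amp;gt;ldapsearch&amp;lt;/code&amp;gt; (from the &amp;lt;code&amp;gt;ldap-utils&amp;lt;/code&amp;gt; package); a sketch:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;# anonymous read of a single user entry&lt;br /&gt;
ldapsearch -x -H ldaps://ldap1.csclub.uwaterloo.ca \&lt;br /&gt;
    -b &amp;quot;ou=People,dc=csclub,dc=uwaterloo,dc=ca&amp;quot; &amp;quot;(uid=ctdalek)&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;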
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is campus-network only), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket, and you won&#039;t need to enter your password any more because your SSH client will send the ticket first and it will be accepted as proof of your identity.&lt;br /&gt;
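&lt;br /&gt;
A sketch of that flow (the hostname is a placeholder, and GSSAPI must be enabled in your SSH client):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;kinit ctdalek@CSCLUB.UWATERLOO.CA    # prompts for your password once&lt;br /&gt;
klist                                # verify the ticket was granted&lt;br /&gt;
# with &amp;quot;GSSAPIAuthentication yes&amp;quot; in ~/.ssh/config:&lt;br /&gt;
ssh somehost.csclub.uwaterloo.ca     # no password prompt; the ticket is used&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;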
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; ticket installed on them, and they use this to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you can use one single home folder on all of the CSC machines.&lt;br /&gt;
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[CEO]]&lt;br /&gt;
&lt;br /&gt;
Since editing LDAP databases is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo does the autofilling and sanity checks for day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC Cloud; it is now more than just a frontend for LDAP/Krb5, but rather a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
For security reasons, the university requires us to implement 2FA (two-factor authentication) in some way for all of our services. For SSH we did it via [https://duo.com/docs/duounix pam_duo], and for the web we use [https://www.keycloak.org/ Keycloak]. Keycloak behaves like an adapter: it provides the &amp;quot;web&amp;quot; way of authentication (i.e. OpenID Connect) using data from the &amp;quot;UNIX&amp;quot; way of authentication (LDAP, Krb5), and provides additional security features like OTP (one-time password).&lt;br /&gt;
&lt;br /&gt;
A special thing we do with Keycloak is use it as an identity broker for WatIAM, which allows you to log in via the university&#039;s WatIAM login portal and then use that session to log into CSC services without another password/OTP prompt.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, in two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (the free LDAP server implementation) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that there&#039;s an authentication mechanism in LDAP as well. But the documentation will also tell you it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we did: we store (hashed) passwords in Kerberos. But a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
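&lt;br /&gt;
On Debian, the relevant pieces look roughly like this (a sketch; the paths are assumptions based on the Debian sasl2 packaging):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /etc/default/saslauthd&lt;br /&gt;
START=yes&lt;br /&gt;
MECHANISMS=&amp;quot;kerberos5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# /etc/ldap/sasl2/slapd.conf&lt;br /&gt;
pwcheck_method: saslauthd&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;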
&lt;br /&gt;
auth1 is the master server: changes are either actively pushed from auth1 to auth2 (slapd) or periodically pulled by auth2 from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete database onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5471</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5471"/>
		<updated>2025-11-11T15:08:10Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add keycloak&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 4 major parts:&lt;br /&gt;
; LDAP : directory service. stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
; Kerberos (krb5) : authentication service. stores passwords, provides authentication for user logins/inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
; pyceo : CSC&#039;s home-grown frontend for interacting with the user directory and passwords&lt;br /&gt;
; Keycloak : SSO support for web applications, allows you to use WatIAM/CSC OTP to log into web-based services&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based Active Directory system. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[LDAP]]&lt;br /&gt;
&lt;br /&gt;
LDAP itself is just a protocol. In practice, there are 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All servers in CSC, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create/delete an account for every member. LDAP is also responsible for tracking which terms a member has a valid membership for, and that&#039;s how we implement the general-use servers&#039; membership-based access control.&lt;br /&gt;
&lt;br /&gt;
In essence, it&#039;s just a database that keeps track of a bunch of unique entities (called Distinguished Name, or DN in LDAP world) and each entity can have a bunch of attributes associated with them. Here&#039;s the LDAP entry for our club mascot, C.T. Dalek:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dn: uid=ctdalek,ou=People,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
uid: ctdalek&lt;br /&gt;
homeDirectory: /users/ctdalek&lt;br /&gt;
cn: Calum T. Dalek&lt;br /&gt;
gecos: Calum T. Dalek,MC 3036,,,0&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
description: Prototypical Member Account&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: member&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: shadowAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: inetLocalMailRecipient&lt;br /&gt;
loginShell: /bin/false&lt;br /&gt;
userPassword: {SASL}ctdalek@CSCLUB.UWATERLOO.CA&lt;br /&gt;
term: f2016&lt;br /&gt;
term: f2017&lt;br /&gt;
term: f2018&lt;br /&gt;
givenName: Calum&lt;br /&gt;
sn: Dalek&lt;br /&gt;
mailLocalAddress: ctdalek@csclub.uwaterloo.ca&lt;br /&gt;
program: Alumni&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It&#039;s entirely possible to do all of this in a NoSQL database, but LDAP is a standard and almost all SSO-capable systems can speak it, so that&#039;s why we use it.&lt;br /&gt;
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is campus-network only), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket, and you won&#039;t need to enter your password any more because your SSH client will send the ticket first and it will be accepted as proof of your identity.&lt;br /&gt;
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; ticket installed on them, and they use this to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you can use one single home folder on all of the CSC machines.&lt;br /&gt;
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[CEO]]&lt;br /&gt;
&lt;br /&gt;
Since editing LDAP databases is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo does the autofilling and sanity checks for day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of the CSC cloud: it is now more than just a frontend for LDAP/Krb5, and closer to a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
For security reasons, the university requires us to implement 2FA (two-factor authentication) in some way for all of our services. For SSH we did it via [https://duo.com/docs/duounix pam_duo], and for the web we use [https://www.keycloak.org/ Keycloak]. Keycloak behaves like an adapter: it provides the &amp;quot;web&amp;quot; way of authentication (i.e. OpenID Connect) using data from the &amp;quot;UNIX&amp;quot; way of authentication (LDAP, Krb5), and adds security features like OTP (one-time passwords).&lt;br /&gt;
&lt;br /&gt;
A special thing we do with Keycloak is use it as an identity broker for WatIAM, which allows you to log in via the university&#039;s WatIAM login portal and then use that session to log into CSC services without another password/OTP prompt.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, in two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (the free LDAP server implementation) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we did: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with its backend set to kerberos5.&lt;br /&gt;
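A rough sketch of the two pieces involved (the values below are illustrative, not copied from our config): the LDAP entry&#039;s password attribute delegates to SASL, and saslauthd is pointed at Kerberos:&lt;br /&gt;

```text
# In the user's LDAP entry, delegate password checks to SASL pass-through:
userPassword: {SASL}username@CSCLUB.UWATERLOO.CA

# In /etc/default/saslauthd on auth1/2:
START=yes
MECHANISMS="kerberos5"
```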
&lt;br /&gt;
auth1 is the master server: it either actively pushes changes to auth2 (slapd) or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete database onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Decommissioned_Machines&amp;diff=5469</id>
		<title>Decommissioned Machines</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Decommissioned_Machines&amp;diff=5469"/>
		<updated>2025-11-08T22:06:55Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Created page with &amp;quot;= 2025 = ==&amp;#039;&amp;#039;biloba&amp;#039;&amp;#039;==  Dead, died after the poweroutage of May 9th, 2025, previously served as Cloudstack master node or wtv.  Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558. TODO: rack??  ==== Specs ====  * 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each] * 384GB RAM * 12 3.5&amp;quot; Hot Swap Drive Bays ** 2 x 480 GB SSD * 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)  ==== Services ====  * OpenStack Compute machine &amp;#039;&amp;#039;&amp;#039;Notes&amp;#039;&amp;#039;&amp;#039; * TODO: cloudst...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= 2025 =&lt;br /&gt;
==&#039;&#039;biloba&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dead; died after the power outage of May 9th, 2025. Previously served as the CloudStack master node.&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558. TODO: rack??&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Pre-2021 =&lt;br /&gt;
==&#039;&#039;aspartame&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
aspartame was a taurine clone donated by CSCF. It was once our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which has been temporarily moved to [[#dextrose|dextrose]]. Decommissioned in March 2021 after refusing to boot following a power outage.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;psilodump&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
psilodump was plugged into aspartame. It&#039;s still installed but inaccessible.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phlogiston&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes from before decommissioning ====&lt;br /&gt;
&lt;br /&gt;
* The lxc files are still present and should not be started up, or else the two copies of auth1 will collide.&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;glomag&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Before its decommissioning, glomag hosted [[#caffeine|caffeine]], [[#mail|mail]], and [[#munin|munin]] as [[Virtualization#Linux_Container|Linux containers]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;Lisp machine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
See http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginseng&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the netapp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [[wikipedia:Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29|RAID 10]] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;calum&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Calum used to be our main server and was named after Calum T Dalek.  Purchased new by the club in 1994. &lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* SPARCserver 10 (headless SPARCstation 10)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;paza&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233Mhz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;romana&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that has been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts that only support &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1. When the O2 boots, hit F5 at the boot menu and type bootp():.&lt;br /&gt;
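A sketch of the procedure (assuming the sysctl is set on the Linux machine serving the boot image):&lt;br /&gt;

```text
# On the netboot server, disable path-MTU discovery:
echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc
# Then on the O2: press F5 at the boot menu and enter:
bootp():
```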
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;acesulfame-potassium&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;skynet&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
*[http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
*[http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
*[http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;freebsd&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;rainbowdragoneyes&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. It was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;denardo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power the machine on/off&lt;br /&gt;
==&#039;&#039;artificial-flavours&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-nitrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems, which are x86/x86-64 and run Debian Linux. After multiple unsuccessful attempts to boot a modern Linux kernel, and possible hardware instability, it was determined that putting more work into this machine was not worth the cost or effort. The system was reclaimed by MFCF, where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;taurine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, taurine has been decommissioned as of Fall 2019.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
*[[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;dextrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
dextrose was a [[#taurine|taurine]] clone donated by CSCF and was decommissioned in Fall 2019 after being replaced with a more powerful server.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sucrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
sucrose was a [[#taurine|taurine]] clone donated by CSCF. It was decommissioned in Fall 2019 following multiple hardware failures.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;goto80&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Note (2022-10-25): This seems to have gone missing or otherwise left our hands.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
This was a small ARM machine we picked up in order to have hardware similar to that used in the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. Located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use the -march=armv5te -mtune=arm926ej-s options with GCC.&lt;br /&gt;
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;nullsleep&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It was decommissioned on 2023-03-20 due to repeated unexpected shutdowns. Replaced by [[#powernap|powernap]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep runs MPD for playing music. Control of MPD is available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminals.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
bit-shifter was an office terminal, decommissioned in April 2023 due to its age. It was upgraded to the same specs as Strombola at an unknown point in time.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core 2 Quad CPU Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
Strombola was an office terminal named after Gordon Strombola. It was retired in April 2023.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium G4600 2 cores @ 3.6Ghz&lt;br /&gt;
* 8 GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
gwem was an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin was an office terminal. It was upgraded in Spring 2014 after an unidentified failure. Not operational (no video output) as of July 2022.&lt;br /&gt;
&#039;&#039;(Specs below are outdated at least as of 2023-05-27.)&#039;&#039;&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours was an office terminal; before that, it was our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5468</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5468"/>
		<updated>2025-11-08T22:05:44Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: move decommissioned section to a separate page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our machines are in the E7, F7, G7 and H7 racks (as of Jan. 2022) in the MC 3015 server room. There is an additional rack in the DC 3558 machine room on the third floor. Our office terminals are in the CSC office, in MC 3036/3037.&lt;br /&gt;
&lt;br /&gt;
= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d &#039;&#039;need&#039;&#039; to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a number of other services.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Redundant active backup coming soon...)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* LXC virtual machine hosted on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [https://www.apache.org/ Apache]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no cryptocurrency mining).&lt;br /&gt;
&lt;br /&gt;
For computationally-intensive jobs (CPU/memory bound) we recommend running on high-fructose-corn-syrup, carbonated-water, sorbitol, mannitol, or corn-syrup, listed in roughly decreasing order of available resources. For low-intensity interactive jobs, such as IRC clients, we recommend running on neotame. &#039;&#039;&#039;&amp;lt;u&amp;gt;If you have a long-running computationally-intensive job, it&#039;s good to [https://en.wikipedia.org/wiki/Nice_(Unix) nice] your process, and possibly let syscom know too.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
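For example, a minimal way to launch a job at the lowest CPU priority (the Python one-liner here stands in for your real workload):&lt;br /&gt;

```shell
# Run a CPU-bound command at niceness 19 so interactive users
# on the shared machine are scheduled ahead of it.
nice -n 19 python3 -c 'print(sum(i*i for i in range(10**6)))'
```

For a job that should also survive logout, combine with nohup, e.g. &amp;lt;code&amp;gt;nohup nice -n 19 ./job &amp;amp;&amp;lt;/code&amp;gt;.&lt;br /&gt;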
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is a large SuperMicro server. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Missing motherboard IO shield (as of January 2024)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;neotame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
neotame is a SuperMicro server funded by MEF. It is the successor to taurine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We strongly discourage running computationally-intensive jobs&#039;&#039;&#039; on neotame as many users run interactive applications such as IRC clients on it and any significant service degradation will be more likely to affect other users (who will probably notice right away).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* SSH server also listens on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for your convenience.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sorbitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sorbitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
== &#039;&#039;mannitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mannitol is a SuperMicro server funded by MEF. CUDA is available on this node.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* NVIDIA GeForce RTX 3050 6G&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cyanide&#039;&#039; ==&lt;br /&gt;
cyanide is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)], identical in specification to powernap.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;suika&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Suika is an office terminal built from various components donated by our members.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* AMD Ryzen 7 2700X&lt;br /&gt;
* 2x 8GB DDR4&lt;br /&gt;
* 1x Samsung 256GB SSD&lt;br /&gt;
* AMD Radeon RX 550 4GB&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;powernap&#039;&#039;==&lt;br /&gt;
powernap is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)].&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
=== Speakers ===&lt;br /&gt;
powernap has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
* MPD for playing music. Only office/termcom/syscom can log into powernap. Use `ncmpcpp` to control MPD.&lt;br /&gt;
** TODO: this is not the case anymore&lt;br /&gt;
* Bluetooth audio receiver. Only syscom can control bluetooth pairing. Use `bluetoothctl` to control bluetooth.&lt;br /&gt;
&lt;br /&gt;
Music is located in `/music` on the office terminals.&lt;br /&gt;
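For reference, a hypothetical &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt; pairing session looks like this (the device address is made up):&lt;br /&gt;

```text
$ bluetoothctl
[bluetooth]# power on
[bluetooth]# scan on
[bluetooth]# pair AA:BB:CC:DD:EE:FF
[bluetooth]# connect AA:BB:CC:DD:EE:FF
[bluetooth]# exit
```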
&lt;br /&gt;
= Progcom Only =&lt;br /&gt;
The Programme Committee has access to a VM on corn-syrup called &#039;progcom&#039;. They have sudo rights in this VM so they may install and run their own software inside it. This VM should only be accessible by members of progcom or syscom.&lt;br /&gt;
&lt;br /&gt;
The CI/CD stack (Drone) for csclub.uwaterloo.ca runs on this VM.&lt;br /&gt;
&lt;br /&gt;
= Codey Bot Only =&lt;br /&gt;
Codey runs on the CSC Cloud in a separate CloudStack project, as three instances: codey-staging, codey-dev, and codey-prod.&lt;br /&gt;
&lt;br /&gt;
TODO: migrating from cloudstack&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems are only accessible to members of the [[Systems Committee]] for a variety of reasons, the most common being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;xylitol&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
xylitol is a Dell PowerEdge R815 donated by CSCF. It is primarily a container host for services previously hosted on aspartame and dextrose, including munin, rt, mathnews, auth1, and dns1. It was provisioned with the intent to replace both of those hosts.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Dual AMD Opteron 6176 (2.3 GHz, 48 cores total)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 500GB volume group on RAID1 SSD (xylitol-mirrored)&lt;br /&gt;
* 500ish-GB volume group on RAID10 HDD (xylitol-raidten)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth1&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] primary&lt;br /&gt;
*[[Kerberos]] primary&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chat&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* The Lounge web IRC client (https://chat.csclub.uwaterloo.ca)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phosphoric-acid&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phosphoric-acid is a Dell PowerEdge R815 donated by CSCF and is a clone of xylitol. It may be used to provide redundant cloud services in the future.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* (clone of Xylitol)&lt;br /&gt;
* 4x 2TB Kingston KC3000 (ZFS Z2 [Sustain 2-failures]) (KIN-SKC3000D2048G)&lt;br /&gt;
** Mounted on 2x Startech Dual M.2 PCIE SSD Adapter Cards (STA-PEX8M2E2)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[#caffeine|caffeine]]&lt;br /&gt;
*[[#coffee|coffee]]&lt;br /&gt;
*prometheus&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;coffee&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on phosphoric-acid.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Database#MySQL|MySQL]]&lt;br /&gt;
*[[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;cobalamin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics, on Science Computing Rack 2. NICs are plugged into A1 and A2 on the adjacent rack. Acts as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
TODO: should replace with another Syscom server when Science Computing clears out the rack (ETA before 09/2024)&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
*http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]] (kerberos)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TODO: Mega unreliable.&#039;&#039;&#039; (Goes down once every few weeks... due to power outages in the PHYS server room)&lt;br /&gt;
** It is plugged into a UPS but the UPS has dead batteries.&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc with non-free.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
* Physical access to the PHYS server rooms can be acquired by visiting Science Computing in PHYS 2006.&lt;br /&gt;
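&lt;br /&gt;
As a hedged sketch (the interface name and the specific host address are illustrative assumptions, not copied from a live config), a matching &amp;lt;code&amp;gt;/etc/network/interfaces&amp;lt;/code&amp;gt; stanza for a container on csc-data1 might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
auto host0&lt;br /&gt;
iface host0 inet static&lt;br /&gt;
    address 129.97.18.18/29&lt;br /&gt;
    gateway 129.97.18.17&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;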
&lt;br /&gt;
==&#039;&#039;auth2&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] secondary&lt;br /&gt;
*[[Kerberos]] secondary&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mail&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
TODO: &amp;quot;HA&amp;quot;-ish configuration&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
*[[Webmail]]&lt;br /&gt;
*[[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
TODO: gone??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 2 x Intel Xeon E5-2695 v4 (18 cores, 2.10GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RDIMM RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 9 x 18TB Seagate Exos X18 (8 in a ZFS RAID-Z2, 1 hot spare)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
Spec before 2025-03-27:&lt;br /&gt;
* 1 x Intel Xeon E5-2630 v3 (8 cores, 2.40 GHz)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the 4 campus internet links; it is not limited for on-campus traffic.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mirror]]&lt;br /&gt;
*[[Talks]] mirror&lt;br /&gt;
*[[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;munin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
TODO: Debian 9?&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;yerba-mate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Also used for experimenting new CSC services.&lt;br /&gt;
&lt;br /&gt;
* TODO: use as backup server&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;citric-acid&#039;&#039;==&lt;br /&gt;
A Dell PowerEdge R815 (TODO: check model) provided by CSCF to replace [[Machine List#aspartame|aspartame]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 2 x AMD Opteron 6174 (12 cores, 2.20 GHz)&lt;br /&gt;
* 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Services&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Configured for [https://pass.uwaterloo.ca pass.uwaterloo.ca], a university-wide password manager hosted by CSC as a demo service for all Nexus (ADFS) users.&lt;br /&gt;
* [[Plane]], an internal (CSC) project management tool.&lt;br /&gt;
* Minio&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Being repurposed for Termcom training and development.&lt;br /&gt;
* Being used for Matrix &amp;amp; Proxmox Testing&lt;br /&gt;
* UFW opened-ports: SSH, HTTP/HTTPS&lt;br /&gt;
* Upgraded to Podman 4.x&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;Tahini&#039;&#039; ==&lt;br /&gt;
Server funded via SLEF and MEF. More info coming soon. Part of the Proxmox project and mass migration.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;Teriyaki&#039;&#039; ==&lt;br /&gt;
Server funded via SLEF and MEF. More info coming soon. Part of the Proxmox project and mass migration.&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to syscom-only access.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chamomile&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x 2.20GHz 12-core processors (AMD Opteron(tm) Processor 6174)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Cloudstack host&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;riboflavin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R515 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 2.6 GHz 8-core processors (AMD Opteron(tm) Processor 4376 HE)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
* 2x 500GB internal SSD&lt;br /&gt;
* 12x Seagate 4TB SSHD&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack block and object storage for csclub.cloud&lt;br /&gt;
* ????&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;guayusa&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
&lt;br /&gt;
Was used to experiment with the following then-new CSC services:&lt;br /&gt;
&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* TODO: ???&lt;br /&gt;
** block1.cloud&lt;br /&gt;
** object1.cloud&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
* TODO: ditch... Currently being used to set up NextCloud.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginkgo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (also included 50GbE SFP+ card which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs00&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs00 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* dual SFP connection to core switch&lt;br /&gt;
&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs01&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs01 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
TODO: disconnected??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs10&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs10 is a &#039;&#039;&#039;NetApp FAS8040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* FAS8040 (dual heads)&lt;br /&gt;
** ... TODO&lt;br /&gt;
* 6 DS4324 HDD shelves (24-disks each)&lt;br /&gt;
** 24 x 2TB HDDs (assorted brands/models)&lt;br /&gt;
** Dual IOM3 controllers.&lt;br /&gt;
** Loop 1: bottom 4 shelves&lt;br /&gt;
** Loop 2: top 2 shelves + SSD shelf&lt;br /&gt;
* 1 DS2246 SSD shelf (TODO: right model?)&lt;br /&gt;
** 24 Samsung SM1625 SSDs (MZ-6ER2000/0G3), 200GB (SAS 2, 2.5&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
== ps3 ==&lt;br /&gt;
This is just a very wide PS3, the model that supported running Linux natively before it was removed. The firmware was updated to remove this feature; however, it can still be done via homebrew.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* It&#039;s a PS3.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022-10-24&#039;&#039;&#039; - Thermal paste replaced + firmware updated to latest supported version, also modded.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;binaerpilot&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is currently attached to corn-syrup in the machine room and even more currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750Mhz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;anamanaguchi&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750Mhz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: May have disappeared at some point&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;digital cutter&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the MC 3015 machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
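&lt;br /&gt;
As an illustrative example (the community string and exact OID support vary by UPS model, so treat these as assumptions), a standard RFC 1628 UPS-MIB query against one of them could look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
snmpget -v2c -c public mc-3015-e7-ups-1.cs.uwaterloo.ca UPS-MIB::upsEstimatedMinutesRemaining.0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;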
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5460</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5460"/>
		<updated>2025-11-04T00:45:07Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add links&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 3 major parts:&lt;br /&gt;
; LDAP : directory service. stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
; Kerberos (krb5) : authentication service. stores passwords and provides authentication for user logins and inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
; pyceo : CSC&#039;s home-grown frontend for interacting with user directory and passwords&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based Active Directory system. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[LDAP]]&lt;br /&gt;
&lt;br /&gt;
LDAP itself is just a protocol. In practice, it has 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All CSC servers, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create/delete accounts for every member. LDAP is also responsible for tracking which terms a member has a valid membership for, which is how the general-use servers implement membership-based access control.&lt;br /&gt;
&lt;br /&gt;
In essence, it&#039;s just a database that keeps track of a bunch of unique entities (each identified by a Distinguished Name, or DN, in the LDAP world), and each entity can have a bunch of attributes associated with it. Here&#039;s the LDAP entry for our club mascot, C.T. Dalek:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dn: uid=ctdalek,ou=People,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
uid: ctdalek&lt;br /&gt;
homeDirectory: /users/ctdalek&lt;br /&gt;
cn: Calum T. Dalek&lt;br /&gt;
gecos: Calum T. Dalek,MC 3036,,,0&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
description: Prototypical Member Account&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: member&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: shadowAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: inetLocalMailRecipient&lt;br /&gt;
loginShell: /bin/false&lt;br /&gt;
userPassword: {SASL}ctdalek@CSCLUB.UWATERLOO.CA&lt;br /&gt;
term: f2016&lt;br /&gt;
term: f2017&lt;br /&gt;
term: f2018&lt;br /&gt;
givenName: Calum&lt;br /&gt;
sn: Dalek&lt;br /&gt;
mailLocalAddress: ctdalek@csclub.uwaterloo.ca&lt;br /&gt;
program: Alumni&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It&#039;s entirely possible to do all of this in a NoSQL database, but LDAP is a standard, and almost all SSO-capable systems can speak LDAP, so that&#039;s why we use it.&lt;br /&gt;
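&lt;br /&gt;
For example, assuming the standard &amp;lt;code&amp;gt;ldap-utils&amp;lt;/code&amp;gt; client and anonymous read access, the entry above can be fetched with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ldapsearch -x -b &amp;quot;dc=csclub,dc=uwaterloo,dc=ca&amp;quot; &amp;quot;(uid=ctdalek)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;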
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with a username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is campus-network only), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket. You won&#039;t need to enter your password again, because your SSH client will send the ticket first and it will be accepted as proof of your identity.&lt;br /&gt;
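&lt;br /&gt;
As a quick sketch (replace &amp;lt;code&amp;gt;userid&amp;lt;/code&amp;gt; with your own account name), the flow looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kinit userid@CSCLUB.UWATERLOO.CA   # initial authentication; prompts for your password&lt;br /&gt;
klist                              # inspect the ticket you just received&lt;br /&gt;
ssh userid@csclub.uwaterloo.ca     # GSSAPI presents the ticket; no password prompt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;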
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a key for the &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; principal installed in their keytab, and they use this to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you can use one single home folder on all of the CSC machines.&lt;br /&gt;
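&lt;br /&gt;
You can see a machine&#039;s host principal (as root) with &amp;lt;code&amp;gt;klist -k&amp;lt;/code&amp;gt;, which lists the keys stored in the default keytab:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
klist -k /etc/krb5.keytab&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;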
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
&#039;&#039;Read more on:&#039;&#039; [[CEO]]&lt;br /&gt;
&lt;br /&gt;
Since editing LDAP databases by hand is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo does the autofilling and sanity checks for day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC cloud; it&#039;s now more than just a frontend for LDAP/krb5, and closer to a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, on two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (the free LDAP server implementation) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you that it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we did: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
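&lt;br /&gt;
As a hedged sketch of the moving parts (paths are the Debian defaults; treat the exact values as assumptions rather than a copy of the live config), saslauthd is pointed at Kerberos, and the pass-through can be checked with &amp;lt;code&amp;gt;testsaslauthd&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /etc/default/saslauthd&lt;br /&gt;
MECHANISMS=&amp;quot;kerberos5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# verify that saslauthd can authenticate against the KDC&lt;br /&gt;
testsaslauthd -u ctdalek -p &#039;some-password&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;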
&lt;br /&gt;
auth1 is the master server; it either actively pushes changes to auth2 (slapd) or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete database onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5459</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5459"/>
		<updated>2025-11-04T00:41:34Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add example to LDAP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 3 major parts:&lt;br /&gt;
; LDAP : directory service. stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
; Kerberos (krb5) : authentication service. stores passwords and provides authentication for user logins and inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
; pyceo : CSC&#039;s home-grown frontend for interacting with user directory and passwords&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based Active Directory system. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
LDAP itself is just a protocol. In practice, it has 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All CSC servers, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create/delete accounts for every member. LDAP is also responsible for tracking which terms a member has a valid membership for, which is how the general-use servers implement membership-based access control.&lt;br /&gt;
&lt;br /&gt;
In essence, it&#039;s just a database that keeps track of a bunch of unique entities (each identified by a Distinguished Name, or DN, in the LDAP world), and each entity can have a bunch of attributes associated with it. Here&#039;s the LDAP entry for our club mascot, C.T. Dalek:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dn: uid=ctdalek,ou=People,dc=csclub,dc=uwaterloo,dc=ca&lt;br /&gt;
uid: ctdalek&lt;br /&gt;
homeDirectory: /users/ctdalek&lt;br /&gt;
cn: Calum T. Dalek&lt;br /&gt;
gecos: Calum T. Dalek,MC 3036,,,0&lt;br /&gt;
uidNumber: 20000&lt;br /&gt;
description: Prototypical Member Account&lt;br /&gt;
gidNumber: 20000&lt;br /&gt;
objectClass: account&lt;br /&gt;
objectClass: member&lt;br /&gt;
objectClass: posixAccount&lt;br /&gt;
objectClass: shadowAccount&lt;br /&gt;
objectClass: top&lt;br /&gt;
objectClass: inetLocalMailRecipient&lt;br /&gt;
loginShell: /bin/false&lt;br /&gt;
userPassword: {SASL}ctdalek@CSCLUB.UWATERLOO.CA&lt;br /&gt;
term: f2016&lt;br /&gt;
term: f2017&lt;br /&gt;
term: f2018&lt;br /&gt;
givenName: Calum&lt;br /&gt;
sn: Dalek&lt;br /&gt;
mailLocalAddress: ctdalek@csclub.uwaterloo.ca&lt;br /&gt;
program: Alumni&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It&#039;s entirely possible to do all of this in a NoSQL database, but LDAP is a standard, and almost all SSO-capable systems can speak LDAP, so that&#039;s why we use it.&lt;br /&gt;
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with a username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is campus-network only), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket. You won&#039;t need to enter your password again, because your SSH client will send the ticket first and it will be accepted as proof of your identity.&lt;br /&gt;
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a key for the &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; principal installed in their keytab, and they use this to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you can use one single home folder on all of the CSC machines.&lt;br /&gt;
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
Since editing LDAP databases by hand is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo does the autofilling and sanity checks for day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC cloud; it&#039;s now more than just a frontend for LDAP/krb5, and closer to a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, on two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (the free LDAP server implementation) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you that it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we did: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
&lt;br /&gt;
auth1 is the master server; it either actively pushes changes to auth2 (slapd) or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete database onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5458</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5458"/>
		<updated>2025-11-04T00:26:26Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: rearrange to promote &amp;quot;Club Operation&amp;quot; to top level and rename &amp;quot;Committees Documentation&amp;quot; to &amp;quot;Systems Documentation&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Club Operation ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos |Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Systems Documentation ==&lt;br /&gt;
=== Introductions ===&lt;br /&gt;
Start here if you have no clue how a subsystem works&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Intro to Authentication]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[Filer]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Matrix]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Proxmox]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[CodeyBot]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Immich]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5457</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5457"/>
		<updated>2025-11-04T00:16:30Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: fix list&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 3 major parts:&lt;br /&gt;
; LDAP : directory service. Stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
; Kerberos (krb5) : authentication service. Stores passwords and provides authentication for user logins and inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
; pyceo : CSC&#039;s home-grown frontend for interacting with the user directory and passwords&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based take on Active Directory. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
LDAP itself is just a protocol. In practice, there are 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All CSC servers, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create or delete accounts for every member.&lt;br /&gt;
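&lt;br /&gt;
For a feel of the client side, you can query the directory by hand with OpenLDAP&#039;s command-line tools. This is only a sketch: the server URI and base DN below are assumptions, so check &amp;lt;code&amp;gt;/etc/ldap/ldap.conf&amp;lt;/code&amp;gt; on any CSC machine for the real values.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# anonymous (-x) search for one user&#039;s public entry&lt;br /&gt;
ldapsearch -x -H ldap://auth1.csclub.uwaterloo.ca \&lt;br /&gt;
    -b dc=csclub,dc=uwaterloo,dc=ca &amp;quot;(uid=someuser)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;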
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with a username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can then use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is restricted to the campus network), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket, and you won&#039;t need to enter your password any more, because your SSH client will present the ticket as proof of your identity.&lt;br /&gt;
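&lt;br /&gt;
For example (a sketch; the realm and hostname below are placeholders, pick any machine from [[Machine List]]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# get a ticket-granting ticket for your CSC principal&lt;br /&gt;
kinit $USER@CSCLUB.UWATERLOO.CA&lt;br /&gt;
# confirm the ticket is in your credential cache&lt;br /&gt;
klist&lt;br /&gt;
# -K enables GSSAPI authentication and credential forwarding;&lt;br /&gt;
# no password prompt should appear&lt;br /&gt;
ssh -K $USER@corn-syrup.csclub.uwaterloo.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;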
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; principal installed on them, and they use it to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you get a single home directory on all of the CSC machines.&lt;br /&gt;
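&lt;br /&gt;
You can see this machine credential yourself on any CSC machine (reading the system keytab requires root):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# list the principals stored in the system keytab;&lt;br /&gt;
# expect an entry like host/$HOSTNAME.csclub.uwaterloo.ca@REALM&lt;br /&gt;
klist -k /etc/krb5.keytab&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;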
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
Since editing LDAP databases by hand is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo autofills fields and sanity-checks day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC Cloud; it&#039;s no longer just a frontend for LDAP/krb5, but a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, in two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (OpenLDAP&#039;s server) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you that it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we do: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
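&lt;br /&gt;
As a rough sketch of what such a pass-through setup looks like in general (the values below are illustrative, not copied from auth1/2): the user&#039;s entry gets a &amp;lt;code&amp;gt;userPassword&amp;lt;/code&amp;gt; value of the form &amp;lt;code&amp;gt;{SASL}user@REALM&amp;lt;/code&amp;gt;, and saslauthd is pointed at Kerberos:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# /etc/default/saslauthd (illustrative)&lt;br /&gt;
START=yes&lt;br /&gt;
MECHANISMS=&amp;quot;kerberos5&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# in the user&#039;s LDAP entry; slapd hands this bind off to saslauthd&lt;br /&gt;
userPassword: {SASL}someuser@CSCLUB.UWATERLOO.CA&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;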
&lt;br /&gt;
auth1 is the master server: it either actively pushes changes to auth2 (slapd), or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete databases onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5456</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5456"/>
		<updated>2025-11-04T00:09:21Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 3 major parts:&lt;br /&gt;
+ LDAP: directory service. Stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
+ Kerberos: authentication service. Stores passwords and provides authentication for user logins and inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
+ pyceo: CSC&#039;s home-grown frontend for interacting with the user directory and passwords&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based take on Active Directory. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
LDAP itself is just a protocol. In practice, there are 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All CSC servers, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create or delete accounts for every member.&lt;br /&gt;
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with a username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can then use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is restricted to the campus network), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket, and you won&#039;t need to enter your password any more, because your SSH client will present the ticket as proof of your identity.&lt;br /&gt;
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a &amp;lt;code&amp;gt;host/$HOSTNAME.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; principal installed on them, and they use it to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you get a single home directory on all of the CSC machines.&lt;br /&gt;
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
Since editing LDAP databases by hand is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo autofills fields and sanity-checks day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC Cloud; it&#039;s no longer just a frontend for LDAP/krb5, but a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, in two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (OpenLDAP&#039;s server) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you that it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we do: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
&lt;br /&gt;
auth1 is the master server: it either actively pushes changes to auth2 (slapd), or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete databases onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5455</id>
		<title>Intro to Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Intro_to_Authentication&amp;diff=5455"/>
		<updated>2025-11-04T00:05:02Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Created page with &amp;quot;CSC&amp;#039;s user directory and authentication system consists of 3 major parts: + LDAP: directory service. stores all public user information (your name, program, WatIAM, UNIX groups, things like that) + Kerberos: authentication service. stores passwords, provide authentication to user logins/inter-server integrations (ceo and NFS uses krb5) + pyceo: CSC&amp;#039;s home-grown frontend for interacting with user directory and passwords  Basically, this is a UNIX-based Active Directory sy...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CSC&#039;s user directory and authentication system consists of 3 major parts:&lt;br /&gt;
+ LDAP: directory service. Stores all public user information (your name, program, WatIAM, UNIX groups, things like that)&lt;br /&gt;
+ Kerberos: authentication service. Stores passwords and provides authentication for user logins and inter-server integrations (ceo and NFS use krb5)&lt;br /&gt;
+ pyceo: CSC&#039;s home-grown frontend for interacting with the user directory and passwords&lt;br /&gt;
&lt;br /&gt;
Basically, this is a UNIX-based take on Active Directory. Why not use Microsoft&#039;s offering? Well, we love UNIX; just look at the office, there&#039;s no Windows there.&lt;br /&gt;
&lt;br /&gt;
== Client Side ==&lt;br /&gt;
=== LDAP ===&lt;br /&gt;
LDAP itself is just a protocol. In practice, there are 2 parts: a server that stores and serves user information, and a client that queries that information for various applications. All CSC servers, whether general-use or syscom-only, are hooked up to LDAP so that they automatically sync users and groups, and we don&#039;t need to manually create or delete accounts for every member.&lt;br /&gt;
&lt;br /&gt;
=== Kerberos ===&lt;br /&gt;
Kerberos is a very complicated protocol and deserves its own lecture, but the crash course is that you do an initial authentication with Kerberos (usually with a username/password), and it grants you a &amp;quot;ticket&amp;quot; that you can then use to access other services without going through the central authentication node again.&lt;br /&gt;
&lt;br /&gt;
One use case you might find useful: if you want to hop between CSC servers (for example, you are outside the campus network but want to access a termcom machine that is restricted to the campus network), you can use &amp;lt;code&amp;gt;kinit&amp;lt;/code&amp;gt; to get yourself a ticket, and you won&#039;t need to enter your password any more, because your SSH client will present the ticket as proof of your identity.&lt;br /&gt;
&lt;br /&gt;
Other than user authentication, we also use Kerberos for inter-machine authentication. All CSC machines have a \`host/\$NAME.csclub.uwaterloo.ca\` principal installed on them, and they use it to authenticate and mount the &amp;lt;code&amp;gt;/users&amp;lt;/code&amp;gt; NFS file share, so that you get a single home directory on all of the CSC machines.&lt;br /&gt;
&lt;br /&gt;
=== pyceo ===&lt;br /&gt;
Since editing LDAP databases by hand is tedious and dangerous (you might bring down the whole fleet!), account management is handled by pyceo. Things like adding/renewing members and resetting passwords are in essence just modifications to the LDAP/Kerberos databases, but pyceo autofills fields and sanity-checks day-to-day operations.&lt;br /&gt;
&lt;br /&gt;
Note that pyceo&#039;s scope has expanded quite a bit since the introduction of CSC Cloud; it&#039;s no longer just a frontend for LDAP/krb5, but a frontend for most of the member services.&lt;br /&gt;
&lt;br /&gt;
== Server implementations ==&lt;br /&gt;
We have two nspawn containers, &amp;lt;code&amp;gt;auth1&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;auth2&amp;lt;/code&amp;gt;, in two physical locations (xylitol@MC and cobalamin@Science Machine Room). Both of them run &amp;lt;code&amp;gt;slapd&amp;lt;/code&amp;gt; (OpenLDAP&#039;s server) for LDAP and &amp;lt;code&amp;gt;krb5-kdc + krb5-admin-server&amp;lt;/code&amp;gt; for Kerberos.&lt;br /&gt;
&lt;br /&gt;
If you have read the LDAP spec or the slapd/OpenLDAP documentation closely, you will know that LDAP has an authentication mechanism of its own. But the documentation will also tell you that it&#039;s generally a good idea to separate the public LDAP server from the secret-keeping one, and that&#039;s exactly what we do: we store (hashed) passwords in Kerberos. However, a lot of services (Nextcloud, for example) use LDAP authentication; when that happens, we proxy the authentication to Kerberos and just return the result. This is done by configuring slapd to use the SASL auth mechanism and running &amp;lt;code&amp;gt;saslauthd&amp;lt;/code&amp;gt; on auth1/2 with the backend set to kerberos5.&lt;br /&gt;
&lt;br /&gt;
auth1 is the master server: it either actively pushes changes to auth2 (slapd), or auth2 periodically pulls changes from auth1 (krb5). There&#039;s also a cron job that periodically dumps the complete databases onto NFS.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Filer&amp;diff=5441</id>
		<title>Filer</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Filer&amp;diff=5441"/>
		<updated>2025-10-17T23:18:14Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add zfs info&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;NOTE&#039;&#039;&#039; This page describes Filer Generation 3, which was put into production in Fall 2025. For previous generations of filers, see [[New NetApp]] (2017-2025) and [[NetApp]] (2013-2017).&lt;br /&gt;
&lt;br /&gt;
In Fall 2023, MFCF donated their FAS8040 NetApp filers to us, alongside several DS4243 disk shelves.&lt;br /&gt;
&lt;br /&gt;
We decided to connect the disk shelves directly to one of our servers, since it&#039;s hard to keep syscom/termcom trained on NetApp&#039;s proprietary system, and we can mostly get away with using just one or two disk shelves for our storage needs anyway.&lt;br /&gt;
&lt;br /&gt;
== Physical Configuration ==&lt;br /&gt;
&lt;br /&gt;
Currently ranch is used as the head unit, and only one disk shelf (one of the middle ones) is connected to it.&lt;br /&gt;
A QSFP+ (SFF-8436) to external mini-SAS (SFF-8088) cable is used to connect the disk shelf to a SAS2308 HBA card. Note that according to the [https://forums.unraid.net/topic/89444-how-to-configure-a-netapp-ds4243-shelf-in-unraid/ Unraid Forum] ([https://web.archive.org/web/20250108225045/https://forums.unraid.net/topic/89444-how-to-configure-a-netapp-ds4243-shelf-in-unraid/ Wayback Machine]), it should be connected to the port marked with a &#039;&#039;&#039;black rectangle&#039;&#039;&#039; on the &#039;&#039;&#039;top IOM&#039;&#039;&#039; at the back of the disk shelf. Also make sure all PSUs (we currently have 2 PSUs and 2 blank fillers) are connected and powered on. If everything is connected correctly, you should not see any amber LEDs on the PSUs. After the filer is booted, you should see a green/blue LNK LED next to the connected QSFP port on the disk shelf.&lt;br /&gt;
&lt;br /&gt;
A total of 24x 2TB disks are available, but 3 of them have shown signs of failure, so we only use 21 of them right now.&lt;br /&gt;
&lt;br /&gt;
They are all 2TB drives from ~2010, so we should consider replacing them.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
ranch runs regular Debian with ZFS, so that we can share the technology stack with mirror.&lt;br /&gt;
&lt;br /&gt;
A &#039;&#039;&#039;pitfall&#039;&#039;&#039; of the disk shelf is that it only spins up its disks after the system has booted, and it takes quite some time to do so (6 disks every 12s according to [https://docs.netapp.com/p/ontap-systems/platforms/Installation-And-Service-Guide.pdf]), so ZFS freaks out when only part of the pool is visible and reports the pool as SUSPENDED. Running &amp;lt;code&amp;gt;systemctl edit zfs-import-cache.service&amp;lt;/code&amp;gt; and adding the following fixes this by delaying ZFS import for 3 minutes so the disk shelf has time to finish initialization:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
ExecStartPre=/bin/sleep 180&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== ZFS ===&lt;br /&gt;
&lt;br /&gt;
We use [https://github.com/jimsalterjrs/sanoid sanoid] for automatic snapshot management. To see a previous version of your files, go to &amp;lt;code&amp;gt;/users/.zfs/snapshot/$snapshot_name&amp;lt;/code&amp;gt;.&lt;br /&gt;
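&lt;br /&gt;
For example, to recover an old version of a file (the snapshot name below is illustrative; names depend on the sanoid policy, so list the directory first):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# see which snapshots exist&lt;br /&gt;
ls /users/.zfs/snapshot/&lt;br /&gt;
# copy a file back out of a snapshot into your home directory&lt;br /&gt;
cp /users/.zfs/snapshot/autosnap_2025-10-17_00:00:02_daily/$USER/file ~/file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;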
&lt;br /&gt;
In the future we want to place a server/disk-shelf pair in a different machine room, so that we can use sanoid&#039;s companion tool, syncoid, to automatically send ZFS snapshots over for off-site backup.&lt;br /&gt;
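&lt;br /&gt;
The replication itself would be roughly a one-liner along these lines (the hostname and pool/dataset names here are hypothetical):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# incrementally replicate the users dataset, snapshots included,&lt;br /&gt;
# to a ZFS pool on an off-site machine over SSH&lt;br /&gt;
syncoid tank/users root@offsite.csclub.uwaterloo.ca:backup/users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;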
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
&lt;br /&gt;
We use ZFS&#039;s &amp;lt;code&amp;gt;sharenfs&amp;lt;/code&amp;gt; property to set the NFS configuration for each dataset. This way, the NFS shares only start after ZFS is ready.&lt;br /&gt;
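&lt;br /&gt;
For illustration, setting and inspecting such an export looks like this (the pool/dataset name and options are examples, not the production values; &amp;lt;code&amp;gt;sharenfs&amp;lt;/code&amp;gt; takes exports(5)-style options):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# export read-write to one subnet with sec=sys (no Kerberos)&lt;br /&gt;
zfs set sharenfs=&amp;quot;rw=@172.19.168.32/27,sec=sys&amp;quot; tank/users&lt;br /&gt;
# inspect the effective value&lt;br /&gt;
zfs get sharenfs tank/users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;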
&lt;br /&gt;
As before, we export &amp;lt;code&amp;gt;sec=sys&amp;lt;/code&amp;gt; (so no authentication) on the special MC storage VLAN (VLAN 530, containing 172.19.168.32/27 and fd74:6b6a:8eca:4903::/64). This VLAN is only connected to trusted machines (NetApp, CSC servers in the MC 3015 or DC 3558 machine rooms).&lt;br /&gt;
&lt;br /&gt;
All other machines use &amp;lt;code&amp;gt;sec=krb5p&amp;lt;/code&amp;gt;. By default, NFS clients need an nfs/ krb5 principal, but since all CSC machines need to mount /users anyway, we just reuse the host/ krb5 principal. This is done by running &amp;lt;code&amp;gt;systemctl edit rpc-svcgssd.service&amp;lt;/code&amp;gt; and adding the following (see the rpc.svcgssd manual for more information):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
ExecStart=&lt;br /&gt;
ExecStart=/usr/sbin/rpc.svcgssd -n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We disabled NFSv3 and NFSv4.0 in &amp;lt;code&amp;gt;/etc/nfs.conf&amp;lt;/code&amp;gt;, since all of the machines are expected to run recent versions of Debian.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5440</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5440"/>
		<updated>2025-10-17T15:40:01Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Historical */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Committees Documentation ==&lt;br /&gt;
=== Club Operation ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos |Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[Filer]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Matrix]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Proxmox]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[CodeyBot]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Immich]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5439</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5439"/>
		<updated>2025-10-17T15:39:48Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Hardware Infrastructure (the bare metals) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Committees Documentation ==&lt;br /&gt;
=== Club Operation ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos|Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[Filer]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Matrix]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Proxmox]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[CodeyBot]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Immich]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5438</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5438"/>
		<updated>2025-10-17T13:55:19Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add filer&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Committees Documentation ==&lt;br /&gt;
=== Club Operation ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos|Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[Filer]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Matrix]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Proxmox]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[CodeyBot]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Immich]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Filer&amp;diff=5436</id>
		<title>Filer</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Filer&amp;diff=5436"/>
		<updated>2025-10-17T04:38:45Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Initial edit&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;NOTE&#039;&#039;&#039; This page describes Filer Generation 3, which was put into production in Fall 2025. For previous generations of filers, see [[New NetApp]] (2017-2025) and [[NetApp]] (2013-2017).&lt;br /&gt;
&lt;br /&gt;
In Fall 2023, MFCF donated their FAS8040 NetApp filers to us, along with several DS4243 disk shelves.&lt;br /&gt;
&lt;br /&gt;
We decided to connect the disk shelves directly to one of our servers, since it&#039;s hard to keep syscom/termcom trained to use NetApp&#039;s proprietary system, and we can mostly get away with using just one or two disk shelves for our storage needs anyway.&lt;br /&gt;
&lt;br /&gt;
== Physical Configuration ==&lt;br /&gt;
&lt;br /&gt;
Currently, ranch is used as the head unit, and only one of the (middle) disk shelves is connected to it.&lt;br /&gt;
A QSFP+ (SFF-8436) to external Mini-SAS (SFF-8088) cable is used to connect the disk shelf to a SAS2308 HBA card. Note that, per the homelab community&#039;s wisdom, it has to be the port in the top left corner (marked with a black rectangle) on the back of the disk shelf. Also make sure all 4 PSUs are connected and powered on.&lt;br /&gt;
&lt;br /&gt;
A total of 24 disks are available, but 3 of them have shown signs of failure, so we only use 21 of them right now.&lt;br /&gt;
&lt;br /&gt;
They are all 2TB drives from ~2010, so we should consider replacing them.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
ranch runs regular Debian with ZFS, so that we can share the technology stack with mirror.&lt;br /&gt;
&lt;br /&gt;
A quirk of the disk shelf is that it only spins up its disks after the system has booted, and takes quite some time to do so, so ZFS freaks out when only part of the pool is visible and reports the pool as SUSPENDED. Running &amp;lt;code&amp;gt;systemctl edit zfs-import-cache.service&amp;lt;/code&amp;gt; and adding the following should fix this by delaying the ZFS import for 3 minutes, giving the disk shelf time to finish initialization:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
ExecStartPre=/bin/sleep 180&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
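&lt;br /&gt;
After a reboot, you can confirm that the pool imported cleanly:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# prints &amp;quot;all pools are healthy&amp;quot; when nothing is wrong&lt;br /&gt;
zpool status -x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;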
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
&lt;br /&gt;
We use ZFS&#039;s &amp;lt;code&amp;gt;sharenfs&amp;lt;/code&amp;gt; property to set the NFS configuration for each dataset. This is done so that the NFS shares only start after ZFS is ready.&lt;br /&gt;
&lt;br /&gt;
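For illustration, a dataset export can be configured like this (the pool/dataset name and option string below are hypothetical examples, not our actual configuration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# hypothetical dataset: export read-write to the storage VLAN without authentication&lt;br /&gt;
zfs set sharenfs=&amp;quot;sec=sys,rw=@172.19.168.32/27&amp;quot; tank/users&lt;br /&gt;
# check the resulting property&lt;br /&gt;
zfs get sharenfs tank/users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;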
As before, we export &amp;lt;code&amp;gt;sec=sys&amp;lt;/code&amp;gt; (so no authentication) on the special MC storage VLAN (VLAN 530, containing 172.19.168.32/27 and fd74:6b6a:8eca:4903::/64). This VLAN is only connected to trusted machines (NetApp, CSC servers in the MC 3015 or DC 3558 machine rooms).&lt;br /&gt;
&lt;br /&gt;
All other machines use &amp;lt;code&amp;gt;sec=krb5p&amp;lt;/code&amp;gt;. By default, NFS clients need an nfs/ Kerberos principal, but since all CSC machines need to mount /users anyway, we just reuse the host/ principal. This is done by running &amp;lt;code&amp;gt;systemctl edit rpc-svcgssd.service&amp;lt;/code&amp;gt; and adding the following (see the rpc.svcgssd manual for more information):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
ExecStart=&lt;br /&gt;
ExecStart=/usr/sbin/rpc.svcgssd -n&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
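For reference, a client mount using this security flavor looks something like this (the filer hostname below is a placeholder, not a real machine name):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# requires a valid host/ (or nfs/) keytab entry on the client&lt;br /&gt;
mount -t nfs4 -o sec=krb5p filer.csclub.uwaterloo.ca:/users /users&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;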
We disabled NFSv3 and NFSv4.0 in &amp;lt;code&amp;gt;/etc/nfs.conf&amp;lt;/code&amp;gt; since all of the machines are expected to run recent versions of Debian.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Proxmox&amp;diff=5423</id>
		<title>Proxmox</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Proxmox&amp;diff=5423"/>
		<updated>2025-09-16T00:26:00Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: networking&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Setting up Proxmox ==&lt;br /&gt;
To set up Proxmox, from `Server View`, open the `Datacenter` page. Then go to `Permissions -&amp;gt; Realms`.&lt;br /&gt;
&lt;br /&gt;
Then just make sure PAM is set up.&lt;br /&gt;
&lt;br /&gt;
== Networking ==&lt;br /&gt;
There are two ways to do networking: network bridge and NAT. A network bridge puts the container/virtual machine on the CSC network (basically side by side with Proxmox itself), while NAT encapsulates the container/VM inside a private subnet that is only visible to the Proxmox host itself.&lt;br /&gt;
&lt;br /&gt;
For services that only expose HTTP/HTTPS, NAT is more desirable since multiple services can share a host nginx instance, only requiring ports 80/443 on the host IP to be opened to the Internet, thus saving IP addresses in our pool and trips to IST for firewall exemptions. But for services that require custom ports to be opened (for example, BigBlueButton requires a range of UDP ports to be exposed for relaying video streams), using the network bridge and giving the container/VM its own public IP might be easier.&lt;br /&gt;
&lt;br /&gt;
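As a sketch of the NAT pattern (the server name, internal IP, and port below are made-up examples), the host nginx can reverse proxy a NATed guest like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# hypothetical site config on the Proxmox host&lt;br /&gt;
server {&lt;br /&gt;
    listen 443 ssl;&lt;br /&gt;
    server_name example.csclub.uwaterloo.ca;&lt;br /&gt;
&lt;br /&gt;
    location / {&lt;br /&gt;
        # forward to the guest&#039;s private address behind vmbr1&lt;br /&gt;
        proxy_pass http://192.168.100.10:8080;&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;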
Currently, &amp;lt;code&amp;gt;vmbr0&amp;lt;/code&amp;gt; is used for bridged networking and &amp;lt;code&amp;gt;vmbr1&amp;lt;/code&amp;gt; is used for NAT (see [https://pve.proxmox.com/wiki/Network_Configuration#sysadmin_network_masquerading Proxmox&#039;s wiki on NAT networking] for setup instructions). &amp;lt;code&amp;gt;vmbr0&amp;lt;/code&amp;gt; uses the CSC DHCP server, so you can use DHCP there, but &amp;lt;code&amp;gt;vmbr1&amp;lt;/code&amp;gt; requires manual IP assignment.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Cloud_Gen_2&amp;diff=5372</id>
		<title>Cloud Gen 2</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Cloud_Gen_2&amp;diff=5372"/>
		<updated>2025-06-22T01:51:01Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Note&amp;#039;&amp;#039;&amp;#039; This is currently just a proposal.  Since CloudStack blown up during Winter 2025 and the general architecture is deemed beyond current syscom&amp;#039;s capacity to maintain, we propose to set up a new cloud cluster using more understood and simpler (for current syscom fleet, as of Spring 2025) technologies.  Which includes: * proxmox as virtualization host * pyceo for virtual machine lifecycle management (creation, expiry, deletion if no longer a member for long enoug...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Note&#039;&#039;&#039; This is currently just a proposal.&lt;br /&gt;
&lt;br /&gt;
Since CloudStack blew up during Winter 2025 and its general architecture is deemed beyond the current syscom&#039;s capacity to maintain, we propose setting up a new cloud cluster using simpler, better-understood (by the current syscom fleet, as of Spring 2025) technologies.&lt;br /&gt;
&lt;br /&gt;
This includes:&lt;br /&gt;
* proxmox as virtualization host&lt;br /&gt;
* pyceo for virtual machine lifecycle management (creation, expiry, deletion if no longer a member for long enough)&lt;br /&gt;
* NFS for inter-node VM disk exchange&lt;br /&gt;
&lt;br /&gt;
More specifically:&lt;br /&gt;
* proxmox will be connected to LDAP for user/group synchronization and authentication&lt;br /&gt;
* each user/club will have a dedicated resource pool for resource isolation&lt;br /&gt;
* all virtual machine creation/deletion will be done by pyceo, due to proxmox not having resource pool quota&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5366</id>
		<title>Service List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5366"/>
		<updated>2025-06-12T01:40:27Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Mail */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A list of services we run and when they were last updated.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
=== LDAP/Kerberos ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[LDAP]] and [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Member information storage and authentication backend.&lt;br /&gt;
* Location: &#039;&#039;auth1&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
SSO provider.&lt;br /&gt;
* Location: somewhere on k8s&lt;br /&gt;
* Last updated: Unknown, before Spring 2022&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mail]]&lt;br /&gt;
&lt;br /&gt;
Postfix/Dovecot mail server&lt;br /&gt;
* Location: &#039;&#039;mail&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Fall 2024&lt;br /&gt;
* Roundcube last updated: Spring 2025&lt;br /&gt;
&lt;br /&gt;
=== mailman3 ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mailing Lists]]&lt;br /&gt;
&lt;br /&gt;
Mailing list handler&lt;br /&gt;
* Location: &#039;&#039;mailman3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Fall 2024, to mailman 3.10&lt;br /&gt;
&lt;br /&gt;
=== prometheus ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Observability]]&lt;br /&gt;
&lt;br /&gt;
Also hosts ClickHouse and vector&lt;br /&gt;
* Location: &#039;&#039;qemu-2-prometheus&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
Hosted on [[New NetApp]]&lt;br /&gt;
* Location: [[New NetApp]] on MC CSC rack&lt;br /&gt;
* Last update: 2017, pending &amp;quot;New New NetApp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Ceph ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;:  [[Ceph]]&lt;br /&gt;
&lt;br /&gt;
Storage backend for CSCloud.&lt;br /&gt;
* Location: 3 node cluster on riboflavin, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General services ==&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mirror]]&lt;br /&gt;
&lt;br /&gt;
Our flagship service.&lt;br /&gt;
* Location: [[Machine List#potassium-benzoate|potassium-benzoate]]&lt;br /&gt;
* Last update: Constantly by syscom&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Main Page#CSC Cloud|CSC Cloud]]&lt;br /&gt;
&lt;br /&gt;
Another flagship service.&lt;br /&gt;
* Location: 3 node cluster on chamomile, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== VaultWarden ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Vaultwarden]]&lt;br /&gt;
&lt;br /&gt;
Bitwarden-compatible password manager.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== BigBlueButton ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[BigBlueButton]]&lt;br /&gt;
&lt;br /&gt;
Online conferencing.&lt;br /&gt;
* Location: &#039;&#039;bigbluebutton3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Plane ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Plane]]&lt;br /&gt;
&lt;br /&gt;
JIRA but selfhosted.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== IRC webchat (The Lounge) ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[How to IRC#The Lounge]]&lt;br /&gt;
&lt;br /&gt;
* Location: &#039;&#039;chat&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Mattermost ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MatterMost]]&lt;br /&gt;
* Location: &#039;&#039;mattermost&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Nextcloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Nextcloud]]&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s file and calendar server.&lt;br /&gt;
* Location: &#039;&#039;nextcloud&#039;&#039; container on [[Machine List#guayusa|guayusa]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Git ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Git Hosting]]&lt;br /&gt;
&lt;br /&gt;
Gitea server for various CSC projects.&lt;br /&gt;
* Location: [[Machine List #caffeine|caffeine]]&lt;br /&gt;
* Last update: Spring 2025&lt;br /&gt;
* CI: Drone (deprecated), Gitea Act Runner (&#039;&#039;gitea-act-runner&#039;&#039; nspawn container on [[Machine List#phosphoric-acid|phosphoric-acid]])&lt;br /&gt;
&lt;br /&gt;
== Web infra ==&lt;br /&gt;
=== Member/Club Hosting ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Web Hosting]] and [[Club Hosting]]&lt;br /&gt;
&lt;br /&gt;
Apache and PHP. Your regular, old-school hosting service.&lt;br /&gt;
* Location: &#039;&#039;caffeine&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== MySQL/PostgreSQL ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MySQL]] and [[PostgreSQL]]&lt;br /&gt;
&lt;br /&gt;
Databases for hosting.&lt;br /&gt;
* Location: &#039;&#039;coffee&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, still on PostgreSQL 15&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5361</id>
		<title>Git Hosting</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5361"/>
		<updated>2025-06-01T03:37:58Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Continuous Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have a [https://git.csclub.uwaterloo.ca Gitea] instance running on [[Machine List#caffeine|caffeine]]. You can sign in via LDAP to the web interface. Projects used by CSC as a whole are owned by the [https://git.csclub.uwaterloo.ca/public public] organization, except for website-committee related repos, which are owned by the [https://git.csclub.uwaterloo.ca/www www] org.&lt;br /&gt;
&lt;br /&gt;
== Installation Details ==&lt;br /&gt;
&amp;lt;code&amp;gt;/etc/gitea&amp;lt;/code&amp;gt; on caffeine contains the configs for Gitea. It&#039;s installed as a Debian package, with additional files in &amp;lt;code&amp;gt;/var/lib/gitea/&amp;lt;/code&amp;gt;and a systemd service at &amp;lt;code&amp;gt;/lib/systemd/system/gitea.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There is a custom locale (used to define CSC-custom strings in some pages) at &amp;lt;code&amp;gt;/var/lib/gitea/custom/options/locale/locale_en-US.ini&amp;lt;/code&amp;gt; that may need to be updated when the Gitea APT package is updated. To update this, run the &amp;lt;code&amp;gt;update_custom_locale.sh&amp;lt;/code&amp;gt; in that directory (as root).&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;quot;It&#039;s basically GitHub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
- raymo&lt;br /&gt;
&lt;br /&gt;
=== SSH keys ===&lt;br /&gt;
It is recommended to set up [https://git.csclub.uwaterloo.ca/user/settings/keys SSH keys] so that you do not have to enter your password each time you push to a repo. Once you have uploaded your public key, add the following to your ~/.ssh/config:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host csclub.uwaterloo.ca&lt;br /&gt;
        HostName csclub.uwaterloo.ca&lt;br /&gt;
        IdentityFile ~/.ssh/id_rsa&lt;br /&gt;
        User git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Replace ~/.ssh/id_rsa by the location of your private SSH key.) Now you should be able to clone, push and pull over SSH.&lt;br /&gt;
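&lt;br /&gt;
With that in place, cloning over SSH looks like this (the repo path here is just an example):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone git@csclub.uwaterloo.ca:public/repo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;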
&lt;br /&gt;
== Continuous Integration ==&lt;br /&gt;
We have a Gitea Act Runner integrated directly into Gitea. Its syntax should be identical to GitHub Actions (see [https://docs.gitea.com/usage/actions/comparison Compared to GitHub Actions - Gitea]).&lt;br /&gt;
&lt;br /&gt;
Before Spring 2025, we ran a CI server at https://ci.csclub.uwaterloo.ca. It used OAuth via Gitea for logins, so you needed to have logged in to Gitea first. See https://docs.drone.io/ for documentation. All you had to do was create a .drone.yml file in your repo, then enable CI on the repo from the CSC Drone website. There is an example [https://git.csclub.uwaterloo.ca/merenber/drone-test here].&lt;br /&gt;
&lt;br /&gt;
== Pushing and pulling from the filesystem ==&lt;br /&gt;
(for syscom only)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you need to keep the ability to push/pull from the filesystem, in addition to Gitea, you will need to take the following steps.&lt;br /&gt;
In this example, we are migrating a repo called &#039;public/repo.git&#039;, which is a folder under /srv/git on caffeine (which is a symlink to /users/git).&lt;br /&gt;
The way we&#039;re doing this right now is kind of hacky, but it works:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Clone the original repo locally: &amp;lt;code&amp;gt;git clone /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Delete the old repo (from phosphoric-acid, which has no_root_squash): &amp;lt;code&amp;gt;rm -rf /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Create a new repo with the name &#039;repo&#039; from the Gitea web UI. This should create a bare repository at &amp;lt;code&amp;gt;/srv/git/public/repo.git&amp;lt;/code&amp;gt;. (Make sure you choose the &#039;public&#039; org from the dropdown.)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Push the original repo to the new remote:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd repo&lt;br /&gt;
git remote add gitea https://git.csclub.uwaterloo.ca/public/repo.git&lt;br /&gt;
git push gitea master&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Remove any git hooks which require Gitea:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm $(grep -IRl gitea /srv/git/public/repo.git/hooks)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Change file permissions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R git:git /srv/git/public/repo.git&lt;br /&gt;
chmod -R g+w /srv/git/public/repo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You will need to do this from phosphoric-acid (due to NFS root squashing).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
Note that the repo folder SHOULD be owned by git:git. Anything else will likely break Gitea. (If a user pushes something to the folder and their umask doesn&#039;t allow group members to read, for example, then Gitea will be unable to read the repo.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This means that only trusted users should be in the git group - ideally, only syscom members.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are having trouble pulling/pushing over SSH and see something like this when trying &amp;lt;code&amp;gt;ssh git@csclub.uwaterloo.ca&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PTY allocation request failed on channel 0&lt;br /&gt;
shell request failed on channel 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just restart &amp;lt;code&amp;gt;gitea.service&amp;lt;/code&amp;gt; on caffeine.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5360</id>
		<title>Service List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5360"/>
		<updated>2025-06-01T03:35:48Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* General services */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A list of services we run and when they were last updated.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
=== LDAP/Kerberos ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[LDAP]] and [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Member information storage and authentication backend.&lt;br /&gt;
* Location: &#039;&#039;auth1&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
SSO provider.&lt;br /&gt;
* Location: somewhere on k8s&lt;br /&gt;
* Last updated: Unknown, before Spring 2022&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mail]]&lt;br /&gt;
&lt;br /&gt;
Postfix/Dovecot mail server&lt;br /&gt;
* Location: &#039;&#039;mail&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Fall 2024&lt;br /&gt;
&lt;br /&gt;
=== mailman3 ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mailing Lists]]&lt;br /&gt;
&lt;br /&gt;
Mailing list handler&lt;br /&gt;
* Location: &#039;&#039;mailman3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Fall 2024, to mailman 3.10&lt;br /&gt;
&lt;br /&gt;
=== prometheus ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Observability]]&lt;br /&gt;
&lt;br /&gt;
Also hosts ClickHouse and vector&lt;br /&gt;
* Location: &#039;&#039;qemu-2-prometheus&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
Hosted on [[New NetApp]]&lt;br /&gt;
* Location: [[New NetApp]] on MC CSC rack&lt;br /&gt;
* Last update: 2017, pending &amp;quot;New New NetApp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Ceph ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;:  [[Ceph]]&lt;br /&gt;
&lt;br /&gt;
Storage backend for CSCloud.&lt;br /&gt;
* Location: 3 node cluster on riboflavin, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General services ==&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mirror]]&lt;br /&gt;
&lt;br /&gt;
Our flagship service.&lt;br /&gt;
* Location: [[Machine List#potassium-benzoate|potassium-benzoate]]&lt;br /&gt;
* Last update: Constantly by syscom&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Main Page#CSC Cloud|CSC Cloud]]&lt;br /&gt;
&lt;br /&gt;
Another flagship service.&lt;br /&gt;
* Location: 3 node cluster on chamomile, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== VaultWarden ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Vaultwarden]]&lt;br /&gt;
&lt;br /&gt;
Bitwarden-compatible password manager.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== BigBlueButton ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[BigBlueButton]]&lt;br /&gt;
&lt;br /&gt;
Online conferencing.&lt;br /&gt;
* Location: &#039;&#039;bigbluebutton3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Plane ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Plane]]&lt;br /&gt;
&lt;br /&gt;
JIRA but selfhosted.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== IRC webchat (The Lounge) ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[How to IRC#The Lounge]]&lt;br /&gt;
&lt;br /&gt;
* Location: &#039;&#039;chat&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Mattermost ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MatterMost]]&lt;br /&gt;
* Location: &#039;&#039;mattermost&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Nextcloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Nextcloud]]&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s file and calendar server.&lt;br /&gt;
* Location: &#039;&#039;nextcloud&#039;&#039; container on [[Machine List#guayusa|guayusa]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Git ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Git Hosting]]&lt;br /&gt;
&lt;br /&gt;
Gitea server for various CSC projects.&lt;br /&gt;
* Location: [[Machine List #caffeine|caffeine]]&lt;br /&gt;
* Last update: Spring 2025&lt;br /&gt;
* CI: Drone (deprecated), Gitea Act Runner (&#039;&#039;gitea-act-runner&#039;&#039; nspawn container on [[Machine List#phosphoric-acid|phosphoric-acid]])&lt;br /&gt;
&lt;br /&gt;
== Web infra ==&lt;br /&gt;
=== Member/Club Hosting ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Web Hosting]] and [[Club Hosting]]&lt;br /&gt;
&lt;br /&gt;
Apache and PHP. Your regular, old-school hosting service.&lt;br /&gt;
* Location: &#039;&#039;caffeine&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== MySQL/PostgreSQL ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MySQL]] and [[PostgreSQL]]&lt;br /&gt;
&lt;br /&gt;
Databases for hosting.&lt;br /&gt;
* Location: &#039;&#039;coffee&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, still on PostgreSQL 15&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Web_Hosting&amp;diff=5354</id>
		<title>Web Hosting</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Web_Hosting&amp;diff=5354"/>
		<updated>2025-04-28T22:49:20Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add anubis: fighting ai scrappers&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The CSC offers web hosting for [[Club Hosting|clubs]] and [http://csclub.uwaterloo.ca/about/ our members] in accordance with our [http://csclub.uwaterloo.ca/services/machine_usage Machine Usage Agreement]. This is a quick guide for the kinds of hosting we offer on our webserver, &amp;lt;tt&amp;gt;csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;, also known as [[Machine List#caffeine|caffeine]].&lt;br /&gt;
&lt;br /&gt;
We run an Apache httpd webserver and we offer you the use of a [[MySQL|MySQL database]].&lt;br /&gt;
&lt;br /&gt;
== What can I host on my website? ==&lt;br /&gt;
&lt;br /&gt;
Web hosting is provided in accordance with the CSC [http://csclub.uwaterloo.ca/services/machine_usage Machine Usage Agreement]. As a reminder, you are &#039;&#039;&#039;not permitted&#039;&#039;&#039; to host any of the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Ads.&#039;&#039;&#039; Advertisements are not permitted because using our machines for commercial purposes is forbidden by university policy.&lt;br /&gt;
* &#039;&#039;&#039;Your start-up&#039;s website.&#039;&#039;&#039; Again, commercial use of our hosting is not permitted.&lt;br /&gt;
* &#039;&#039;&#039;Unauthorized copyrighted materials.&#039;&#039;&#039; Violating the law is a violation of our Machine Usage Agreement.&lt;br /&gt;
&lt;br /&gt;
Please note that &#039;&#039;&#039;this is not an exhaustive list. Websites may be taken down &#039;&#039;without notice&#039;&#039;&#039;&#039;&#039; at the discretion of the Systems Committee. (We will always let you know that we took your site down, but if it is breaking our shared environment, we can&#039;t provide an advance warning.)&lt;br /&gt;
&lt;br /&gt;
Some great examples of things members host on our webserver:&lt;br /&gt;
&lt;br /&gt;
* Academic projects!&lt;br /&gt;
* A personal website or blog!&lt;br /&gt;
* [[Club Hosting|Club websites!]]&lt;br /&gt;
&lt;br /&gt;
== How do I make a website? ==&lt;br /&gt;
&lt;br /&gt;
If you just want to show some static content (e.g. blog posts, club information, technical articles), then we recommend that you use a static site generator (SSG). Static sites are faster, simpler and more secure than CMSs like WordPress (dynamic and written in PHP) for small sites. We routinely disable WordPress sites that are more than a few weeks out of date (or if a critical security flaw is disclosed).&lt;br /&gt;
&lt;br /&gt;
Here are some SSGs which require little to no coding experience, and also have a great selection of themes to choose from:&lt;br /&gt;
&lt;br /&gt;
* [https://jekyllrb.com/ Jekyll] (accepts Markdown, Liquid and HTML)&lt;br /&gt;
* [https://gohugo.io/ Hugo] (accepts a wide variety of formats, including Markdown and JSON)&lt;br /&gt;
* [https://hexo.io/ Hexo] (accepts Markdown and various Javascript-based templating engines)&lt;br /&gt;
* [https://www.11ty.dev/ Eleventy] (accepts Markdown, Liquid, HTML, and various Javascript-based templating engines)&lt;br /&gt;
* [https://www.getzola.org/ Zola] (accepts Markdown and Tera)&lt;br /&gt;
* [https://blog.getpelican.com/ Pelican] (accepts Markdown, reStructuredText and Jinja2)&lt;br /&gt;
&lt;br /&gt;
[https://astro.build/ Astro] is an excellent static site builder which integrates with a wide variety of JS-based frameworks (including React, Vue, Svelte and Solid), but requires a bit more coding experience.&lt;br /&gt;
&lt;br /&gt;
These SSGs require some experience with React.js:&lt;br /&gt;
&lt;br /&gt;
* [https://nextjs.org/ Next.js]&lt;br /&gt;
* [https://www.gatsbyjs.com/ Gatsby.js]&lt;br /&gt;
&lt;br /&gt;
These SSGs require some experience with Vue.js:&lt;br /&gt;
&lt;br /&gt;
* [https://nuxtjs.org/ Nuxt.js]&lt;br /&gt;
* [https://vuepress.vuejs.org/ Vuepress]&lt;br /&gt;
* [https://vitepress.vuejs.org/ Vitepress]&lt;br /&gt;
* [https://gridsome.org/ Gridsome]&lt;br /&gt;
&lt;br /&gt;
[https://jamstack.org/generators/ Here] is an awesome list of other generators to explore, if you are interested.&lt;br /&gt;
&lt;br /&gt;
=== Transferring your files to the CSC servers ===&lt;br /&gt;
If you just need to transfer a single file, then the easiest option is to use the &amp;lt;code&amp;gt;scp&amp;lt;/code&amp;gt; command (which is available on all major operating systems), e.g.&lt;br /&gt;
&lt;br /&gt;
  scp /path/to/your/file your_username@corn-syrup.csclub.uwaterloo.ca:~/&lt;br /&gt;
&lt;br /&gt;
This will copy /path/to/your/file from your local PC to your CSC home directory (we use NFS, so you can access it from any of the general-use machines).&lt;br /&gt;
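For transferring whole directories, rsync (not mentioned above, but available on most systems) is often more convenient than scp. A trailing slash on the source means "copy the contents of this directory". The local illustration below shows the semantics with throwaway paths; the same flags work when the destination is a remote path like your_username@corn-syrup.csclub.uwaterloo.ca:~/www/ :

```shell
# Local illustration of rsync semantics (paths are throwaway examples):
mkdir -p /tmp/demo-src
echo "hello" > /tmp/demo-src/index.html
# Trailing slash on the source: copy the contents of demo-src into demo-dst
rsync -av /tmp/demo-src/ /tmp/demo-dst/
cat /tmp/demo-dst/index.html   # prints "hello"
```

The -a flag preserves permissions and timestamps; adding --delete (with care) makes the destination mirror the source exactly.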
&lt;br /&gt;
However, we strongly recommend setting up a git repository in your home directory instead.&lt;br /&gt;
&lt;br /&gt;
=== Setting up a git repository ===&lt;br /&gt;
All of the files in the &amp;lt;code&amp;gt;www&amp;lt;/code&amp;gt; directory in your home directory are accessible from csclub.uwaterloo.ca/~your_username. If everything is set up right, this can provide a GitHub Pages-like experience.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For members:&amp;lt;br&amp;gt;&lt;br /&gt;
Create a &amp;quot;bare&amp;quot; git repository in your home directory (on the CSC machines). You will git push/pull from this directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir myrepo.git&lt;br /&gt;
cd myrepo.git&lt;br /&gt;
git init --bare&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For club reps:&amp;lt;br&amp;gt;&lt;br /&gt;
Switch to the Unix user for your club, and use the &amp;lt;code&amp;gt;--shared&amp;lt;/code&amp;gt; option to automatically add group write permissions.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
become_club myclub&lt;br /&gt;
cd ~&lt;br /&gt;
mkdir myrepo.git&lt;br /&gt;
cd myrepo.git&lt;br /&gt;
git init --bare --shared&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Create a git post-receive hook which will automatically deploy your website whenever you git push. Paste the following script into hooks/post-receive (in the bare repo you created earlier). You may wish to customize it a bit first.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
set -e&lt;br /&gt;
# Uncomment this to echo the commands as they are executed&lt;br /&gt;
#set -x&lt;br /&gt;
shopt -s dotglob&lt;br /&gt;
# FOR CLUB REPS ONLY: set the following variable to e.g. /users/myclub/www&lt;br /&gt;
DEPLOYMENT_DIR=~/www&lt;br /&gt;
&lt;br /&gt;
while read oldrev newrev refname; do&lt;br /&gt;
    branch=$(git rev-parse --symbolic --abbrev-ref $refname)&lt;br /&gt;
    # Only the master branch will be deployed&lt;br /&gt;
    if [ &amp;quot;$branch&amp;quot; != master ]; then&lt;br /&gt;
        continue&lt;br /&gt;
    fi&lt;br /&gt;
    rm -rf $DEPLOYMENT_DIR/*&lt;br /&gt;
    git --work-tree=$DEPLOYMENT_DIR checkout -f $branch&lt;br /&gt;
    # FOR CLUB REPS ONLY: uncomment the following lines and replace &#039;myclub&#039;&lt;br /&gt;
    # with the Unix group name of your club&lt;br /&gt;
    #chgrp -R myclub $DEPLOYMENT_DIR&lt;br /&gt;
    #chmod -R g+w $DEPLOYMENT_DIR&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(The script was adapted from [https://peteris.rocks/blog/deploy-your-website-with-git/ here].)&lt;br /&gt;
&lt;br /&gt;
Make the script executable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chmod +x hooks/post-receive&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;For club reps&amp;lt;/b&amp;gt;: Make sure the www directory is group-writable. Switch to the Unix user of your club and run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chmod g+w ~/www&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
If you have not done so already, add your public SSH key to your ~/.ssh/authorized_keys file (on the CSC machines). See [https://git-scm.com/book/en/v2/Git-on-the-Server-Generating-Your-SSH-Public-Key here] for a tutorial.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
On your local computer, add [[Machine_List|any CSC machine]] as a remote of your git repo.&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For members:&amp;lt;br&amp;gt;&lt;br /&gt;
Just use the directory of your git repo, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git remote add csc your_username@corn-syrup.csclub.uwaterloo.ca:myrepo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;For club reps:&amp;lt;br&amp;gt;&lt;br /&gt;
Use the full path of the repo in your club user&#039;s home directory, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git remote add csc your_username@corn-syrup.csclub.uwaterloo.ca:/users/myclub/myrepo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Now you can just &amp;lt;code&amp;gt;git push&amp;lt;/code&amp;gt; normally after a commit, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git push csc master&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
And the files should show up automatically in your www folder (or your club&#039;s www folder, if you are a club rep).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
If you have any files in your repo which you don&#039;t want to be served from your website, use a [https://httpd.apache.org/docs/2.4/howto/htaccess.html .htaccess file] in your www folder (make sure this is committed to the git repo). For example, to deny access to the folder named src (in your www folder), you could use the following snippet:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
RewriteEngine On&lt;br /&gt;
RewriteRule &amp;quot;^src(/.*)?$&amp;quot; - [F,L]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
See [https://httpd.apache.org/docs/2.4/mod/core.html the Apache documentation] for more details.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need help, email &amp;lt;tt&amp;gt;[mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca]&amp;lt;/tt&amp;gt; or come to the CS Club office on the MC 3rd floor across from the Mathsoc CnD.&lt;br /&gt;
&lt;br /&gt;
== DNS and Your Domain Name ==&lt;br /&gt;
&lt;br /&gt;
You can serve files without any additional configuration by placing them in your &amp;lt;tt&amp;gt;www&amp;lt;/tt&amp;gt; directory and accessing them at &amp;lt;tt&amp;gt;http://csclub.uwaterloo.ca/~userid&amp;lt;/tt&amp;gt;, where &amp;lt;tt&amp;gt;userid&amp;lt;/tt&amp;gt; is your CSC user ID. However, many of our members and clubs prefer to use a custom domain name.&lt;br /&gt;
&lt;br /&gt;
Note that this means you &#039;&#039;do not&#039;&#039; have to register a domain name to be able to use our services. You can just put a website at &amp;lt;tt&amp;gt;http://csclub.uwaterloo.ca/~userid&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== uwaterloo.ca domain names ===&lt;br /&gt;
&lt;br /&gt;
If you represent a UWaterloo organization, you may be eligible for a custom &amp;lt;tt&amp;gt;uwaterloo.ca&amp;lt;/tt&amp;gt; domain name, such as &amp;lt;tt&amp;gt;csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;. We can request this on your behalf.&lt;br /&gt;
&lt;br /&gt;
In order to do so, we must have verified that the organization is a legitimate UWaterloo-affiliated group, and that you, the representative, are authorized to request a domain name on their behalf. This all takes place when you request [[Club Hosting|club hosting]] with the Computer Science Club.&lt;br /&gt;
&lt;br /&gt;
Once you register as a club representative of your particular organization, you can send an email from your official club account to syscom@csclub.uwaterloo.ca to request the domain &amp;lt;tt&amp;gt;yourdomain.uwaterloo.ca&amp;lt;/tt&amp;gt;. Assuming it is available, we will file a ticket and request the domain in your name.&lt;br /&gt;
&lt;br /&gt;
=== Your personal domain name ===&lt;br /&gt;
&lt;br /&gt;
These virtual hosts must be approved by the Executive and Systems Committee. If interested, send syscom@csclub.uwaterloo.ca an email. If your request is approved, the Systems Committee will direct you to create a CNAME record for your domain and point it at &amp;lt;tt&amp;gt;csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you are interested in receiving mail or having other records on your domain, the apex of your domain cannot be a CNAME. In that case, your domain should contain an &amp;quot;A&amp;quot; record of &amp;lt;tt&amp;gt;129.97.134.17&amp;lt;/tt&amp;gt; and an (optional, but recommended) &amp;quot;AAAA&amp;quot; record of &amp;lt;tt&amp;gt;2620:101:f000:4901:c5c::caff:e12e&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you want TLS on your personal domain, mention this in your email to syscom (syscom: see [[SSL#letsencrypt]]).&lt;br /&gt;
&lt;br /&gt;
== Static Sites ==&lt;br /&gt;
&lt;br /&gt;
You can place all your static content into your web directory, &amp;lt;tt&amp;gt;/users/userid/www&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you have been approved for a virtual host, you can access this content using your personal domain once the Systems Committee makes the appropriate configuration changes. Here is an example configuration file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
  	ServerName foobar.uwaterloo.ca&lt;br /&gt;
  	ServerAlias *.foobar.uwaterloo.ca foobar&lt;br /&gt;
  	ServerAdmin your@email.here.tld&lt;br /&gt;
  &lt;br /&gt;
  	DocumentRoot /users/userid/www/&lt;br /&gt;
  &lt;br /&gt;
  	ErrorLog /var/log/apache2/luser-userid-error.log&lt;br /&gt;
  	CustomLog /var/log/apache2/luser-userid-access.log combined&lt;br /&gt;
  &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Dynamic Sites ==&lt;br /&gt;
&lt;br /&gt;
If you require use of a database, we offer you the sole choice of MySQL. See [[MySQL|this guide]] for how to create your database and connect to MySQL.&lt;br /&gt;
&lt;br /&gt;
=== ***NOTICE*** ===&lt;br /&gt;
&lt;br /&gt;
  We &#039;&#039;&#039;STRONGLY&#039;&#039;&#039; discourage the use of content management systems such as&lt;br /&gt;
  WordPress. These packages are notorious for the number of security&lt;br /&gt;
  vulnerabilities they contain and pose a threat to our systems if they are not&lt;br /&gt;
  kept up to date. The Systems Committee &#039;&#039;&#039;WILL,&#039;&#039;&#039; at its discretion, disable&lt;br /&gt;
  any website using a package such as WordPress that is not updated to the latest&lt;br /&gt;
  version or that is found to contain exploitable security flaws. In such a case,&lt;br /&gt;
  the member or club serving that site will be notified of the termination; the&lt;br /&gt;
  site will not be re-enabled until the issues are addressed.&lt;br /&gt;
&lt;br /&gt;
When pages are parked, access to them is restricted to on-campus IPs, so you can still fix your page, but anyone off-campus will not be able to access it, and will be shown this page instead: [https://csclub.uwaterloo.ca/~sysadmin/insecure/ https://csclub.uwaterloo.ca/~sysadmin/insecure/].&lt;br /&gt;
&lt;br /&gt;
=== Using PHP ===&lt;br /&gt;
&lt;br /&gt;
Because we use Apache, it&#039;s as simple as placing your &amp;lt;tt&amp;gt;index.php&amp;lt;/tt&amp;gt; file in your &amp;lt;tt&amp;gt;/users/userid/www&amp;lt;/tt&amp;gt; directory. That&#039;s it!&lt;br /&gt;
&lt;br /&gt;
You can even include rewrite rules in an &amp;lt;tt&amp;gt;.htaccess&amp;lt;/tt&amp;gt; file in your web directory.&lt;br /&gt;
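For example, a common &quot;front controller&quot; rewrite routes every request that is not a real file or directory through index.php (illustrative only; adjust to whatever your application expects):

```apache
RewriteEngine On
# Serve files and directories that actually exist as-is
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Route everything else through index.php
RewriteRule ^ index.php [L]
```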
&lt;br /&gt;
=== Reverse Proxy (Python, Ruby, Perl, etc.) ===&lt;br /&gt;
&lt;br /&gt;
(In progress... Cliff Notes below)&lt;br /&gt;
&lt;br /&gt;
If computationally expensive, please run the server on a general-use server and proxy to Caffeine.&lt;br /&gt;
&lt;br /&gt;
If Python, (1) use a [http://docs.python-guide.org/en/latest/dev/virtualenvs/ virtual environment] and (2) host your app (within the virtualenv) with [http://gunicorn.org/ Gunicorn] on a high port (but campus-firewalled, i.e. NOT ports 28000-28500).&lt;br /&gt;
&lt;br /&gt;
For Ruby, use [http://unicorn.bogomips.org/ Unicorn] in the same way (note: this advice is untested, so take it with a grain of salt).&lt;br /&gt;
&lt;br /&gt;
==== .htaccess Config ====&lt;br /&gt;
&lt;br /&gt;
Put the following in the appropriate .htaccess file (e.g. if you were running your app at ~ctdalek/python-app, put the .htaccess in ~ctdalek/www/python-app alongside the static files). Replace HOST with localhost if running on Caffeine, or with the hostname if running elsewhere; replace RANDOM_PORT with your chosen port number.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
RewriteEngine On&lt;br /&gt;
&lt;br /&gt;
# If you want websockets, uncomment this:&lt;br /&gt;
#RewriteCond %{HTTP:UPGRADE} ^WebSocket$ [NC]&lt;br /&gt;
#RewriteCond %{HTTP:CONNECTION} ^Upgrade$ [NC]&lt;br /&gt;
#RewriteRule .* ws://HOST:RANDOM_PORT%{REQUEST_URI} [L,P]&lt;br /&gt;
&lt;br /&gt;
RewriteCond %{SCRIPT_FILENAME} !-d&lt;br /&gt;
RewriteCond %{SCRIPT_FILENAME} !-f&lt;br /&gt;
RewriteRule &amp;quot;index.html&amp;quot; &amp;quot;http://HOST:RANDOM_PORT/&amp;quot; [P]&lt;br /&gt;
&lt;br /&gt;
RewriteCond %{SCRIPT_FILENAME} !-d&lt;br /&gt;
RewriteCond %{SCRIPT_FILENAME} !-f&lt;br /&gt;
RewriteRule &amp;quot;^(.*)$&amp;quot; &amp;quot;http://HOST:RANDOM_PORT/$1&amp;quot; [P]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Requiring Authentication ==&lt;br /&gt;
&amp;lt;b&amp;gt;**UPDATE**&amp;lt;/b&amp;gt;: CAS is deprecated; the instructions below are left for historical purposes only. The University of Waterloo now uses [[ADFS]] for web authentication. Unfortunately the Apache module which we use to integrate with ADFS (mod_auth_mellon) cannot be used from .htaccess files, which means that regular members cannot use this. ([https://github.com/latchset/mod_auth_mellon/issues/82 Here] is the relevant GitHub issue; as of this writing, it is still open.) If you require UW authentication for your website, please send an email to syscom and we will configure Apache for you.&lt;br /&gt;
&lt;br /&gt;
=== CAS (no longer works) ===&lt;br /&gt;
You can require users to authenticate through the University&#039;s Central Authentication System (CAS) by adding the following contents to your .htaccess configuration file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
AuthType CAS&lt;br /&gt;
Require valid-user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can replace &amp;lt;code&amp;gt;Require valid-user&amp;lt;/code&amp;gt; with &amp;lt;code&amp;gt;Require user ctdalek&amp;lt;/code&amp;gt; to restrict access to specific users. See https://doubledoublesecurity.ca/uw/cas/user.html for more information.&lt;br /&gt;
&lt;br /&gt;
== Syscom ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling insecure or infringing sites ===&lt;br /&gt;
&lt;br /&gt;
To disable a webspace that has known security vulnerabilities, add the following snippet to &amp;lt;code&amp;gt;/etc/apache2/conf-available/disable-vuln-site.conf&amp;lt;/code&amp;gt;. This rewrites all accesses of the directory or its children to the given file. Note that our disable page always returns HTTP status code 503 (Service Unavailable).&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Directory /users/$BADUSER/www&amp;gt;&lt;br /&gt;
     AllowOverride None&lt;br /&gt;
     Redirect 503 /&lt;br /&gt;
     ErrorDocument 503 /~sysadmin/insecure/index.html&lt;br /&gt;
 &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For infringing sites:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Directory &amp;quot;/users/$BADUSER/www/infringing-directory&amp;quot;&amp;gt;&lt;br /&gt;
    AllowOverride None&lt;br /&gt;
    Redirect 503 /&lt;br /&gt;
    ErrorDocument 503 /~sysadmin/infringing/index.html&lt;br /&gt;
 &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For club domains (e.g. club1.uwaterloo.ca), redirect to the CSC domain instead:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Directory &amp;quot;/users/$BADCLUB/www&amp;quot;&amp;gt;&lt;br /&gt;
   AllowOverride None&lt;br /&gt;
   RewriteEngine On&lt;br /&gt;
   RewriteRule . &amp;lt;nowiki&amp;gt;https://csclub.uwaterloo.ca/~sysadmin/insecure/index.php&amp;lt;/nowiki&amp;gt; [L,P]&lt;br /&gt;
 &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For WordPress sites specifically, insert a snippet similar to the following into conf-enabled/disable-wordpress.conf:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;Directory &amp;quot;/users/$BADCLUB/www&amp;quot;&amp;gt;&lt;br /&gt;
   Include snippets/disable-wordpress.conf&lt;br /&gt;
 &amp;lt;/Directory&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Expired Websites ===&lt;br /&gt;
&lt;br /&gt;
There is a cron job running hourly on caffeine which disables expired members&#039; websites (and re-enables them when they&#039;ve renewed their membership).&lt;br /&gt;
&lt;br /&gt;
The script is here: https://git.csclub.uwaterloo.ca/public/expire-sites&lt;br /&gt;
&lt;br /&gt;
Some highlights:&lt;br /&gt;
&lt;br /&gt;
* The script provides a 1-month grace period (corresponding to the grace period of pam-csc)&lt;br /&gt;
* The expired page returns HTTP status code of 503 (Service Unavailable)&lt;br /&gt;
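As a hypothetical sketch of the grace-period arithmetic (the real logic lives in the expire-sites repo linked above; the dates here are hardcoded purely for illustration):

```shell
# Hypothetical sketch of the grace-period check; not the actual script.
EXPIRY="2025-01-01"   # membership expiry date (would come from member records)
TODAY="2025-03-01"    # fixed "today" so the example is deterministic
GRACE_DAYS=30         # 1-month grace period

# Days elapsed since expiry, via epoch-second arithmetic (GNU date)
days_past=$(( ( $(date -d "$TODAY" +%s) - $(date -d "$EXPIRY" +%s) ) / 86400 ))
if [ "$days_past" -gt "$GRACE_DAYS" ]; then
    echo "disable site"   # past the grace period
else
    echo "keep enabled"
fi
```

With the dates above, 59 days have passed, so the site would be disabled.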
&lt;br /&gt;
=== Fighting AI scrapers ===&lt;br /&gt;
We use [https://github.com/TecharoHQ/anubis Anubis] to block excessive AI scrapers. It sits between Apache2 and the real application, inserting a proof-of-work challenge for the browser to solve before the actual content is served.&lt;br /&gt;
&lt;br /&gt;
Since Anubis requires JavaScript, it is only selectively enabled on sites that a) are currently being scraped excessively and b) already require JavaScript.&lt;br /&gt;
&lt;br /&gt;
Currently, only [[Git Hosting]] is protected.&lt;br /&gt;
&lt;br /&gt;
See &amp;lt;code&amp;gt;caffeine:/etc/anubis&amp;lt;/code&amp;gt; for details.&lt;br /&gt;
&lt;br /&gt;
=== Sample Apache config for website with both a custom domain and a UW subdomain ===&lt;br /&gt;
&lt;br /&gt;
 Define ENTITY_NAME pmclub&lt;br /&gt;
 Define CUSTOM_DOMAIN puremath.club&lt;br /&gt;
 Define UW_SUBDOMAIN ${ENTITY_NAME}.uwaterloo.ca&lt;br /&gt;
 Define ADMIN_EMAIL ${ENTITY_NAME}@csclub.uwaterloo.ca&lt;br /&gt;
 Define ENTITY_HOME https://csclub.uwaterloo.ca/~${ENTITY_NAME}&lt;br /&gt;
 &lt;br /&gt;
 Define APACHE_LOG_DIR /var/log/apache2&lt;br /&gt;
 Define ERROR_LOG ${APACHE_LOG_DIR}/${ENTITY_NAME}-error.log&lt;br /&gt;
 Define CUSTOM_LOG &amp;quot;${APACHE_LOG_DIR}/${ENTITY_NAME}-access.log combined&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
 	ServerName ${CUSTOM_DOMAIN}&lt;br /&gt;
 	ServerAlias *.${CUSTOM_DOMAIN} ${UW_SUBDOMAIN} *.${UW_SUBDOMAIN} ${ENTITY_NAME}&lt;br /&gt;
 	ServerAdmin ${ADMIN_EMAIL}&lt;br /&gt;
 &lt;br /&gt;
 	Redirect permanent / https://${CUSTOM_DOMAIN}/&lt;br /&gt;
 &lt;br /&gt;
 	ErrorLog ${ERROR_LOG}&lt;br /&gt;
 	CustomLog ${CUSTOM_LOG}&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;VirtualHost csclub:443&amp;gt;&lt;br /&gt;
 	SSLEngine on&lt;br /&gt;
 	SSLCertificateFile /etc/letsencrypt/live/${CUSTOM_DOMAIN}/fullchain.pem&lt;br /&gt;
 	SSLCertificateKeyFile /etc/letsencrypt/live/${CUSTOM_DOMAIN}/privkey.pem&lt;br /&gt;
 	SSLStrictSNIVHostCheck on&lt;br /&gt;
 &lt;br /&gt;
 	ServerName ${CUSTOM_DOMAIN}&lt;br /&gt;
 	ServerAlias *.${CUSTOM_DOMAIN}&lt;br /&gt;
 	ServerAdmin ${ADMIN_EMAIL}&lt;br /&gt;
 &lt;br /&gt;
 	DocumentRoot /users/${ENTITY_NAME}/www&lt;br /&gt;
 &lt;br /&gt;
 	ErrorLog ${ERROR_LOG}&lt;br /&gt;
 	CustomLog ${CUSTOM_LOG}&lt;br /&gt;
 &lt;br /&gt;
 	Redirect permanent /&amp;lt;special page&amp;gt; ${ENTITY_HOME}/&amp;lt;special path&amp;gt;/&amp;lt;special file&amp;gt;&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;VirtualHost csclub:443&amp;gt;&lt;br /&gt;
 	SSLEngine on&lt;br /&gt;
 	SSLCertificateFile /etc/letsencrypt/live/${UW_SUBDOMAIN}/fullchain.pem&lt;br /&gt;
 	SSLCertificateKeyFile /etc/letsencrypt/live/${UW_SUBDOMAIN}/privkey.pem&lt;br /&gt;
 	SSLStrictSNIVHostCheck on&lt;br /&gt;
 &lt;br /&gt;
 	ServerName ${UW_SUBDOMAIN}&lt;br /&gt;
 	ServerAlias *.${UW_SUBDOMAIN}&lt;br /&gt;
 	ServerAdmin ${ADMIN_EMAIL}&lt;br /&gt;
 &lt;br /&gt;
 	Redirect permanent / https://${CUSTOM_DOMAIN}/&lt;br /&gt;
 &lt;br /&gt;
 	ErrorLog ${ERROR_LOG}&lt;br /&gt;
 	CustomLog ${CUSTOM_LOG}&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Systemd-nspawn&amp;diff=5350</id>
		<title>Systemd-nspawn</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Systemd-nspawn&amp;diff=5350"/>
		<updated>2025-04-25T15:18:34Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: bookworm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html systemd-nspawn] is a simpler alternative to [https://wiki.csclub.uwaterloo.ca/Virtualization_(LXC_Containers) LXC] which works well on modern versions of Debian (and, unlike LXC, it does not break very critical systemd services running in containers). For &amp;quot;pet&amp;quot; containers, we should be using systemd-nspawn; for &amp;quot;cattle&amp;quot; containers, [[Podman]] is more appropriate.&lt;br /&gt;
&lt;br /&gt;
Some light reading:&lt;br /&gt;
&lt;br /&gt;
* https://wiki.debian.org/nspawn&lt;br /&gt;
* https://wiki.archlinux.org/title/Systemd-nspawn&lt;br /&gt;
&lt;br /&gt;
== Quickstart ==&lt;br /&gt;
In the example below, we will create a container called &#039;machine1&#039;.&lt;br /&gt;
&lt;br /&gt;
Create a directory for the rootfs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /var/lib/machines/machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or, if you are using an LVM volume, just create a symlink in /var/lib/machines to where the LV is mounted:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ln -s /vm/machine1 /var/lib/machines/machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now bootstrap the rootfs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
debootstrap --variant=minbase --include=dbus,systemd-container,vim bookworm . http://mirror.csclub.uwaterloo.ca/debian&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the &amp;lt;code&amp;gt;systemd-container&amp;lt;/code&amp;gt; package &amp;lt;b&amp;gt;must&amp;lt;/b&amp;gt; be installed in the guest.&lt;br /&gt;
&lt;br /&gt;
Now do a bit of setup in the rootfs:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chroot /var/lib/machines/machine1&lt;br /&gt;
# Only do this if you want to use `machinectl login`&lt;br /&gt;
passwd -d root&lt;br /&gt;
cat &amp;lt;&amp;lt;EOF &amp;gt;&amp;gt;/etc/securetty&lt;br /&gt;
pts/0&lt;br /&gt;
pts/1&lt;br /&gt;
pts/2&lt;br /&gt;
pts/3&lt;br /&gt;
EOF&lt;br /&gt;
# set hostname&lt;br /&gt;
echo machine1 &amp;gt; /etc/hostname&lt;br /&gt;
# set FQDN&lt;br /&gt;
nano /etc/hosts&lt;br /&gt;
# Use systemd-networkd for network management&lt;br /&gt;
vim /etc/systemd/network/10-hostbr0.network&lt;br /&gt;
exit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
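The setup step above edits /etc/systemd/network/10-hostbr0.network without showing its contents. A minimal sketch of what that file might contain, assuming a static address (the address, gateway, and DNS values are placeholders; host0 is the default name of the container-side interface created by systemd-nspawn):

```ini
# /etc/systemd/network/10-hostbr0.network (inside the container)
[Match]
Name=host0

[Network]
# Placeholder values; substitute the container's real address and gateway
Address=192.0.2.10/24
Gateway=192.0.2.1
DNS=192.0.2.1
```

You will likely also need to enable systemd-networkd inside the container (`systemctl enable systemd-networkd`).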
Now paste the following into /etc/systemd/nspawn/machine1.nspawn:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Exec]&lt;br /&gt;
Boot=yes&lt;br /&gt;
Hostname=machine1&lt;br /&gt;
PrivateUsers=no&lt;br /&gt;
&lt;br /&gt;
[Network]&lt;br /&gt;
Bridge=br0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Replace &#039;br0&#039; with the bridge interface on the host to which the container should be attached (a veth pair will be created when the container starts up).&lt;br /&gt;
&lt;br /&gt;
Also make sure to set &#039;PrivateUsers=no&#039;, because by default systemd-nspawn uses some randomized UID/GID mapping which makes it difficult to migrate the container to a different system.&lt;br /&gt;
&lt;br /&gt;
Now start the container:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl start systemd-nspawn@machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Or alternatively, using &amp;lt;code&amp;gt;machinectl&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
machinectl start machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To login to a container via an emulated serial console (I don&#039;t recommend doing this, since the TTY gets screwed up):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
machinectl login machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Attach to a running container (similar to &amp;lt;code&amp;gt;lxc-attach&amp;lt;/code&amp;gt;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
machinectl shell machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Note&amp;lt;/b&amp;gt;: if you see the error &amp;lt;code&amp;gt;sh: 2: exec: : Permission denied&amp;lt;/code&amp;gt;, append /bin/bash to the end of the command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
machinectl shell machine1 /bin/bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Important&amp;lt;/b&amp;gt;: make sure the container starts up at boot:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
systemctl enable systemd-nspawn@machine1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Multiple network interfaces ==&lt;br /&gt;
Unfortunately systemd does not have a built-in way to create [https://github.com/systemd/systemd/issues/11087 multiple bridged network interfaces]. Thankfully, it&#039;s not too difficult to accomplish this using the &amp;lt;code&amp;gt;VirtualEthernetExtra&amp;lt;/code&amp;gt; option and a systemd drop-in; the idea is to create some extra veth pairs and then manually attach them to the bridge.&lt;br /&gt;
&lt;br /&gt;
Let&#039;s say you have three bridges on the host: br0, br1 and br2, and you want the container to be attached to all three. Make your nspawn file look like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
[Network]&lt;br /&gt;
Bridge=br0&lt;br /&gt;
# These will be manually bridged to the host&lt;br /&gt;
VirtualEthernetExtra=ve-machine1-1:veth1&lt;br /&gt;
VirtualEthernetExtra=ve-machine1-2:veth2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now run &amp;lt;code&amp;gt;systemctl edit systemd-nspawn@machine1&amp;lt;/code&amp;gt; and paste the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[Service]&lt;br /&gt;
ExecStartPost=/usr/sbin/ip link set dev ve-machine1-1 master br1&lt;br /&gt;
ExecStartPost=/usr/sbin/ip link set dev ve-machine1-1 up&lt;br /&gt;
ExecStartPost=/usr/sbin/ip link set dev ve-machine1-2 master br2&lt;br /&gt;
ExecStartPost=/usr/sbin/ip link set dev ve-machine1-2 up&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In the container, there will be three interfaces:&lt;br /&gt;
&lt;br /&gt;
* host0, which is attached to br0 on the host&lt;br /&gt;
* veth1, which is attached to br1 on the host&lt;br /&gt;
* veth2, which is attached to br2 on the host&lt;br /&gt;
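Each extra interface also needs its own .network file inside the container. A sketch for veth1, assuming static addressing (the address is a placeholder):

```ini
# /etc/systemd/network/20-veth1.network (inside the container)
[Match]
Name=veth1

[Network]
# Placeholder address on the br1 subnet
Address=198.51.100.10/24
```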
&lt;br /&gt;
Make sure you update /etc/systemd/network/10-hostbr0.network in the container accordingly.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5346</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5346"/>
		<updated>2025-03-31T19:38:41Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* mannitol */ now has cuda&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our machines are in the E7, F7, G7 and H7 racks (as of Jan. 2022) in the MC 3015 server room. There is an additional rack in the DC 3558 machine room on the third floor. Our office terminals are in the CSC office, in MC 3036/3037.&lt;br /&gt;
&lt;br /&gt;
= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d &#039;&#039;need&#039;&#039; to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a large amount of other services.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Redundant active backup coming soon...)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* LXC virtual machine hosted on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [https://www.apache.org/ Apache]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no cryptocurrency mining).&lt;br /&gt;
&lt;br /&gt;
For computationally-intensive jobs (CPU/memory bound) we recommend running on high-fructose-corn-syrup, carbonated-water, sorbitol, mannitol, or corn-syrup, listed in roughly decreasing order of available resources. For low-intensity interactive jobs, such as IRC clients, we recommend running on neotame. &#039;&#039;&#039;&amp;lt;u&amp;gt;If you have a long-running computationally intensive job, it&#039;s good to [https://en.wikipedia.org/wiki/Nice_(Unix) nice] your process, and possibly let syscom know too.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
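For example, a long-running job can be started at the lowest CPU scheduling priority like this (the command run here is only an illustration; substitute your actual job):

```shell
# Launch a job at niceness 19 so interactive users are unaffected.
# `nice` with no arguments prints the current niceness.
nice -n 19 sh -c 'echo "job running at niceness $(nice)"'
```

An already-running process can be deprioritized with `renice -n 19 -p <pid>`.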
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
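A Kerberos-secured NFS export of this kind would look roughly like the following line in /etc/exports (the client pattern and option set are illustrative, not necessarily corn-syrup's actual configuration):

```
# /etc/exports: export /scratch requiring Kerberos authentication
/scratch  *.csclub.uwaterloo.ca(rw,sec=krb5,no_subtree_check)
```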
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is a large SuperMicro server. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Missing motherboard I/O shield (as of January 2024)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;neotame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
neotame is a SuperMicro server funded by MEF. It is the successor to taurine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We strongly discourage running computationally-intensive jobs&#039;&#039;&#039; on neotame as many users run interactive applications such as IRC clients on it and any significant service degradation will be more likely to affect other users (who will probably notice right away).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* SSH server also listens on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for your convenience.&lt;br /&gt;
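This is convenient behind firewalls that block the standard SSH port. A client-side sketch for ~/.ssh/config (the HostName below is an assumption; use neotame&#039;s real address):

```
Host neotame
    HostName neotame.csclub.uwaterloo.ca
    Port 443
```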
&lt;br /&gt;
== &#039;&#039;sorbitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sorbitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
== &#039;&#039;mannitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mannitol is a SuperMicro server funded by MEF. CUDA is available on this node.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* NVIDIA GeForce RTX 3050 6G&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cyanide&#039;&#039; ==&lt;br /&gt;
cyanide is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)], identical in specification to powernap&lt;br /&gt;
&lt;br /&gt;
=== Spec ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;suika&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Suika is an office terminal built from various components donated by our members.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* AMD Ryzen 7 2700X&lt;br /&gt;
* 2x 8GB DDR4&lt;br /&gt;
* 1x Samsung 256GB SSD&lt;br /&gt;
* AMD Radeon RX 550 4GB&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;powernap&#039;&#039;==&lt;br /&gt;
powernap is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)].&lt;br /&gt;
&lt;br /&gt;
=== Spec ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
=== Speaker === &lt;br /&gt;
powernap has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
* MPD for playing music. Only office/termcom/syscom can log into powernap. Use `ncmpcpp` to control MPD.&lt;br /&gt;
** TODO: this is not the case anymore&lt;br /&gt;
* Bluetooth audio receiver. Only syscom can control bluetooth pairing. Use `bluetoothctl` to control bluetooth.&lt;br /&gt;
&lt;br /&gt;
Music is located in `/music` on the office terminals.&lt;br /&gt;
&lt;br /&gt;
= Progcom Only =&lt;br /&gt;
The Programme Committee has access to a VM on corn-syrup called &#039;progcom&#039;. They have sudo rights in this VM so they may install and run their own software inside it. This VM should only be accessible by members of progcom or syscom.&lt;br /&gt;
&lt;br /&gt;
The CI/CD pipeline for csclub.uwaterloo.ca (Drone) runs on this VM.&lt;br /&gt;
&lt;br /&gt;
= Codey Bot Only =&lt;br /&gt;
Runs on CSC Cloud in a separate CloudStack project: codey-staging, codey-dev, codey-prod.&lt;br /&gt;
&lt;br /&gt;
TODO: migrating from cloudstack&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems are only accessible to members of the [[Systems Committee]] for a variety of reasons, the most common being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;xylitol&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
xylitol is a Dell PowerEdge R815 donated by CSCF. It is primarily a container host for services previously hosted on aspartame and dextrose, including munin, rt, mathnews, auth1, and dns1. It was provisioned with the intent to replace both of those hosts.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Dual AMD Opteron 6176 (2.3 GHz, 48 cores total)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 500GB volume group on RAID1 SSD (xylitol-mirrored)&lt;br /&gt;
* 500ish-GB volume group on RAID10 HDD (xylitol-raidten)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth1&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] primary&lt;br /&gt;
*[[Kerberos]] primary&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chat&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* The Lounge web IRC client (https://chat.csclub.uwaterloo.ca)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phosphoric-acid&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phosphoric-acid is a Dell PowerEdge R815 donated by CSCF and is a clone of xylitol. It may be used to provide redundant cloud services in the future.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* (clone of Xylitol)&lt;br /&gt;
* 4x 2TB Kingston KC3000 (ZFS RAID-Z2; sustains 2 drive failures) (KIN-SKC3000D2048G)&lt;br /&gt;
** Mounted on 2x Startech Dual M.2 PCIE SSD Adapter Cards (STA-PEX8M2E2)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[#caffeine|caffeine]]&lt;br /&gt;
*[[#coffee|coffee]]&lt;br /&gt;
*prometheus&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;coffee&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on phosphoric-acid.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Database#MySQL|MySQL]]&lt;br /&gt;
*[[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;cobalamin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics, on Science Computing Rack 2. NICs are plugged into A1 and A2 on the adjacent rack. Acts as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
TODO: should replace with another Syscom server when Science Computing clears out the rack (ETA before 09/2024)&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
*http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]] (kerberos)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TODO: Mega unreliable.&#039;&#039;&#039; (Goes down once every few weeks... due to power outages in the PHYS server room)&lt;br /&gt;
** It is plugged into a UPS but the UPS has dead batteries.&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation image that includes non-free firmware.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
* Physical access to the PHYS server rooms can be acquired by visiting Science Computing in PHYS 2006.&lt;br /&gt;
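For reference, a static configuration for the csc-data1 range above might look like the following in /etc/network/interfaces (the interface name and the host address 129.97.18.18 are placeholders within the documented 129.97.18.16/29 block):

```
auto eth0
iface eth0 inet static
    address 129.97.18.18
    netmask 255.255.255.240
    gateway 129.97.18.17
```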
&lt;br /&gt;
==&#039;&#039;auth2&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] secondary&lt;br /&gt;
*[[Kerberos]] secondary&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mail&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
TODO: &amp;quot;HA&amp;quot;-ish configuration&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
*[[Webmail]]&lt;br /&gt;
*[[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
TODO: gone??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 2 x Intel Xeon E5-2695 v4 (18 cores, 2.10GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RDIMM RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 9 x 18TB Seagate Exos X18 (8 in ZFS RAID-Z2, 1 hot spare)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
Spec before 2025-03-27:&lt;br /&gt;
* 1 x Intel Xeon E5-2630 v3 (8 cores, 2.40 GHz)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the 4 campus internet links; it is not limited within campus.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mirror]]&lt;br /&gt;
*[[Talks]] mirror&lt;br /&gt;
*[[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;munin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
TODO: Debian 9?&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;yerba-mate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Also used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
* TODO: use as backup server&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;citric-acid&#039;&#039;==&lt;br /&gt;
A Dell PowerEdge R815 (TODO: check model) provided by CSCF to replace [[Machine List#aspartame|aspartame]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 2 x AMD Opteron 6174 (12 cores, 2.20 GHz)&lt;br /&gt;
* 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Services&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Configured for [https://pass.uwaterloo.ca pass.uwaterloo.ca], a university-wide password manager hosted by CSC as a demo service for all Nexus (ADFS) users.&lt;br /&gt;
* [[Plane]], an internal (CSC) project management tool.&lt;br /&gt;
* Minio&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Being repurposed for Termcom training and development.&lt;br /&gt;
* TODO: migrate Vaultwarden (https://pass.csclub.uwaterloo.ca/)??&lt;br /&gt;
* UFW opened-ports: SSH, HTTP/HTTPS&lt;br /&gt;
* Upgraded to Podman 4.x&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to syscom-only access.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chamomile&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x 2.20GHz 12-core processors (AMD Opteron(tm) Processor 6174)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Cloudstack host&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;riboflavin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R515 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 2.6 GHz 8-core processors (AMD Opteron(tm) Processor 4376 HE)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
* 2x 500GB internal SSD&lt;br /&gt;
* 12x Seagate 4TB SSHD&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack block and object storage for csclub.cloud&lt;br /&gt;
* ????&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;guayusa&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
&lt;br /&gt;
Was used to experiment with the following then-new CSC services:&lt;br /&gt;
&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* TODO: ???&lt;br /&gt;
** block1.cloud&lt;br /&gt;
** object1.cloud&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
* TODO: ditch... Currently being used to set up NextCloud.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginkgo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (a 50GbE SFP+ card was also included, which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;biloba&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558. TODO: rack??&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs00&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs00 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* dual SFP connection to core switch&lt;br /&gt;
&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs01&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs01 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
TODO: disconnected??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs10&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs10 is a &#039;&#039;&#039;NetApp FAS8040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* FAS8040 (dual heads)&lt;br /&gt;
** ... TODO&lt;br /&gt;
* 6 DS4324 HDD shelves (24-disks each)&lt;br /&gt;
** 24 x 2TB HDDs (assorted brands/models)&lt;br /&gt;
** Dual IOM3 controllers.&lt;br /&gt;
** Loop 1: bottom 4 shelves&lt;br /&gt;
** Loop 2: top 2 shelves + SSD shelf&lt;br /&gt;
* 1 DS2246 SSD shelf (TODO: right model?)&lt;br /&gt;
** 24 Samsung SM1625 SSDs (MZ-6ER2000/0G3), 200GB (SAS 2, 2.5&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
== ps3 ==&lt;br /&gt;
This is an original &amp;quot;fat&amp;quot; PS3, the model that supported running Linux natively before the feature was removed. The firmware was updated to remove this feature, but it can still be restored via homebrew. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* It&#039;s a PS3.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022-10-24&#039;&#039;&#039; - Thermal paste replaced + firmware updated to latest supported version, also modded.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;binaerpilot&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is attached to corn-syrup in the machine room, but currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 @ 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;anamanaguchi&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 @ 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: May have disappeared at some point&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;digital cutter&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;aspartame&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
aspartame was a taurine clone donated by CSCF. It was once our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which has been temporarily moved to [[#dextrose|dextrose]]. Decommissioned in March 2021 after refusing to boot following a power outage.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;psilodump&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
psilodump was plugged into aspartame. It&#039;s still installed but inaccessible.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phlogiston&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes from before decommissioning ====&lt;br /&gt;
&lt;br /&gt;
* The lxc files are still present and should not be started up, or else the two copies of auth1 will collide.&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;glomag&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Before its decommissioning, glomag hosted [[#caffeine|caffeine]], [[#mail|mail]], and [[#munin|munin]] as [[Virtualization#Linux_Container|Linux containers]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;Lisp machine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
See http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginseng&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the NetApp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [[wikipedia:Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29|RAID 10]] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;calum&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Calum used to be our main server and was named after Calum T Dalek. It was purchased new by the club in 1994.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* SPARCserver 10 (headless SPARCstation 10)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;paza&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233 MHz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;romana&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that had been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts that only support &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to netboot, you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1. When the O2 boots, hit F5 at the boot menu and type bootp():.&lt;br /&gt;
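As a sketch, the PMTU setting can be applied like so (presumably on the machine serving the boot image; the change does not persist across reboots):&lt;br /&gt;

```
# echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc
```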
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;acesulfame-potassium&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;skynet&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
*[http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
*[http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
*[http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;freebsd&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;rainbowdragoneyes&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. It was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;denardo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power machine on/off&lt;br /&gt;
==&#039;&#039;artificial-flavours&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-nitrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems, which are x86/x86-64 and run Debian Linux. After multiple unsuccessful attempts to boot a modern Linux kernel, and possible hardware instability, it was determined that putting more work into this machine was not worth the cost or effort. The system was reclaimed by MFCF, where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;taurine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, taurine has been decommissioned as of Fall 2019.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
*[[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Ran ssh on ports 21, 22, 53, 80, 81, 443, 8000, and 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;dextrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
dextrose was a [[#taurine|taurine]] clone donated by CSCF and was decommissioned in Fall 2019 after being replaced with a more powerful server.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sucrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
sucrose was a [[#taurine|taurine]] clone donated by CSCF. It was decommissioned in Fall 2019 following multiple hardware failures.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;goto80&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Note (2022-10-25): This seems to have gone missing or otherwise left our hands.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
This was a small ARM machine we picked up in order to have hardware similar to that used in the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. It was located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use the -march=armv5te -mtune=arm926ej-s options with GCC.&lt;br /&gt;
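For example, a compile might look like this (a sketch; the arm-linux-gnueabi cross toolchain prefix is an assumption, not something documented here):&lt;br /&gt;

```
$ arm-linux-gnueabi-gcc -march=armv5te -mtune=arm926ej-s -O2 -o hello hello.c
```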
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;nullsleep&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It was decommissioned on 2023-03-20 due to repeated unexpected shutdowns. Replaced by [[#powernap|powernap]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep had the office speakers (a pair of nice studio monitors) connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep ran MPD for playing music. Control of MPD was available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminals.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
bit-shifter was an office terminal, decommissioned April 2023 due to extended age. It was upgraded to the same specs as Strombola at an unknown point in time.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core 2 Quad CPU Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
Strombola was an office terminal named after Gordon Strombola. It was retired in April 2023.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium G4600 2 cores @ 3.6Ghz&lt;br /&gt;
* 8 GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
gwem was an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
&#039;&#039;(Specs are outdated at least as of 2023-05-27.)&#039;&#039;&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
Maltodextrin was an office terminal. It was upgraded in Spring 2014 after an unidentified failure. Not operational (no video output) as of July 2022.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours was an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup holder (the DVD drive has power but is not connected to the motherboard)&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the MC 3015 machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
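As a sketch, if a UPS implements the standard UPS-MIB (RFC 1628), its remaining runtime can be queried with Net-SNMP; the community string (public) is an assumption, and APC units may instead require the vendor PowerNet-MIB:&lt;br /&gt;

```
$ snmpget -v2c -c public mc-3015-e7-ups-1.cs.uwaterloo.ca UPS-MIB::upsEstimatedMinutesRemaining.0
```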
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5345</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5345"/>
		<updated>2025-03-31T19:36:55Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: mirror upgraded on 2025-03-27&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our machines are in the E7, F7, G7 and H7 racks (as of Jan. 2022) in the MC 3015 server room. There is an additional rack in the DC 3558 machine room on the third floor. Our office terminals are in the CSC office, in MC 3036/3037.&lt;br /&gt;
&lt;br /&gt;
= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d *need* to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a large number of other services.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Redundant active backup coming soon...)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* LXC virtual machine hosted on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [https://www.apache.org/ Apache]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no cryptocurrency mining).&lt;br /&gt;
&lt;br /&gt;
For computationally-intensive jobs (CPU/memory bound) we recommend running on high-fructose-corn-syrup, carbonated-water, sorbitol, mannitol, or corn-syrup, listed in roughly decreasing order of available resources. For low-intensity interactive jobs, such as IRC clients, we recommend running on neotame. &#039;&#039;&#039;&amp;lt;u&amp;gt;If you have a long-running computationally intensive job, it&#039;s good to nice[https://en.wikipedia.org/wiki/Nice_(Unix)] your process, and possibly let syscom know too.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
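As a minimal sketch of the niceness advice above (the job name is a placeholder, not a real script on our machines):&lt;br /&gt;

```shell
# Run a long job at the lowest CPU priority (niceness 19) so that
# interactive users on the shared machine are not slowed down.
# The sh -c '...' command stands in for a real workload, e.g.:
#   nice -n 19 ./long_job.sh &
nice -n 19 sh -c 'echo "running at niceness $(nice)"'
```

GNU nice, invoked with no command, prints the current niceness, so the inner $(nice) reports the priority the job actually received.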
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is a large SuperMicro server. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Missing mobo IO shield (as of January 2024)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;neotame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
neotame is a SuperMicro server funded by MEF. It is the successor to taurine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We strongly discourage running computationally-intensive jobs&#039;&#039;&#039; on neotame: many users run interactive applications such as IRC clients on it, and any significant service degradation is likely to affect those users (who will probably notice right away).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* SSH server also listens on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for your convenience.&lt;br /&gt;
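The alternate ports are useful behind restrictive firewalls; for example, to connect over the HTTPS port (hostname assumed):&lt;br /&gt;

```
$ ssh -p 443 youruserid@neotame.csclub.uwaterloo.ca
```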
&lt;br /&gt;
== &#039;&#039;sorbitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sorbitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
== &#039;&#039;mannitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mannitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cyanide&#039;&#039; ==&lt;br /&gt;
cyanide is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)], identical in specification to powernap.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;suika&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Suika is an office terminal built from various components donated by our members.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* AMD Ryzen 7 2700X&lt;br /&gt;
* 2x 8GB DDR4&lt;br /&gt;
* 1x Samsung 256GB SSD&lt;br /&gt;
* AMD Radeon RX 550 4GB&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;powernap&#039;&#039;==&lt;br /&gt;
powernap is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
=== Speaker === &lt;br /&gt;
powernap has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
* MPD for playing music. Only office/termcom/syscom can log into powernap. Use `ncmpcpp` to control MPD.&lt;br /&gt;
** TODO: this is not the case anymore&lt;br /&gt;
* Bluetooth audio receiver. Only syscom can control bluetooth pairing. Use `bluetoothctl` to control bluetooth.&lt;br /&gt;
&lt;br /&gt;
Music is located in `/music` on the office terminals.&lt;br /&gt;
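A typical bluetoothctl pairing session looks roughly like this (a sketch; the device MAC address is a placeholder, and only syscom can do this):&lt;br /&gt;

```
$ bluetoothctl
[bluetooth]# power on
[bluetooth]# scan on
[bluetooth]# pair AA:BB:CC:DD:EE:FF
[bluetooth]# connect AA:BB:CC:DD:EE:FF
[bluetooth]# quit
```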
&lt;br /&gt;
= Progcom Only =&lt;br /&gt;
The Programme Committee has access to a VM on corn-syrup called &#039;progcom&#039;. They have sudo rights in this VM so they may install and run their own software inside it. This VM should only be accessible by members of progcom or syscom.&lt;br /&gt;
&lt;br /&gt;
The CI/CD pipeline (Drone) for csclub.uwaterloo.ca runs on this VM.&lt;br /&gt;
&lt;br /&gt;
= Codey Bot Only =&lt;br /&gt;
Runs on CSC Cloud in a separate CloudStack project, as codey-staging, codey-dev, and codey-prod.&lt;br /&gt;
&lt;br /&gt;
TODO: migrating from cloudstack&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems are only accessible to members of the [[Systems Committee]] for a variety of reasons, the most common of which is that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;xylitol&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
xylitol is a Dell PowerEdge R815 donated by CSCF. It is primarily a container host for services previously hosted on aspartame and dextrose, including munin, rt, mathnews, auth1, and dns1. It was provisioned with the intent to replace both of those hosts.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Dual AMD Opteron 6176 (2.3 GHz, 48 cores total)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 500GB volume group on RAID1 SSD (xylitol-mirrored)&lt;br /&gt;
* 500ish-GB volume group on RAID10 HDD (xylitol-raidten)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth1&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] primary&lt;br /&gt;
*[[Kerberos]] primary&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chat&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* The Lounge web IRC client (https://chat.csclub.uwaterloo.ca)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phosphoric-acid&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phosphoric-acid is a Dell PowerEdge R815 donated by CSCF and is a clone of xylitol. It may be used to provide redundant cloud services in the future.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* (clone of Xylitol)&lt;br /&gt;
* 4x 2TB Kingston KC3000 in ZFS RAID-Z2 (sustains 2 drive failures) (KIN-SKC3000D2048G)&lt;br /&gt;
** Mounted on 2x Startech Dual M.2 PCIE SSD Adapter Cards (STA-PEX8M2E2)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[#caffeine|caffeine]]&lt;br /&gt;
*[[#coffee|coffee]]&lt;br /&gt;
*prometheus&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;coffee&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on phosphoric-acid.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Database#MySQL|MySQL]]&lt;br /&gt;
*[[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;cobalamin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics, on Science Computing Rack 2. NICs are plugged into A1 and A2 on the adjacent rack. Acts as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
TODO: should replace with another Syscom server when Science Computing clears out the rack (ETA before 09/2024)&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
*http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]] (kerberos)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TODO: Mega unreliable.&#039;&#039;&#039; (Goes down once every few weeks... due to power outages in the PHYS server room)&lt;br /&gt;
** It is plugged into a UPS but the UPS has dead batteries.&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc with non-free.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
* Physical access to the PHYS server rooms can be acquired by visiting Science Computing in PHYS 2006.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth2&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] secondary&lt;br /&gt;
*[[Kerberos]] secondary&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mail&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
TODO: &amp;quot;HA&amp;quot;-ish configuration&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
*[[Webmail]]&lt;br /&gt;
*[[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
TODO: gone??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 2 x Intel Xeon E5-2695 v4 (18 cores, 2.10GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RDIMM RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 9 x 18TB Seagate Exos X18 (8 in ZFS RAID-Z2, 1 hot spare)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
Spec before 2025-03-27:&lt;br /&gt;
* 1 x Intel Xeon E5-2630 v3 (8 cores, 2.40 GHz)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the 4 campus internet links, but is not limited within campus.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mirror]]&lt;br /&gt;
*[[Talks]] mirror&lt;br /&gt;
*[[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;munin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
TODO: Debian 9?&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;yerba-mate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Also used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
* TODO: use as backup server&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;citric-acid&#039;&#039;==&lt;br /&gt;
A Dell PowerEdge R815 (TODO: check model) provided by CSCF to replace [[Machine List#aspartame|aspartame]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 2 x AMD Opteron 6174 (12 cores, 2.20 GHz)&lt;br /&gt;
* 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Services&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Configured for [https://pass.uwaterloo.ca pass.uwaterloo.ca], a university-wide password manager hosted by CSC as a demo service for all Nexus (ADFS) users.&lt;br /&gt;
* [[Plane]], an internal (CSC) project management tool.&lt;br /&gt;
* Minio&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Being repurposed for Termcom training and development.&lt;br /&gt;
* TODO: migrate Vaultwarden (https://pass.csclub.uwaterloo.ca/)??&lt;br /&gt;
* UFW open ports: SSH, HTTP/HTTPS&lt;br /&gt;
* Upgraded to Podman 4.x&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to Syscom only access.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chamomile&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x 2.20GHz 12-core processors (AMD Opteron(tm) Processor 6174)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Cloudstack host&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;riboflavin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R515 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 2.6 GHz 8-core processors (AMD Opteron(tm) Processor 4376 HE)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
* 2x 500GB internal SSD&lt;br /&gt;
* 12x Seagate 4TB SSHD&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack block and object storage for csclub.cloud&lt;br /&gt;
* ????&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;guayusa&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
&lt;br /&gt;
Was used to experiment with the following then-new CSC services:&lt;br /&gt;
&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* TODO: ???&lt;br /&gt;
** block1.cloud&lt;br /&gt;
** object1.cloud&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
* TODO: ditch... Currently being used to set up NextCloud.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginkgo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (also included a 50GbE SFP+ card which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;biloba&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558. TODO: rack??&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs00&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs00 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* dual SFP connection to core switch&lt;br /&gt;
&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs01&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs01 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
TODO: disconnected??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs10&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs10 is a &#039;&#039;&#039;NetApp FAS8040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* FAS8040 (dual heads)&lt;br /&gt;
** ... TODO&lt;br /&gt;
* 6 DS4324 HDD shelves (24-disks each)&lt;br /&gt;
** 24 x 2TB HDDs (assorted brands/models)&lt;br /&gt;
** Dual IOM3 controllers.&lt;br /&gt;
** Loop 1: bottom 4 shelves&lt;br /&gt;
** Loop 2: top 2 shelves + SSD shelf&lt;br /&gt;
* 1 DS2246 SSD shelf (TODO: right model?)&lt;br /&gt;
** 24 Samsung SM1625 SSDs (MZ-6ER2000/0G3), 200GB (SAS 2, 2.5&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ps3&#039;&#039;==&lt;br /&gt;
This is just a very wide PS3, the model that supported running Linux natively before the feature was removed. The firmware was updated to remove this feature; however, Linux can still be run via homebrew.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* It&#039;s a PS3.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022-10-24&#039;&#039;&#039; - Thermal paste replaced + firmware updated to latest supported version, also modded.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;binaerpilot&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is currently attached to corn-syrup in the machine room and even more currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;anamanaguchi&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: May have disappeared at some point&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;digital cutter&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;aspartame&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
aspartame was a taurine clone donated by CSCF. It was once our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which was temporarily moved to [[#dextrose|dextrose]]. Decommissioned in March 2021 after refusing to boot following a power outage.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;psilodump&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
psilodump was plugged into aspartame. It&#039;s still installed but inaccessible.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phlogiston&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes from before decommissioning ====&lt;br /&gt;
&lt;br /&gt;
* The lxc files are still present and should not be started up, or else the two copies of auth1 will collide.&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;glomag&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Before its decommissioning, glomag hosted [[#caffeine|caffeine]], [[#mail|mail]], and [[#munin|munin]] as [[Virtualization#Linux_Container|Linux containers]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;Lisp machine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
See http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginseng&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the netapp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [[wikipedia:Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29|RAID 10]] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;calum&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Calum used to be our main server and was named after Calum T Dalek.  Purchased new by the club in 1994. &lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* SPARCserver 10 (headless SPARCstation 10)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;paza&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233Mhz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;romana&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that has been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts only supporting &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1. When the O2 boots, hit F5 at the boot menu and type bootp():.&lt;br /&gt;
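&lt;br /&gt;
The procedure above can be sketched as follows (a minimal sketch, assuming the boot image is served from a Linux machine with a bootp/tftp server already configured):&lt;br /&gt;

```shell
# On the Linux boot server: disable path MTU discovery, which the
# O2's PROM network-boot client reportedly cannot cope with.
echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc
# equivalently: sysctl -w net.ipv4.ip_no_pmtu_disc=1

# On the O2 console: press F5 at the boot menu, then enter:
#   bootp():
```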
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;acesulfame-potassium&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;skynet&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
*[http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
*[http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
*[http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;freebsd&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;rainbowdragoneyes&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. It was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;denardo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power machine on/off&lt;br /&gt;
==&#039;&#039;artificial-flavours&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-nitrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems, which are x86/x86-64 and Linux/Debian. After multiple unsuccessful attempts to boot a modern Linux kernel, and possible hardware instability, it was determined that putting more work into this machine was not worth the cost or effort. The system was reclaimed by MFCF, where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;taurine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, taurine has been decommissioned as of Fall 2019.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
*[[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21, 22, 53, 80, 81, 443, 8000, and 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;dextrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
dextrose was a [[#taurine|taurine]] clone donated by CSCF and was decommissioned in Fall 2019 after being replaced with a more powerful server.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sucrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
sucrose was a [[#taurine|taurine]] clone donated by CSCF. It was decommissioned in Fall 2019 following multiple hardware failures.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;goto80&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Note (2022-10-25): This seems to have gone missing or otherwise left our hands.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
This was a small ARM machine we picked up in order to have hardware similar to the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. It was located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use -march=armv5te -mtune=arm926ej-s options to GCC.&lt;br /&gt;
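&lt;br /&gt;
For example, a cross-compile invocation might look like this (a sketch; the arm-linux-gnueabi- toolchain prefix and file names are placeholder assumptions, not something the original notes specify):&lt;br /&gt;

```shell
# Target the TS-7800's ARM926EJ-S (ARMv5TE) core explicitly.
arm-linux-gnueabi-gcc -march=armv5te -mtune=arm926ej-s \
    -O2 -o hello hello.c
```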
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;nullsleep&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It was decommissioned on 2023-03-20 due to repeated unexpected shutdowns. Replaced by [[#powernap|powernap]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep runs MPD for playing music. Control of MPD is available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminal.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
bit-shifter was an office terminal, decommissioned in April 2023 due to its age. It was upgraded to the same specs as Strombola at an unknown point in time.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core 2 Quad CPU Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
Strombola was an office terminal named after Gordon Strombola. It was retired in April 2023.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium G4600, 2 cores @ 3.6 GHz&lt;br /&gt;
* 8 GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
gwem was an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin was an office terminal. It was upgraded in Spring 2014 after an unidentified failure. Not operational (no video output) as of July 2022.&lt;br /&gt;
&lt;br /&gt;
(specs are outdated at least as of 2023-05-27)&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours is an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (the DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the MC 3015 machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
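&lt;br /&gt;
Polling a UPS by hand might look like this (a sketch; the community string is a placeholder, and 1.3.6.1.2.1.33 is the standard RFC 1628 UPS-MIB subtree -- whether these particular management cards expose it, rather than a vendor MIB such as APC&#039;s PowerNet, is an assumption):&lt;br /&gt;

```shell
# Walk the standard UPS-MIB subtree on one UPS management card.
snmpwalk -v2c -c COMMUNITY mc-3015-f7-ups-1.cs.uwaterloo.ca 1.3.6.1.2.1.33

# e.g. remaining battery charge (upsEstimatedChargeRemaining, in percent):
snmpget -v2c -c COMMUNITY mc-3015-f7-ups-1.cs.uwaterloo.ca \
    1.3.6.1.2.1.33.1.2.4.0
```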
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5341</id>
		<title>Service List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5341"/>
		<updated>2025-03-10T17:17:29Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* BigBlueButton */ update container name&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A list of services we run and when they were last updated.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
=== LDAP/Kerberos ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[LDAP]] and [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Member information storage and authentication backend.&lt;br /&gt;
* Location: &#039;&#039;auth1&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
SSO provider.&lt;br /&gt;
* Location: somewhere on k8s&lt;br /&gt;
* Last updated: Unknown, before Spring 2022&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mail]]&lt;br /&gt;
&lt;br /&gt;
Postfix/Dovecot mail server&lt;br /&gt;
* Location: &#039;&#039;mail&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Fall 2024&lt;br /&gt;
&lt;br /&gt;
=== mailman3 ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mailing Lists]]&lt;br /&gt;
&lt;br /&gt;
Mailing list handler&lt;br /&gt;
* Location: &#039;&#039;mailman3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Fall 2024, to mailman 3.10&lt;br /&gt;
&lt;br /&gt;
=== prometheus ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Observability]]&lt;br /&gt;
&lt;br /&gt;
Also hosts ClickHouse and vector&lt;br /&gt;
* Location: &#039;&#039;qemu-2-prometheus&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
Hosted on [[New NetApp]]&lt;br /&gt;
* Location: [[New NetApp]] on MC CSC rack&lt;br /&gt;
* Last update: 2017, pending &amp;quot;New New NetApp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Ceph ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;:  [[Ceph]]&lt;br /&gt;
&lt;br /&gt;
Storage backend for CSCloud.&lt;br /&gt;
* Location: 3 node cluster on riboflavin, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General services ==&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mirror]]&lt;br /&gt;
&lt;br /&gt;
Our flagship service.&lt;br /&gt;
* Location: [[Machine List#potassium-benzoate|potassium-benzoate]]&lt;br /&gt;
* Last update: Constantly by syscom&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Main Page#CSC Cloud|CSC Cloud]]&lt;br /&gt;
&lt;br /&gt;
Another flagship service.&lt;br /&gt;
* Location: 3 node cluster on chamomile, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== VaultWarden ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Vaultwarden]]&lt;br /&gt;
&lt;br /&gt;
Bitwarden-compatible password manager.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== BigBlueButton ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[BigBlueButton]]&lt;br /&gt;
&lt;br /&gt;
Online conferencing.&lt;br /&gt;
* Location: &#039;&#039;bigbluebutton3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Plane ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Plane]]&lt;br /&gt;
&lt;br /&gt;
JIRA, but self-hosted.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== IRC webchat (The Lounge) ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[How to IRC#The Lounge]]&lt;br /&gt;
&lt;br /&gt;
* Location: &#039;&#039;chat&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Mattermost ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MatterMost]]&lt;br /&gt;
* Location: &#039;&#039;mattermost&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Nextcloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Nextcloud]]&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s file and calendar server.&lt;br /&gt;
* Location: &#039;&#039;nextcloud&#039;&#039; container on [[Machine List#guayusa|guayusa]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Web infra ==&lt;br /&gt;
=== Member/Club Hosting ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Web Hosting]] and [[Club Hosting]]&lt;br /&gt;
&lt;br /&gt;
Apache and PHP. Your regular, old-school hosting service.&lt;br /&gt;
* Location: &#039;&#039;caffeine&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== MySQL/PostgreSQL ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MySQL]] and [[PostgreSQL]]&lt;br /&gt;
&lt;br /&gt;
Databases for hosting.&lt;br /&gt;
* Location: &#039;&#039;coffee&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, still on PostgreSQL 15&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5340</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5340"/>
		<updated>2025-03-10T15:29:08Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Software Infrastructure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music at the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [[PostgreSQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Committees Documentation ==&lt;br /&gt;
=== Club Operation ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[How to (Extra) Ban Someone]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos|Password Reset]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
To see a complete list of services, where to find them and when they are updated, see [[Service List]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[MatterMost]]&lt;br /&gt;
* [[Load-balancer]]&lt;br /&gt;
* [[Plane]]&lt;br /&gt;
* [[RT]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5339</id>
		<title>Service List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Service_List&amp;diff=5339"/>
		<updated>2025-03-10T15:28:24Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Created page with &amp;quot;A list of services we run and when they&amp;#039;ve been last updated.  == Infrastructure == === LDAP/Kerberos === &amp;#039;&amp;#039;See&amp;#039;&amp;#039;: LDAP and Kerberos  Member information storage and authentication backend. * Location: &amp;#039;&amp;#039;auth1&amp;#039;&amp;#039; container on xylitol * Last updated: Unknown, updated alongside debian 12  === Keycloak === &amp;#039;&amp;#039;See&amp;#039;&amp;#039;: Keycloak  SSO provider. * Location: somewhere on k8s * Last updated: Unknown, before Spring 2022  === Mail === &amp;#039;&amp;#039;See&amp;#039;&amp;#039;: Mail...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A list of services we run and when they&#039;ve been last updated.&lt;br /&gt;
&lt;br /&gt;
== Infrastructure ==&lt;br /&gt;
=== LDAP/Kerberos ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[LDAP]] and [[Kerberos]]&lt;br /&gt;
&lt;br /&gt;
Member information storage and authentication backend.&lt;br /&gt;
* Location: &#039;&#039;auth1&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== Keycloak ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Keycloak]]&lt;br /&gt;
&lt;br /&gt;
SSO provider.&lt;br /&gt;
* Location: somewhere on k8s&lt;br /&gt;
* Last updated: Unknown, before Spring 2022&lt;br /&gt;
&lt;br /&gt;
=== Mail ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mail]]&lt;br /&gt;
&lt;br /&gt;
Postfix/Dovecot mail server.&lt;br /&gt;
* Location: &#039;&#039;mail&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last updated: Fall 2024&lt;br /&gt;
&lt;br /&gt;
=== mailman3 ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mailing Lists]]&lt;br /&gt;
&lt;br /&gt;
Mailing list handler.&lt;br /&gt;
* Location: &#039;&#039;mailman3&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Fall 2024, to mailman 3.10&lt;br /&gt;
&lt;br /&gt;
=== prometheus ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Observability]]&lt;br /&gt;
&lt;br /&gt;
Also hosts ClickHouse and vector&lt;br /&gt;
* Location: &#039;&#039;qemu-2-prometheus&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, updated alongside Debian 12&lt;br /&gt;
&lt;br /&gt;
=== NFS ===&lt;br /&gt;
Hosted on [[New NetApp]]&lt;br /&gt;
* Location: [[New NetApp]] on MC CSC rack&lt;br /&gt;
* Last update: 2017, pending &amp;quot;New New NetApp&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Ceph ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;:  [[Ceph]]&lt;br /&gt;
&lt;br /&gt;
Storage backend for CSCloud.&lt;br /&gt;
* Location: 3 node cluster on riboflavin, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== General services ==&lt;br /&gt;
=== Mirror ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Mirror]]&lt;br /&gt;
&lt;br /&gt;
Our flagship service.&lt;br /&gt;
* Location: [[Machine List#potassium-benzoate|potassium-benzoate]]&lt;br /&gt;
* Last update: Constantly by syscom&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Main Page#CSC Cloud|CSC Cloud]]&lt;br /&gt;
&lt;br /&gt;
Another flagship service.&lt;br /&gt;
* Location: 3 node cluster on chamomile, ginkgo and biloba&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== VaultWarden ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Vaultwarden]]&lt;br /&gt;
&lt;br /&gt;
Bitwarden-compatible password manager.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== BigBlueButton ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[BigBlueButton]]&lt;br /&gt;
&lt;br /&gt;
Online conferencing.&lt;br /&gt;
* Location: &#039;&#039;BigBlueButton&#039;&#039; nspawn container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== Plane ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Plane]]&lt;br /&gt;
&lt;br /&gt;
JIRA, but self-hosted.&lt;br /&gt;
* Location: &#039;&#039;Where?&#039;&#039;&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== IRC webchat (The Lounge) ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[How to IRC#The Lounge]]&lt;br /&gt;
&lt;br /&gt;
* Location: &#039;&#039;chat&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Mattermost ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MatterMost]]&lt;br /&gt;
* Location: &#039;&#039;mattermost&#039;&#039; container on [[Machine List#xylitol|xylitol]]&lt;br /&gt;
* Last update: Unknown&lt;br /&gt;
&lt;br /&gt;
=== Nextcloud ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Nextcloud]]&lt;br /&gt;
&lt;br /&gt;
CSC&#039;s file and calendar server.&lt;br /&gt;
* Location: &#039;&#039;nextcloud&#039;&#039; container on [[Machine List#guayusa|guayusa]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Web infra ==&lt;br /&gt;
=== Member/Club Hosting ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[Web Hosting]] and [[Club Hosting]]&lt;br /&gt;
&lt;br /&gt;
Apache and PHP. Your regular, old-school hosting service.&lt;br /&gt;
* Location: &#039;&#039;caffeine&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Winter 2025&lt;br /&gt;
&lt;br /&gt;
=== MySQL/PostgreSQL ===&lt;br /&gt;
&#039;&#039;See&#039;&#039;: [[MySQL]] and [[PostgreSQL]]&lt;br /&gt;
&lt;br /&gt;
Databases for hosting.&lt;br /&gt;
* Location: &#039;&#039;coffee&#039;&#039; VM on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
* Last update: Unknown, still on PostgreSQL 15&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Meeting:Meetings&amp;diff=5338</id>
		<title>Meeting:Meetings</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Meeting:Meetings&amp;diff=5338"/>
		<updated>2025-03-09T23:37:20Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Minutes of Meetings (Executive)==&lt;br /&gt;
* [[Tuesday 16 September 2008]]&lt;br /&gt;
&lt;br /&gt;
==General Meetings==&lt;br /&gt;
&lt;br /&gt;
* [[Meetings/2025-01-13|Monday 13 January 2025]]&lt;br /&gt;
* [[Meetings/2024-01-11|Thursday 11 January 2024]]&lt;br /&gt;
* [[Tuesday 12 September 2023]]&lt;br /&gt;
* [[Wednesday 11 May 2023]]&lt;br /&gt;
* [[Meetings/2023-01-12|Thursday 12 January 2023]]&lt;br /&gt;
* [[Meetings/2022-09-12|Monday 12 September 2022]]&lt;br /&gt;
* [[Meetings/2022-05-05|Thursday 5 May 2022]]&lt;br /&gt;
* [[Thursday 2 October 2008]]&lt;br /&gt;
* [[Friday 19 October 2007]]&lt;br /&gt;
&lt;br /&gt;
==Weekly All-Hands Meetings==&lt;br /&gt;
* [[Monday 5 December 2022]]&lt;br /&gt;
* [[Monday 28 November 2022]]&lt;br /&gt;
* [[Monday 21 November 2022]]&lt;br /&gt;
* [[Monday 14 November 2022]]&lt;br /&gt;
* [[Monday 7 November 2022]]&lt;br /&gt;
* [[Monday 31 October 2022]]&lt;br /&gt;
* [[Monday 24 October 2022]]&lt;br /&gt;
* [[Monday 17 October 2022]]&lt;br /&gt;
* [[Monday 3 October 2022]]&lt;br /&gt;
* [[Sunday 21 March 2021]]&lt;br /&gt;
* [[Sunday 14 March 2021]]&lt;br /&gt;
* [[Sunday 7 March 2021]]&lt;br /&gt;
* [[Sunday 28 February 2021]]&lt;br /&gt;
&lt;br /&gt;
== Termcom/Syscom Meetings ==&lt;br /&gt;
* [[09 Mar 2025 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 29 July 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 24 June 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 10 June 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 27 May 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 13 May 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 25 March 2023 Termcom Meeting]]&lt;br /&gt;
* [[Saturday 11 February 2023]]&lt;br /&gt;
&lt;br /&gt;
== Source ==&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;CSC All-hands Meeting Notes - Fall 2022&#039;&#039;&#039;: https://docs.google.com/document/d/1Tl_E5nM3bguw9if9O2Woc4jNmeZxG7QVel5fzHdZgfQ/edit#&lt;br /&gt;
&lt;br /&gt;
[[Category:Meetings]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Meeting:09_Mar_2025_Termcom_Meeting&amp;diff=5337</id>
		<title>Meeting:09 Mar 2025 Termcom Meeting</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Meeting:09_Mar_2025_Termcom_Meeting&amp;diff=5337"/>
		<updated>2025-03-09T23:36:35Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: Created page with &amp;quot;== pyceo == * TODO: count members accurately == Mirror == * TODO: hardware maintenance == Mail == * TODO: moderation, probably through discord == Matrix == * TODO: create and bridge IRC/discord * HOLD: stuck by storage == Hosting == * TODO: high-availability web server * TODO: high-availability vaultwarden == CSCloud == * TODO: port forwarding with pyceo == New services == * ?: https://github.com/calcom/cal.com == Keycloak == * TODO: Upgrade (relatively easy, k8s magic)...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== pyceo ==&lt;br /&gt;
* TODO: count members accurately&lt;br /&gt;
== Mirror ==&lt;br /&gt;
* TODO: hardware maintenance&lt;br /&gt;
== Mail ==&lt;br /&gt;
* TODO: moderation, probably through discord&lt;br /&gt;
== Matrix ==&lt;br /&gt;
* TODO: create and bridge IRC/discord&lt;br /&gt;
* HOLD: stuck by storage&lt;br /&gt;
== Hosting ==&lt;br /&gt;
* TODO: high-availability web server&lt;br /&gt;
* TODO: high-availability vaultwarden&lt;br /&gt;
== CSCloud ==&lt;br /&gt;
* TODO: port forwarding with pyceo&lt;br /&gt;
== New services ==&lt;br /&gt;
* ?: https://github.com/calcom/cal.com&lt;br /&gt;
== Keycloak ==&lt;br /&gt;
* TODO: Upgrade (relatively easy, k8s magic) and reintegrate with UW&#039;s ADFS system (hard!)&lt;br /&gt;
== Fundings ==&lt;br /&gt;
* MEF Spring 2023 (new cscloud server): expired, reapply&lt;br /&gt;
* SLEF (when?): new server&lt;br /&gt;
* MEF Fall 2024: server parts&lt;br /&gt;
== Community ==&lt;br /&gt;
* TODO: ask for services people want us to host&lt;br /&gt;
* TODO: marketing for CSC for students outside math/cs&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5336</id>
		<title>Music</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5336"/>
		<updated>2025-03-05T16:46:16Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add `discoverable off` as standard procedure&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Music is run off &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt;, since that&#039;s the computer with the speakers attached. &lt;br /&gt;
&lt;br /&gt;
Office staff/termcom/syscom permissions are required to play music in the office.&lt;br /&gt;
&lt;br /&gt;
We also have MPD available, however it has rarely been used since 2022. &#039;&#039;kids these days&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==How to play music==&lt;br /&gt;
&lt;br /&gt;
# Run &amp;lt;code&amp;gt;ssh [watid]@powernap.csclub.uwaterloo.ca&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;pairable on&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;discoverable on&amp;lt;/code&amp;gt; into the console to enable pairing and discovery&lt;br /&gt;
# Connect to &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt; from your device&lt;br /&gt;
# Respond yes to all the prompts on the terminal from &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;trust [device_mac]&amp;lt;/code&amp;gt; to automatically allow audio from your device next time&lt;br /&gt;
# You should be able to play audio like a normal audio device now&lt;br /&gt;
# After you are done pairing, type &amp;lt;code&amp;gt;discoverable off&amp;lt;/code&amp;gt; to prevent distortion. See the troubleshooting section for details&lt;br /&gt;
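&lt;br /&gt;
A typical pairing session inside &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt; following the steps above looks roughly like this (the device name and MAC address are placeholders, not a real office device):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[bluetooth]# pairable on&lt;br /&gt;
[bluetooth]# discoverable on&lt;br /&gt;
[NEW] Device AA:BB:CC:DD:EE:FF Example Phone&lt;br /&gt;
[bluetooth]# trust AA:BB:CC:DD:EE:FF&lt;br /&gt;
[bluetooth]# discoverable off&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;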
&lt;br /&gt;
==How to control audio server==&lt;br /&gt;
powernap uses PipeWire and can talk to PulseAudio clients. To set the volume, mute an audio stream, or change output settings, run the following from an office terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PULSE_SERVER=powernap.csclub.uwaterloo.ca pavucontrol&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you&#039;re using a Linux laptop, this should work too.&lt;br /&gt;
&lt;br /&gt;
==MPD Controls==&lt;br /&gt;
To view the keybindings of ncmpcpp, press F1 while it&#039;s running. &lt;br /&gt;
The number keys switch between tabs in it. &lt;br /&gt;
&lt;br /&gt;
*1 is current playlist&lt;br /&gt;
*2 is browsing files&lt;br /&gt;
*3 is search&lt;br /&gt;
*4 is browsing the media library&lt;br /&gt;
*5 is browsing saved playlists, etc.&lt;br /&gt;
&lt;br /&gt;
You can add your own &#039;&#039;absolutely legitimately obtained&#039;&#039; music by copying it to &amp;lt;code&amp;gt;/music&amp;lt;/code&amp;gt; on powernap. Then type &amp;lt;code&amp;gt;u&amp;lt;/code&amp;gt; in ncmpcpp to refresh the database.&lt;br /&gt;
&lt;br /&gt;
==Termcom Info==&lt;br /&gt;
&lt;br /&gt;
*We require https://github.com/hrkfdn/mpdas to get scrobbling working, as the &amp;quot;official&amp;quot; last.fm integration was removed from mpd in 2013.&lt;br /&gt;
**n.b. https://github.com/hrkfdn/mpdas/issues/58&lt;br /&gt;
*To control the mixer on other terminals, use &amp;lt;code&amp;gt;PULSE_SERVER=nullsleep pavucontrol&amp;lt;/code&amp;gt;&lt;br /&gt;
*The official last.fm account credentials are stored in the exec password spot :)&lt;br /&gt;
&lt;br /&gt;
==Troubleshooting Playback==&lt;br /&gt;
*Sometimes after connecting to powernap there is severe distortion and audio playback is unbearable.&lt;br /&gt;
*To fix this, type &amp;lt;code&amp;gt;discoverable off&amp;lt;/code&amp;gt; in the bluetoothctl environment (after a successful connection, of course).&lt;br /&gt;
*The current consensus is that this happens because there are so many Bluetooth devices in MC that powernap gets overwhelmed.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5329</id>
		<title>Git Hosting</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5329"/>
		<updated>2025-02-13T15:54:45Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have a [https://git.csclub.uwaterloo.ca Gitea] instance running on [[Machine List#caffeine|caffeine]]. You can sign in via LDAP to the web interface. Projects used by CSC as a whole are owned by the [https://git.csclub.uwaterloo.ca/public public] organization, except for website-committee related repos, which are owned by the [https://git.csclub.uwaterloo.ca/www www] org.&lt;br /&gt;
&lt;br /&gt;
== Installation Details ==&lt;br /&gt;
&amp;lt;code&amp;gt;/etc/gitea&amp;lt;/code&amp;gt; on caffeine contains the configs for Gitea. It&#039;s installed as a Debian package, with additional files in &amp;lt;code&amp;gt;/var/lib/gitea/&amp;lt;/code&amp;gt; and a systemd service at &amp;lt;code&amp;gt;/lib/systemd/system/gitea.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There is a custom locale (used to define CSC-custom strings in some pages) at &amp;lt;code&amp;gt;/var/lib/gitea/custom/options/locale/locale_en-US.ini&amp;lt;/code&amp;gt; that may need to be updated when the Gitea APT package is updated. To update it, run the &amp;lt;code&amp;gt;update_custom_locale.sh&amp;lt;/code&amp;gt; script in that directory (as root).&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;quot;It&#039;s basically GitHub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
- raymo&lt;br /&gt;
&lt;br /&gt;
=== SSH keys ===&lt;br /&gt;
It is recommended to set up [https://git.csclub.uwaterloo.ca/user/settings/keys SSH keys] so that you do not have to enter your password each time you push to a repo. Once you have uploaded your public key, add the following to your ~/.ssh/config:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host csclub.uwaterloo.ca&lt;br /&gt;
        HostName csclub.uwaterloo.ca&lt;br /&gt;
        IdentityFile ~/.ssh/id_rsa&lt;br /&gt;
        User git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Replace ~/.ssh/id_rsa with the path to your private SSH key.) Now you should be able to clone, push and pull over SSH.&lt;br /&gt;
&lt;br /&gt;
== Continuous Integration ==&lt;br /&gt;
We are running a CI server at https://ci.csclub.uwaterloo.ca. It uses OAuth via Gitea for logins, so you need to have logged in to Gitea first. See https://docs.drone.io/ for documentation. All you have to do is create a .drone.yml file in your repo, then enable CI on the repo from the CSC Drone website. There is an example [https://git.csclub.uwaterloo.ca/merenber/drone-test here].&lt;br /&gt;
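&lt;br /&gt;
As a rough sketch (the step name and image are illustrative examples, not a CSC standard), a minimal &amp;lt;code&amp;gt;.drone.yml&amp;lt;/code&amp;gt; could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
kind: pipeline&lt;br /&gt;
type: docker&lt;br /&gt;
name: default&lt;br /&gt;
&lt;br /&gt;
steps:&lt;br /&gt;
  - name: test&lt;br /&gt;
    image: alpine&lt;br /&gt;
    commands:&lt;br /&gt;
      - echo &amp;quot;CI works&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;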
&lt;br /&gt;
== Pushing and pulling from the filesystem ==&lt;br /&gt;
(for syscom only)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you need to keep the ability to push/pull from the filesystem, in addition to Gitea, you will need to take the following steps.&lt;br /&gt;
In this example, we are migrating a repo called &#039;public/repo.git&#039;, which is a folder under /srv/git on caffeine (which is a symlink to /users/git).&lt;br /&gt;
The way we&#039;re doing this right now is kind of hacky, but it works:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Clone the original repo locally: &amp;lt;code&amp;gt;git clone /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Delete the old repo (from phosphoric-acid, which has no_root_squash): &amp;lt;code&amp;gt;rm -rf /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Create a new repo with the name &#039;repo&#039; from the Gitea web UI. This should create a bare repository at &amp;lt;code&amp;gt;/srv/git/public/repo.git&amp;lt;/code&amp;gt;. (Make sure you choose the &#039;public&#039; org from the dropdown.)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Push the original repo to the new remote:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd repo&lt;br /&gt;
git remote add gitea https://git.csclub.uwaterloo.ca/public/repo.git&lt;br /&gt;
git push gitea master&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Remove any git hooks which require gitea:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm $(grep -IRl gitea /srv/git/public/repo.git/hooks)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Change file permissions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R git:git /srv/git/public/repo.git&lt;br /&gt;
chmod -R g+w /srv/git/public/repo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You will need to do this from phosphoric-acid (due to NFS root squashing).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
Note that the repo folder SHOULD be owned by git:git. Anything else will likely break Gitea. (If a user pushes something to the folder and their umask doesn&#039;t allow group members to read, for example, then Gitea will be unable to read the repo.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This means that only trusted users should be in the git group - ideally, only syscom members.&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
If you are having trouble pulling/pushing with SSH and have something like this when trying &amp;lt;code&amp;gt;ssh git@csclub.uwaterloo.ca&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PTY allocation request failed on channel 0&lt;br /&gt;
shell request failed on channel 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just restart &amp;lt;code&amp;gt;gitea.service&amp;lt;/code&amp;gt; on caffeine.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5328</id>
		<title>Git Hosting</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Git_Hosting&amp;diff=5328"/>
		<updated>2025-02-13T15:54:14Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We have a [https://git.csclub.uwaterloo.ca Gitea] instance running on [[Machine List#caffeine|caffeine]]. You can sign in via LDAP to the web interface. Projects used by CSC as a whole are owned by the [https://git.csclub.uwaterloo.ca/public public] organization, except for website-committee related repos, which are owned by the [https://git.csclub.uwaterloo.ca/www www] org.&lt;br /&gt;
&lt;br /&gt;
== Installation Details ==&lt;br /&gt;
&amp;lt;code&amp;gt;/etc/gitea&amp;lt;/code&amp;gt; on caffeine contains the configs for Gitea. It&#039;s installed as a Debian package, with additional files in &amp;lt;code&amp;gt;/var/lib/gitea/&amp;lt;/code&amp;gt; and a systemd service at &amp;lt;code&amp;gt;/lib/systemd/system/gitea.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
There is a custom locale (used to define CSC-custom strings in some pages) at &amp;lt;code&amp;gt;/var/lib/gitea/custom/options/locale/locale_en-US.ini&amp;lt;/code&amp;gt; that may need to be updated when the Gitea APT package is updated. To update it, run the &amp;lt;code&amp;gt;update_custom_locale.sh&amp;lt;/code&amp;gt; script in that directory (as root).&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&amp;quot;It&#039;s basically GitHub&amp;quot;&lt;br /&gt;
&lt;br /&gt;
- raymo&lt;br /&gt;
&lt;br /&gt;
=== SSH keys ===&lt;br /&gt;
It is recommended to set up [https://git.csclub.uwaterloo.ca/user/settings/keys SSH keys] so that you do not have to enter your password each time you push to a repo. Once you have uploaded your public key, add the following to your ~/.ssh/config:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Host csclub.uwaterloo.ca&lt;br /&gt;
        HostName csclub.uwaterloo.ca&lt;br /&gt;
        IdentityFile ~/.ssh/id_rsa&lt;br /&gt;
        User git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(Replace ~/.ssh/id_rsa with the path to your private SSH key.) Now you should be able to clone, push and pull over SSH.&lt;br /&gt;
&lt;br /&gt;
== Continuous Integration ==&lt;br /&gt;
We are running a CI server at https://ci.csclub.uwaterloo.ca. It uses OAuth via Gitea for logins, so you need to have logged in to Gitea first. See https://docs.drone.io/ for documentation. All you have to do is create a .drone.yml file in your repo, then enable CI on the repo from the CSC Drone website. There is an example [https://git.csclub.uwaterloo.ca/merenber/drone-test here].&lt;br /&gt;
&lt;br /&gt;
== Pushing and pulling from the filesystem ==&lt;br /&gt;
(for syscom only)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
If you need to keep the ability to push/pull from the filesystem, in addition to Gitea, you will need to take the following steps.&lt;br /&gt;
In this example, we are migrating a repo called &#039;public/repo.git&#039;, which is a folder under /srv/git on caffeine (which is a symlink to /users/git).&lt;br /&gt;
The way we&#039;re doing this right now is kind of hacky, but it works:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Clone the original repo locally: &amp;lt;code&amp;gt;git clone /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Delete the old repo (from phosphoric-acid, which has no_root_squash): &amp;lt;code&amp;gt;rm -rf /srv/git/public/repo.git&amp;lt;/code&amp;gt;&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Create a new repo with the name &#039;repo&#039; from the Gitea web UI. This should create a bare repository at &amp;lt;code&amp;gt;/srv/git/public/repo.git&amp;lt;/code&amp;gt;. (Make sure you choose the &#039;public&#039; org from the dropdown.)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Push the original repo to the new remote:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd repo&lt;br /&gt;
git remote add gitea https://git.csclub.uwaterloo.ca/public/repo.git&lt;br /&gt;
git push gitea master&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Remove any git hooks which require gitea:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm $(grep -IRl gitea /srv/git/public/repo.git/hooks)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Change file permissions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
chown -R git:git /srv/git/public/repo.git&lt;br /&gt;
chmod -R g+w /srv/git/public/repo.git&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You will need to do this from phosphoric-acid (due to NFS root squashing).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
Note that the repo folder SHOULD be owned by git:git. Anything else will likely break Gitea. (If a user pushes something to the folder and their umask doesn&#039;t allow group members to read, for example, then Gitea will be unable to read the repo.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This means that only trusted users should be in the git group - ideally, only syscom members.&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
If you are having trouble pulling/pushing over SSH and see output like this when trying &amp;lt;code&amp;gt;ssh git@csclub.uwaterloo.ca&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PTY allocation request failed on channel 0&lt;br /&gt;
shell request failed on channel 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Just restart &amp;lt;code&amp;gt;gitea.service&amp;lt;/code&amp;gt; on caffeine.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5327</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5327"/>
		<updated>2025-02-10T14:47:26Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Updating records */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through its [https://ipam.private.uwaterloo.ca IP Address Management system]. IST has published some information on the [https://uwaterloo.atlassian.net/wiki/spaces/ISTKB/pages/43401052394/IP+Address+Management IST Knowledge Base].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* API account located in the standard syscom place&lt;br /&gt;
&lt;br /&gt;
=== Managing Records ===&lt;br /&gt;
There are two primary types of records that are maintained: Hosts and Aliases.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: Use the v4 and v6 toggles in the top left to switch between IPv4 and IPv6 networks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Add a new host ====&lt;br /&gt;
&lt;br /&gt;
# Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
# Click on IPAM -&amp;gt; Networks&lt;br /&gt;
# Locate the appropriate network for the server&lt;br /&gt;
# Click on the IP address that you want to register&lt;br /&gt;
# Set the appropriate information&lt;br /&gt;
## Set the &amp;quot;MAC&amp;quot; address of the machine (&#039;&#039;note: CSC networks don&#039;t use the IST DHCP system, so this is effectively ignored&#039;&#039;)&lt;br /&gt;
## Under &amp;quot;IPAM to DNS replication&amp;quot;&lt;br /&gt;
### Domain: Click the grey button next to the text box and change &amp;quot;Inherit&amp;quot; to &amp;quot;Set&amp;quot;. Then select the &amp;quot;csclub.uwaterloo.ca&amp;quot; domain (or other as appropriate)&lt;br /&gt;
### Shortname: The machine&#039;s name (e.g., caffeine)&lt;br /&gt;
## At the bottom&lt;br /&gt;
### Add &amp;quot;systems-committee@csclub.uwaterloo.ca&amp;quot; as a Technical Contact&lt;br /&gt;
### Select the appropriate Pol8 Classification (usually Public)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Add any aliases for the host (these will be created as CNAME records)&lt;br /&gt;
# Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Repeat the instructions for the IPv6 entry; however, you may need to click the &amp;quot;+&amp;quot; to add the IP address on the network.&lt;br /&gt;
&lt;br /&gt;
==== Add/remove an alias to an existing host ====&lt;br /&gt;
&lt;br /&gt;
* Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
* Click on IPAM -&amp;gt; Networks&lt;br /&gt;
* Locate the appropriate network for the server&lt;br /&gt;
* Click on the IP address associated with the &#039;&#039;&#039;destination&#039;&#039;&#039; server (e.g., caffeine)&lt;br /&gt;
* If you get sent to a blank list, click the &amp;quot;Address&amp;quot; object in the breadcrumb&lt;br /&gt;
* Click &amp;quot;Edit&amp;quot; under the ALIASES section on the screen&lt;br /&gt;
* Click &amp;quot;Next&amp;quot; twice&lt;br /&gt;
* Add or remove the alias to the list&lt;br /&gt;
* Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the IPAM records.&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Updating records ===&lt;br /&gt;
&#039;&#039;&#039;Note!&#039;&#039;&#039; This won&#039;t work for csclub.uwaterloo.ca, as your changes will be overwritten by the update script that pulls records from the university&#039;s IPAM.&lt;br /&gt;
&lt;br /&gt;
To manually update a record in the dns1 container (somewhere in /etc/bind):&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
If this is a dynamic zone (i.e., csclub.cloud), temporarily stop allowing dynamic updates via &amp;lt;code&amp;gt;rndc freeze $ZONE&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Modify the corresponding &amp;lt;code&amp;gt;db.$ZONE&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Make sure you also update the serial number for the SOA record for the corresponding zone. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt; to apply changes. If you froze a dynamic zone, run &amp;lt;code&amp;gt;rndc thaw $ZONE&amp;lt;/code&amp;gt; afterwards to re-enable dynamic updates (this also reloads the zone).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
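As a worked example for a dynamic zone (the exact file path under /etc/bind is an assumption; adjust to the actual layout):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rndc freeze csclub.cloud&lt;br /&gt;
# edit /etc/bind/db.csclub.cloud and bump the SOA serial&lt;br /&gt;
rndc thaw csclub.cloud   # reloads the zone and re-enables dynamic updates&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;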
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5326</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5326"/>
		<updated>2025-02-07T05:17:57Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Updating records */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through its [https://ipam.private.uwaterloo.ca IP Address Management system]. IST has published some information on the [https://uwaterloo.atlassian.net/wiki/spaces/ISTKB/pages/43401052394/IP+Address+Management IST Knowledge Base].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* API account located in the standard syscom place&lt;br /&gt;
&lt;br /&gt;
=== Managing Records ===&lt;br /&gt;
There are two primary types of records that are maintained: Hosts and Aliases.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: Use the v4 and v6 toggles in the top left to switch between IPv4 and IPv6 networks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Add a new host ====&lt;br /&gt;
&lt;br /&gt;
# Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
# Click on IPAM -&amp;gt; Networks&lt;br /&gt;
# Locate the appropriate network for the server&lt;br /&gt;
# Click on the IP address that you want to register&lt;br /&gt;
# Set the appropriate information&lt;br /&gt;
## Set the &amp;quot;MAC&amp;quot; address of the machine (&#039;&#039;note: CSC networks don&#039;t use the IST DHCP system, so this is effectively ignored&#039;&#039;)&lt;br /&gt;
## Under &amp;quot;IPAM to DNS replication&amp;quot;&lt;br /&gt;
### Domain: Click the grey button next to the text box and change &amp;quot;Inherit&amp;quot; to &amp;quot;Set&amp;quot;. Then select the &amp;quot;csclub.uwaterloo.ca&amp;quot; domain (or other as appropriate)&lt;br /&gt;
### Shortname: The machine&#039;s name (e.g., caffeine)&lt;br /&gt;
## At the bottom&lt;br /&gt;
### Add &amp;quot;systems-committee@csclub.uwaterloo.ca&amp;quot; as a Technical Contact&lt;br /&gt;
### Select the appropriate Pol8 Classification (usually Public)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Add any aliases for the host (these will be created as CNAME records)&lt;br /&gt;
# Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Repeat the instructions for the IPv6 entry; however, you may need to click the &amp;quot;+&amp;quot; to add the IP address on the network.&lt;br /&gt;
&lt;br /&gt;
==== Add/remove an alias to an existing host ====&lt;br /&gt;
&lt;br /&gt;
* Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
* Click on IPAM -&amp;gt; Networks&lt;br /&gt;
* Locate the appropriate network for the server&lt;br /&gt;
* Click on the IP address associated with the &#039;&#039;&#039;destination&#039;&#039;&#039; server (e.g., caffeine)&lt;br /&gt;
* If you get sent to a blank list, click the &amp;quot;Address&amp;quot; object in the breadcrumb&lt;br /&gt;
* Click &amp;quot;Edit&amp;quot; under the ALIASES section on the screen&lt;br /&gt;
* Click &amp;quot;Next&amp;quot; twice&lt;br /&gt;
* Add or remove the alias to the list&lt;br /&gt;
* Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the IPAM records.&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Updating records ===&lt;br /&gt;
To manually update a record in the dns1 container (somewhere in /etc/bind):&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
If this is a dynamic zone (i.e., csclub.cloud), temporarily stop allowing dynamic updates via &amp;lt;code&amp;gt;rndc freeze $ZONE&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Modify the corresponding &amp;lt;code&amp;gt;db.$ZONE&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Make sure you also update the serial number for the SOA record for the corresponding zone. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt; to apply changes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5325</id>
		<title>DNS</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=DNS&amp;diff=5325"/>
		<updated>2025-02-07T05:10:57Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add instructions on how to edit record in dns1 container&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== IST DNS ==&lt;br /&gt;
&lt;br /&gt;
The University of Waterloo&#039;s DNS is managed through its [https://ipam.private.uwaterloo.ca IP Address Management system]. IST has published some information on the [https://uwaterloo.atlassian.net/wiki/spaces/ISTKB/pages/43401052394/IP+Address+Management IST Knowledge Base].&lt;br /&gt;
&lt;br /&gt;
People who have access to Infoblox:&lt;br /&gt;
&lt;br /&gt;
* ztseguin&lt;br /&gt;
* API account located in the standard syscom place&lt;br /&gt;
&lt;br /&gt;
=== Managing Records ===&lt;br /&gt;
There are two primary types of records that are maintained: Hosts and Aliases.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Note: Use the v4 and v6 toggles in the top left to switch between IPv4 and IPv6 networks.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Add a new host ====&lt;br /&gt;
&lt;br /&gt;
# Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
# Click on IPAM -&amp;gt; Networks&lt;br /&gt;
# Locate the appropriate network for the server&lt;br /&gt;
# Click on the IP address that you want to register&lt;br /&gt;
# Set the appropriate information&lt;br /&gt;
## Set the &amp;quot;MAC&amp;quot; address of the machine (&#039;&#039;note: CSC networks don&#039;t use the IST DHCP system, so this is effectively ignored&#039;&#039;)&lt;br /&gt;
## Under &amp;quot;IPAM to DNS replication&amp;quot;&lt;br /&gt;
### Domain: Click the grey button next to the text box and change &amp;quot;Inherit&amp;quot; to &amp;quot;Set&amp;quot;. Then select the &amp;quot;csclub.uwaterloo.ca&amp;quot; domain (or other as appropriate)&lt;br /&gt;
### Shortname: The machine&#039;s name (e.g., caffeine)&lt;br /&gt;
## At the bottom&lt;br /&gt;
### Add &amp;quot;systems-committee@csclub.uwaterloo.ca&amp;quot; as a Technical Contact&lt;br /&gt;
### Select the appropriate Pol8 Classification (usually Public)&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Click &amp;quot;Next&amp;quot;&lt;br /&gt;
# Add any aliases for the host (these will be created as CNAME records)&lt;br /&gt;
# Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Repeat the instructions for the IPv6 entry; however, you may need to click the &amp;quot;+&amp;quot; to add the IP address on the network.&lt;br /&gt;
&lt;br /&gt;
==== Add/remove an alias to an existing host ====&lt;br /&gt;
&lt;br /&gt;
* Go to https://ipam.private.uwaterloo.ca&lt;br /&gt;
* Click on IPAM -&amp;gt; Networks&lt;br /&gt;
* Locate the appropriate network for the server&lt;br /&gt;
* Click on the IP address associated with the &#039;&#039;&#039;destination&#039;&#039;&#039; server (e.g., caffeine)&lt;br /&gt;
* If you get sent to a blank list, click the &amp;quot;Address&amp;quot; object in the breadcrumb&lt;br /&gt;
* Click &amp;quot;Edit&amp;quot; under the ALIASES section on the screen&lt;br /&gt;
* Click &amp;quot;Next&amp;quot; twice&lt;br /&gt;
* Add or remove the alias to the list&lt;br /&gt;
* Click &amp;quot;OK&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== CSC DNS ==&lt;br /&gt;
&lt;br /&gt;
CSC hosts authoritative DNS services on ext-dns1.csclub.uwaterloo.ca (129.97.134.4/2620:101:f000:4901:c5c::4) and ext-dns2.csclub.uwaterloo.ca (129.97.18.20/2620:101:f000:7300:c5c::20).&lt;br /&gt;
&lt;br /&gt;
Current authoritative domains:&lt;br /&gt;
&lt;br /&gt;
* csclub.cloud&lt;br /&gt;
* uwaterloo.club&lt;br /&gt;
* csclub.uwaterloo.ca: A script (/opt/bindify/update-dns on dns1) runs every 10 minutes to populate this zone from the IPAM records.&lt;br /&gt;
&lt;br /&gt;
Those DNS servers are also recursive for machines located on the University network.&lt;br /&gt;
&lt;br /&gt;
=== Updating records ===&lt;br /&gt;
To manually update a record in the dns1 container (somewhere in /etc/bind):&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Temporarily stop allowing dynamic updates via &amp;lt;code&amp;gt;rndc freeze $ZONE&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Modify the corresponding &amp;lt;code&amp;gt;db.$ZONE&amp;lt;/code&amp;gt; file.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Make sure you also update the serial number for the SOA record for the corresponding zone. &lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt; to apply changes.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
=== LOC Records ===&lt;br /&gt;
&lt;br /&gt;
If we really cared, we might add a [http://en.wikipedia.org/wiki/LOC_record LOC record] for csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
=== SSHFP ===&lt;br /&gt;
&lt;br /&gt;
We could look into [http://tools.ietf.org/html/rfc4255 SSHFP] records. Apparently OpenSSH supports these. (Discussion moved to [[Talk:DNS]].)&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5312</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5312"/>
		<updated>2024-12-20T18:45:20Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Writing Mirror News &amp;amp; Banner */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script of the same name originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync, which has its configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
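For instance, a hypothetical invocation of the standard sync script might look like this (the upstream host and directory names are placeholders):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
csc-sync-standard /mirror/root/example rsync.example.org example&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;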
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Spring 2023, it is now generated by Hugo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/deploy.sh&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run every minute.&lt;br /&gt;
&lt;br /&gt;
The script first runs &amp;lt;code&amp;gt;synctask2project&amp;lt;/code&amp;gt;, which pulls project synchronization status from Merlin (via merlin&#039;s socket), combines sub-projects (for example, &amp;lt;code&amp;gt;racket&amp;lt;/code&amp;gt; is a combination of two merlin tasks, &amp;lt;code&amp;gt;plt-bundles&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;racket-installers&amp;lt;/code&amp;gt;), and reads each project&#039;s size using &amp;lt;code&amp;gt;zfs list -Hp&amp;lt;/code&amp;gt;. This Python script then writes a JSON file to &amp;lt;code&amp;gt;data/sync.json&amp;lt;/code&amp;gt;. Hugo reads the JSON file and generates the HTML table from it. The table is also generated separately into &amp;lt;code&amp;gt;public/project_table/index.html&amp;lt;/code&amp;gt;, which htmx (a JS library used on the index page) can fetch to live-reload the sync status. Finally, Hugo&#039;s generated output is copied to the mirror root to be served by nginx.&lt;br /&gt;
&lt;br /&gt;
Project information is located at &amp;lt;code&amp;gt;synctask2project/config.toml&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOT&#039;&#039;&#039; the config.toml in the root folder! That&#039;s the config for Hugo). Its format is as follows:&lt;br /&gt;
&amp;lt;pre class=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
merlin_sock = &amp;quot;/path/to/merlin/socket&amp;quot;&lt;br /&gt;
zfs_pools = [&amp;quot;mirror_zfs_pool1&amp;quot;, &amp;quot;mirror_zfs_pool2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[project_name]&lt;br /&gt;
# This is supposed to be the short version shown on the website&lt;br /&gt;
# Mandatory field&lt;br /&gt;
site = &amp;quot;project.site&amp;quot;&lt;br /&gt;
# The full URL&lt;br /&gt;
# Mandatory field&lt;br /&gt;
url = &amp;quot;https://full.project.site&amp;quot;&lt;br /&gt;
# We are the upstream, or the project is archived; don&#039;t show sync errors or last sync time&lt;br /&gt;
# Optional. Default: false&lt;br /&gt;
upstream = true&lt;br /&gt;
# If this project contains multiple merlin sync tasks, list them here&lt;br /&gt;
# Optional. Default: project_name&lt;br /&gt;
merlin-tasks = [&amp;quot;task1&amp;quot;, &amp;quot;task2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# define more projects below...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mirror-index also supports news. When adding new projects or making modifications, create a Markdown file in &amp;lt;code&amp;gt;mirror-index/content/news/&amp;lt;/code&amp;gt; telling users what changed. It will be picked up by Hugo automatically on the next generation.&lt;br /&gt;
&lt;br /&gt;
On first setup, run &amp;lt;code&amp;gt;setup.sh&amp;lt;/code&amp;gt;. When doing development (e.g., changing the Sass or static files), run &amp;lt;code&amp;gt;build.sh&amp;lt;/code&amp;gt; to build assets.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We previously used [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server; the configuration below is retained for reference.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the amount of CPU/memory resources used (e.g., to limit resources consumed by [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
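Clients can list the available modules by giving rsync a bare daemon URL (standard rsync behaviour):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rsync rsync://mirror.csclub.uwaterloo.ca/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;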
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Making changes ===&lt;br /&gt;
Everything in the &amp;lt;code&amp;gt;~mirror&amp;lt;/code&amp;gt; directory is managed by git (a monorepo containing all sub-projects, such as Merlin and mirror-index). To make changes, switch to the mirror user and commit with &amp;lt;code&amp;gt;--author &amp;quot;FirstName LastName &amp;lt;email@csc&amp;gt;&amp;quot;&amp;lt;/code&amp;gt; to show who made the change. Then run &amp;lt;code&amp;gt;git push&amp;lt;/code&amp;gt; to push the changes. The remote uses the HTTPS URL, so just enter your CSC credentials.&lt;br /&gt;
&lt;br /&gt;
=== Writing Mirror News &amp;amp; Banner ===&lt;br /&gt;
You can add news by putting a Markdown file into &amp;lt;code&amp;gt;~mirror/mirror-index/content/news&amp;lt;/code&amp;gt;. A minimal post looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+++&lt;br /&gt;
title = &amp;quot;New mirror index page&amp;quot;&lt;br /&gt;
date = &amp;quot;2023-05-04&amp;quot;&lt;br /&gt;
+++&lt;br /&gt;
&lt;br /&gt;
We&#039;ve updated the mirror index page to include more detailed synchronization status information.&lt;br /&gt;
&lt;br /&gt;
If you experience any usability issues due to browser compatibility, please let us know at [syscom@csclub.uwaterloo.ca](mailto:syscom@csclub.uwaterloo.ca).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also put up a big banner on the front page for notifying critical information. Just edit &amp;lt;code&amp;gt;~mirror/mirror-index/config.toml&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[params]&lt;br /&gt;
banner = true&lt;br /&gt;
# Supported options: blue orange red&lt;br /&gt;
banner_color = &amp;quot;orange&amp;quot;&lt;br /&gt;
banner_title = &amp;quot;Scheduled Downtime&amp;quot;&lt;br /&gt;
# You can write markdown here&lt;br /&gt;
banner_text = &amp;quot;CSC Mirror will be down on Dec 22, 2024 from 7am to 4pm (EST/UTC-5). [More](/news/dec-22-2024-scheduled-downtime/)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the incident is over, just set &amp;lt;code&amp;gt;banner&amp;lt;/code&amp;gt; back to false.&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts, however we generally won’t use them. We will instead use our custom ones.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys: &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate &#039;&#039;&#039;[NOTE: This machine is currently unavailable]&#039;&#039;&#039;&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin-config.ini&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin-go&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/log/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/log-$PROTOCOL/$PROJECT_NAME-*.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;) &#039;&#039;&#039;[NOTE: The backup machine is currently unavailable, so this step is not currently needed]&#039;&#039;&#039;&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
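The filesystem-related steps above can be sketched as one sequence (&amp;lt;code&amp;gt;exampleproject&amp;lt;/code&amp;gt; is a placeholder name):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PROJECT_NAME=exampleproject&lt;br /&gt;
zfs create cscmirror0/$PROJECT_NAME&lt;br /&gt;
chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&lt;br /&gt;
cd /mirror/root&lt;br /&gt;
ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME   # must be relative for chroots&lt;br /&gt;
systemctl restart merlin-go&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;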
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update the rsync daemon&lt;br /&gt;
#* Edit &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;, adding a new entry for the new name (keep the old name too). Restart with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations with the project to ensure the new URLs are used&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived does the monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync are not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5311</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5311"/>
		<updated>2024-12-20T18:40:04Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Writing Mirror News &amp;amp; Warning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script &amp;quot;merlin&amp;quot; originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync which has configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
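As a sketch, a standard sync could be invoked like this (the project name, upstream host, and rsync module below are hypothetical placeholders; the real values come from merlin-config.ini):

```shell
# Hypothetical values -- the real project name, upstream host and module
# are configured per project in merlin-config.ini, not hard-coded here.
local_dir="someproject"
rsync_host="rsync.upstream.example"
rsync_dir="someproject-module"

# merlin would effectively run something like:
cmd="csc-sync-standard $local_dir $rsync_host $rsync_dir"
echo "$cmd"
```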
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Spring 2023, it is now generated by Hugo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/deploy.sh&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run every minute.&lt;br /&gt;
&lt;br /&gt;
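A hypothetical sketch of that cron entry (the schedule, user, and absolute path here are assumptions; check /etc/cron.d/csc-mirror for the real line):

```
# Hypothetical /etc/cron.d/csc-mirror entry -- verify against the real file.
* * * * *   mirror   /home/mirror/mirror-index/deploy.sh
```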
The script will first run &amp;lt;code&amp;gt;synctask2project&amp;lt;/code&amp;gt;, which pulls project synchronization status from Merlin (using merlin&#039;s socket), combines sub-projects (for example &amp;lt;code&amp;gt;racket&amp;lt;/code&amp;gt; is a combination of two merlin tasks, &amp;lt;code&amp;gt;plt-bundles&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;racket-installers&amp;lt;/code&amp;gt;) and reads the size of each project using &amp;lt;code&amp;gt;zfs list -Hp&amp;lt;/code&amp;gt;. This Python script then writes a JSON file to &amp;lt;code&amp;gt;data/sync.json&amp;lt;/code&amp;gt;. Hugo then reads the JSON file and generates the HTML table from it. The table is also generated separately into &amp;lt;code&amp;gt;public/project_table/index.html&amp;lt;/code&amp;gt;, which can be fetched by htmx (the JS library used on the index page) to live-reload the sync status. Finally, the output of Hugo is copied to the mirror root to be served by nginx.&lt;br /&gt;
&lt;br /&gt;
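The size-lookup step can be sketched in shell (the dataset name and byte count below are made up for illustration; with zfs list, -H drops the header and -p prints exact tab-separated byte values):

```shell
# Simulate the tab-separated output of `zfs list -Hp -o name,used DATASET`.
# "cscmirror0/debian" and "123456789" are fabricated sample values.
fake_output="$(printf 'cscmirror0/debian\t123456789')"

# Extract the "used" column the way a consumer of `zfs list -Hp` would.
used_bytes=$(printf '%s\n' "$fake_output" | awk -F'\t' '{print $2}')
echo "$used_bytes"
```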
Project information is located at &amp;lt;code&amp;gt;synctask2project/config.toml&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOT&#039;&#039;&#039; the config.toml in the root folder! That&#039;s the config for Hugo). Its format is as follows:&lt;br /&gt;
&amp;lt;pre class=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
merlin_sock = &amp;quot;/path/to/merlin/socket&amp;quot;&lt;br /&gt;
zfs_pools = [&amp;quot;mirror_zfs_pool1&amp;quot;, &amp;quot;mirror_zfs_pool2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[project_name]&lt;br /&gt;
# This is supposed to be the short version shown on the website&lt;br /&gt;
# Mandatory field&lt;br /&gt;
site = &amp;quot;project.site&amp;quot;&lt;br /&gt;
# The full URL&lt;br /&gt;
# Mandatory field&lt;br /&gt;
url = &amp;quot;https://full.project.site&amp;quot;&lt;br /&gt;
# We are the upstream or archived project. Don&#039;t show sync error or last sync time&lt;br /&gt;
# Optional. Default: no&lt;br /&gt;
upstream = yes &lt;br /&gt;
# If this project contains multiple merlin sync tasks, list them here&lt;br /&gt;
# Optional. Default: project_name&lt;br /&gt;
merlin-tasks = [&amp;quot;task1&amp;quot;, &amp;quot;task2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# define more projects below...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mirror-index also supports news. When adding new projects or making modifications, create a markdown file in &amp;lt;code&amp;gt;mirror-index/content/news/&amp;lt;/code&amp;gt; to tell the user what was changed. It should be picked up by Hugo automatically on next generation.&lt;br /&gt;
&lt;br /&gt;
On first setup, run &amp;lt;code&amp;gt;setup.sh&amp;lt;/code&amp;gt;. When doing development (like changing the Sass or static files), run &amp;lt;code&amp;gt;build.sh&amp;lt;/code&amp;gt; to build assets.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU/memory resources used per session (e.g. to bound the cost of expensive [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Making changes ===&lt;br /&gt;
Everything in &amp;lt;code&amp;gt;~mirror&amp;lt;/code&amp;gt; is managed by git (a monorepo containing all sub-projects, such as Merlin and mirror-index). To make changes, switch to the mirror user and commit with &amp;lt;code&amp;gt;--author &amp;quot;FirstName LastName &amp;lt;email@csc&amp;gt;&amp;quot;&amp;lt;/code&amp;gt; to show who made the change. Then run &amp;lt;code&amp;gt;git push&amp;lt;/code&amp;gt; to push the changes. The remote uses the HTTPS URL, so just enter your CSC credentials when prompted.&lt;br /&gt;
&lt;br /&gt;
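The effect of recording a separate author can be sketched in a throwaway repository (the name and email are placeholders; environment variables are used here in place of the --author flag's "Name &amp;lt;email&amp;gt;" argument, and both record the same author fields):

```shell
# Throwaway repo to demonstrate that the recorded author can differ
# from the committer (here, the "mirror" user).
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "mirror"
git config user.email "mirror@localhost"
echo demo > file.txt
git add file.txt

# Placeholder author values; on the real mirror you would pass them
# via `--author "FirstName LastName <email@csc>"` instead.
GIT_AUTHOR_NAME="FirstName LastName" \
GIT_AUTHOR_EMAIL="someone@csclub.uwaterloo.ca" \
  git commit -q -m "Update mirror config"

git log -1 --format='%an'   # shows the author, not the committer
```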
=== Writing Mirror News &amp;amp; Banner ===&lt;br /&gt;
You can add news by putting a Markdown file into &amp;lt;code&amp;gt;~mirror/mirror-index/content/news&amp;lt;/code&amp;gt;. A minimal post looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+++&lt;br /&gt;
title = &amp;quot;New mirror index page&amp;quot;&lt;br /&gt;
date = &amp;quot;2023-05-04&amp;quot;&lt;br /&gt;
+++&lt;br /&gt;
&lt;br /&gt;
We&#039;ve updated the mirror index page to include more detailed synchronization status information.&lt;br /&gt;
&lt;br /&gt;
If you experience any usability issues due to browser compatibility, please let us know at [syscom@csclub.uwaterloo.ca](mailto:syscom@csclub.uwaterloo.ca).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also put up a big banner on the front page for notifying critical information. Just edit &amp;lt;code&amp;gt;~mirror/mirror-index/config.toml&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[params]&lt;br /&gt;
banner = true&lt;br /&gt;
# Supported options: blue orange red&lt;br /&gt;
banner_color = &amp;quot;orange&amp;quot;&lt;br /&gt;
banner_title = &amp;quot;Scheduled Downtime&amp;quot;&lt;br /&gt;
# You can write markdown here&lt;br /&gt;
banner_text = &amp;quot;CSC Mirror will be down on Dec 22, 2024 from 7am to 4pm (EST/UTC-5). [More](/news/dec-22-2024-scheduled-downtime/)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the banner is no longer needed, just set banner back to false.&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide sync scripts; however, we generally won’t use them and will instead use our custom ones.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys. &amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; on potassium-benzoate [&#039;&#039;&#039;NOTE: This machine is currently unavailable&#039;&#039;&#039;]&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin-config.ini&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin-go&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/log/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/log-$PROTOCOL/$PROJECT_NAME-*.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;) [&#039;&#039;&#039;NOTE: The backup machine is currently unavailable, so this step is not currently needed&#039;&#039;&#039;]&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
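A hypothetical rsyncd.conf module for a newly added project (the module name, comment, and path are placeholders; real modules live in /etc/rsyncd.conf):

```
# Hypothetical module stanza -- adapt names/paths to the real project.
[someproject]
    comment   = someproject mirror
    path      = /mirror/root/someproject
    read only = yes
```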
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update the rsync daemon&lt;br /&gt;
#* Edit &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;, adding a new entry for the new name (keep the old name too). Restart with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations with the project to ensure the new URLs are used&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived does the monitoring and selection of the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync are not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
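A hypothetical sketch of how that check-and-weight scheme might look in keepalived configuration (the interface, router ID, and script path are placeholders; the real config lives in /etc/keepalived/keepalived.conf):

```
# Hypothetical keepalived fragment -- illustrative only.
vrrp_script chk_nginx {
    script "/usr/local/bin/check-nginx"   # assumed curl-based check
    interval 5
    weight -20                            # subtracted on check failure
}
vrrp_instance mirror {
    interface eth0            # placeholder
    virtual_router_id 51      # placeholder
    priority 100              # 90 on mirror-dc
    virtual_ipaddress {
        129.97.134.71
    }
    track_script {
        chk_nginx
    }
}
```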
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5310</id>
		<title>Mirror</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mirror&amp;diff=5310"/>
		<updated>2024-12-19T18:35:48Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Mirror Administration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [https://csclub.uwaterloo.ca Computer Science Club] runs a public mirror ([http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca]) on [[Machine_List#potassium-benzoate|potassium-benzoate]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;We are listed on the ResNet &amp;amp;quot;don&#039;t count&amp;amp;quot; list, so downloading from our mirror will not count against one&#039;s ResNet quota.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Software Mirrored ==&lt;br /&gt;
&lt;br /&gt;
A list of current archives (and their respective disk usage) is listed on our mirror&#039;s homepage at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
=== Mirroring Requests ===&lt;br /&gt;
&lt;br /&gt;
Requests to mirror a particular distribution or archive should be made to [mailto:syscom@csclub.uwaterloo.ca syscom@csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
== Implementation Details ==&lt;br /&gt;
&lt;br /&gt;
=== Syncing ===&lt;br /&gt;
&lt;br /&gt;
==== Storage ====&lt;br /&gt;
&lt;br /&gt;
All of our projects are stored on an 8x18TB disk raidz2 array (cscmirror0). There is an additional drive acting as a hot-spare.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;/mirror/root/.cscmirror0&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each project is given a filesystem in the pool. Symlinks are created in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; to point to the correct pool and filesystem.&lt;br /&gt;
&lt;br /&gt;
==== Merlin ====&lt;br /&gt;
Project synchronization is done by &amp;quot;merlin&amp;quot;, a Go rewrite of the Python script &amp;quot;merlin&amp;quot; originally written by a2brenna.&lt;br /&gt;
&lt;br /&gt;
The program is stored in &amp;lt;code&amp;gt;~mirror/merlin&amp;lt;/code&amp;gt; and is managed by the systemd unit &amp;lt;code&amp;gt;merlin-go.service&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The config file &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt; contains the list of repositories along with their configurations.&lt;br /&gt;
&lt;br /&gt;
To view the sync status, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur status&amp;lt;/code&amp;gt;. To force the sync of a project, execute &amp;lt;code&amp;gt;~mirror/merlin/cmd/arthur/arthur sync:PROJECT_NAME&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Remark&#039;&#039;&#039;: For syncing Debian repositories we were [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1020998 requested] to use ftpsync which has configs in &amp;lt;code&amp;gt;~mirror/ftpsync&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===== Push Sync =====&lt;br /&gt;
&lt;br /&gt;
Some projects support push syncing via SSH.&lt;br /&gt;
&lt;br /&gt;
We are running a special SSHD instance on mirror.csclub.uwaterloo.ca:22. This instance has been locked down, with the following settings:&lt;br /&gt;
&lt;br /&gt;
* Only SSH key authentication&lt;br /&gt;
* Only users of the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; group (except &amp;lt;code&amp;gt;mirror&amp;lt;/code&amp;gt;) are allowed to connect&lt;br /&gt;
* X11 Forwarding, TCP Forwarding, Agent Forwarding, User RC and TTY are disabled&lt;br /&gt;
* Users are chrooted to &amp;lt;code&amp;gt;/mirror/merlin&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most projects will connect using the &amp;lt;code&amp;gt;push&amp;lt;/code&amp;gt; user. The SSH authorized keys file is located at &amp;lt;code&amp;gt;/home/push/.ssh/authorized_keys&amp;lt;/code&amp;gt;. An example entry is:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
restrict,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty,command=&amp;quot;arthur sync:ubuntu &amp;gt;/dev/null 2&amp;gt;/dev/null &amp;lt;/dev/null &amp;amp;&amp;quot;,from=&amp;quot;XXX.XXX.XXX.XXX&amp;quot; ssh-rsa ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Sync Scripts ====&lt;br /&gt;
&lt;br /&gt;
Our collection of synchronization scripts are located in &amp;lt;code&amp;gt;~mirror/bin&amp;lt;/code&amp;gt;. They currently include:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-apache&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-debian-cd&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-gentoo&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-ssh&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Most of these scripts take the following parameters:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;local_dir rsync_host rsync_dir&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HTTP(s) ===&lt;br /&gt;
&lt;br /&gt;
We use [https://nginx.org nginx] as our webserver.&lt;br /&gt;
&lt;br /&gt;
==== Index ====&lt;br /&gt;
&lt;br /&gt;
An index of the archives we mirror is available at [http://mirror.csclub.uwaterloo.ca mirror.csclub.uwaterloo.ca].&lt;br /&gt;
&lt;br /&gt;
As of Spring 2023, it is now generated by Hugo.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;~mirror/mirror-index/deploy.sh&amp;lt;/code&amp;gt; is scheduled in &amp;lt;code&amp;gt;/etc/cron.d/csc-mirror&amp;lt;/code&amp;gt; to be run every minute.&lt;br /&gt;
&lt;br /&gt;
The script will first run &amp;lt;code&amp;gt;synctask2project&amp;lt;/code&amp;gt;, which pulls project synchronization status from Merlin (using merlin&#039;s socket), combines sub-projects (for example &amp;lt;code&amp;gt;racket&amp;lt;/code&amp;gt; is a combination of two merlin tasks, &amp;lt;code&amp;gt;plt-bundles&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;racket-installers&amp;lt;/code&amp;gt;) and reads the size of each project using &amp;lt;code&amp;gt;zfs list -Hp&amp;lt;/code&amp;gt;. This Python script then writes a JSON file to &amp;lt;code&amp;gt;data/sync.json&amp;lt;/code&amp;gt;. Hugo then reads the JSON file and generates the HTML table from it. The table is also generated separately into &amp;lt;code&amp;gt;public/project_table/index.html&amp;lt;/code&amp;gt;, which can be fetched by htmx (the JS library used on the index page) to live-reload the sync status. Finally, the output of Hugo is copied to the mirror root to be served by nginx.&lt;br /&gt;
&lt;br /&gt;
Project information is located at &amp;lt;code&amp;gt;synctask2project/config.toml&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOT&#039;&#039;&#039; the config.toml in the root folder! That&#039;s the config for Hugo). Its format is as follows:&lt;br /&gt;
&amp;lt;pre class=&amp;quot;toml&amp;quot;&amp;gt;&lt;br /&gt;
merlin_sock = &amp;quot;/path/to/merlin/socket&amp;quot;&lt;br /&gt;
zfs_pools = [&amp;quot;mirror_zfs_pool1&amp;quot;, &amp;quot;mirror_zfs_pool2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
[project_name]&lt;br /&gt;
# This is supposed to be the short version shown on the website&lt;br /&gt;
# Mandatory field&lt;br /&gt;
site = &amp;quot;project.site&amp;quot;&lt;br /&gt;
# The full URL&lt;br /&gt;
# Mandatory field&lt;br /&gt;
url = &amp;quot;https://full.project.site&amp;quot;&lt;br /&gt;
# We are the upstream or archived project. Don&#039;t show sync error or last sync time&lt;br /&gt;
# Optional. Default: no&lt;br /&gt;
upstream = yes &lt;br /&gt;
# If this project contains multiple merlin sync tasks, list them here&lt;br /&gt;
# Optional. Default: project_name&lt;br /&gt;
merlin-tasks = [&amp;quot;task1&amp;quot;, &amp;quot;task2&amp;quot;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# define more projects below...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The mirror-index also supports news. When adding new projects or making modifications, create a markdown file in &amp;lt;code&amp;gt;mirror-index/content/news/&amp;lt;/code&amp;gt; to tell the user what was changed. It should be picked up by Hugo automatically on next generation.&lt;br /&gt;
&lt;br /&gt;
On first setup, run &amp;lt;code&amp;gt;setup.sh&amp;lt;/code&amp;gt;. When doing development (like changing the Sass or static files), run &amp;lt;code&amp;gt;build.sh&amp;lt;/code&amp;gt; to build assets.&lt;br /&gt;
&lt;br /&gt;
=== FTP ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;UPDATE&amp;lt;/b&amp;gt;: We now use vsftpd instead. See /etc/vsftpd.conf for details. Official documentation can be found [https://manpages.debian.org/stable/vsftpd/vsftpd.conf.5.en.html here].&lt;br /&gt;
&lt;br /&gt;
We use [http://www.proftpd.org/ proftpd] (standalone daemon) as our FTP server.&lt;br /&gt;
&lt;br /&gt;
To increase performance, we disable DNS lookups in &amp;lt;code&amp;gt;proftpd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;UseReverseDNS           off&lt;br /&gt;
IdentLookups            off&amp;lt;/pre&amp;gt;&lt;br /&gt;
We also limit the CPU/memory resources used per session (e.g. to bound the cost of expensive [https://en.wikipedia.org/wiki/Globbing globbing]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;RLimitCPU               session 10&lt;br /&gt;
RLimitMemory            session 4096K&amp;lt;/pre&amp;gt;&lt;br /&gt;
We allow a maximum of 500 concurrent FTP sessions:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;MaxInstances            500&lt;br /&gt;
MaxClients              500&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
=== rsync ===&lt;br /&gt;
&lt;br /&gt;
We use &amp;lt;code&amp;gt;rsyncd&amp;lt;/code&amp;gt; (standalone daemon).&lt;br /&gt;
&lt;br /&gt;
We disable compression and checksumming in &amp;lt;code&amp;gt;rsyncd.conf&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;dont compress = *&lt;br /&gt;
refuse options = c delete&amp;lt;/pre&amp;gt;&lt;br /&gt;
The contents of &amp;lt;code&amp;gt;/mirror/root/include/motd.msg&amp;lt;/code&amp;gt; are displayed when a user connects.&lt;br /&gt;
&lt;br /&gt;
== Mirror Administration ==&lt;br /&gt;
&lt;br /&gt;
=== Making changes ===&lt;br /&gt;
Everything in &amp;lt;code&amp;gt;~mirror&amp;lt;/code&amp;gt; is managed by git (a monorepo containing all sub-projects, such as Merlin and mirror-index). To make changes, switch to the mirror user and commit with &amp;lt;code&amp;gt;--author &amp;quot;FirstName LastName &amp;lt;email@csc&amp;gt;&amp;quot;&amp;lt;/code&amp;gt; to show who made the change. Then run &amp;lt;code&amp;gt;git push&amp;lt;/code&amp;gt; to push the changes. The remote uses the HTTPS URL, so just enter your CSC credentials when prompted.&lt;br /&gt;
&lt;br /&gt;
=== Writing Mirror News &amp;amp; Warning ===&lt;br /&gt;
You can add news by putting a Markdown file into &amp;lt;code&amp;gt;~mirror/mirror-index/content/news&amp;lt;/code&amp;gt;. A minimal post looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+++&lt;br /&gt;
title = &amp;quot;New mirror index page&amp;quot;&lt;br /&gt;
date = &amp;quot;2023-05-04&amp;quot;&lt;br /&gt;
+++&lt;br /&gt;
&lt;br /&gt;
We&#039;ve updated the mirror index page to include more detailed synchronization status information.&lt;br /&gt;
&lt;br /&gt;
If you experienced any usability issues due to browser compatibility, please let us know at [syscom@csclub.uwaterloo.ca](mailto:syscom@csclub.uwaterloo.ca).&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You can also put up a big warning on the front page. Just edit &amp;lt;code&amp;gt;~mirror/mirror-index/config.toml&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[params]&lt;br /&gt;
warning = true&lt;br /&gt;
# You can write markdown here&lt;br /&gt;
warning_text = &amp;quot;CSC Mirror will be down on Dec 22, 2024 from 7am to 4pm (EST/UTC-5). [More](/news/dec-22-2024-scheduled-downtime/)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Once the warning is no longer needed, set warning back to false.&lt;br /&gt;
&lt;br /&gt;
=== Adding a new project ===&lt;br /&gt;
&lt;br /&gt;
# Find the instructions for mirroring the project. Ideally, try to sync directly from the project’s source repository.&lt;br /&gt;
#* Note that some projects provide their own sync scripts, but we generally use our custom scripts instead.&lt;br /&gt;
# Create a zfs filesystem to store the project in:&lt;br /&gt;
#*&amp;lt;code&amp;gt;zfs create cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change the folder ownership&lt;br /&gt;
#*&amp;lt;code&amp;gt;chown mirror:mirror /mirror/root/.cscmirror0/$PROJECT_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Create the symlink in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;&lt;br /&gt;
#*&amp;lt;code&amp;gt;ln -s .cscmirror0/$PROJECT_NAME $PROJECT_NAME&amp;lt;/code&amp;gt; (&#039;&#039;&#039;NOTE&#039;&#039;&#039;: The symlink must be relative to the &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt; directory. If it isn’t, the symlinks will not work when chrooted)&lt;br /&gt;
# Repeat the above steps on mirror-phys (&amp;lt;code&amp;gt;sudo ssh mirror-dc&amp;lt;/code&amp;gt; from potassium-benzoate). &#039;&#039;&#039;[NOTE: this machine is currently unavailable]&#039;&#039;&#039;&lt;br /&gt;
# Configure the project in merlin (&amp;lt;code&amp;gt;~mirror/merlin/merlin-config.ini&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Select the appropriate sync script (typically &amp;lt;code&amp;gt;csc-sync-standard&amp;lt;/code&amp;gt;) and supply the appropriate parameters&lt;br /&gt;
# Restart merlin: &amp;lt;code&amp;gt;systemctl restart merlin-go&amp;lt;/code&amp;gt;&lt;br /&gt;
#* This will kick off the initial sync&lt;br /&gt;
#* Check &amp;lt;code&amp;gt;~mirror/merlin/log/$PROJECT_NAME&amp;lt;/code&amp;gt; for errors, &amp;lt;code&amp;gt;~mirror/merlin/log-$PROTOCOL/$PROJECT_NAME-*.log&amp;lt;/code&amp;gt; for transfer progress&lt;br /&gt;
# Configure the project in zfssync.yml (&amp;lt;code&amp;gt;~mirror/merlin/zfssync.yml&amp;lt;/code&amp;gt;). &#039;&#039;&#039;[NOTE: the backup machine is currently unavailable, so this step is not currently needed]&#039;&#039;&#039;&lt;br /&gt;
# Update the mirror index configuration (&amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Add the project to rsync (&amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;)&lt;br /&gt;
#* Restart rsync with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If push mirroring is available/required, see [[#Push_Sync|Push Sync]].&lt;br /&gt;
&lt;br /&gt;
=== Rename project ===&lt;br /&gt;
&lt;br /&gt;
# Change project name (title) and local_dir in &amp;lt;code&amp;gt;merlin-config.ini&amp;lt;/code&amp;gt;&lt;br /&gt;
# Change zfs dataset name&lt;br /&gt;
#* &amp;lt;code&amp;gt;zfs rename cscmirror0/OLD_NAME cscmirror0/NEW_NAME&amp;lt;/code&amp;gt;&lt;br /&gt;
# Reload merlin config&lt;br /&gt;
#* &amp;lt;code&amp;gt;systemctl reload merlin-go.service&amp;lt;/code&amp;gt;&lt;br /&gt;
# Remove old symlink and create new symlink in mirror root&lt;br /&gt;
#* &amp;lt;code&amp;gt;rm OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s .cscmirror0/NEW_DIR NEW_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Add a symlink for the old name (in &amp;lt;code&amp;gt;/mirror/root&amp;lt;/code&amp;gt;) so that existing users won&#039;t be broken by the change&lt;br /&gt;
#* &amp;lt;code&amp;gt;ln -s NEW_DIR OLD_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update the rsync daemon&lt;br /&gt;
#* Edit &amp;lt;code&amp;gt;/etc/rsyncd.conf&amp;lt;/code&amp;gt;, adding a new entry for the new name (keep the old name too). Restart with &amp;lt;code&amp;gt;systemctl restart rsync&amp;lt;/code&amp;gt;&lt;br /&gt;
# Modify index page generator config&lt;br /&gt;
#* At &amp;lt;code&amp;gt;~mirror/mirror-index-ng/synctask2project/config.toml&amp;lt;/code&amp;gt;&lt;br /&gt;
# Update any mirror registrations for the project to ensure the new URLs are used&lt;br /&gt;
&lt;br /&gt;
=== Secondary Mirror ===&lt;br /&gt;
&lt;br /&gt;
The School of Computer Science&#039;s CSCF has provided us with a secondary mirror machine located in DC. This will limit the downtime of mirror.csclub in the event of an outage affecting the MC machine room.&lt;br /&gt;
&lt;br /&gt;
As of June 2023, the CSCF mirror is down. CSCF is planning to bring it back with new hardware, but there is no ETA.&lt;br /&gt;
&lt;br /&gt;
==== Keepalived ====&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s IP addresses (129.97.134.71 and 2620:101:f000:4901:c5c::f:1055) have been configured as VRRP addresses on both machines. Keepalived monitors the services and selects the active node.&lt;br /&gt;
&lt;br /&gt;
Potassium-benzoate has the higher priority and will typically be the active node. A node&#039;s priority is reduced when nginx, proftpd or rsync is not running. Potassium-benzoate starts with a priority of 100 and mirror-dc starts with a priority of 90 (higher priority wins).&lt;br /&gt;
&lt;br /&gt;
When nginx is unavailable (checked with curl), the priority is reduced by 20. When proftpd is unavailable (checked with curl), the priority is reduced by 5. When rsync is unavailable (checked with rsync), the priority is reduced by 15.&lt;br /&gt;
&lt;br /&gt;
The Systems Committee should receive an email when the nodes swap positions.&lt;br /&gt;
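&lt;br /&gt;
As a hedged illustration of the setup described above, the priorities and weights map onto a keepalived configuration fragment like this (script paths, interface name and router id are assumptions, not the actual config):&lt;br /&gt;

```
# Hypothetical /etc/keepalived/keepalived.conf fragment on potassium-benzoate;
# only the priority and weight numbers come from this page.
vrrp_script check_nginx {
    script "/usr/local/bin/check_nginx"    # curl against nginx
    weight -20
}
vrrp_script check_proftpd {
    script "/usr/local/bin/check_proftpd"  # curl against proftpd
    weight -5
}
vrrp_script check_rsync {
    script "/usr/local/bin/check_rsync"    # rsync against the local daemon
    weight -15
}
vrrp_instance mirror {
    interface bond0            # assumption
    virtual_router_id 71       # assumption
    priority 100               # mirror-dc uses 90; higher priority wins
    virtual_ipaddress {
        129.97.134.71
    }
    track_script {
        check_nginx
        check_proftpd
        check_rsync
    }
}
```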
&lt;br /&gt;
==== Project synchronization ====&lt;br /&gt;
&lt;br /&gt;
Only potassium-benzoate is configured with merlin. mirror-dc has the software components, but they are probably not up to date nor configured to run correctly.&lt;br /&gt;
&lt;br /&gt;
When a project sync is complete, merlin will kick off a custom script to sync the zfs dataset to the other node. These scripts live in /usr/local/bin and in ~mirror/merlin.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5286</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5286"/>
		<updated>2024-10-22T13:26:24Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* phosphoric-acid */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our machines are in the E7, F7, G7 and H7 racks (as of Jan. 2022) in the MC 3015 server room. There is an additional rack in the DC 3558 machine room on the third floor. Our office terminals are in the CSC office, in MC 3036/3037.&lt;br /&gt;
&lt;br /&gt;
= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d *need* to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a number of other services.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;(Redundant active backup coming soon...)&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* LXC virtual machine hosted on [[Machine List#phosphoric-acid|phosphoric-acid]]&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [https://www.apache.org/ Apache]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no cryptocurrency mining).&lt;br /&gt;
&lt;br /&gt;
For computationally-intensive jobs (CPU/memory bound) we recommend running on high-fructose-corn-syrup, carbonated-water, sorbitol, mannitol, or corn-syrup, listed in roughly decreasing order of available resources. For low-intensity interactive jobs, such as IRC clients, we recommend running on neotame. &#039;&#039;&#039;&amp;lt;u&amp;gt;If you have a long-running computationally-intensive job, it&#039;s good to [https://en.wikipedia.org/wiki/Nice_(Unix) nice] your process, and possibly let syscom know too.&amp;lt;/u&amp;gt;&#039;&#039;&#039;&lt;br /&gt;
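&lt;br /&gt;
For example, assuming the lowest priority (niceness 19); &amp;lt;code&amp;gt;sleep&amp;lt;/code&amp;gt; stands in for your actual workload:&lt;br /&gt;

```shell
# Run a job at the lowest CPU priority (niceness 19)
nice -n 19 sleep 1
# nice with no command prints the niceness it would run at
nice -n 19 nice   # prints 19
```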
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is a large SuperMicro server. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Missing motherboard I/O shield (as of January 2024)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;neotame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
neotame is a SuperMicro server funded by MEF. It is the successor to taurine.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;We strongly discourage running computationally-intensive jobs&#039;&#039;&#039; on neotame as many users run interactive applications such as IRC clients on it and any significant service degradation will be more likely to affect other users (who will probably notice right away).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* SSH server also listens on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for your convenience.&lt;br /&gt;
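&lt;br /&gt;
For instance, from behind a firewall that only permits HTTPS traffic (replace &amp;lt;code&amp;gt;userid&amp;lt;/code&amp;gt; with your CSC username):&lt;br /&gt;

```shell
# Reach neotame over port 443 when the standard SSH port is blocked
ssh -p 443 userid@neotame.csclub.uwaterloo.ca
```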
&lt;br /&gt;
== &#039;&#039;sorbitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sorbitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
== &#039;&#039;mannitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mannitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cyanide&#039;&#039; ==&lt;br /&gt;
cyanide is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)], identical in specification to powernap.&lt;br /&gt;
&lt;br /&gt;
=== Spec ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;suika&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Suika is an office terminal built from various components donated by our members.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* AMD Ryzen 7 2700X&lt;br /&gt;
* 2x 8GB DDR4&lt;br /&gt;
* 1x Samsung 256GB SSD&lt;br /&gt;
* AMD Radeon RX 550 4GB&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;powernap&#039;&#039;==&lt;br /&gt;
powernap is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)].&lt;br /&gt;
&lt;br /&gt;
=== Spec ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
=== Speaker === &lt;br /&gt;
powernap has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
* MPD for playing music. Only office/termcom/syscom can log into powernap. Use `ncmpcpp` to control MPD.&lt;br /&gt;
** TODO: this is not the case anymore&lt;br /&gt;
* Bluetooth audio receiver. Only syscom can control bluetooth pairing. Use `bluetoothctl` to control bluetooth.&lt;br /&gt;
&lt;br /&gt;
Music is located in `/music` on the office terminals.&lt;br /&gt;
&lt;br /&gt;
= Progcom Only =&lt;br /&gt;
The Programme Committee has access to a VM on corn-syrup called &#039;progcom&#039;. They have sudo rights in this VM so they may install and run their own software inside it. This VM should only be accessible by members of progcom or syscom.&lt;br /&gt;
&lt;br /&gt;
= Codey Bot Only =&lt;br /&gt;
Runs on CSC Cloud in a separate Cloudstack project: codey-staging, codey-dev, codey-prod.&lt;br /&gt;
&lt;br /&gt;
TODO: migrating from cloudstack&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems are only accessible to members of the [[Systems Committee]] for a variety of reasons, the most common being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;xylitol&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
xylitol is a Dell PowerEdge R815 donated by CSCF. It is primarily a container host for services previously hosted on aspartame and dextrose, including munin, rt, mathnews, auth1, and dns1. It was provisioned with the intent to replace both of those hosts.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Dual AMD Opteron 6176 (2.3 GHz, 48 cores total)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 500GB volume group on RAID1 SSD (xylitol-mirrored)&lt;br /&gt;
* 500ish-GB volume group on RAID10 HDD (xylitol-raidten)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth1&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] primary&lt;br /&gt;
*[[Kerberos]] primary&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chat&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* The Lounge web IRC client (https://chat.csclub.uwaterloo.ca)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phosphoric-acid&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phosphoric-acid is a Dell PowerEdge R815 donated by CSCF and is a clone of xylitol. It may be used to provide redundant cloud services in the future.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* (clone of Xylitol)&lt;br /&gt;
* 4x 2TB Kingston KC3000 (ZFS RAID-Z2; sustains 2 drive failures) (KIN-SKC3000D2048G)&lt;br /&gt;
** Mounted on 2x Startech Dual M.2 PCIE SSD Adapter Cards (STA-PEX8M2E2)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[#caffeine|caffeine]]&lt;br /&gt;
*[[#coffee|coffee]]&lt;br /&gt;
*prometheus&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;coffee&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on phosphoric-acid.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Database#MySQL|MySQL]]&lt;br /&gt;
*[[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;cobalamin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics, on Science Computing Rack 2. NICs are plugged into A1 and A2 on the adjacent rack. Acts as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
TODO: should replace with another Syscom server when Science Computing clears out the rack (ETA before 09/2024)&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
*http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]] (kerberos)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;TODO: Mega unreliable.&#039;&#039;&#039; (Goes down once every few weeks... due to power outages in the PHYS server room)&lt;br /&gt;
** It is plugged into a UPS but the UPS has dead batteries.&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc with non-free.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
* Physical access to the PHYS server rooms can be acquired by visiting Science Computing in PHYS 2006.&lt;br /&gt;
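&lt;br /&gt;
As an illustration, a static stanza on csc-data1 might look like this in &amp;lt;code&amp;gt;/etc/network/interfaces&amp;lt;/code&amp;gt; (the host address and interface name are placeholders, not cobalamin&#039;s actual values):&lt;br /&gt;

```
# Hypothetical stanza for a host on csc-data1 (VLAN 506);
# 129.97.18.18 is a placeholder inside the documented 129.97.18.16/29 range
auto eth0
iface eth0 inet static
    address 129.97.18.18
    netmask 255.255.255.240
    gateway 129.97.18.17
```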
&lt;br /&gt;
==&#039;&#039;auth2&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] secondary&lt;br /&gt;
*[[Kerberos]] secondary&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mail&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
TODO: &amp;quot;HA&amp;quot;-ish configuration&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
*[[Webmail]]&lt;br /&gt;
*[[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
TODO: gone??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 1 x Intel Xeon E5-2630 v3 (8 cores, 2.40 GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133Mhz)  ECC RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 9 x 18TB Seagate Exos X18 (8 ZFS, Z2,1 hot-spare)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the 4 campus internet links; it is not limited on campus.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mirror]]&lt;br /&gt;
*[[Talks]] mirror&lt;br /&gt;
*[[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;munin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
TODO: Debian 9?&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;yerba-mate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Also used for experimenting new CSC services.&lt;br /&gt;
&lt;br /&gt;
* TODO: use as backup server&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;citric-acid&#039;&#039;==&lt;br /&gt;
A Dell PowerEdge R815 (TODO: check model) provided by CSCF to replace [[Machine List#aspartame|aspartame]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* 2 x AMD Opteron 6174 (12 cores, 2.20 GHz)&lt;br /&gt;
* 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Services&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Configured for [https://pass.uwaterloo.ca pass.uwaterloo.ca], a university-wide password manager hosted by the CSC as a demo service for all Nexus (ADFS) users.&lt;br /&gt;
* [[Plane]], an internal (CSC) project management tool.&lt;br /&gt;
* Minio&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* Being repurposed for Termcom training and development.&lt;br /&gt;
* TODO: migrate Vaultwarden (https://pass.csclub.uwaterloo.ca/)??&lt;br /&gt;
* UFW opened-ports: SSH, HTTP/HTTPS&lt;br /&gt;
* Upgraded to Podman 4.x&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to syscom-only access.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chamomile&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x 2.20GHz 12-core processors (AMD Opteron(tm) Processor 6174)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Cloudstack host&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;riboflavin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R515 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 2.6 GHz 8-core processors (AMD Opteron(tm) Processor 4376 HE)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
* 2x 500GB internal SSD&lt;br /&gt;
* 12x Seagate 4TB SSHD&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack block and object storage for csclub.cloud&lt;br /&gt;
* ????&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;guayusa&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
&lt;br /&gt;
Was used to experiment with the following then-new CSC services:&lt;br /&gt;
&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* TODO: ???&lt;br /&gt;
** block1.cloud&lt;br /&gt;
** object1.cloud&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
* TODO: ditch... Currently being used to set up NextCloud.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginkgo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (also included a 50GbE SFP+ card, which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;biloba&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558. TODO: rack??&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&#039;&#039;&#039;Notes&#039;&#039;&#039;&lt;br /&gt;
* TODO: cloudstack migration&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs00&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs00 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* dual SFP connection to core switch&lt;br /&gt;
&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs01&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs01 is a &#039;&#039;&#039;NetApp FAS3040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
... TODO&lt;br /&gt;
&lt;br /&gt;
TODO: disconnected??&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs10&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs10 is a &#039;&#039;&#039;NetApp FAS8040&#039;&#039;&#039; series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* FAS8040 (dual heads)&lt;br /&gt;
** ... TODO&lt;br /&gt;
* 6 DS4324 HDD shelves (24-disks each)&lt;br /&gt;
** 24 x 2TB HDDs (assorted brands/models)&lt;br /&gt;
** Dual IOM3 controllers.&lt;br /&gt;
** Loop 1: bottom 4 shelves&lt;br /&gt;
** Loop 2: top 2 shelves + SSD shelf&lt;br /&gt;
* 1 DS2246 SSD shelf (TODO: right model?)&lt;br /&gt;
** 24 Samsung SM1625 SSDs (MZ-6ER2000/0G3), 200GB (SAS 2, 2.5&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
== ps3 ==&lt;br /&gt;
This is just a very wide PS3, the model that supported running Linux natively before a firmware update removed the feature; it can still be restored via homebrew.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* It&#039;s a PS3.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022-10-24&#039;&#039;&#039; - Thermal paste replaced + firmware updated to latest supported version, also modded.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;binaerpilot&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is currently attached to corn-syrup in the machine room, though presently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750Mhz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;anamanaguchi&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;digital cutter&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;aspartame&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
aspartame was a taurine clone donated by CSCF. It was once our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which was temporarily moved to [[#dextrose|dextrose]]. Decommissioned in March 2021 after refusing to boot following a power outage.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;psilodump&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
psilodump was plugged into aspartame. It&#039;s still installed but inaccessible.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phlogiston&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes from before decommissioning ====&lt;br /&gt;
&lt;br /&gt;
* The lxc files are still present and should not be started up, or else the two copies of auth1 will collide.&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;glomag&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Before its decommissioning, glomag hosted [[#caffeine|caffeine]], [[#mail|mail]], and [[#munin|munin]] as [[Virtualization#Linux_Container|Linux containers]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;Lisp machine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
See http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginseng&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the NetApp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [[wikipedia:Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29|RAID 10]] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;calum&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Calum used to be our main server and was named after Calum T Dalek. It was purchased new by the club in 1994.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* SPARCserver 10 (headless SPARCstation 10)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;paza&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233 MHz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;romana&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that had been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts that only support &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot, you need to set /proc/sys/net/ipv4/ip_no_pmtu_disc to 1. When the O2 boots, hit F5 at the boot menu and type bootp():.&lt;br /&gt;
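&lt;br /&gt;
Assuming the setting is applied on the Linux machine serving the boot image (an assumption; adjust for your setup), a sketch of the command would be:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# disable path MTU discovery so the O2&#039;s PROM can fetch the boot image&lt;br /&gt;
sysctl -w net.ipv4.ip_no_pmtu_disc=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;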
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;acesulfame-potassium&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;skynet&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
*[http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
*[http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
*[http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;freebsd&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;rainbowdragoneyes&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. This machine was aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;denardo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power machine on/off&lt;br /&gt;
==&#039;&#039;artificial-flavours&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-nitrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems, which are x86/x86-64 and run Debian Linux. After multiple unsuccessful attempts to boot a modern Linux kernel, and possible hardware instability, it was determined that putting more work into this machine would not be cost- or effort-effective. The system was reclaimed by MFCF, where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;taurine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, taurine has been decommissioned as of Fall 2019.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
*[[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21, 22, 53, 80, 81, 443, 8000, and 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;dextrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
dextrose was a [[#taurine|taurine]] clone donated by CSCF and was decommissioned in Fall 2019 after being replaced with a more powerful server.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sucrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
sucrose was a [[#taurine|taurine]] clone donated by CSCF. It was decommissioned in Fall 2019 following multiple hardware failures.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;goto80&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Note (2022-10-25): This seems to have gone missing or otherwise left our hands.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
This was a small ARM machine we picked up in order to have hardware similar to that used in the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. It was located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Use the -march=armv5te -mtune=arm926ej-s options with GCC.&lt;br /&gt;
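&lt;br /&gt;
For example, with a cross toolchain (the &amp;lt;code&amp;gt;arm-linux-gnueabi-&amp;lt;/code&amp;gt; prefix below is only an assumption; use whatever cross compiler you have):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# build for the TS-7800&#039;s ARM926EJ-S core&lt;br /&gt;
arm-linux-gnueabi-gcc -march=armv5te -mtune=arm926ej-s -o hello hello.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;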
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;nullsleep&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
nullsleep was an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It was decommissioned on 2023-03-20 due to repeated unexpected shutdowns. Replaced by [[#powernap|powernap]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep had the office speakers (a pair of nice studio monitors) connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep ran MPD for playing music. Control of MPD was available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music was located in /music on the office terminal.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
bit-shifter was an office terminal, decommissioned in April 2023 due to its advanced age. It was upgraded to the same specs as Strombola at an unknown point in time.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core 2 Quad CPU Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
Strombola was an office terminal named after Gordon Strombola. It was retired in April 2023.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium G4600, 2 cores @ 3.6 GHz&lt;br /&gt;
* 8 GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
gwem was an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin was an office terminal. It was upgraded in Spring 2014 after an unidentified failure. Not operational (no video output) as of July 2022.&lt;br /&gt;
&lt;br /&gt;
(*specs are outdated at least as of 2023-05-27*)&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours was an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the MC 3015 machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mail&amp;diff=5284</id>
		<title>Mail</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mail&amp;diff=5284"/>
		<updated>2024-10-21T19:31:53Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: add instructions on sieve/managesieve&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mail services are currently handled by [[Machine_List#mail|the mail container]] on [[Machine_List#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
== Reading your mail ==&lt;br /&gt;
&lt;br /&gt;
You can use any user agent that supports maildir locally (mutt, alpine, etc.), and any client that supports IMAP, either locally or remotely. We also have webmail.&lt;br /&gt;
&lt;br /&gt;
Here are the details:&lt;br /&gt;
&lt;br /&gt;
* maildir&lt;br /&gt;
** Location: $HOME/.maildir/&lt;br /&gt;
&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
** URL: https://mail.csclub.uwaterloo.ca/&lt;br /&gt;
&lt;br /&gt;
* POP3&lt;br /&gt;
** No longer supported.&lt;br /&gt;
&lt;br /&gt;
* IMAP&lt;br /&gt;
** Hostname: mail.csclub.uwaterloo.ca&lt;br /&gt;
** Port: 143 (IMAP), 993 (IMAPS)&lt;br /&gt;
&lt;br /&gt;
* SMTP&lt;br /&gt;
** Hostname: mail.csclub.uwaterloo.ca&lt;br /&gt;
** SSL encryption and authentication required&lt;br /&gt;
** Port: 25, 465, or 587&lt;br /&gt;
&lt;br /&gt;
== Mail Filtering ==&lt;br /&gt;
Mail filtering allows you to automatically organize mail, such as putting potential spam into a Junk folder, or putting notifications into a separate folder to keep your inbox clean.&lt;br /&gt;
&lt;br /&gt;
Mail filtering can be done by writing a sieve script. Traditionally, mail filtering was done through procmail, but procmail is currently being phased out due to its complex syntax and unmaintained state.&lt;br /&gt;
&lt;br /&gt;
The easiest way to do this is to use the Filters setting in our [https://mail.csclub.uwaterloo.ca Webmail]. You can either edit with the GUI or import a script. A simple script that puts suspected spam into a &amp;quot;Junk&amp;quot; folder and syscom emails into a &amp;quot;Mailing List&amp;quot; folder looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require [&amp;quot;fileinto&amp;quot;];&lt;br /&gt;
# rule:[Spam]&lt;br /&gt;
if allof (header :contains &amp;quot;X-Spam-Level&amp;quot; &amp;quot;******&amp;quot;)&lt;br /&gt;
{&lt;br /&gt;
	fileinto &amp;quot;Junk&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
# rule:[Mailing List]&lt;br /&gt;
if anyof (header :contains &amp;quot;list-id&amp;quot; &amp;quot;syscom.csclub.uwaterloo.ca&amp;quot;, header :contains &amp;quot;list-id&amp;quot; &amp;quot;syscom-alerts.csclub.uwaterloo.ca&amp;quot;, header :contains &amp;quot;list-id&amp;quot; &amp;quot;ceo.csclub.uwaterloo.ca&amp;quot;)&lt;br /&gt;
{&lt;br /&gt;
	fileinto &amp;quot;Mailing List&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more advanced use of sieve check out [https://doc.dovecot.org/2.3/configuration_manual/sieve/examples/ Pigeonhole Sieve examples - Dovecot].&lt;br /&gt;
&lt;br /&gt;
== Mail User Agents ==&lt;br /&gt;
Here are instructions on how to access your CSC email using some common Mail User Agents (a.k.a. &amp;quot;email clients&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
=== Apple Mail ===&lt;br /&gt;
Open the Mail app. On the Menu Bar, click on &#039;Mail&#039;, then &#039;Add account&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_select_account_provider.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Select &#039;Other mail account&#039;, then &#039;Continue&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_add_a_mail_account.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your real name, your CSC email address (should be watiam_id@csclub.uwaterloo.ca), and your CSC password. Click &#039;Sign in&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_imap_details.png|300px]]&lt;br /&gt;
&lt;br /&gt;
You will get an error saying &#039;Unable to verify account name or password&#039;. Fill in the details as shown above, then click &#039;Sign in&#039;.&lt;br /&gt;
Make sure to specify your WatIAM username as the username, and use &amp;lt;code&amp;gt;mail.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; for the incoming/outgoing&lt;br /&gt;
mail servers.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_select_apps_to_use_with_account.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Finally, check &#039;Mail&#039;, and click &#039;Done&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_mailboxes_button.png|200px]]&lt;br /&gt;
&lt;br /&gt;
If you had an existing Mail account, you will need to click on the &#039;Mailboxes&#039; button to see your CSC account. There will be a dropdown&lt;br /&gt;
button beside &#039;Inboxes&#039; on the left hand side where you can toggle between different inboxes.&lt;br /&gt;
&lt;br /&gt;
=== Windows Mail ===&lt;br /&gt;
&amp;lt;b&amp;gt;Note&amp;lt;/b&amp;gt;: Windows Mail can sometimes be &amp;lt;i&amp;gt;very&amp;lt;/i&amp;gt; slow. I have no idea why. If you&#039;re looking for a decent email client on Windows, I strongly suggest using Thunderbird or Evolution instead.&lt;br /&gt;
&lt;br /&gt;
Open the Mail app (as of this writing, 2021-04-23, its icon is a blue envelope). Click on &#039;Accounts&#039; on the left hand side, then click on the &#039;+ Add account&#039; button. Select &#039;Advanced setup&#039;:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_choose_account_type.PNG|300px]]&lt;br /&gt;
&lt;br /&gt;
Then choose &#039;Internet email&#039;:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_advanced_setup_type.PNG|300px]]&lt;br /&gt;
&lt;br /&gt;
Here are some of the settings you&#039;ll need (replace your username, address, etc.):&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_internet_account_info_1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
Here are the rest:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_internet_account_info_2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
Then click &#039;Sign in&#039;. It may take a &amp;lt;i&amp;gt;very&amp;lt;/i&amp;gt; long time to connect for the first time, especially if Windows is doing one of its dreaded updates in the background. If it&#039;s still hanging after a few hours, it might be a good idea to close the window and try again.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re signed in, you should be able to see your CSC account in the Mail app on the left hand side.&lt;br /&gt;
&lt;br /&gt;
=== Gmail (SMTP Relay) ===&lt;br /&gt;
It is possible to [https://support.google.com/mail/answer/6304825 link third-party email accounts to Gmail]. Here&#039;s one way to do it.&lt;br /&gt;
&lt;br /&gt;
Login to Gmail, go to Settings, and then under &#039;Accounts and Import&#039;, click &#039;Add another email address&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_settings_accounts_1.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your real name and CSC email address (should be watiam_id@csclub.uwaterloo.ca). I would suggest unchecking the &#039;Treat as an alias&#039;&lt;br /&gt;
box unless you want your CSC and Gmail addresses to be treated the same. See more info [https://support.google.com/a/answer/1710338 here].&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_another_email_address_you_own.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your CSC username and password:&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_account_credentials.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Google will send a confirmation email to your CSC address. Either click on the link in the email or enter the confirmation code.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_address_confirmation.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you return to Gmail, you should now see your CSC account under your settings. I suggest selecting the &#039;Reply from the same address the message was sent to&#039;&lt;br /&gt;
option.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_settings_accounts_2.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Now, if you click on the &#039;Compose&#039; button on the left hand side, you should be able to select your CSC address as the sender.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_choose_sender.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you want to receive your CSC messages via Gmail, just append your Gmail address to the end of the &amp;lt;code&amp;gt;.forward&amp;lt;/code&amp;gt; file in your home directory on the CSC servers (it needs to be on a new line).&lt;br /&gt;
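&lt;br /&gt;
For example (the Gmail address below is hypothetical; the backslash form keeps a local copy, as described under Forwarding), a &amp;lt;code&amp;gt;.forward&amp;lt;/code&amp;gt; might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
\ctdalek&lt;br /&gt;
ctdalek@gmail.com&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;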
&lt;br /&gt;
=== Outlook Desktop ===&lt;br /&gt;
&lt;br /&gt;
This is probably the world&#039;s most powerful email client, but you need to jump through a lot of hoops to set up your CSC email with it. Luckily, I&#039;ve done that for you, so just follow these steps:&lt;br /&gt;
&lt;br /&gt;
[[File:Ol1.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Open Outlook and click File at the top left.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol2.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click Account Settings and then Manage Profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol3.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click Email accounts...&lt;br /&gt;
&lt;br /&gt;
[[File:Ol4.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click New...&lt;br /&gt;
&lt;br /&gt;
[[File:Ol5.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Enter your name, CSC email and password. If you have an email alias, don&#039;t use your alias, use your QuestID@csclub.uwaterloo.ca email. Click Next &amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Ol6.png|600px]]&lt;br /&gt;
&lt;br /&gt;
It will start searching for your account; this can take a minute or two.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol7.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Once it finishes configuring, you&#039;ll get a test email.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol8.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Uncheck Set up Outlook Mobile on my phone (unless you want to), and check Change account settings. Then click Next &amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Ol9.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you have an email alias, you can now change your email to that in the Email Address field. Don&#039;t change your logon info. You can click More Settings to change your mailbox name, or click Finish (setup is complete).&lt;br /&gt;
&lt;br /&gt;
[[File:Ol10.png|600px]]&lt;br /&gt;
&lt;br /&gt;
You can change the name here. That&#039;s it. I&#039;ve provided the other two tabs&#039; configs below just in case anyone (including future me) needs it.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol11.png|600px]]&lt;br /&gt;
[[File:Ol12.png|600px]]&lt;br /&gt;
&lt;br /&gt;
=== Gnus ===&lt;br /&gt;
&lt;br /&gt;
Gnus is one of the MUAs built into GNU Emacs.  Gnus is very powerful and flexible, and comes with several &amp;quot;backends&amp;quot; out of the box for reading newsgroups, email, RSS feeds, and more.  Over the years, people have written many other backends for it as well.&lt;br /&gt;
&lt;br /&gt;
To get started using Gnus for reading your CSC mail over IMAPS, you can start with the following simple configuration based on Gnus&#039;s &amp;lt;code&amp;gt;nnimap&amp;lt;/code&amp;gt; backend:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
(setq mail-user-agent &#039;gnus-user-agent&lt;br /&gt;
      read-mail-command &#039;gnus&lt;br /&gt;
      gnus-select-method &#039;(nnnil &amp;quot;&amp;quot;)&lt;br /&gt;
      gnus-secondary-select-methods&lt;br /&gt;
      &#039;((nnimap &amp;quot;csc&amp;quot;&lt;br /&gt;
                (nnimap-stream tls)&lt;br /&gt;
                (nnimap-address &amp;quot;mail.csclub.uwaterloo.ca&amp;quot;)&lt;br /&gt;
                (nnimap-user &amp;quot;abandali&amp;quot;))))&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;gnus-secondary-select-methods&amp;lt;/code&amp;gt; variable set above is the most important bit.&lt;br /&gt;
&lt;br /&gt;
For reference&#039;s sake, here&#039;s how we can do client-side mail splitting in Gnus: say we want to move all messages with an &amp;lt;code&amp;gt;X-Spam-Flag&amp;lt;/code&amp;gt; header of &amp;lt;code&amp;gt;YES&amp;lt;/code&amp;gt; to the Junk folder; here&#039;s how we tell Gnus to do that:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
(setq gnus-secondary-select-methods&lt;br /&gt;
      &#039;((nnimap &amp;quot;csc&amp;quot;&lt;br /&gt;
                (nnimap-stream tls)&lt;br /&gt;
                (nnimap-address &amp;quot;mail.csclub.uwaterloo.ca&amp;quot;)&lt;br /&gt;
                (nnimap-user &amp;quot;abandali&amp;quot;)&lt;br /&gt;
                (nnimap-inbox &amp;quot;INBOX&amp;quot;)&lt;br /&gt;
                (nnimap-split-methods &#039;nnimap-split-fancy)&lt;br /&gt;
                (nnimap-split-fancy&lt;br /&gt;
                 (|&lt;br /&gt;
                  ;; move spam to Junk&lt;br /&gt;
                  (&amp;quot;X-Spam-Flag&amp;quot; &amp;quot;YES&amp;quot; &amp;quot;Junk&amp;quot;)&lt;br /&gt;
                  ;; catch-all; leave everything else in inbox&lt;br /&gt;
                  &amp;quot;INBOX&amp;quot;)))))&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Gnus has a plethora of useful and complex features, and one can get very fancy with it.  But that is left as an exercise for the [https://www.gnu.org/software/emacs/manual/gnus.html interested reader]. :-)&lt;br /&gt;
&lt;br /&gt;
== Technical Details ==&lt;br /&gt;
&lt;br /&gt;
=== Mail Transfer (Incoming) ===&lt;br /&gt;
&lt;br /&gt;
[http://www.postfix.org/ Postfix] is our MTA and runs on mail. Incoming mail is received inbound on smtp/25 or ssmtp/465 and goes through a sequence of filters before being delivered to users.&lt;br /&gt;
&lt;br /&gt;
We are using the following filters for incoming mail, to combat spam and malware:&lt;br /&gt;
&lt;br /&gt;
* zen.spamhaus.org RBL&lt;br /&gt;
* Greylisting with rspamd (see below)&lt;br /&gt;
&lt;br /&gt;
These filters reject truckloads of spam, preventing it from reaching your inbox. Greylisting adds a delay to mail delivery from unknown servers, but after a small number of successes, they will be auto-whitelisted. If that isn&#039;t good enough, ask systems-committee@csclub.uwaterloo.ca to whitelist all mail to your address.&lt;br /&gt;
&lt;br /&gt;
=== Spam filtering ===&lt;br /&gt;
Before mail is delivered, it is sent to rspamd for spam checking. rspamd might greylist the mail and/or add headers to it, but it WON&#039;T reject the mail on its own. It is up to the user&#039;s filter to decide what to do based on the spam headers (usually to file mail tagged as spam into a folder such as Junk).&lt;br /&gt;
&lt;br /&gt;
=== Mail Delivery ===&lt;br /&gt;
&lt;br /&gt;
User mail is delivered to Dovecot via LMTP. The destination is configurable: add a comma-separated list of destinations to $HOME/.forward. See aliases(5) for more details.&lt;br /&gt;
&lt;br /&gt;
Dovecot, in turn, runs the mail through the user&#039;s sieve filter script (in $HOME/.maildir/sieve/, with the active filter symlinked to $HOME/.maildir/.dovecot.sieve). If no sieve script is found, Dovecot defaults to an internal sieve script, which pipes the mail through procmail to maintain compatibility with existing $HOME/.procmailrc scripts. You can write sieve scripts by hand, or use the graphical editor provided by https://mail.csclub.uwaterloo.ca, under Settings/Filters.&lt;br /&gt;
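&lt;br /&gt;
For illustration, a minimal hand-written sieve script that files spam-tagged mail into the Junk folder might look like the following (a sketch; the header checked and the folder name should match your own setup):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
require [&amp;quot;fileinto&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
# file mail tagged by the spam filter into Junk&lt;br /&gt;
if header :is &amp;quot;X-Spam-Flag&amp;quot; &amp;quot;YES&amp;quot; {&lt;br /&gt;
    fileinto &amp;quot;Junk&amp;quot;;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;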
&lt;br /&gt;
Note that procmail compatibility might be removed in the future.&lt;br /&gt;
&lt;br /&gt;
==== Failures ====&lt;br /&gt;
&lt;br /&gt;
If you are out of quota or another error occurs writing to your home directory, dovecot will deliver your message to /var/mail/$USER on the mail server. If that too fails, the server is probably on fire. The message will be returned to the queue where it will eventually bounce.&lt;br /&gt;
&lt;br /&gt;
==== Sieve/ManageSieve ====&lt;br /&gt;
Dovecot also allows editing sieve scripts via the ManageSieve protocol on port 4190.&lt;br /&gt;
&lt;br /&gt;
==== Forwarding ====&lt;br /&gt;
&lt;br /&gt;
Place the following in $HOME/.forward to keep a local copy of your mail as well as forward it to some other email account. Replace ctdalek with your CSC username, but make sure the backslash stays.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
\ctdalek&lt;br /&gt;
calumt@dalek.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mail Retrieval ===&lt;br /&gt;
&lt;br /&gt;
We run [http://www.dovecot.org Dovecot], an IMAP server. It reads messages from $HOME/.maildir, so if you have procmail deliver your mail elsewhere you will be unable to retrieve your mail using IMAP.&lt;br /&gt;
&lt;br /&gt;
=== Mail Submission (Outgoing) ===&lt;br /&gt;
&lt;br /&gt;
On the mail container, outgoing mail is submitted directly to Postfix via the sendmail(1) wrapper or on submission/587. Submitted mail is then queued for delivery to its destination. The other systems have no MTA and instead run sSMTP, which relays mail through the mail container immediately without any queue or daemon.&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Mail&amp;diff=5283</id>
		<title>Mail</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Mail&amp;diff=5283"/>
		<updated>2024-10-21T19:21:42Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: mail refresh on 2024-10&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mail services are currently handled by [[Machine_List#mail|the mail container]] on [[Machine_List#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
== Reading your mail ==&lt;br /&gt;
&lt;br /&gt;
You can use any user agent that supports maildir locally (mutt, alpine, etc), and any client that supports IMAP either locally or remotely. We also have webmail.&lt;br /&gt;
&lt;br /&gt;
Here are the details:&lt;br /&gt;
&lt;br /&gt;
* maildir&lt;br /&gt;
** Location: $HOME/.maildir/&lt;br /&gt;
&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
** URL: https://mail.csclub.uwaterloo.ca/&lt;br /&gt;
&lt;br /&gt;
* POP3&lt;br /&gt;
** No longer supported.&lt;br /&gt;
&lt;br /&gt;
* IMAP&lt;br /&gt;
** Hostname: mail.csclub.uwaterloo.ca&lt;br /&gt;
** Port: 143 (IMAP), 993 (IMAPS)&lt;br /&gt;
&lt;br /&gt;
* SMTP&lt;br /&gt;
** Hostname: mail.csclub.uwaterloo.ca&lt;br /&gt;
** SSL encryption and authentication required&lt;br /&gt;
** Port: 25, 465, or 587&lt;br /&gt;
&lt;br /&gt;
== Mail User Agents ==&lt;br /&gt;
Here are instructions on how to access your CSC email using some common Mail User Agents (a.k.a. &amp;quot;email clients&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
=== Apple Mail ===&lt;br /&gt;
Open the Mail app. On the Menu Bar, click on &#039;Mail&#039;, then &#039;Add account&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_select_account_provider.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Select &#039;Other mail account&#039;, then &#039;Continue&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_add_a_mail_account.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your real name, your CSC email address (should be watiam_id@csclub.uwaterloo.ca), and your CSC password. Click &#039;Sign in&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_imap_details.png|300px]]&lt;br /&gt;
&lt;br /&gt;
You will get an error saying &#039;Unable to verify account name or password&#039;. Fill in the details as shown above, then click &#039;Sign in&#039;.&lt;br /&gt;
Make sure to specify your WatIAM username as the username, and use &amp;lt;code&amp;gt;mail.csclub.uwaterloo.ca&amp;lt;/code&amp;gt; for the incoming/outgoing&lt;br /&gt;
mail servers.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_select_apps_to_use_with_account.png|300px]]&lt;br /&gt;
&lt;br /&gt;
Finally, check &#039;Mail&#039;, and click &#039;Done&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Apple_mail_mailboxes_button.png|200px]]&lt;br /&gt;
&lt;br /&gt;
If you had an existing Mail account, you will need to click on the &#039;Mailboxes&#039; button to see your CSC account. There will be a dropdown&lt;br /&gt;
button beside &#039;Inboxes&#039; on the left hand side where you can toggle between different inboxes.&lt;br /&gt;
&lt;br /&gt;
=== Windows Mail ===&lt;br /&gt;
&amp;lt;b&amp;gt;Note&amp;lt;/b&amp;gt;: Windows Mail can be &amp;lt;i&amp;gt;very&amp;lt;/i&amp;gt; slow sometimes. I have no idea why. If you&#039;re looking for a decent email client on Windows, I strongly suggest using Thunderbird or Evolution instead.&lt;br /&gt;
&lt;br /&gt;
Open the Mail app (as of this writing, 2021-04-23, its icon is a blue envelope). Click on &#039;Accounts&#039; on the left hand side, then click on the &#039;+ Add account&#039; button. Select &#039;Advanced setup&#039;:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_choose_account_type.PNG|300px]]&lt;br /&gt;
&lt;br /&gt;
Then choose &#039;Internet email&#039;:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_advanced_setup_type.PNG|300px]]&lt;br /&gt;
&lt;br /&gt;
Here are some of the settings you&#039;ll need (replace your username, address, etc.):&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_internet_account_info_1.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
Here are the rest:&lt;br /&gt;
&lt;br /&gt;
[[File:Windows_mail_internet_account_info_2.PNG|400px]]&lt;br /&gt;
&lt;br /&gt;
Then click &#039;Sign in&#039;. It may take a &amp;lt;i&amp;gt;very&amp;lt;/i&amp;gt; long time to connect for the first time, especially if Windows is doing one of its dreaded updates in the background. If it&#039;s still hanging after a few hours, it might be a good idea to close the window and try again.&lt;br /&gt;
&lt;br /&gt;
Once you&#039;re signed in, you should be able to see your CSC account in the Mail app on the left hand side.&lt;br /&gt;
&lt;br /&gt;
=== Gmail (SMTP Relay) ===&lt;br /&gt;
It is possible to [https://support.google.com/mail/answer/6304825 link third-party email accounts to Gmail]. Here&#039;s one way to do it.&lt;br /&gt;
&lt;br /&gt;
Login to Gmail, go to Settings, and then under &#039;Accounts and Import&#039;, click &#039;Add another email address&#039;.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_settings_accounts_1.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your real name and CSC email address (should be watiam_id@csclub.uwaterloo.ca). I would suggest unchecking the &#039;Treat as an alias&#039;&lt;br /&gt;
box unless you want your CSC and Gmail addresses to be treated the same. See more info [https://support.google.com/a/answer/1710338 here].&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_another_email_address_you_own.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Fill in your CSC username and password:&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_account_credentials.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Google will send a confirmation email to your CSC address. Either click on the link in the email or enter the confirmation code.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_add_address_confirmation.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you return to Gmail, you should now see your CSC account under your settings. I suggest selecting the &#039;Reply from the same address the message was sent to&#039;&lt;br /&gt;
option.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_settings_accounts_2.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Now, if you click on the &#039;Compose&#039; button on the left hand side, you should be able to select your CSC address as the sender.&lt;br /&gt;
&lt;br /&gt;
[[File:Gmail_choose_sender.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you want to receive your CSC messages via Gmail, just append your Gmail address to the end of the &amp;lt;code&amp;gt;.forward&amp;lt;/code&amp;gt; file in your home directory on the CSC servers (it needs to be on a new line).&lt;br /&gt;
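&lt;br /&gt;
For example, with a (hypothetical) CSC username of ctdalek and a Gmail address of ctdalek@gmail.com, the resulting &amp;lt;code&amp;gt;.forward&amp;lt;/code&amp;gt; file keeps a local copy and forwards to Gmail:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
\ctdalek&lt;br /&gt;
ctdalek@gmail.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;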
&lt;br /&gt;
=== Outlook Desktop ===&lt;br /&gt;
&lt;br /&gt;
This is probably the world&#039;s most powerful email client, but you need to jump through a lot of hoops to set up your CSC email with it. Luckily, I&#039;ve done that for you, so just follow these steps:&lt;br /&gt;
&lt;br /&gt;
[[File:Ol1.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Open Outlook and click File at the top left.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol2.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click Account Settings and then Manage Profiles.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol3.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click Email accounts...&lt;br /&gt;
&lt;br /&gt;
[[File:Ol4.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Click New...&lt;br /&gt;
&lt;br /&gt;
[[File:Ol5.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Enter your name, CSC email, and password. If you have an email alias, don&#039;t use it; use your QuestID@csclub.uwaterloo.ca email. Click Next &amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Ol6.png|600px]]&lt;br /&gt;
&lt;br /&gt;
It will start searching for your account; this can take a minute or two.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol7.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Once it finishes configuring the account, you&#039;ll get a test email.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol8.png|600px]]&lt;br /&gt;
&lt;br /&gt;
Uncheck Set up Outlook Mobile on my phone (unless you want to), and check Change account settings. Then click Next &amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Ol9.png|600px]]&lt;br /&gt;
&lt;br /&gt;
If you have an email alias, you can now change your email to that in the Email Address field. Don&#039;t change your logon info. You can click More Settings to change your mailbox name, or click Finish (setup is complete).&lt;br /&gt;
&lt;br /&gt;
[[File:Ol10.png|600px]]&lt;br /&gt;
&lt;br /&gt;
You can change the name here. That&#039;s it. I&#039;ve provided the other two tabs&#039; configs below just in case anyone (including future me) needs them.&lt;br /&gt;
&lt;br /&gt;
[[File:Ol11.png|600px]]&lt;br /&gt;
[[File:Ol12.png|600px]]&lt;br /&gt;
&lt;br /&gt;
=== Gnus ===&lt;br /&gt;
&lt;br /&gt;
Gnus is one of the MUAs built into GNU Emacs.  Gnus is very powerful and flexible, and comes with several &amp;quot;backends&amp;quot; out of the box for reading newsgroups, email, RSS feeds, and more.  Over the years people have written many other backends for it as well.&lt;br /&gt;
&lt;br /&gt;
To get started using Gnus for reading your CSC mail over IMAPS, you can start with the following simple configuration based on Gnus&#039;s &amp;lt;code&amp;gt;nnimap&amp;lt;/code&amp;gt; backend:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
(setq mail-user-agent &#039;gnus-user-agent&lt;br /&gt;
      read-mail-command &#039;gnus&lt;br /&gt;
      gnus-select-method &#039;(nnnil &amp;quot;&amp;quot;)&lt;br /&gt;
      gnus-secondary-select-methods&lt;br /&gt;
      &#039;((nnimap &amp;quot;csc&amp;quot;&lt;br /&gt;
                (nnimap-stream tls)&lt;br /&gt;
                (nnimap-address &amp;quot;mail.csclub.uwaterloo.ca&amp;quot;)&lt;br /&gt;
                (nnimap-user &amp;quot;abandali&amp;quot;))))&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;gnus-secondary-select-methods&amp;lt;/code&amp;gt; variable set above is the most important bit.&lt;br /&gt;
&lt;br /&gt;
For reference sake, here&#039;s how we can do client-side mail splitting in Gnus: say we want to move all messages with a &amp;lt;code&amp;gt;X-Spam-Flag&amp;lt;/code&amp;gt; header of &amp;lt;code&amp;gt;YES&amp;lt;/code&amp;gt; to the Junk folder; here&#039;s how we tell Gnus to do that:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
(setq gnus-secondary-select-methods&lt;br /&gt;
      &#039;((nnimap &amp;quot;csc&amp;quot;&lt;br /&gt;
                (nnimap-stream tls)&lt;br /&gt;
                (nnimap-address &amp;quot;mail.csclub.uwaterloo.ca&amp;quot;)&lt;br /&gt;
                (nnimap-user &amp;quot;abandali&amp;quot;)&lt;br /&gt;
                (nnimap-inbox &amp;quot;INBOX&amp;quot;)&lt;br /&gt;
                (nnimap-split-methods &#039;nnimap-split-fancy)&lt;br /&gt;
                (nnimap-split-fancy&lt;br /&gt;
                 (|&lt;br /&gt;
                  ;; move spam to Junk&lt;br /&gt;
                  (&amp;quot;X-Spam-Flag&amp;quot; &amp;quot;YES&amp;quot; &amp;quot;Junk&amp;quot;)&lt;br /&gt;
                  ;; catch-all; leave everything else in inbox&lt;br /&gt;
                  &amp;quot;INBOX&amp;quot;)))))&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Gnus has a plethora of useful and complex features, and one can get very fancy with it.  But that is left as an exercise for the [https://www.gnu.org/software/emacs/manual/gnus.html interested reader]. :-)&lt;br /&gt;
&lt;br /&gt;
== Spam filtering ==&lt;br /&gt;
&lt;br /&gt;
SpamAssassin is run on all incoming mail, but no action is taken based on the results. The results are appended to the headers of the email, so you can act on them yourself. We are running a shared Bayesian learner for all users&#039; email, so there is a chance you may not receive legitimate mail due to false positives.&lt;br /&gt;
&lt;br /&gt;
To use your own Bayesian learner instead of the site-wide one, simply add the following to &amp;lt;code&amp;gt;~/.spamassassin/user_prefs&amp;lt;/code&amp;gt;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bayes_path ~/.spamassassin/bayes&lt;br /&gt;
bayes_auto_learn 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Alternatively, to disable Bayesian tests altogether:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
use_bayes 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can configure procmail (the program that Postfix calls to deliver incoming mail to its recipient) to place a message in a special folder and/or delete it, based on its spam score and/or whether it was flagged as spam. To do this, configure procmail via .procmailrc in your home directory. An example .procmailrc is below (adapted from [https://wiki2.dovecot.org/procmail here]):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
SHELL=&amp;quot;/bin/bash&amp;quot;&lt;br /&gt;
DELIVER=&amp;quot;/usr/lib/dovecot/deliver -d $LOGNAME&amp;quot;&lt;br /&gt;
DEFAULT=&amp;quot;$HOME/.maildir/&amp;quot;&lt;br /&gt;
MAILDIR=&amp;quot;$HOME/.maildir/&amp;quot;&lt;br /&gt;
LOGFILE=$MAILDIR/procmail.log&lt;br /&gt;
LOGABSTRACT=all&lt;br /&gt;
VERBOSE=off&lt;br /&gt;
&lt;br /&gt;
# send spam to Trash folder&lt;br /&gt;
:0 w&lt;br /&gt;
* ^X-Spam-Status: Yes&lt;br /&gt;
| $DELIVER -m Trash&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The folder to which the messages are sent must exist first. To create a new IMAP folder in the Roundcube web client, click on the gear icon in the lower left corner.&lt;br /&gt;
&lt;br /&gt;
== Technical Details ==&lt;br /&gt;
&lt;br /&gt;
=== Mail Transfer (Incoming) ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[http://www.postfix.org/ Postfix] is our MTA and runs on the mail container. Incoming mail is received on smtp/25 or ssmtp/465 and goes through a sequence of filters before being delivered to users.&lt;br /&gt;
&lt;br /&gt;
We are using the following filters for incoming mail, to combat spam and malware:&lt;br /&gt;
&lt;br /&gt;
* zen.spamhaus.org RBL&lt;br /&gt;
* Greylisting with rspamd (see below)&lt;br /&gt;
&lt;br /&gt;
These filters reject truckloads of spam, preventing it from reaching your inbox. Greylisting adds a delay to mail delivery from unknown servers, but after a small number of successful deliveries the sending server is auto-whitelisted. If that isn&#039;t good enough, ask systems-committee@csclub.uwaterloo.ca to whitelist all mail to your address.&lt;br /&gt;
&lt;br /&gt;
=== Spam filtering ===&lt;br /&gt;
Before mail is delivered, it is sent to rspamd for spam checking. rspamd might greylist the mail and/or add headers to it, but it WON&#039;T reject the mail on its own. It is up to the user&#039;s filter to decide what to do based on the spam headers (usually to file mail tagged as spam into a folder such as Junk).&lt;br /&gt;
&lt;br /&gt;
=== Mail Delivery ===&lt;br /&gt;
&lt;br /&gt;
User mail is delivered to Dovecot via LMTP. The destination is configurable: add a comma-separated list of destinations to $HOME/.forward. See aliases(5) for more details.&lt;br /&gt;
&lt;br /&gt;
Dovecot, in turn, runs the mail through the user&#039;s sieve filter script (in $HOME/.maildir/sieve/, with the active filter symlinked to $HOME/.maildir/.dovecot.sieve). If no sieve script is found, Dovecot defaults to an internal sieve script, which pipes the mail through procmail to maintain compatibility with existing $HOME/.procmailrc scripts. You can write sieve scripts by hand, or use the graphical editor provided by https://mail.csclub.uwaterloo.ca, under Settings/Filters.&lt;br /&gt;
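&lt;br /&gt;
For illustration, a minimal hand-written sieve script that files spam-tagged mail into the Junk folder might look like the following (a sketch; the header checked and the folder name should match your own setup):&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
require [&amp;quot;fileinto&amp;quot;];&lt;br /&gt;
&lt;br /&gt;
# file mail tagged by the spam filter into Junk&lt;br /&gt;
if header :is &amp;quot;X-Spam-Flag&amp;quot; &amp;quot;YES&amp;quot; {&lt;br /&gt;
    fileinto &amp;quot;Junk&amp;quot;;&lt;br /&gt;
}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;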
&lt;br /&gt;
Note that procmail compatibility might be removed in the future.&lt;br /&gt;
&lt;br /&gt;
==== Failures ====&lt;br /&gt;
&lt;br /&gt;
If you are out of quota or another error occurs writing to your home directory, dovecot will deliver your message to /var/mail/$USER on the mail server. If that too fails, the server is probably on fire. The message will be returned to the queue where it will eventually bounce.&lt;br /&gt;
&lt;br /&gt;
==== Forwarding ====&lt;br /&gt;
&lt;br /&gt;
Place the following in $HOME/.forward to keep a local copy of your mail as well as forward it to some other email account. Replace ctdalek with your CSC username, but make sure the backslash stays.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
\ctdalek&lt;br /&gt;
calumt@dalek.com&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Mail Retrieval ===&lt;br /&gt;
&lt;br /&gt;
We run [http://www.dovecot.org Dovecot], an IMAP server. It reads messages from $HOME/.maildir, so if you have procmail deliver your mail elsewhere you will be unable to retrieve your mail using IMAP.&lt;br /&gt;
&lt;br /&gt;
=== Mail Submission (Outgoing) ===&lt;br /&gt;
&lt;br /&gt;
On the mail container, outgoing mail is submitted directly to Postfix via the sendmail(1) wrapper or on submission/587. Submitted mail is then queued for delivery to its destination. The other systems have no MTA and instead run sSMTP, which relays mail through the mail container immediately without any queue or daemon.&lt;br /&gt;
&lt;br /&gt;
[[Category:Software]]&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Past_Executive&amp;diff=5281</id>
		<title>Past Executive</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Past_Executive&amp;diff=5281"/>
		<updated>2024-10-15T14:25:51Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: split current positions and historical positions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Data sources for this exec list have been: CSC records, MathNEWS.&lt;br /&gt;
According to the warrior wiki dudes, there was an article about the CSC being founded in the chevron: &#039;&#039;This week on campus&#039;&#039;. The Chevron. January 5 1968. Page 16. -- somebody should get a copy of that.&lt;br /&gt;
&lt;br /&gt;
= Definitions =&lt;br /&gt;
 #define PR President&lt;br /&gt;
 #define VP Vice-president&lt;br /&gt;
 #define TR Treasurer&lt;br /&gt;
 #define SE Secretary&lt;br /&gt;
 #define AV Assistant Vice-president&lt;br /&gt;
 #define SA Sysadmin&lt;br /&gt;
 #define OF Office Manager&lt;br /&gt;
 #define LI Librarian&lt;br /&gt;
 #define WW Webmaster&lt;br /&gt;
&lt;br /&gt;
 #ifdef __HISTORICAL__&lt;br /&gt;
 #define FL Flasher&lt;br /&gt;
 #define DE Deity&lt;br /&gt;
 #define SE-TR Secretary-Treasurer (Position was split)&lt;br /&gt;
 #define FR Fridge Regent (IMAPd)&lt;br /&gt;
 #endif /* __HISTORICAL__ */&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float:right;margin-left: 1em;&amp;quot;&amp;gt;__TOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Founding 1967=&lt;br /&gt;
&lt;br /&gt;
 Sponsor - J. Peter Sprung&lt;br /&gt;
 PR: K. Rugger&lt;br /&gt;
 VP: R. Jaques&lt;br /&gt;
 SE-TR: G. Sutherland&lt;br /&gt;
&lt;br /&gt;
 Founding Members:&lt;br /&gt;
 B. Kindree&lt;br /&gt;
 R. Melen&lt;br /&gt;
 V. Neglia&lt;br /&gt;
 R. Charney&lt;br /&gt;
 R. Truman&lt;br /&gt;
 Glenn Berry&lt;br /&gt;
 D. Meek&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Bill Kindred&lt;br /&gt;
 VP: Rick Jacques&lt;br /&gt;
 SE-TR: Graham Sutherland&lt;br /&gt;
&lt;br /&gt;
Committee members: R. Stallwerthy, C. de Vries&lt;br /&gt;
&lt;br /&gt;
=1968=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 PR: Bill Kindred&lt;br /&gt;
 VP: Rick Jacques&lt;br /&gt;
 SE-TR: Graham Sutherland&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 SE-TR: Glenn Berry&lt;br /&gt;
&lt;br /&gt;
=1969=&lt;br /&gt;
&lt;br /&gt;
Unknown; only one letter was found in the folder &#039;ACM History&#039;, addressed to Glenn Berry, which makes it likely that he was SE-TR once again. May be indicated in membership lists. The club appears to have died this academic year.&lt;br /&gt;
&lt;br /&gt;
=1970=&lt;br /&gt;
&lt;br /&gt;
===A note on ACM affiliation===&lt;br /&gt;
&lt;br /&gt;
The first attempt at joining the ACM began with an informal inquiry on Dec 5, 1967. This led to a series of constitution edits (working towards affiliation) in Winter 1968. There was a break for the spring (no correspondence found; I presume we were waiting on a reply). In the fall, records indicate that our constitution and chartering were rejected; further correspondence was sent in Fall 1968 by Glenn Berry. A new inquiry, seemingly unaware of the first, was sent Dec 7, 1970.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Rick Beach&lt;br /&gt;
 VP: Lee Santon&lt;br /&gt;
 TR: Randy Melen&lt;br /&gt;
 SE: Vic Neglia&lt;br /&gt;
&lt;br /&gt;
=1971=&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 VP: James H. &amp;quot;Jim&amp;quot; Finch and James W. Welch both signed letters as VP.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 VP: James W. Welch&lt;br /&gt;
&lt;br /&gt;
=1972=&lt;br /&gt;
&lt;br /&gt;
It appears we visited Western and Western visited us this year (there is some reference to a similar occurrence the year previous). Documents from 1973 indicate a termly exec structure; this probably goes back to 1972.&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 PR: Mike Campbell&lt;br /&gt;
 VP: Edgar Hew&lt;br /&gt;
 SE-TR: Doug Lacy&lt;br /&gt;
&lt;br /&gt;
There is also stuff from James W. Welch without a position.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Ian McIntosh&lt;br /&gt;
&lt;br /&gt;
=1973=&lt;br /&gt;
&lt;br /&gt;
 Faculty Sponsor: Morven Gentleman&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 SE: Douglas E. Lacy&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 PR: Jim Parry&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Jim Parry&lt;br /&gt;
 VP: Ray Walden&lt;br /&gt;
 TR: Slavko Stemberger&lt;br /&gt;
 SE: Mario Festival&lt;br /&gt;
&lt;br /&gt;
=1974=&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Russell Crook&lt;br /&gt;
&lt;br /&gt;
=1975-1977=&lt;br /&gt;
&lt;br /&gt;
 Faculty Sponsor: Morven Gentleman??&lt;br /&gt;
&lt;br /&gt;
 Peter Raynham reports (first hand account): president for at least 2 or 3 terms in this period.&lt;br /&gt;
 Sylvia Eng: 1975/6 as some position.&lt;br /&gt;
 Dave Buckingham: a VP at some point&lt;br /&gt;
 Allison Nolan: 1977 time&lt;br /&gt;
 Peter Stevens: 1977&lt;br /&gt;
 Russell Crook???&lt;br /&gt;
&lt;br /&gt;
Dennis Ritchie came. So did Jeffrey D. Ullman.&lt;br /&gt;
&lt;br /&gt;
=1976=&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 &amp;lt;code&amp;gt;Progcom: Peter Stevens&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=1977=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 Progcom: Allison Nowlan&lt;br /&gt;
&lt;br /&gt;
===Spring=== &lt;br /&gt;
&lt;br /&gt;
 PR: Peter Stevens&lt;br /&gt;
 Progcom: Allison Nowlan&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Andrzej Jan Taramina&lt;br /&gt;
 Progcom: Allison Nowlan&lt;br /&gt;
&lt;br /&gt;
=1978=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 PR: Peter Stevens&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 TR: K.G. Dykes&lt;br /&gt;
 SE: Kandry Mutheardy&lt;br /&gt;
&lt;br /&gt;
Brian Kernighan gave a talk this term. So did Ken Thompson.&lt;br /&gt;
&lt;br /&gt;
=1979=&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 PR: Robert Biddle&lt;br /&gt;
=1987=&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Jim Boritz&lt;br /&gt;
 VP: Ted Timar&lt;br /&gt;
 TR: Gayla Boritz&lt;br /&gt;
 SE: Edwin Hoogerbeets&lt;br /&gt;
&lt;br /&gt;
=1988=&lt;br /&gt;
&lt;br /&gt;
Tim Timar - cc&#039;d on memos/mentioned on mathsoc minutes in 1987/88.&lt;br /&gt;
The Sysadmin and Office Manager positions seem to have been created somewhere in here. The &#039;Record Management Profile&#039; that Rob*n Stewart did as an assignment in 1991-1992 for some class at UBC&lt;br /&gt;
indicates the existence of both positions. We acquired an HP-9000 in the summer of 1988 and as this was our first &amp;quot;real&amp;quot; computer (previously we had an IBM PC and terminal), the sysadmin position was created, starting with the Fall 1988 term.&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Jim Boritz&lt;br /&gt;
(Source: https://csclub.uwaterloo.ca/misc/procedure.pdf)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 SA: Wade Richards&lt;br /&gt;
&lt;br /&gt;
=1989=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
https://mirror.csclub.uwaterloo.ca/csclub/bill-gates-1989-big.jpg&lt;br /&gt;
&lt;br /&gt;
Left to right:  Jim Boritz (bottom), Wade Richards (top), Ted Timar, ???, Kevin Smith, Bill Gates (not exec), Angela Chambers, Ross Ridge (top), Sean Goggin (bottom), ??? &lt;br /&gt;
&lt;br /&gt;
 PR: Barry W. Smith&lt;br /&gt;
 VP: Angela Chambers&lt;br /&gt;
 SE: Sean Goggin&lt;br /&gt;
 SA: Wade Richards / Ross Ridge&lt;br /&gt;
&lt;br /&gt;
(President Kevin Smith confirmed: https://csclub.uwaterloo.ca/misc/procedure.pdf)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 PR: Jim Thornton&lt;br /&gt;
 VP: Gayla Boritz&lt;br /&gt;
 TR: David Fenger&lt;br /&gt;
 SE: Kivi Shapiro&lt;br /&gt;
 SA: Reid Pinchback&lt;br /&gt;
&lt;br /&gt;
Assistance to sysadmin: Jim Boritz.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: James Boritz&lt;br /&gt;
 VP: Edmond Bourne&lt;br /&gt;
 SA: Ross Ridge&lt;br /&gt;
&lt;br /&gt;
=1990=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 TR: Jim Thornton&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 TR: Karen Smith&lt;br /&gt;
 SE: Rob*n Stewart&lt;br /&gt;
Robyn/Robin signed her emails as Rob*n and requested to be listed as such on this page.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Wade Richards&lt;br /&gt;
 TR: Carolyn Duke&lt;br /&gt;
 SE: Rob*n Stewart - attended mathsoc meeting on our behalf.&lt;br /&gt;
 Kivi Shapiro - attended mathsoc meeting on our behalf.&lt;br /&gt;
              - Censured by mathsoc for his actions during the election.&lt;br /&gt;
 Shannon Mann - attended mathsoc meeting on our behalf.&lt;br /&gt;
&lt;br /&gt;
=1991=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 VP: Edmond Bourne&lt;br /&gt;
 TR: Carolyn Duke&lt;br /&gt;
 SE: Rob*n Stewart&lt;br /&gt;
 Shannon Mann - attended mathsoc meeting on our behalf.&lt;br /&gt;
&lt;br /&gt;
John McCarthy came this term.&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 TR: Rob Leitman&lt;br /&gt;
 Jason Knell - attended mathsoc meeting on our and PMC&#039;s behalf.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 TR: Mike Van Lingen&lt;br /&gt;
 Wiktor Wiewiorowski - attended mathsoc meeting on our behalf this term.&lt;br /&gt;
=1992=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 TR: Norm Ross&lt;br /&gt;
 SE: Brent Williams&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Dale Wick&lt;br /&gt;
 TR: Stephen A. Mills&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 TR: Mark Plumb&lt;br /&gt;
=1993=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 TR: Rob Leitman&lt;br /&gt;
 VP: Tim Prime&lt;br /&gt;
 OF: Dave Ebbo&lt;br /&gt;
 LI: Norm Ross&lt;br /&gt;
&lt;br /&gt;
Other exec for this term: Ellen Hsiang, Sam Coulombe, Peter Gray&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 TR: Mark Tompsett &lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Ian Goldberg&lt;br /&gt;
&lt;br /&gt;
=1994=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Ian Goldberg&lt;br /&gt;
 TR: Mark Tompsett&lt;br /&gt;
 SE: Tom Rathbourne&lt;br /&gt;
 LI: Michael Van Biesbrouck&lt;br /&gt;
Norm Ross assisted with finances.&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Dale Wick (?)&lt;br /&gt;
 TR: Steve Mills&lt;br /&gt;
 SA: Ian Goldberg (?)&lt;br /&gt;
Norm Ross assisted with finances.&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Ross Ridge&lt;br /&gt;
 VP: Tom Rathbourne (?)&lt;br /&gt;
 TR: Rob Leitman&lt;br /&gt;
 SA: Zygo Blaxell&lt;br /&gt;
 LI: Michael Van Biesbrouck&lt;br /&gt;
=1995=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 TR: Sharlene Schmeichel&lt;br /&gt;
 Amy Brown and Rob Ridge purchased books.&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 TR: Steve Mills&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Amy Brown (arbrown) &lt;br /&gt;
 VP: Christina Norman (cbnorman)&lt;br /&gt;
 TR: Steven Mills (samills)&lt;br /&gt;
 SE: Allyson Graham (akgraham)&lt;br /&gt;
 SA: Gavin Peters&lt;br /&gt;
=1996=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Nikita Borisov (nborisov)&lt;br /&gt;
 VP: Joseph Deu Ngoc (dtdeungo) &lt;br /&gt;
 TR: Stephen Mills (samills)&lt;br /&gt;
 SE: Sharlene Schmeichel (saschmei)&lt;br /&gt;
 SA: Dave Brown (dagbrown)&lt;br /&gt;
 OF: Somsack Tsai (stsai)&lt;br /&gt;
 LI: Devin Carless (dccarles)&lt;br /&gt;
 FL: Allyson Graham (akgraham)&lt;br /&gt;
 DE: Ian Goldberg (iagoldbe)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Blake Winton (bwinton)&lt;br /&gt;
 VP: Nick Harvey (njaharve)&lt;br /&gt;
 TR: Nikita Borisov (nborisov)&lt;br /&gt;
 SE: Viet-Trung Luu (vluu)&lt;br /&gt;
 SA: Drew Hamilton (awhamilt)&lt;br /&gt;
 OF: Jillian Arnott (jarnott)&lt;br /&gt;
 LI: Ross Ridge (rridge)&lt;br /&gt;
 FL: Devin Carless (dccarles)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Shannon Mann (sjbmann) &lt;br /&gt;
 VP: Joe &amp;quot;Frosh&amp;quot; Deu Ngoc (jtdeungo)    resigned (heavy workload)&lt;br /&gt;
 TR: Michael Van Biesbrouck (mlvanbie) &lt;br /&gt;
 SE: Nikita Borisov (nborisov) &lt;br /&gt;
 SA: Chris Rovers &lt;br /&gt;
 OF: Dax Hutcheon (ddhutche)            became VP upon jtdeungo&#039;s resignation&lt;br /&gt;
 LI: Aliz Csenki (acsenki) &lt;br /&gt;
 FL: Aaron Chmielowiec (archmiel) &lt;br /&gt;
 DE: Skuld (no uwuserid yet...)&lt;br /&gt;
=1997=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Dima Brodsky &lt;br /&gt;
 VP: Nikita Borisov (nborisov)&lt;br /&gt;
 TR: Stephen Mills (samills)&lt;br /&gt;
 SE: Evan Jones (ejones)&lt;br /&gt;
 SA: Alex Brodsky&lt;br /&gt;
 OF: Chris Doherty&lt;br /&gt;
 LI: Matt Corks &lt;br /&gt;
 FL: Paul Prescod&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Chris Rovers (cdrovers) &lt;br /&gt;
 VP: Michael van Biesbrouck (mlvanbie) &lt;br /&gt;
 TR: Somsack Tsai (stsai) &lt;br /&gt;
 SE: Matt Corks (mvcorks)&lt;br /&gt;
 SA: Lennart Sorensen (lsorense) &lt;br /&gt;
 LI: Aaron Chmielowiec (archmiel) &lt;br /&gt;
 OF: Devin Carless (dccarles) &lt;br /&gt;
 FL: Aaron Chmielowiec (archmiel)&lt;br /&gt;
= 1998 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Suresh Naidu  &lt;br /&gt;
 VP: Viet-Trung Luu &lt;br /&gt;
 TR: Tim Coleman &lt;br /&gt;
 SE: Dax Hutcheon &lt;br /&gt;
 LI: Dax Hutcheon &lt;br /&gt;
 Flasher: Dax Hutcheon &lt;br /&gt;
 WW: Dax Hutcheon &lt;br /&gt;
 SA: Robin Powell&lt;br /&gt;
 OF: Aaron Chmielowiec&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 Position	Name	You might call them...&lt;br /&gt;
 President	roconnor	Russell O&#039;Connor&lt;br /&gt;
 Vice-president	trwcolem	Tim Coleman&lt;br /&gt;
 Treasurer	knzarysk	Karl Zaryski&lt;br /&gt;
 Secretary	(bwinton)	(Blake Winton)&lt;br /&gt;
 Sysadmin	wbiggs	Billy Biggs&lt;br /&gt;
 Librarian	snaidu	Suresh Naidu&lt;br /&gt;
 Flasher	pechrysl	Paul Chrysler&lt;br /&gt;
 Office Manager	dccarles	Devin Carless&lt;br /&gt;
 WWWW	trwcolem	Tim Coleman&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 President	Joe Deu Ngoc	jtdeungo&lt;br /&gt;
 Vice-President	Wai Ling Yee	wlyee&lt;br /&gt;
 Treasurer	Fjord	j2lynn&lt;br /&gt;
 Secretary	Matt Corks	mvcorks&lt;br /&gt;
 Sysadmin	Andrew Hamilton	awhamilt&lt;br /&gt;
&lt;br /&gt;
 World Wide Web Wench	Dax Hutcheon	ddhutche&lt;br /&gt;
 Office Manager	Richard Bell	rlbell&lt;br /&gt;
 Librarian	Damian Gryski	dgryski&lt;br /&gt;
 Flasher	Paul Chrysler	pechrysl&lt;br /&gt;
 Official Deity	Ian Goldberg	iagoldbe&lt;br /&gt;
 Official Chairbeing	Calum T. Dalek	calum&lt;br /&gt;
=1999=&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: geduggan&lt;br /&gt;
=2000=&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Will Chartrand (wgchartr)&lt;br /&gt;
 VP: Gavin Duggan (geduggan)&lt;br /&gt;
 SA: Lennart Sorensen (lsorense)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: geduggan&lt;br /&gt;
 SA: bioster&lt;br /&gt;
=2001=&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: geduggan&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: geduggan&lt;br /&gt;
&lt;br /&gt;
=2002=&lt;br /&gt;
&lt;br /&gt;
https://web.archive.org/web/20130715012002/http://www.mathnews.uwaterloo.ca/Issues/mn8902/cscflash.php&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Billy Biggs&lt;br /&gt;
 VP: Stefanus Du Toit&lt;br /&gt;
 TR: Melissa Basinger&lt;br /&gt;
 SE: James Perry&lt;br /&gt;
 SA: Barry Genova&lt;br /&gt;
 LI: Ryan Golbeck&lt;br /&gt;
 WW: Jonathan Beverley&lt;br /&gt;
 Office Manager: Sayan Li&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: Alex Pop&lt;br /&gt;
 VP: Melissa Basinger&lt;br /&gt;
 TR: Siyan Li&lt;br /&gt;
 SE: James A Morrison&lt;br /&gt;
 SA: Jonathan Beverley&lt;br /&gt;
 WW: Stefanus Du Toit&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: James A. Morrison&lt;br /&gt;
 VP: Stefanus Du Toit&lt;br /&gt;
 TR: James Perry&lt;br /&gt;
 SE: Michael Biggs&lt;br /&gt;
 SA: Ryan Golbeck&lt;br /&gt;
 LI: Mark Sherry, Cassandra Schopf&lt;br /&gt;
 WW: Stefanus Du Toit&lt;br /&gt;
=2003=&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Kannan Vijayan (kvijayan)&lt;br /&gt;
 VP: Meg Darragh (m2darrag)&lt;br /&gt;
 TR: James Perry (jeperry)&lt;br /&gt;
 SE: Wojciech Kosnik (wkosnik)&lt;br /&gt;
 SA: Stefanus Du Toit (sjdutoit)&lt;br /&gt;
 LI: Simon Law (sfllaw)&lt;br /&gt;
 WM: Julie Lavoie (jlavoie)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Stefanus Du Toit (sjdutoit)&lt;br /&gt;
 VP: Meg Darragh (m2darrag)&lt;br /&gt;
 TR: Tor Myklebust (tmyklebu)&lt;br /&gt;
 SE: James Perry (jeperry)&lt;br /&gt;
 SA: Simon Law (sfllaw)&lt;br /&gt;
=2004=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Simon Law (sfllaw)&lt;br /&gt;
 VP: fspacek&lt;br /&gt;
 TR: ljain&lt;br /&gt;
 SE: Julie Lavoie (jlavoie)&lt;br /&gt;
 SA: Tor Myklebust(tmyklebu)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: dnmorton ?&lt;br /&gt;
 VP: Tim Loach (tloach)&lt;br /&gt;
 TR: Michael Biggs (mbiggs)&lt;br /&gt;
 SE: Lesley Northam (lanortha)&lt;br /&gt;
&lt;br /&gt;
===Fall ===&lt;br /&gt;
 PR: jeperry&lt;br /&gt;
 VP: mtsay&lt;br /&gt;
 TR: Mark Sherry (mdsherry)&lt;br /&gt;
 SE: Tor Myklebust (tmyklebu)&lt;br /&gt;
 SA: jlavoie&lt;br /&gt;
=2005=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 PR: mtsay&lt;br /&gt;
 VP: Lesley Northam (lanortha)&lt;br /&gt;
 TR: Holden Karau (hkarau)&lt;br /&gt;
 SE: domorton&lt;br /&gt;
 SA: Tor Myklebust (tmyklebu)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
&lt;br /&gt;
 PR: Mark Sherry (mdsherry)&lt;br /&gt;
 VP: Martin Kess (mdkess)&lt;br /&gt;
 TR: Ali Piccioni (apiccion)&lt;br /&gt;
 SE: Michael Biggs (mbiggs)&lt;br /&gt;
 SA: Tor Myklebust (tmyklebu)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Tim Loach (tloach)&lt;br /&gt;
 VP: Lesley Northam (lanortha)&lt;br /&gt;
 TR: Caelyn McAulay (cmcaulay)&lt;br /&gt;
 SE: The Professor&lt;br /&gt;
 SA: Holden Karau (hkarau)&lt;br /&gt;
=2006=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
&lt;br /&gt;
 PR: Tor Myklebust (tmyklebu)&lt;br /&gt;
 VP: Michael Druker (mdruker)&lt;br /&gt;
 TR: Caelyn McAulay (cmcaulay)&lt;br /&gt;
 SE: Mark Sherry (mdsherry)&lt;br /&gt;
 SA: William O&#039;Connor (woconnor)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: David Bartley (dtbartle)&lt;br /&gt;
 VP: David Belanger (dbelange)&lt;br /&gt;
 TR: David Tenty (daltenty)&lt;br /&gt;
 SE: Chris Evensen (cevensen)&lt;br /&gt;
 SA: Holden Karau (hkarau)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
&lt;br /&gt;
 PR: Martin Kess (mdkess)&lt;br /&gt;
 VP: Mark Sherry (mdsherry)&lt;br /&gt;
 TR: Sylvan L. Mably (slmably)&lt;br /&gt;
 SE: Caelyn McAulay (cmcaulay) &lt;br /&gt;
 SA: William O&#039;Connor (woconnor)&lt;br /&gt;
=2007=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: David Bartley (dtbartle)&lt;br /&gt;
 VP: David Belanger (dbelange)&lt;br /&gt;
 TR: Caelyn McAulay (cmcaulay)&lt;br /&gt;
 SE: David Tenty (daltenty)&lt;br /&gt;
 SA: Holden Karau (hkarau)&lt;br /&gt;
 WW: jnopporn&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Gaelan D&#039;costa (gdcosta)&lt;br /&gt;
 VP: Kyle Larose (kmlarose)&lt;br /&gt;
 TR: Kyle Spaans (kspaans)&lt;br /&gt;
 SE: Erik Louie (elouie)&lt;br /&gt;
 SA: Michael Spang (mspang)&lt;br /&gt;
 LI: David Tenty (daltenty)&lt;br /&gt;
&lt;br /&gt;
===Fall ===&lt;br /&gt;
 PR: Holden Karau (hkarau)&lt;br /&gt;
 VP: Alex McCausland (amccausl)&lt;br /&gt;
 TR: Dominik Chlobowski (dchlobow)&lt;br /&gt;
 SE: Sean Cumming (sgcummin)&lt;br /&gt;
 SA: David Tenty (daltenty)&lt;br /&gt;
 WW: dtbartle / jnopporn&lt;br /&gt;
=2008=&lt;br /&gt;
&lt;br /&gt;
===Winter ===&lt;br /&gt;
 PR: Sean Cumming (sgcummin)&lt;br /&gt;
 VP: Matt Lawrence (m3lawren)&lt;br /&gt;
 TR: Mateusz Tarkowski (mtarkows)&lt;br /&gt;
 SE: Edgar Bering (ebering)&lt;br /&gt;
 SA: Jordan Saunders (jmsaunde)&lt;br /&gt;
&lt;br /&gt;
===Summer ===&lt;br /&gt;
 PR: Brennan Taylor (b4taylor)&lt;br /&gt;
 VP: Qifan Xi (qxi)&lt;br /&gt;
 TR: Matt Lawrence (m3lawren)&lt;br /&gt;
 SE: Nick Guenther (nguenthe)&lt;br /&gt;
&lt;br /&gt;
===Fall ===&lt;br /&gt;
 PR: Matthew Lawrence (m3lawren)&lt;br /&gt;
 VP: Edgar Bering (ebering)&lt;br /&gt;
 TR: Michael Gregson (mgregson)&lt;br /&gt;
 SE: James Simpson (j2simpso) resigned for medical reasons, replaced by Dominik &#039;Domo&#039; Chłobowski&lt;br /&gt;
 SA: Kyle Spaans (kspaans)&lt;br /&gt;
=2009=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Michael Gregson (mgregson)&lt;br /&gt;
 VP: Edgar Bering (ebering)&lt;br /&gt;
 TR: Brennan Taylor (b4taylor)&lt;br /&gt;
 SE: James Simpson (j2simpso)  resigned for business reasons, replaced by Rebecca Putinski (rjputins) &lt;br /&gt;
 SA: Jacob Parker (j3parker) &lt;br /&gt;
 OF: XinChi Yang / Sapphyre Gervais (x23yang / sagervai) (both)&lt;br /&gt;
&lt;br /&gt;
===Spring ===&lt;br /&gt;
 PR: Michael Spang (mspang)&lt;br /&gt;
 VP: Jacob Parker (j3parker)&lt;br /&gt;
 TR: Sapphyre Gervais (sagervai)&lt;br /&gt;
 SE: Matthew McPherrin (mimcpher)&lt;br /&gt;
 SA: Anthony Brennan (a2brenna)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Jacob Parker (j3parker)&lt;br /&gt;
 VP: Edgar Bering (ebering)&lt;br /&gt;
 TR: Michael Spang (mspang)&lt;br /&gt;
 SE: Brennan Taylor (b4taylor)&lt;br /&gt;
 SA: Michael Ellis (m2ellis)&lt;br /&gt;
 OF: Rebecca Putinski (rjputins)&lt;br /&gt;
=2010=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Kyle Spaans (kspaans)&lt;br /&gt;
 VP: Edgar Bering (ebering)&lt;br /&gt;
 TR: Sapphyre Gervais (sagervai)&lt;br /&gt;
 SE: Ajnu Jacob (ajacob)&lt;br /&gt;
 SA: Matthew Thiffault (mthiffau)&lt;br /&gt;
 OF: Jacob Parker (j3parker)&lt;br /&gt;
Keyed office staffers: j3camero, jdonland, m2ellis, mimcpher, nsasherr&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Jeff Cameron (j3camero)&lt;br /&gt;
 VP: Brennan Taylor (b4taylor)&lt;br /&gt;
 TR: Vardhan Mudunuru (vmudunur)&lt;br /&gt;
 SE: Matthew Lawrence (m3lawren)&lt;br /&gt;
 SA: Michael Ellis (m2ellis)&lt;br /&gt;
 OF: Edgar Bering (ebering)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Jacob Parker (j3parker)&lt;br /&gt;
 VP: Edgar Bering (ebering)&lt;br /&gt;
 TR: Rebecca Putinski (rjputins)&lt;br /&gt;
 SE: Kyle Spaans (kspaans)&lt;br /&gt;
 SA: Jeremy Roman (jbroman)&lt;br /&gt;
 OF: Amir Sayed Khader (askhader)&lt;br /&gt;
=2011=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Edgar Bering (ebering)&lt;br /&gt;
 VP: Jennifer &amp;quot;Emily&amp;quot; Wong (jy2wong)&lt;br /&gt;
 TR: Kyle Spaans (kspaans)&lt;br /&gt;
 SE: Elana &amp;quot;Alana&amp;quot; Hashman (ehashman)&lt;br /&gt;
 SA: Peter &amp;quot;Bofh&amp;quot; Barfuss (pbarfuss)&lt;br /&gt;
 OF: Marc Burns (m4burns)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Matthew Thiffault (mthiffau)&lt;br /&gt;
 VP: Matthew McPherrin (mimcpher)&lt;br /&gt;
 TR: Kyle Spaans (kspaans)&lt;br /&gt;
 SE: Kwame Andrew Ansong (kansong)&lt;br /&gt;
 SA: Jeremy Brandon Roman (jbroman)&lt;br /&gt;
 OF: Jennifer &amp;quot;Emily&amp;quot; Wong (jy2wong)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Marc Burns (m4burns)&lt;br /&gt;
 VP: Katharine Hyatt (kshyatt)&lt;br /&gt;
 TR: Jacob Parker (j3parker)&lt;br /&gt;
 SE: Elana Hashman (ehashman)&lt;br /&gt;
 SA: Anthony &amp;quot;hatguy/hotgay&amp;quot; Brennan (a2brenna)&lt;br /&gt;
 OF: Kyle Spaans (kspaans)&lt;br /&gt;
 LI: Edgar Bering (ebering)&lt;br /&gt;
=2012=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Marc Burns (m4burns)&lt;br /&gt;
 VP: Elana Hashman (ehashman)&lt;br /&gt;
 TR: Jacob Parker (j3parker)&lt;br /&gt;
 SE: Matthew McPherrin (mimcpher)&lt;br /&gt;
 SA: Jeremy Roman (jbroman)&lt;br /&gt;
 OF: Luqman Aden (laden)&lt;br /&gt;
 LI: Jennifer &amp;quot;Emily&amp;quot; Wong (jy2wong)&lt;br /&gt;
&lt;br /&gt;
===Summer===&lt;br /&gt;
 PR: Anthony Brennan (a2brenna)&lt;br /&gt;
 VP: Luqman Aden (laden)&lt;br /&gt;
 TR: Matthew McPherrin (mimcpher)&lt;br /&gt;
 SE: Elana Hashman (ehashman)&lt;br /&gt;
 SA: Sarah Harvey (sharvey)&lt;br /&gt;
 OF: Marc Burns (m4burns)&lt;br /&gt;
 LI: John Ladan (jladan)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Marc Burns (m4burns)&lt;br /&gt;
 VP: Salem Talha (satalha)&lt;br /&gt;
 TR: Jennifer Wong (jy2wong)&lt;br /&gt;
 SE: Elana Hashman (ehashman), resigned&lt;br /&gt;
 SA: Jeremy Roman (jbroman)&lt;br /&gt;
 OF: Luqman Aden (laden)&lt;br /&gt;
 LI: John Ladan (jladan)&lt;br /&gt;
=2013=&lt;br /&gt;
&lt;br /&gt;
===Winter===&lt;br /&gt;
 PR: Anthony Brennan (a2brenna)&lt;br /&gt;
 VP: Marc Burns (m4burns)&lt;br /&gt;
 TR: John Mumford (jsmumfor)&lt;br /&gt;
 SE: Matt Olechnowicz (mgolechn)&lt;br /&gt;
 SA: Sarah Harvey (sharvey)&lt;br /&gt;
 OF: Bryan Coutts (b2coutts)&lt;br /&gt;
 LI: Matthew McPherrin (mimcpher)&lt;br /&gt;
&lt;br /&gt;
===Spring===&lt;br /&gt;
 PR: Shane Robert Creighton-Young (srcreigh)&lt;br /&gt;
 VP: Visishta Vijayanand (vvijayan)&lt;br /&gt;
 TR: Dominik Chlobowski (dchlobow)&lt;br /&gt;
 SE: Youn Jin Kim (yj7kim)&lt;br /&gt;
 SA: Anthony Brennan (a2brenna)&lt;br /&gt;
 OF: Marc Burns (m4burns)&lt;br /&gt;
 FR: Dominik Chlobowski (dchlobow)&lt;br /&gt;
&lt;br /&gt;
===Fall===&lt;br /&gt;
 PR: Elana Hashman (ehashman)&lt;br /&gt;
 VP: Marc Burns (m4burns)&lt;br /&gt;
 TR: Dominik Chlobowski (dchlobow)&lt;br /&gt;
 SE: Edward Lee (e45lee)&lt;br /&gt;
 SA: Jeremy Roman (jbroman)&lt;br /&gt;
 OF: Alexis Hunt (aechunt)&lt;br /&gt;
= 2014 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Bryan Coutts (b2coutts)&lt;br /&gt;
 VP: Visishta Vijayanand (vvijayan)&lt;br /&gt;
 TR: Marc Burns (m4burns)&lt;br /&gt;
 SE: Mark Farrell (m4farrel)&lt;br /&gt;
 SA: Murphy Berzish (mtrberzi)&lt;br /&gt;
 OF: Nicholas Black (nablack)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: Youn Jin Kim (yj7kim)&lt;br /&gt;
 VP: Luke Franceschini (l3france)&lt;br /&gt;
 TR: Joseph Chouinard (jchouina)&lt;br /&gt;
 SE: Ifaz Kabir (ikabir)&lt;br /&gt;
 SA: Murphy Berzish (mtrberzi)&lt;br /&gt;
 OF: Matthew Thiffault (mthiffau)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Youn Jin Kim (yj7kim)&lt;br /&gt;
 VP: Theodor Belaire (tbelaire)&lt;br /&gt;
 TR: Jonathan Jerel Bailey (jj2baile)&lt;br /&gt;
 SE: Shane Robert Creighton-Young (srcreigh)&lt;br /&gt;
 SA: Alexis Hunt (aechunt)&lt;br /&gt;
 OF: Mark Farrell (m4farrel)&lt;br /&gt;
 LI: Gianni Leonardo Gambetti (glgambet)&lt;br /&gt;
&lt;br /&gt;
= 2015 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Gianni Leonardo Gambetti (glgambet)&lt;br /&gt;
 VP: Luke Franceschini (l3france)&lt;br /&gt;
 TR: Edward Lee (e45lee)&lt;br /&gt;
 SE: Patrick James Melanson (pj2melan)&lt;br /&gt;
 SA: Murphy Berzish (mtrberzi)&lt;br /&gt;
 OF: Shikhar Singh (s285sing)&lt;br /&gt;
 LI: Aishwarya Gupta (a72gupta)&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: Luqman Aden (laden)&lt;br /&gt;
 VP: Patrick Melanson (pj2melan)&lt;br /&gt;
 TR: Jonathan Bailey (jj2baile)&lt;br /&gt;
 SE: Keri Warr (kpwarr)&lt;br /&gt;
 SA: Nik Black (nablack)&lt;br /&gt;
 OF: Ilia Chtcherbakov (ischtche)&lt;br /&gt;
 LI: Yomna Nasser (ynasser)&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Simone Hu (ss2hu)&lt;br /&gt;
 VP: Theo Belaire (tbelaire)&lt;br /&gt;
 TR: Jordan Taylore Upiter (jtupiter)&lt;br /&gt;
 SE: Daniel Marin (dmarin)&lt;br /&gt;
 SA: Jordan Xavier Pryde (jxpryde)&lt;br /&gt;
 OF: Ilia Chtcherbakov (ischtche)&lt;br /&gt;
= 2016 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Patrick Melanson (pj2melan)&lt;br /&gt;
 VP: Patrick Melanson (pj2melan)&lt;br /&gt;
 Acting VP, progcom chair: Theo Belaire (tbelaire)&lt;br /&gt;
 TR: Luqman Aden (laden)&lt;br /&gt;
 SE: Naomi Koo (m3koo)&lt;br /&gt;
 SA: Zachary Seguin (ztseguin)&lt;br /&gt;
 OF: Reila Zheng (wy2zheng)&lt;br /&gt;
 LI: Felix Bauckholt (fbauckho)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: Luqman Aden (laden)&lt;br /&gt;
 VP: Melissa Angelica Mary Tedesco (matedesc)&lt;br /&gt;
 TR: Jonathan Jerel Bailey (jj2baile)&lt;br /&gt;
 SE: Aditya Shivam Kothari (askothar)&lt;br /&gt;
 SA: Jordan Xavier Pryde (jxpryde)&lt;br /&gt;
 OF: Zachary Seguin (ztseguin)&lt;br /&gt;
 LI: Charlie Wang (s455wang)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Charlie Wang (s455wang)&lt;br /&gt;
 VP: Bryan Coutts (b2coutts)&lt;br /&gt;
 TR: Laura Song (lhsong)&lt;br /&gt;
 SE: Uday Barar (ubarar)&lt;br /&gt;
 SA: Zachary Seguin (ztseguin)&lt;br /&gt;
 OF: Jamie Sinn (j2sinn)&lt;br /&gt;
 LI: Felix Bauckholt (fbauckho)&lt;br /&gt;
 FR: Ilia Chtcherbakov (ischtche)&lt;br /&gt;
&lt;br /&gt;
= 2017 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Wilson Cheang (wyschean)&lt;br /&gt;
 VP: Tristan Hume (tghume)&lt;br /&gt;
 TR: Jordan Pryde (jxpryde)&lt;br /&gt;
 SE: Amir Fata (aafata)&lt;br /&gt;
 SA: Zachary Seguin (ztseguin)&lt;br /&gt;
 OF: Felix Bauckholt (fbauckho)&lt;br /&gt;
 LI: Connor Murphy (cfmurph)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Felix Bauckholt (fbauckho)&lt;br /&gt;
 VP: Zichuan Wei (z34wei)&lt;br /&gt;
 TR: Laura Song (lhsong)&lt;br /&gt;
 SE: Bo Mo (bzmo)&lt;br /&gt;
 SA: Zachary Seguin (ztseguin)&lt;br /&gt;
 OF: Uday Barar (ubarar)&lt;br /&gt;
 LI: Patrick Melanson (pj2melan)&lt;br /&gt;
 FR: Uday Barar (ubarar)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Melissa Tedesco (matedesc)&lt;br /&gt;
 VP: Victor Brestoiu (vabresto)&lt;br /&gt;
 TR: Tristan Hume (tghume)&lt;br /&gt;
 SE: Marc Mailhot (mnmailho)&lt;br /&gt;
 SA: Jordan Pryde (jxpryde)&lt;br /&gt;
 OF: Zoë Laing (zlaing)&lt;br /&gt;
 LI: Felix Bauckholt (fbauckho)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
= 2018 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
&lt;br /&gt;
 PR: Patrick Melanson (pj2melan)&lt;br /&gt;
 VP: Charlie Wang (s455wang)&lt;br /&gt;
 TR: Ashley Dewiputri Pranajaya (adpranaj)&lt;br /&gt;
 SE: Arshia Mufti (a2mufti)&lt;br /&gt;
 SA: Jordan Pryde (jxpryde)&lt;br /&gt;
 OF: Zoë Laing (zlaing)&lt;br /&gt;
 LI: Zichuan Wei (z34wei)&lt;br /&gt;
 FR: Uday Barar (ubarar)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Melissa Tedesco (matedesc)&lt;br /&gt;
 VP: Dhruv Jauhar (djauhar)&lt;br /&gt;
 TR: Tristan Hume (tghume)&lt;br /&gt;
 AV: Marc Mailhot (mnmailho)&lt;br /&gt;
 SA: Jennifer Zhou (c7zou)&lt;br /&gt;
 OF: Aditya Thakral (a3thakra)&lt;br /&gt;
 LI: Archer Zhang (z577zhan)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Zichuan Wei (z34wei)&lt;br /&gt;
 VP: Uday Barar (ubarar)&lt;br /&gt;
 TR: Alex Tomala (actomala)&lt;br /&gt;
 AV: Neil Parikh (n3parikh)&lt;br /&gt;
 SA: Jennifer Zhou (c7zou)&lt;br /&gt;
 OF: Alexander Zvorygin (azvorygi)&lt;br /&gt;
 LI: Neil Parikh (n3parikh)&lt;br /&gt;
 FR:&lt;br /&gt;
&lt;br /&gt;
= 2019 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
&lt;br /&gt;
 PR: Marc Mailhot (mnmailho)&lt;br /&gt;
 VP: Victor Brestoiu (vabresto)&lt;br /&gt;
 TR: Tristan Hume (tghume)&lt;br /&gt;
 AV: Aditya Thakral (a3thakra)&lt;br /&gt;
 SA: Charlie Wang (s455wang)&lt;br /&gt;
 OF: Archer Zhang (z577zhan)&lt;br /&gt;
 LI: Rishabh Minocha (rkminoch)&lt;br /&gt;
 FR: Marc Mailhot (mnmailho)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Uday Barar (ubarar)&lt;br /&gt;
 VP: Rajat Malhotra (r24malho)&lt;br /&gt;
 TR: Raghav Sethi (r5sethi)&lt;br /&gt;
 AV: Bo Mo (bzmo)&lt;br /&gt;
 SA: Charlie Wang (s455wang)&lt;br /&gt;
 OF: Hannah Wong (sm7wong)&lt;br /&gt;
 LI: Nolan Munce (nmmunce)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Dhruv Jauhar (djauhar)&lt;br /&gt;
 VP: Aditya Thakral (a3thakra)&lt;br /&gt;
 TR: Rishabh Minocha (rkminoch)&lt;br /&gt;
 AV: Tammy Khalaf (tekhalaf)&lt;br /&gt;
 SA: Murphy Berzish (mtrberzi)&lt;br /&gt;
 OF: Zihan Zhang (z577zhan)&lt;br /&gt;
 LI: Raghav Sethi (r5sethi)&lt;br /&gt;
&lt;br /&gt;
= 2020 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
The office was closed midway through this term due to the COVID-19 pandemic.&lt;br /&gt;
&lt;br /&gt;
The pandemic, which threw the University and the rest of the world into disarray, drove all CSC activity online. Though everyone thought the pandemic would quickly be over, the Alpha, then Delta, then Omicron variants resulted in the majority of classes being held online until February 2022.&lt;br /&gt;
&lt;br /&gt;
As a result, the office stayed closed for a full five terms, only reopening in W2022.&lt;br /&gt;
&lt;br /&gt;
 PR: Richard Shi (r27shi)&lt;br /&gt;
 VP: Anastassia Gaikovaia (agaikova)&lt;br /&gt;
 TR: Alex Tomala (actomala)&lt;br /&gt;
 AV: Neil Parikh (n3parikh)&lt;br /&gt;
 SA: Amin Bandali (abandali)&lt;br /&gt;
 OF: Alexander Zvorygin (azvorygi)&lt;br /&gt;
 LI: Anastassia Gaikovaia (agaikova)&lt;br /&gt;
 FR: Richard Shi (r27shi)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Neil Parikh (n3parikh)&lt;br /&gt;
 VP: Anastassia Gaikovaia (agaikova)&lt;br /&gt;
 SA: Amin Bandali (abandali)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Mokai Xu (m92xu)&lt;br /&gt;
 VP: Anastassia Gaikovaia (agaikova) (stepped down as of 2020-11-30)&lt;br /&gt;
 TR: Neil Parikh (n3parikh)&lt;br /&gt;
 AV: Edwin Yang (e37yang)&lt;br /&gt;
 SA: Murphy Berzish (mtrberzi)&lt;br /&gt;
&lt;br /&gt;
= 2021 =&lt;br /&gt;
In 2021, CSC rapidly expanded the Program Committee, introducing subcommittees including Design, Events, Marketing, Photography, Reps, Discord mods, and Discord bot developers. The website committee also expanded, and the terminal committee was officially repurposed to serve a syscom-in-training role (since the office remained closed and the terminals powered off).&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
&lt;br /&gt;
 PR: Kallen Tu (k4tu)&lt;br /&gt;
 VP: Gordon Le (g2le)&lt;br /&gt;
 TR: Neil Parikh (n3parikh)&lt;br /&gt;
 AV: Nakul Vijhani (nvijhani)&lt;br /&gt;
 SA: Max Erenberg (merenber)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Kallen Tu (k4tu)&lt;br /&gt;
 VP: Gordon Le (g2le)&lt;br /&gt;
 TR: Neil Parikh (n3parikh)&lt;br /&gt;
 AV: Ravindu Angammana (rbangamm)&lt;br /&gt;
 SA: Max Erenberg (merenber)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Dora Su (d43su)&lt;br /&gt;
 VP: Jason Sang (jzsang)&lt;br /&gt;
 TR: Yanni Wang (y3859wan)&lt;br /&gt;
 AV: Anjing Li (a348li)&lt;br /&gt;
 SA: Max Erenberg (merenber)&lt;br /&gt;
 Advising: Neil Parikh (n3parikh)&lt;br /&gt;
&lt;br /&gt;
= 2022 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
&lt;br /&gt;
 PR: Juthika Hoque (j3hoque)&lt;br /&gt;
 VP: Eric Huang (e48huang)&lt;br /&gt;
 TR: Eden Chan (e223chan)&lt;br /&gt;
 AV: Dina Orucevic (dmorucev)&lt;br /&gt;
 SA: Raymond Li (r389li)&lt;br /&gt;
 WW: Amy Wang (a258wang)&lt;br /&gt;
 OF: Neil Parikh (n3parikh)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Eden Chan (e223chan)&lt;br /&gt;
 VP: Bonnie Peng (b38peng)&lt;br /&gt;
 TR: Sat Arora (s97arora)&lt;br /&gt;
 AV: Haley Song (h79song)&lt;br /&gt;
 SA: Raymond Li (r389li)&lt;br /&gt;
 WW: Amy Wang (a258wang)&lt;br /&gt;
 OF: Sat Arora (s97arora)&lt;br /&gt;
 LI: Santiago Montemayor (smontema) (appointed 2022-06-02)&lt;br /&gt;
 FR: Sat Arora (s97arora) (appointed 2022-06-23)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Amy Wang (a258wang)&lt;br /&gt;
 VP: Anna Wang (aj2wang)&lt;br /&gt;
 TR: Simon Zeng (s33zeng)&lt;br /&gt;
 AV: Mabel Kwok (m23kwok)&lt;br /&gt;
 SA: Raymond Li (r389li)&lt;br /&gt;
 WW: Shahan Nedadahandeh (snedadah)&lt;br /&gt;
 OF: Mark Chen (m375chen) (appointed 2022-09-21)&lt;br /&gt;
 LI: John Oss (joss) (appointed 2022-10-06)&lt;br /&gt;
 FR: Mark Chen (m375chen) and Simon Zeng (s33zeng) (appointed 2022-09-21)&lt;br /&gt;
&lt;br /&gt;
= 2023 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
&lt;br /&gt;
 PR: Sat Arora (s97arora)&lt;br /&gt;
 VP: Ivy Lei (ihlei)&lt;br /&gt;
 TR: Laura Nguyen (l69nguye)&lt;br /&gt;
 AV: Adele Chen (a332chen)&lt;br /&gt;
 SA: Leo Shen (y266shen)&lt;br /&gt;
 WW: Shahan Nedadahandeh (snedadah)&lt;br /&gt;
 OF: Young Wang (y3285wan)&lt;br /&gt;
 LI: John Oss (joss)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
&lt;br /&gt;
 PR: Sat Arora (s97arora)&lt;br /&gt;
 VP: Joshua Kim (j649kim)&lt;br /&gt;
 TR: Amy Wang (a258wang)&lt;br /&gt;
 AV: Andrea Ma (a49ma)&lt;br /&gt;
 SA: Raymond Li (r389li)&lt;br /&gt;
 WW: Shahan Nedadahandeh (snedadah)&lt;br /&gt;
 OF: Sean Zhang (q434zhan)&lt;br /&gt;
&lt;br /&gt;
=== Fall ===&lt;br /&gt;
&lt;br /&gt;
 PR: Laura Nguyen (l69nguye)&lt;br /&gt;
 VP: Amol Venkataraman (avenkata)&lt;br /&gt;
 TR: Bryan Chen (b28chen)&lt;br /&gt;
 AV: Amy Wang (a258wang)&lt;br /&gt;
 SA: Nathan Chung (n4chung)&lt;br /&gt;
 WW: Darren Lo (dlslo), Richard Shuai (r2shuai)&lt;br /&gt;
 OF: Ivy Lei (ihlei), Kevin Cui (k8cui)&lt;br /&gt;
&lt;br /&gt;
= 2024 =&lt;br /&gt;
&lt;br /&gt;
=== Winter ===&lt;br /&gt;
 PR: Ivy Lei (ihlei)&lt;br /&gt;
 VP: Gordon Lin (g3lin)&lt;br /&gt;
 TR: Andrea Ma (a49ma)&lt;br /&gt;
 AV: Saurin Patel (sa23pate)&lt;br /&gt;
 SA: Nathan Chung (n4chung)&lt;br /&gt;
 WW: Darren Lo (dlslo), Richard Shuai (r2shuai)&lt;br /&gt;
 OF: Tiger Ding (t27ding)&lt;br /&gt;
&lt;br /&gt;
=== Spring ===&lt;br /&gt;
 PR: Gordon Lin (g3lin)&lt;br /&gt;
 VP: Justin Wang (yw2wang)&lt;br /&gt;
 TR: Andrea Ma (a49ma)&lt;br /&gt;
 AV: Sean Zhang (q434zhan)&lt;br /&gt;
 SA: Nathan Chung (n4chung)&lt;br /&gt;
 WW: Tejas Srikanth (tcsrikan)&lt;br /&gt;
 OF: Bryan Wang (b397wang)&lt;br /&gt;
=== Fall ===&lt;br /&gt;
 PR: Iris Liao (a23liao)&lt;br /&gt;
 VP: Siimar Leen Kaur (s32kaur)&lt;br /&gt;
 TR: Grace Feng (g27feng)&lt;br /&gt;
 AV: Ray Cao (r44cao)&lt;br /&gt;
 SA: Ohm Patel (o32patel)&lt;br /&gt;
 WW: Tejas Srikanth (tcsrikan)&lt;br /&gt;
 OF: Ivy Fan-Chiang (qkfanchi)&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5261</id>
		<title>Music</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5261"/>
		<updated>2024-06-11T02:23:09Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Music is run off &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt;, since that&#039;s the computer with the speakers attached. &lt;br /&gt;
&lt;br /&gt;
Office staff/termcom/syscom permissions are required to play music in the office.&lt;br /&gt;
&lt;br /&gt;
We also have MPD available; however, it has rarely been used since 2022. &#039;&#039;kids these days&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==How to play music==&lt;br /&gt;
&lt;br /&gt;
# Run &amp;lt;code&amp;gt;ssh [watid]@powernap.csclub.uwaterloo.ca&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;pairable on&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;discoverable on&amp;lt;/code&amp;gt; into the console to enable pairing and discovery&lt;br /&gt;
# Connect to &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt; from your device&lt;br /&gt;
# Respond yes to all the prompts on the terminal from &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;trust [device_mac]&amp;lt;/code&amp;gt; to automatically allow audio from your device next time&lt;br /&gt;
# You should be able to play audio like a normal audio device now&lt;br /&gt;
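The pairing steps above can also be scripted, assuming a reasonably recent BlueZ whose bluetoothctl accepts commands as arguments; this is a sketch, and the MAC address below is a placeholder for your own device's:

```shell
# Run these on powernap (after ssh'ing in as in step 1).
# Make the machine accept pairing requests and show up in device scans.
bluetoothctl power on
bluetoothctl pairable on
bluetoothctl discoverable on

# After pairing from your phone/laptop and confirming the prompts,
# mark the device trusted so future connections are accepted
# without prompting. AA:BB:CC:DD:EE:FF is a placeholder MAC.
bluetoothctl trust AA:BB:CC:DD:EE:FF
```

The initial pairing confirmation is still interactive, as step 5 describes; the non-interactive form is mainly useful for re-enabling discoverability and trusting a device afterwards.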
&lt;br /&gt;
==How to control audio server==&lt;br /&gt;
powernap uses PipeWire, which speaks the PulseAudio client protocol. To set the volume, mute an audio stream, or change the output settings, run the following from an office terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
PULSE_SERVER=powernap.csclub.uwaterloo.ca pavucontrol&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
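If you only need a quick volume change and don't want a GUI, pactl speaks the same protocol. A sketch (the sink layout on powernap isn't documented here, so list sinks first):

```shell
# Point PulseAudio-protocol clients at powernap's server
export PULSE_SERVER=powernap.csclub.uwaterloo.ca

# See what sinks (outputs) and sink-inputs (playing streams) exist
pactl list short sinks
pactl list short sink-inputs

# Set the default sink to 40% volume, or toggle its mute state
pactl set-sink-volume @DEFAULT_SINK@ 40%
pactl set-sink-mute @DEFAULT_SINK@ toggle
```

`@DEFAULT_SINK@` is pactl's built-in alias for the current default output, so you don't need to know the sink's exact name.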
&lt;br /&gt;
If you&#039;re using a Linux laptop, this should work too.&lt;br /&gt;
&lt;br /&gt;
==MPD Controls==&lt;br /&gt;
To view the keybindings of ncmpcpp, press F1 while it&#039;s running. &lt;br /&gt;
The number keys switch between tabs in it. &lt;br /&gt;
&lt;br /&gt;
*1 is current playlist&lt;br /&gt;
*2 is browsing files&lt;br /&gt;
*3 is search&lt;br /&gt;
*4 is browsing the media library&lt;br /&gt;
*5 is browsing saved playlists, etc.&lt;br /&gt;
&lt;br /&gt;
You can add your own &#039;&#039;absolutely legitimately obtained&#039;&#039; music by copying the files to &amp;lt;code&amp;gt;/music&amp;lt;/code&amp;gt; on powernap. Then type &amp;lt;code&amp;gt;u&amp;lt;/code&amp;gt; in ncmpcpp to refresh the database.&lt;br /&gt;
&lt;br /&gt;
==Termcom Info==&lt;br /&gt;
&lt;br /&gt;
*We require https://github.com/hrkfdn/mpdas to get scrobbling working, as the &amp;quot;official&amp;quot; last.fm integration was removed from MPD in 2013.&lt;br /&gt;
**n.b. https://github.com/hrkfdn/mpdas/issues/58&lt;br /&gt;
*To control the mixer on other terminals, use &amp;lt;code&amp;gt;PULSE_SERVER=nullsleep pavucontrol&amp;lt;/code&amp;gt;&lt;br /&gt;
*The official last.fm account credentials are stored in the exec password spot :)&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5218</id>
		<title>Music</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Music&amp;diff=5218"/>
		<updated>2024-02-18T03:34:11Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Music is run off `powernap`, since that&#039;s the computer with the speakers attached. &lt;br /&gt;
&lt;br /&gt;
Office staff/termcom/syscom permissions are required to play music in the office.&lt;br /&gt;
&lt;br /&gt;
We also have MPD available; however, it has rarely been used since 2022. &#039;&#039;kids these days&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==How to play music==&lt;br /&gt;
&lt;br /&gt;
# Run &amp;lt;code&amp;gt;ssh [watid]@powernap.csclub.uwaterloo.ca&amp;lt;/code&amp;gt;&lt;br /&gt;
# Run &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;pairable on&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;discoverable on&amp;lt;/code&amp;gt; into the console to enable pairing and discovery&lt;br /&gt;
# Connect to &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt; from your device&lt;br /&gt;
# Respond yes to all the prompts on the terminal from &amp;lt;code&amp;gt;powernap&amp;lt;/code&amp;gt;&lt;br /&gt;
# Type &amp;lt;code&amp;gt;trust [device_mac]&amp;lt;/code&amp;gt; to automatically allow audio from your device next time&lt;br /&gt;
# You should be able to play audio like a normal audio device now&lt;br /&gt;
&lt;br /&gt;
==MPD Controls==&lt;br /&gt;
To view the keybindings of ncmpcpp, press F1 while it&#039;s running. &lt;br /&gt;
The number keys switch between tabs in it. &lt;br /&gt;
&lt;br /&gt;
*1 is current playlist&lt;br /&gt;
*2 is browsing files&lt;br /&gt;
*3 is search&lt;br /&gt;
*4 is browsing the media library&lt;br /&gt;
*5 is browsing saved playlists, etc.&lt;br /&gt;
&lt;br /&gt;
You can add your own &#039;&#039;absolutely legitimately obtained&#039;&#039; music by copying the files to &amp;lt;code&amp;gt;/music&amp;lt;/code&amp;gt; on powernap. Then type &amp;lt;code&amp;gt;u&amp;lt;/code&amp;gt; in ncmpcpp to refresh the database.&lt;br /&gt;
&lt;br /&gt;
==Termcom Info==&lt;br /&gt;
&lt;br /&gt;
*We require https://github.com/hrkfdn/mpdas to get scrobbling working, as the &amp;quot;official&amp;quot; last.fm integration was removed from MPD in 2013.&lt;br /&gt;
**n.b. https://github.com/hrkfdn/mpdas/issues/58&lt;br /&gt;
*To control the mixer on other terminals, use &amp;lt;code&amp;gt;PULSE_SERVER=nullsleep pavucontrol&amp;lt;/code&amp;gt;&lt;br /&gt;
*The official last.fm account credentials are stored in the exec password spot :)&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5210</id>
		<title>Machine List</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Machine_List&amp;diff=5210"/>
		<updated>2024-02-07T22:11:23Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: /* Cloud */ add cpu specs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Most of our machines are in the E7, F7, G7 and H7 racks (as of Jan. 2022) in the MC 3015 server room. There is an additional rack in the DC 3558 machine room on the third floor. Our office terminals are in the CSC office, in MC 3036/3037.&lt;br /&gt;
&lt;br /&gt;
= Web Server =&lt;br /&gt;
You are highly encouraged to avoid running anything that&#039;s not directly related to your CSC webspace on our web server. We have plenty of general-use machines; please use those instead. You can even edit web pages from any other machine; usually the only reason you&#039;d &#039;&#039;need&#039;&#039; to be on caffeine is for database access.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;caffeine&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Caffeine is the Computer Science Club&#039;s web server. It serves websites, databases for websites, and a large amount of other services.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently a virtual machine hosted on phosphoric-acid&lt;br /&gt;
** 12 vCPUs&lt;br /&gt;
** 32GB of RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Club and member web sites with [[Apache]]&lt;br /&gt;
* [[MySQL]] databases&lt;br /&gt;
* [[PostgreSQL]] databases&lt;br /&gt;
* [[ceo]] daemon&lt;br /&gt;
* mail was migrated to [[#mail|mail]]&lt;br /&gt;
&lt;br /&gt;
= General-Use Servers =&lt;br /&gt;
&lt;br /&gt;
These machines can be used for (nearly) anything you like (though be polite and remember that these are shared machines). Recall that when you signed the Machine Usage Agreement, you promised not to use these machines to generate profit (so no cryptocurrency mining).&lt;br /&gt;
&lt;br /&gt;
For computationally-intensive jobs (CPU/memory bound) we recommend running on high-fructose-corn-syrup, carbonated-water, sorbitol, mannitol, or corn-syrup, listed in roughly decreasing order of available resources. For low-intensity interactive jobs, such as IRC clients, we recommend running on neotame. If you have a long-running computationally intensive job, it&#039;s good to nice[https://en.wikipedia.org/wiki/Nice_(Unix)] your process, and possibly let syscom know too.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
PowerEdge 2950&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 × Intel Xeon E5405 (2.00 GHz, 4 cores each)&lt;br /&gt;
* 32 GB RAM&lt;br /&gt;
* eth0 (&amp;quot;Gb0&amp;quot;) mac addr 00:24:e8:52:41:27&lt;br /&gt;
* eth1 (&amp;quot;Gb1&amp;quot;) mac addr 00:24:e8:52:41:29&lt;br /&gt;
* IPMI mac addr 00:24:e8:52:41:2b&lt;br /&gt;
* 3 &amp;amp;times; Western-Digital 160GB SATA hard drive (445 GB software RAID0 array)&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* Use eth0/Gb0 for the mathstudentorgsnet connection&lt;br /&gt;
* Has IPMI at corn-syrup-ipmi.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Hosts 1 TB &amp;lt;tt&amp;gt;[[scratch|/scratch]]&amp;lt;/tt&amp;gt; and exports via NFS (sec=krb5)&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;high-fructose-corn-syrup&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
High-fructose-corn-syrup (or hfcs) is a large SuperMicro server. It&#039;s been in CSC service since April 2012.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6272 (2.4 GHz, 16 cores each)&lt;br /&gt;
* 192 GB RAM&lt;br /&gt;
* Supermicro H8QGi+-F Motherboard Quad 1944-pin Socket [http://csclub.uwaterloo.ca/misc/manuals/motherboard-H8QGI+-F.pdf (Manual)]&lt;br /&gt;
* 500 GB Seagate Barracuda&lt;br /&gt;
* Supermicro Case Rackmount CSE-748TQ-R1400B 4U [http://csclub.uwaterloo.ca/misc/manuals/SC748.pdf (Manual)]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;carbonated-water&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
carbonated-water is a Dell R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;01/19/23: IPMI (temporarily) disconnected.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 processors (2.3 GHz, 12 cores each)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;neotame&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
neotame is a SuperMicro server funded by MEF. It is the successor to taurine.&lt;br /&gt;
&lt;br /&gt;
We discourage running computationally-intensive jobs on neotame as many users run interactive applications such as IRC clients on it and any significant service degradation will be more likely to affect other users (who will probably notice right away).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* SSH server also listens on ports 21, 22, 53, 80, 81, 443, 8000, 8080 for your convenience.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;sorbitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
sorbitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;mannitol&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
mannitol is a SuperMicro server funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2630 v4 processors (2.2 GHz, 10 cores/20 threads each)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
&lt;br /&gt;
= Office Terminals =&lt;br /&gt;
&lt;br /&gt;
It&#039;s possible to SSH into these machines, but we discourage you from trying to use these machines when you&#039;re not sitting in front of them. They are bounced at least every time our login manager, lightdm, throws a tantrum (which is several times a day). These are for use inside our physical office.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;cyanide&#039;&#039; ==&lt;br /&gt;
(Work in progress)&lt;br /&gt;
&lt;br /&gt;
cyanide is a Mac Mini.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;natural-flavours&#039;&#039; ==&lt;br /&gt;
&lt;br /&gt;
Natural-flavours is an office terminal; it used to be our mirror.&lt;br /&gt;
&lt;br /&gt;
In Fall 2016, it received a major upgrade thanks to MathSoc&#039;s Capital Improvement Fund.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i7-6700k&lt;br /&gt;
* 2x8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Cup Holder (the DVD drive has power, but is not connected to the motherboard)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;powernap&#039;&#039;==&lt;br /&gt;
powernap is a [https://support.apple.com/kb/sp710 Mac Mini (Late 2014)].&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* Intel i7-4578U (4) @ 3.500GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Intel Iris Graphics 5100&lt;br /&gt;
* 256GB On-board SSD&lt;br /&gt;
&lt;br /&gt;
=== Speaker === &lt;br /&gt;
powernap has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
* MPD for playing music. Only office/termcom/syscom can log into powernap. Use &amp;lt;code&amp;gt;ncmpcpp&amp;lt;/code&amp;gt; to control MPD.&lt;br /&gt;
* Bluetooth audio receiver. Only syscom can control Bluetooth pairing. Use &amp;lt;code&amp;gt;bluetoothctl&amp;lt;/code&amp;gt; to control Bluetooth.&lt;br /&gt;
&lt;br /&gt;
Music is located in /music on the office terminals.&lt;br /&gt;
&lt;br /&gt;
= Progcom Only =&lt;br /&gt;
The Programme Committee has access to a VM on corn-syrup called &#039;progcom&#039;. They have sudo rights in this VM so they may install and run their own software inside it. This VM should only be accessible by members of progcom or syscom.&lt;br /&gt;
&lt;br /&gt;
= Syscom Only =&lt;br /&gt;
&lt;br /&gt;
The following systems may only be accessible to members of the [[Systems Committee]] for a variety of reasons, the most common being that some of these machines host [[Kerberos]] authentication services for the CSC.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;xylitol&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
xylitol is a Dell PowerEdge R815 donated by CSCF. It is primarily a container host for services previously hosted on aspartame and dextrose, including munin, rt, mathnews, auth1, and dns1. It was provisioned with the intent to replace both of those hosts.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;01/19/23: IPMI (temporarily) disconnected.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x AMD Opteron 6176 (2.3 GHz, 12 cores each; 48 cores total)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 500GB volume group on RAID1 SSD (xylitol-mirrored)&lt;br /&gt;
* ~500 GB volume group on RAID10 HDD (xylitol-raidten)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth1&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] primary&lt;br /&gt;
*[[Kerberos]] primary&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chat&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#xylitol|xylitol]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* The Lounge web IRC client (https://chat.csclub.uwaterloo.ca)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phosphoric-acid&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phosphoric-acid is a Dell PowerEdge R815 donated by CSCF and is a clone of xylitol. It may be used to provide redundant cloud services in the future.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;01/19/23: IPMI (temporarily) disconnected.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* (clone of Xylitol)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[#caffeine|caffeine]]&lt;br /&gt;
*[[#coffee|coffee]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;coffee&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Virtual machine running on phosphoric-acid.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Database#MySQL|MySQL]]&lt;br /&gt;
*[[Database#Postgres|Postgres]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;cobalamin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Dell PowerEdge 2950 donated to us by FEDS. Located in the Science machine room on the first floor of Physics. Will act as a backup server for many things.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 × Intel Xeon E5420 (2.50 GHz, 4 cores)&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* Broadcom NetworkXtreme II&lt;br /&gt;
* 2x73GB Hard Drives, hardware RAID1&lt;br /&gt;
** Soon to be 2x1TB in MegaRAID1&lt;br /&gt;
*http://www.dell.com/support/home/ca/en/cabsdt1/product-support/servicetag/51TYRG1/configuration&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Containers: [[#auth2|auth2]]&lt;br /&gt;
&lt;br /&gt;
==== Notes ====&lt;br /&gt;
&lt;br /&gt;
* The network card requires non-free drivers. Be sure to use an installation disc with non-free.&lt;br /&gt;
&lt;br /&gt;
* We have separate IP ranges for cobalamin and its containers because the machine is located in a different building. They are:&lt;br /&gt;
** VLAN ID 506 (csc-data1): 129.97.18.16/29; gateway 129.97.18.17; mask 255.255.255.240&lt;br /&gt;
** VLAN ID 504 (csc-ipmi): 172.19.5.24/29; gateway 172.19.5.25; mask 255.255.255.248&lt;br /&gt;
&lt;br /&gt;
* For some reason, the keyboard is terrible; try to avoid having to use it. It&#039;s doable, but painful. IPMI works now, so we don&#039;t need to bug anyone for physical access, which is better anyway.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;auth2&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Container on [[#cobalamin|cobalamin]].&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[LDAP]] secondary&lt;br /&gt;
*[[Kerberos]] secondary&lt;br /&gt;
&lt;br /&gt;
MAC Address: c2:c0:00:00:00:a2&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mail&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
mail is the CSC&#039;s mail server. It hosts mail delivery, imap(s), smtp(s), and mailman. It is also syscom-only. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mail]] services&lt;br /&gt;
* mailman (web interface at [http://mailman.csclub.uwaterloo.ca/])&lt;br /&gt;
*[[Webmail]]&lt;br /&gt;
*[[ceo]] daemon&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-benzoate is our previous mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It is currently sitting in the office pending repurposing. Will likely become a machine for backups in DC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon Quad Core E5405 @ 2.00 GHz&lt;br /&gt;
* 16GB RAM&lt;br /&gt;
* vg0: 228 GB block device behind DELL PERC 6/i (contains root partition)&lt;br /&gt;
&lt;br /&gt;
Spare disks are currently in the office underneath maltodextrin.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-benzoate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate is our mirror server, funded by MEF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 36 drive Supermicro chassis (SSG-6048R-E1CR36L) &lt;br /&gt;
* 1 x Intel Xeon E5-2630 v3 (8 cores, 2.40 GHz)&lt;br /&gt;
* 64 GB (4 x 16GB) of DDR4 (2133 MHz) ECC RAM&lt;br /&gt;
* 2 x 1 TB Samsung Evo 850 SSD drives&lt;br /&gt;
* 17 x 4 TB Western Digital Gold drives (separate funding from MEF)&lt;br /&gt;
* 9 x 18TB Seagate Exos X18 (8 in a ZFS RAID-Z2 pool, 1 hot spare)&lt;br /&gt;
* 10 Gbps SFP+ card (loaned from CSCF)&lt;br /&gt;
* 50 Gbps Mellanox QSFP card (from ginkgo; currently unconnected)&lt;br /&gt;
&lt;br /&gt;
==== Network Connections ====&lt;br /&gt;
&lt;br /&gt;
potassium-benzoate has two connections to our network:&lt;br /&gt;
&lt;br /&gt;
* 1 Gbps to our switch (used for management)&lt;br /&gt;
* 2 x 10 Gbps (LACP bond) to mc-rt-3015-mso-a (for mirror)&lt;br /&gt;
&lt;br /&gt;
Mirror&#039;s bandwidth is limited to 1 Gbps on each of the 4 campus internet links; traffic within campus is not limited.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[[Mirror]]&lt;br /&gt;
*[[Talks]] mirror&lt;br /&gt;
*[[Debian_Repository|CSClub packages repository]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;munin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
munin is a syscom-only monitoring and accounting machine. It is a [[Virtualization#Linux_Containers|Linux container]] at present.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* currently hosted on [[#xylitol|xylitol]]&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://munin.csclub.uwaterloo.ca munin] systems monitoring daemon&lt;br /&gt;
==&#039;&#039;yerba-mate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* test-ipv6 (test-ipv6.csclub.uwaterloo.ca; a test-ipv6.com mirror)&lt;br /&gt;
* shibboleth (under development)&lt;br /&gt;
&lt;br /&gt;
Also used for experimenting with new CSC services.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;citric-acid&#039;&#039;==&lt;br /&gt;
A Dell PowerEdge provided by CSCF to replace [[Machine List#aspartame|aspartame]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 1 x AMD Opteron 6174 (12 cores, 2.20 GHz)&lt;br /&gt;
* 128 GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Being configured for [https://pass.uwaterloo.ca pass.uwaterloo.ca], a university-wide password manager hosted by CSC as a demo service for all Nexus (ADFS) users&lt;br /&gt;
&lt;br /&gt;
= Cloud =&lt;br /&gt;
&lt;br /&gt;
These machines are used by [https://cloud.csclub.uwaterloo.ca cloud.csclub.uwaterloo.ca]. The machines themselves are restricted to syscom-only access.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;chamomile&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R815 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 4x 2.20GHz 12-core processors (AMD Opteron(tm) Processor 6174)&lt;br /&gt;
* 128GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack primary controller services for csclub.cloud&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;riboflavin&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge R515 provided by CSCF.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 2.6 GHz 8-core processors (AMD Opteron(tm) Processor 4376 HE)&lt;br /&gt;
* 64GB RAM&lt;br /&gt;
* 10GbE connection to core router&lt;br /&gt;
* 2x 500GB internal SSD&lt;br /&gt;
* 12x Seagate 4TB SSHD&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack block and object storage for csclub.cloud&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;guayusa&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Dell PowerEdge 2950 donated by a CSC member.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x 3.00 GHz quad core Intel Xeon 5160&lt;br /&gt;
* 32GB RAM&lt;br /&gt;
* 2TB PCI-Express Flash SSD&lt;br /&gt;
* 2x75GB 15k drives (RAID 1)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
Currently being used to set up Nextcloud.&lt;br /&gt;
&lt;br /&gt;
Was used to experiment with the following then-new CSC services:&lt;br /&gt;
&lt;br /&gt;
* logstash (testing of logstash)&lt;br /&gt;
* load-balancer-01&lt;br /&gt;
* cifs (for booting ginkgo from CD)&lt;br /&gt;
* caffeine-01 (testing of multi-node caffeine)&lt;br /&gt;
* block1.cloud&lt;br /&gt;
* object1.cloud&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginkgo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by MEF for CSC web hosting. Located in MC 3015.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;01/19/23: IPMI (temporarily) disconnected.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon E5-2697 v4 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 256GB RAM&lt;br /&gt;
* 2 x 1.2 TB SSD (400GB of each for RAID 1)&lt;br /&gt;
* 10GbE onboard, 25GbE SFP+ card (also included 50GbE SFP+ card which will probably go in mirror)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* controller1.cloud&lt;br /&gt;
* db1.cloud&lt;br /&gt;
* router1.cloud (NAT for cloud tenant network)&lt;br /&gt;
* network1.cloud&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;biloba&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Supermicro server funded by SLEF for CSC web hosting. Located in DC 3558.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2x Intel Xeon Gold 6140 @ 2.30GHz [18 cores each]&lt;br /&gt;
* 384GB RAM&lt;br /&gt;
* 12 3.5&amp;quot; Hot Swap Drive Bays&lt;br /&gt;
** 2 x 480 GB SSD&lt;br /&gt;
* 10GbE onboard, 10GbE SFP+ card (on loan from CSCF)&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* OpenStack Compute machine&lt;br /&gt;
&lt;br /&gt;
No longer in use:&lt;br /&gt;
&lt;br /&gt;
* caffeine&lt;br /&gt;
* mail&lt;br /&gt;
* mattermost&lt;br /&gt;
&lt;br /&gt;
= Storage =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs00&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs00 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;fs01&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
fs01 is a NetApp FAS3040 series fileserver donated by CSCF.&lt;br /&gt;
&lt;br /&gt;
It is currently being used for testing of HA NetApp nodes and for serving home directories directly from the NetApp filer.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
= Other =&lt;br /&gt;
&lt;br /&gt;
== ps3 ==&lt;br /&gt;
This is just a very wide PS3, the model that supported running Linux natively. A firmware update removed this feature, but it can still be done via homebrew. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Specs&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
* It&#039;s a PS3.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2022-10-24&#039;&#039;&#039; - Thermal paste replaced + firmware updated to latest supported version, also modded.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;binaerpilot&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Tobi expansion board. It is attached to corn-syrup in the machine room, but currently turned off until someone can figure out what is wrong with it.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;anamanaguchi&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This is a Gumstix Overo Tide CPU on a Chestnut43 expansion board. It is currently in the hardware drawer in the CSC.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* TI OMAP 3530 750 MHz (ARM Cortex-A8)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;digital cutter&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
See [[Digital Cutter|here]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;mathnews&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
[[#xylitol|xylitol]] hosts a systemd-nspawn container which serves as the mathNEWS webserver. It is administered by mathNEWS, as a pilot for providing containers to select groups who have more specialized demands than the general-use infrastructure can meet.&lt;br /&gt;
&lt;br /&gt;
= Decommissioned =&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;aspartame&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
aspartame was a taurine clone donated by CSCF. It was once our primary file server, serving as the gateway interface to space on phlogiston. It also used to host the [[#auth1|auth1]] container, which was temporarily moved to [[#dextrose|dextrose]]. Decommissioned in March 2021 after refusing to boot following a power outage.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;psilodump&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
psilodump is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling phlogiston, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
psilodump was plugged into aspartame. It&#039;s still installed but inaccessible.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;phlogiston&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
phlogiston is a NetApp FAS3000 series fileserver donated by CSCF. It, along with its sibling psilodump, hosted disk shelves exported as iSCSI block devices.&lt;br /&gt;
&lt;br /&gt;
phlogiston is turned off and should remain that way. It is misconfigured to have its drives overlap with those owned by psilodump, and if it is turned on, it will likely cause irreparable data loss.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 10GB RAM&lt;br /&gt;
&lt;br /&gt;
==== Notes from before decommissioning ====&lt;br /&gt;
&lt;br /&gt;
* The lxc files are still present and should not be started up, or else the two copies of auth1 will collide.&lt;br /&gt;
* It currently cannot route the 10.0.0.0/8 block due to a misconfiguration on the NetApp. This should be fixed at some point.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;glomag&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Glomag hosted [[#caffeine|caffeine]]. Decommissioned April 6, 2018.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Xeon X3450 @ 2.67 GHz&lt;br /&gt;
* 6 GB RAM&lt;br /&gt;
* vg0: 465 GB software RAID1 (contains root partition):&lt;br /&gt;
** 750 GB Seagate Barracuda SATA hard drive&lt;br /&gt;
** 500 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
* vg1: 596 GB software RAID1 (contains caffeine):&lt;br /&gt;
** 2 &amp;amp;times; 640 GB Western-Digital Caviar Blue SATA hard drive&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Before its decommissioning, glomag hosted [[#caffeine|caffeine]], [[#mail|mail]], and [[#munin|munin]] as [[Virtualization#Linux_Container|Linux containers]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;Lisp machine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
A Symbolics XL1200 Lisp machine. Donated to a new home when we couldn&#039;t get it working.&lt;br /&gt;
&lt;br /&gt;
http://www.globalnerdy.com/2008/12/03/symbolics-xl1200-lisp-machine-free-to-a-good-home/ for some history on this hardware.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
Currently inoperable due to (at least) a missing console cable.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;ginseng&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Ginseng used to be our fileserver, before aspartame and the NetApp took over.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Pentium Dual Core E2180&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/s3000ah_tps_1_1.pdf Intel S3000AHV Motherboard]&lt;br /&gt;
* 4 &amp;amp;times; 640 GB Western-Digital Caviar Blue in [[wikipedia:Nested_RAID_levels#RAID_10_.28RAID_1.2B0.29|RAID 10]] behind a [http://www.3ware.com/products/serial_ata2-9650.asp 3ware 9650SE RAID card].&lt;br /&gt;
[[Category:Hardware]]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;calum&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Calum used to be our main server and was named after Calum T Dalek.  Purchased new by the club in 1994. &lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* SPARCserver 10 (headless SPARCstation 10)&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;paza&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An iMac G3 that was used as a dumb terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 233Mhz PowerPC 740/750&lt;br /&gt;
* 96 MB RAM&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;romana&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Romana was a BeBox that had been in the CSC&#039;s possession since long before BeOS became defunct.&lt;br /&gt;
&lt;br /&gt;
Confirmed on March 19th, 2016 to be fully functional. An SSHv1-compatible client was installed from http://www.abstrakt.ch/be/ and a compatible firewalled daemon was started on Sucrose (living in /root, prefix is /root/ssh-romana). The insecure daemon is to be used as a bastion host to jump to hosts only supporting &amp;gt;=SSHv2. The mail daemon on the BeBox has also been configured to send mail through mail.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 PowerPC based processors&lt;br /&gt;
* Stylish Blinken processor-load lights&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sodium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Sodium-citrate was an SGI O2 machine.&lt;br /&gt;
&lt;br /&gt;
In order to net boot, you need to set &amp;lt;code&amp;gt;/proc/sys/net/ipv4/ip_no_pmtu_disc&amp;lt;/code&amp;gt; to 1. When the O2 boots, hit F5 at the boot menu and type &amp;lt;code&amp;gt;bootp():&amp;lt;/code&amp;gt;.&lt;br /&gt;
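&lt;br /&gt;
On a Linux host serving the boot image, that sysctl can be set like so (a sketch; the bootp/tftp server setup itself is not covered here):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;# disable path-MTU discovery, which the O2&#039;s PROM cannot cope with&lt;br /&gt;
sysctl -w net.ipv4.ip_no_pmtu_disc=1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;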
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* SGI O2 MIPS processor&lt;br /&gt;
* 423 MB (?) RAM&lt;br /&gt;
* 2 &amp;amp;times; 2 GB hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;acesulfame-potassium&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
An old office terminal.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium 4 2.67GHz&lt;br /&gt;
* 1GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ABIT_VT7.pdf ABIT VT7] Motherboard&lt;br /&gt;
* ATI Radeon 7000&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;skynet&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
skynet was a Sun E6500 machine donated by Sanjay Singh. It was never fully set up.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 15 full CPU/memory boards&lt;br /&gt;
** 2x UltraSPARC II 464MHz / 8MB Cache Processors&lt;br /&gt;
** ??? RAM?&lt;br /&gt;
* 1 I/O board (type=???)&lt;br /&gt;
** ???x disks?&lt;br /&gt;
* 1 CD-ROM drive&lt;br /&gt;
&lt;br /&gt;
*[http://mirror.csclub.uwaterloo.ca/csclub/sun_e6500/ent6k.srvr/ e6500 documentation (hosted on mirror, currently dead link)]&lt;br /&gt;
*[http://docs.oracle.com/cd/E19095-01/ent6k.srvr/ e6500 documentation (backup link)]&lt;br /&gt;
*[http://www.e6500.com/ e6500]&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;freebsd&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
FreeBSD was a virtual machine with FreeBSD installed.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Newer software&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;rainbowdragoneyes&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Rainbowdragoneyes was our Lemote Fuloong MIPS machine. This machine is aliased to rde.csclub.uwaterloo.ca.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 800MHz MIPS Loongson 2f CPU&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;denardo&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Due to some instability, general uselessness, and the acquisition of a more powerful SPARC machine from MFCF, denardo was decommissioned in February 2015.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Sun Fire V210&lt;br /&gt;
* TI UltraSparc IIIi (Jalapeño)&lt;br /&gt;
* 2 GB RAM&lt;br /&gt;
* 160 GB RAID array&lt;br /&gt;
* ALOM on denardo-alom.csclub can be used to power machine on/off&lt;br /&gt;
==&#039;&#039;artificial-flavours&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Artificial-flavours was our secondary (backup services) server. It used to be an office terminal. It was decommissioned in February 2015 and transferred to the ownership of Women in Computer Science (WiCS).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Celeron 3.2GHz&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/Biostar_P4M80-M4.pdf Biostar P4M80-M4] Motherboard&lt;br /&gt;
* Western-Digital 80 GB ATA hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-citrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
Potassium-citrate is a dual-processor Alpha machine. It is on extended loan from pbarfuss.&lt;br /&gt;
&lt;br /&gt;
It is temporarily decommissioned pending the reinstallation of a supported operating system (such as OpenBSD).&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Alphaserver CS20 (2 833MHz EV68al CPUs)&lt;br /&gt;
* 512MB RAM&lt;br /&gt;
* 36 GB Seagate SCSI hard drive&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;potassium-nitrate&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
This was a Sun Fire E2900 from a decommissioned MFCF compute cluster. It had a SPARC architecture and ran OpenBSD, unlike many of our other systems which are x86/x86-64 and Linux/Debian. After multiple unsuccessful attempts to boot a modern Linux kernel and possible hardware instability, it was determined to be non-cost-effective and non-effort-effective to put more work into running this machine. The system was reclaimed by MFCF where someone from CS had better luck running a suitable operating system (probably Solaris).&lt;br /&gt;
&lt;br /&gt;
The name is from saltpetre, because sparks.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 24 CPUs&lt;br /&gt;
* 90GB main memory&lt;br /&gt;
* 400GB scratch disk local storage in /scratch-potassium-nitrate&lt;br /&gt;
&lt;br /&gt;
There is a [[Sun 2900 Strategy Guide|setup guide]] available for this machine.&lt;br /&gt;
&lt;br /&gt;
See also [[Sun 2900]].&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;taurine&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: On August 21, 2019, just before 2:30PM EDT, we were informed that taurine caught fire&#039;&#039;&#039;. As a result, taurine has been decommissioned as of Fall 2019.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 2 AMD Opteron 2218 CPUs&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 136 GB LVM volume group&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
* Virtual machines&lt;br /&gt;
* BitlBee IRC instant messaging gateway (localhost only)&lt;br /&gt;
*[[ident]] server to maintain high connection cap to freenode&lt;br /&gt;
* Runs ssh on ports 21, 22, 53, 80, 81, 443, 8000, and 8080 for users&#039; convenience.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;dextrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
dextrose was a [[#taurine|taurine]] clone donated by CSCF and was decommissioned in Fall 2019 after being replaced with a more powerful server.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;sucrose&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
sucrose was a [[#taurine|taurine]] clone donated by CSCF. It was decommissioned in Fall 2019 following multiple hardware failures.&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;goto80&#039;&#039;==&lt;br /&gt;
&#039;&#039;&#039;Note (2022-10-25): This seems to have gone missing or otherwise left our hands.&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
This was a small ARM machine we picked up in order to have hardware similar to that used in the Real Time Operating Systems (CS 452) course. It has a [[TS-7800_JTAG|JTAG]] interface. It was located in the office on the top shelf above strombola.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* 500 MHz Feroceon (ARM926ej-s compatible) processor&lt;br /&gt;
* ARMv5TEJ architecture&lt;br /&gt;
&lt;br /&gt;
Pass the &amp;lt;code&amp;gt;-march=armv5te -mtune=arm926ej-s&amp;lt;/code&amp;gt; options to GCC.&lt;br /&gt;
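&lt;br /&gt;
For example, to compile a C file for this board (the &amp;lt;code&amp;gt;arm-linux-gnueabi-gcc&amp;lt;/code&amp;gt; cross-compiler name below is illustrative; substitute whatever ARM toolchain is installed):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
arm-linux-gnueabi-gcc -march=armv5te -mtune=arm926ej-s -o hello hello.c&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;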
&lt;br /&gt;
For information on the TS-7800&#039;s hardware see here:&lt;br /&gt;
http://www.embeddedarm.com/products/board-detail.php?product=ts-7800&lt;br /&gt;
&lt;br /&gt;
==&#039;&#039;nullsleep&#039;&#039;==&lt;br /&gt;
&lt;br /&gt;
nullsleep is an [http://csclub.uwaterloo.ca/misc/manuals/ASRock_ION_330.pdf ASRock ION 330] machine given to us by CSCF and funded by MEF.&lt;br /&gt;
&lt;br /&gt;
It was decommissioned on 2023-03-20 due to repeated unexpected shutdowns, and replaced by [[#powernap|powernap]].&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel® Dual Core Atom™ 330&lt;br /&gt;
* 2GB RAM&lt;br /&gt;
* NVIDIA® ION™ graphics&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* DVD Burner&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Nullsleep has the office speakers (a pair of nice studio monitors) currently connected to it.&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
Nullsleep runs MPD for playing music. Control of MPD is available only to users in the &amp;quot;audio&amp;quot; group.&lt;br /&gt;
Music is located in /music on the office terminal.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;bit-shifter&#039;&#039; ==&lt;br /&gt;
bit-shifter was an office terminal, decommissioned in April 2023 due to its age. It was upgraded to the same specs as Strombola at an unknown point in time.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core 2 Quad CPU Q8300&lt;br /&gt;
* 4GB RAM&lt;br /&gt;
* Nvidia GeForce GT 440&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
* Jacob Parker&#039;s Firewire Card&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;strombola&#039;&#039;==&lt;br /&gt;
Strombola was an office terminal named after Gordon Strombola. It was retired in April 2023.&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
* Intel Pentium G4600, 2 cores @ 3.6GHz&lt;br /&gt;
* 8 GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
==== Speakers ====&lt;br /&gt;
Strombola used to have integrated 5.1 channel sound before we got new speakers and moved audio stuff to nullsleep.&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;gwem&#039;&#039; ==&lt;br /&gt;
gwem was an office terminal that was created because AMD donated a graphics card. It entered CSC service in February 2012.&lt;br /&gt;
&lt;br /&gt;
=== Specs ===&lt;br /&gt;
&lt;br /&gt;
* AMD FX-8150 3.6GHz 8-Core CPU&lt;br /&gt;
* 16 GB RAM&lt;br /&gt;
* AMD Radeon 6870 HD 1GB GPU&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/ga-990fxa-ud7_e.pdf Gigabyte GA-990FXA-UD7] Motherboard&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
&lt;br /&gt;
== &#039;&#039;maltodextrin&#039;&#039; ==&lt;br /&gt;
Maltodextrin was an office terminal. It was upgraded in Spring 2014 after an unidentified failure (apparently replacing its original [http://csclub.uwaterloo.ca/misc/manuals/motherboard_manual_ga-ep45-ud3l.pdf Gigabyte GA-EP45-UD3L] motherboard). Not operational (no video output) as of July 2022.&lt;br /&gt;
&lt;br /&gt;
(&#039;&#039;Specs are outdated at least as of 2023-05-27.&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
==== Specs ====&lt;br /&gt;
&lt;br /&gt;
* Intel Core i3-4130 @ 3.40 GHz&lt;br /&gt;
* 8GB RAM&lt;br /&gt;
* 1x 64GB SanDisk SDSSDP064G SSD&lt;br /&gt;
*[http://csclub.uwaterloo.ca/misc/manuals/E8425_H81I_PLUS.pdf ASUS H81-PLUS] Motherboard&lt;br /&gt;
&lt;br /&gt;
==== Services ====&lt;br /&gt;
&lt;br /&gt;
*[http://csclub.uwaterloo.ca/office/webcam Office webcam]&lt;br /&gt;
&lt;br /&gt;
= UPS =&lt;br /&gt;
&lt;br /&gt;
All of the machines in the MC 3015 machine room are connected to one of our UPSs.&lt;br /&gt;
&lt;br /&gt;
All of our UPSs can be monitored via CSCF:&lt;br /&gt;
&lt;br /&gt;
* MC3015-UPS-B2&lt;br /&gt;
* mc-3015-e7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced July 2014) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-e7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-f7-ups-1.cs.uwaterloo.ca (rbc55, batteries replaced Feb 2017) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-f7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2010) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-g7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-g7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-1.cs.uwaterloo.ca (su5000t, batteries replaced 2004) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-1&amp;amp;var-Interval=30m)&lt;br /&gt;
* mc-3015-h7-ups-2.cs.uwaterloo.ca (unknown) (https://metrics.cscf.uwaterloo.ca/grafana/dashboard/db/ups-statistics?orgId=1&amp;amp;var-UPS=mc-3015-h7-ups-2&amp;amp;var-Interval=30m)&lt;br /&gt;
&lt;br /&gt;
We will receive email alerts for any issues with the UPS. Their status can be monitored via [[SNMP]].&lt;br /&gt;
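&lt;br /&gt;
For example, a quick sketch of an SNMP query against one UPS using the standard UPS-MIB (RFC 1628) subtree; the &amp;quot;public&amp;quot; community string is an assumption, so substitute the real one:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
snmpwalk -v2c -c public mc-3015-e7-ups-1.cs.uwaterloo.ca 1.3.6.1.2.1.33&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;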
&lt;br /&gt;
TODO: Fix labels &amp;amp; verify info is correct &amp;amp; figure out why we can&#039;t talk to cacti.&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=5131</id>
		<title>SSL</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=SSL&amp;diff=5131"/>
		<updated>2023-10-19T18:06:11Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== GlobalSign ==&lt;br /&gt;
&lt;br /&gt;
The CSC currently has an SSL certificate from GlobalSign for *.csclub.uwaterloo.ca, provided at no cost to us through IST. GlobalSign likes to take a long time to respond to certificate signing requests (CSRs) for wildcard certs, so our CSR really needs to be handed off to IST at least 2 weeks in advance. You can do it sooner: the new certificate&#039;s expiry date will be the old expiry date + 1 year (+ a bonus). Having an invalid cert for any length of time leads to terrible breakage, followed by terrible workarounds and prolonged problems.&lt;br /&gt;
&lt;br /&gt;
When the certificate is due to expire in a month or two, syscom should (but apparently doesn&#039;t always) get an email notification. This will include a renewal link. Otherwise, use the [https://uwaterloo.ca/information-systems-technology/about/organizational-structure/information-security-services/certificate-authority/globalsign-signed-x5093-certificates/self-service-globalsign-ssl-certificates IST-CA self service system]. Please keep a copy of the key, CSR and (once issued) certificate in &amp;lt;tt&amp;gt;/home/sysadmin/certs&amp;lt;/tt&amp;gt;. The OpenSSL examples linked there are good for generating a 2048-bit RSA key and a corresponding CSR. It&#039;s probably a good idea to change the private key each time (it&#039;s not much effort anyway). Just make sure your CSR is for &amp;lt;tt&amp;gt;*.csclub.uwaterloo.ca&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
At the self-service portal, these options worked in 2013. If you need IST assistance, [mailto:ist-ca@uwaterloo.ca ist-ca@uwaterloo.ca] is the email address you should contact.&lt;br /&gt;
  Products: OrganizationSSL&lt;br /&gt;
  SSL Certificate Type: Wildcard SSL Certificate&lt;br /&gt;
  Validity Period: 1 year&lt;br /&gt;
  Are you switching from a Competitor? No, I am not switching&lt;br /&gt;
  Are you renewing this Certificate? Yes (paste current certificate)&lt;br /&gt;
  30-day bonus: Yes (why not?)&lt;br /&gt;
  Add specific Subject Alternative Names (SANs): No (*.csclub.uwaterloo.ca automatically adds csclub.uwaterloo.ca as a SAN)&lt;br /&gt;
  Enter Certificate Signing Request (CSR): Yes (paste CSR)&lt;br /&gt;
  Contact Information:&lt;br /&gt;
    First Name: Computer Science Club&lt;br /&gt;
    Last Name: Systems Committee&lt;br /&gt;
    Telephone: +1 519 888 4567 x33870&lt;br /&gt;
    Email Address: syscom@csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
=== Helpful links ===&lt;br /&gt;
* [https://support.globalsign.com/ssl/ssl-certificates-installation/generate-csr-openssl How to generate a new CSR and private key]&lt;br /&gt;
* [https://uwaterloo.atlassian.net/wiki/spaces/ISTKB/pages/262013183/How+to+obtain+a+new+GlobalSign+certificate+or+renew+an+existing+one How to obtain a new GlobalSign certificate or renew an existing one]&lt;br /&gt;
* [https://system.globalsign.com/bm/public/certificate/poporder.do?domain=PAR12271n5w6s27pvg8d92v4150t GlobalSign UWaterloo self-service page]&lt;br /&gt;
* [https://support.globalsign.com/ca-certificates/intermediate-certificates/organizationssl-intermediate-certificates GlobalSign intermediate certificate] (needed to create a certificate chain; see below)&lt;br /&gt;
&lt;br /&gt;
=== OpenSSL cheat sheet ===&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Generate a new CSR and private key (do this in a new directory):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -out csclub.uwaterloo.ca.csr -new -newkey rsa:2048 -keyout csclub.uwaterloo.ca.key -nodes&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Enter the following information at the prompts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Country Name (2 letter code) [AU]:CA&lt;br /&gt;
State or Province Name (full name) [Some-State]:Ontario&lt;br /&gt;
Locality Name (eg, city) []:Waterloo&lt;br /&gt;
Organization Name (eg, company) [Internet Widgits Pty Ltd]:University of Waterloo&lt;br /&gt;
Organizational Unit Name (eg, section) []:Computer Science Club&lt;br /&gt;
Common Name (e.g. server FQDN or YOUR name) []:*.csclub.uwaterloo.ca&lt;br /&gt;
Email Address []:systems-committee@csclub.uwaterloo.ca&lt;br /&gt;
&lt;br /&gt;
Please enter the following &#039;extra&#039; attributes&lt;br /&gt;
to be sent with your certificate request&lt;br /&gt;
A challenge password []:&lt;br /&gt;
An optional company name []:&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
View the information inside a CSR:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl req -noout -text -in csclub.uwaterloo.ca.csr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
View the information inside a private key:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl pkey -noout -text -in csclub.uwaterloo.ca.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
View information inside a certificate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
openssl x509 -noout -text -in csclub.uwaterloo.ca.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== csclub.cloud ===&lt;br /&gt;
Once a year, someone from IST will ask us to create a temporary TXT record for csclub.cloud to prove to GlobalSign that we own it. This must be created at the &amp;lt;b&amp;gt;root&amp;lt;/b&amp;gt; of the domain. Since this zone is managed dynamically (via the acme.sh script on biloba, see below), we need to freeze the zone and update /var/lib/bind/db.csclub.cloud directly. Here are the steps:&lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Run &amp;lt;code&amp;gt;rndc freeze csclub.cloud&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Open /var/lib/bind/db.csclub.cloud and add a new TXT record. It&#039;ll look something like&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
TXT &amp;quot;_globalsign-domain-verification=blablabla&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
In the same file, make sure to also update the SOA serial number. It should generally be YYYYMMDDNN where NN is a monotonically increasing counter (YYYYMMDD is the current date).&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Run a DNS query to make sure you can see the TXT record:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
dig -t txt @dns1 csclub.cloud&lt;br /&gt;
dig -t txt @dns2 csclub.cloud&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Email back the person from IST and let them know that we created the TXT record.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;&lt;br /&gt;
Once the certificate has been renewed, delete the TXT record, update the SOA serial number, and run &amp;lt;code&amp;gt;rndc reload&amp;lt;/code&amp;gt;.&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Run &amp;lt;code&amp;gt;rndc thaw csclub.cloud&amp;lt;/code&amp;gt;.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Certificate Files ==&lt;br /&gt;
Let&#039;s say you obtain a new certificate for *.csclub.uwaterloo.ca. Here are the files which should be stored in the certs folder:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;csclub.uwaterloo.ca.key: private key created by openssl&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;csclub.uwaterloo.ca.csr: certificate signing request created by openssl&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;order: order number from GlobalSign&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;csclub.uwaterloo.ca.crt: certificate created by GlobalSign&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;globalsign-intermediate.crt: intermediate certificate from GlobalSign, obtainable from [https://support.globalsign.com/ca-certificates/intermediate-certificates/organizationssl-intermediate-certificates here]. As of this writing, we use the &amp;quot;OrganizationSSL SHA-256 R3 Intermediate Certificate&amp;quot;. Just click the &amp;quot;View in Base64&amp;quot; button and copy the contents.&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;There is an alternative way to get the intermediate certificate: if you run &amp;lt;code&amp;gt;openssl x509 -noout -text -in csclub.uwaterloo.ca.crt&amp;lt;/code&amp;gt;, under X509v3 extensions &amp;gt; Authority Information Access, there should be a field called &amp;quot;CA Issuers&amp;quot; which has a URL which looks like http://secure.globalsign.com/cacert/gsrsaovsslca2018.crt. You can download that file and convert it to PEM:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget https://secure.globalsign.com/cacert/gsrsaovsslca2018.crt&lt;br /&gt;
openssl x509 -inform der -in gsrsaovsslca2018.crt -out globalsign-intermediate.crt&lt;br /&gt;
rm gsrsaovsslca2018.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;csclub.uwaterloo.ca.chain: create this with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat csclub.uwaterloo.ca.crt globalsign-intermediate.crt &amp;gt; csclub.uwaterloo.ca.chain&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;csclub.uwaterloo.ca.pem: create this with the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat csclub.uwaterloo.ca.key csclub.uwaterloo.ca.chain &amp;gt; csclub.uwaterloo.ca.pem&lt;br /&gt;
chmod 600 csclub.uwaterloo.ca.pem&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Certificate Locations ==&lt;br /&gt;
&lt;br /&gt;
Keep a copy of newly generated certificates in /users/sysadmin/certs.&lt;br /&gt;
&lt;br /&gt;
Below is a list of places where you&#039;ll need to put the new certificate to keep our services running. The private key (if applicable) should be kept next to the certificate with the extension .key.&lt;br /&gt;
&lt;br /&gt;
* caffeine:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* coffee:/etc/ssl/private/csclub.uwaterloo.ca (for PostgreSQL and MariaDB)&lt;br /&gt;
* mail:/etc/ssl/private/csclub-wildcard.crt (for Apache, Postfix and Dovecot)&lt;br /&gt;
* mailman:/etc/ssl/private/csclub-wildcard-chain.crt (for Apache)&lt;br /&gt;
* rt:/etc/ssl/private/csclub-wildcard.crt (for Apache)&lt;br /&gt;
* potassium-benzoate:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* phosphoric-acid:/etc/ssl/private/csclub-wildcard-chain.crt (for ceod)&lt;br /&gt;
* auth1:/etc/ssl/private/csclub-wildcard.crt (for slapd, make sure to &amp;lt;code&amp;gt;sudo service slapd restart&amp;lt;/code&amp;gt;)&lt;br /&gt;
* auth2:/etc/ssl/private/csclub-wildcard.crt (for slapd, make sure to &amp;lt;code&amp;gt;sudo service slapd restart&amp;lt;/code&amp;gt;)&lt;br /&gt;
* mattermost:/etc/ssl/private/csclub-wildcard.crt (for nginx)&lt;br /&gt;
* load-balancer-0(1|2):/etc/ssl/private/csclub.uwaterloo.ca (for haproxy) [temporarily down 2020]&lt;br /&gt;
* chat:/etc/ssl/private/csclub-wildcard-chain.crt (for nginx)&lt;br /&gt;
* prometheus:/etc/ssl/private/csclub-wildcard-chain.crt (for Apache)&lt;br /&gt;
* bigbluebutton:/etc/nginx/ssl/csclub-wildcard-chain.crt (podman container on xylitol)&lt;br /&gt;
* icy:/etc/ssl/private/csclub-wildcard.pem (for Icecast)&lt;br /&gt;
* chamomile:/etc/ssl/private/cloud.csclub.uwaterloo.ca.chain.crt, /etc/ssl/private/csclub.cloud.chain, /etc/ssl/private/csclub.uwaterloo.ca.chain (for nginx)&lt;br /&gt;
* biloba:/etc/ssl/private/cloud.csclub.uwaterloo.ca.chain.crt, /etc/ssl/private/csclub.cloud.chain, /etc/ssl/private/csclub.uwaterloo.ca.chain (for nginx)&lt;br /&gt;
* nextcloud (nspawn container inside guayusa): /etc/ssl/private/csclub.uwaterloo.ca.chain (for nginx)&lt;br /&gt;
* citric-acid (runs vaultwarden): /etc/ssl/private/csclub.uwaterloo.ca.{chain,key}&lt;br /&gt;
&lt;br /&gt;
Some services (e.g. Dovecot, Postfix) prefer to have the certificate chain in one file. Concatenate the appropriate intermediate root to the end of the certificate and store this as csclub-wildcard-chain.crt.&lt;br /&gt;
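&lt;br /&gt;
For example (using the filenames from the list above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cat csclub-wildcard.crt globalsign-intermediate.crt &amp;gt; csclub-wildcard-chain.crt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;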
&lt;br /&gt;
=== More certificate locations ===&lt;br /&gt;
We have some SSL certificates which are not used by web servers, but still need to be renewed eventually.&lt;br /&gt;
&lt;br /&gt;
==== Prometheus node exporter ====&lt;br /&gt;
All of our Prometheus node exporters are using mTLS via stunnel (every bare-metal host, as well as caffeine, coffee and mail, is running this exporter). The certificates (both client and server) are set to expire in &amp;lt;b&amp;gt;September 2031&amp;lt;/b&amp;gt;; before then, create new keypairs in /opt/prometheus/tls, and deploy the new server.crt, node.crt and node.key to /etc/stunnel/tls on all machines. Restart prometheus and all of the node exporters.&lt;br /&gt;
&lt;br /&gt;
==== ADFS ====&lt;br /&gt;
See [[ADFS]]. When the university&#039;s IdP certificate expires (&amp;lt;b&amp;gt;October 2025&amp;lt;/b&amp;gt;), we can just download a new one and restart Apache; when our own certificate expires (&amp;lt;b&amp;gt;July 2031&amp;lt;/b&amp;gt;), we need to submit a new form to IST (please do this &amp;lt;i&amp;gt;before&amp;lt;/i&amp;gt; the cert expires).&lt;br /&gt;
&lt;br /&gt;
==== Keycloak ====&lt;br /&gt;
See [[Keycloak]]. When the saml-passthrough certificate expires (&amp;lt;b&amp;gt;January 2032&amp;lt;/b&amp;gt;), you need to create a new keypair in /srv/saml-passthrough on caffeine, and upload the new certificate into the Keycloak UI (IdP settings). When the Keycloak SP certificate expires (&amp;lt;b&amp;gt;December 2031&amp;lt;/b&amp;gt;), make sure to create a new keypair and upload it to the Keycloak UI (Realm Settings).&lt;br /&gt;
&lt;br /&gt;
== letsencrypt ==&lt;br /&gt;
&lt;br /&gt;
We support letsencrypt for our virtual hosts with custom domains. We use &amp;lt;tt&amp;gt;certbot&amp;lt;/tt&amp;gt; from the Debian repositories, with a configuration file at &amp;lt;tt&amp;gt;/etc/letsencrypt/cli.ini&amp;lt;/tt&amp;gt; and a systemd timer to handle renewals.&lt;br /&gt;
&lt;br /&gt;
The setup for a new domain is:&lt;br /&gt;
&lt;br /&gt;
# Become &amp;lt;tt&amp;gt;certbot&amp;lt;/tt&amp;gt; on caffeine with &amp;lt;tt&amp;gt;sudo -u certbot bash&amp;lt;/tt&amp;gt; or similar.&lt;br /&gt;
# Run &amp;lt;tt&amp;gt;certbot certonly -c /etc/letsencrypt/cli.ini -d DOMAIN --logs-dir /tmp&amp;lt;/tt&amp;gt;. The logs directory isn&#039;t important; the logs are only needed for troubleshooting.&lt;br /&gt;
# Set up the Apache site configuration using the example below (Apache config is in /etc/apache2). Note the permanent redirect to https.&lt;br /&gt;
# Make sure to commit your changes when you&#039;re done.&lt;br /&gt;
# Reload the Apache config with &amp;lt;tt&amp;gt;sudo systemctl reload apache2&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;VirtualHost *:80&amp;gt;&lt;br /&gt;
     ServerName example.com&lt;br /&gt;
     ServerAlias *.example.com&lt;br /&gt;
     ServerAdmin example@csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
     #DocumentRoot /users/example/www/&lt;br /&gt;
     Redirect permanent / https://example.com/&lt;br /&gt;
 &lt;br /&gt;
     ErrorLog /var/log/apache2/example-error.log&lt;br /&gt;
     CustomLog /var/log/apache2/example-access.log combined&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 &amp;lt;VirtualHost csclub:443&amp;gt;&lt;br /&gt;
     SSLEngine on&lt;br /&gt;
     SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem&lt;br /&gt;
     SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem&lt;br /&gt;
     SSLStrictSNIVHostCheck on&lt;br /&gt;
 &lt;br /&gt;
     ServerName example.com&lt;br /&gt;
     ServerAlias *.example.com&lt;br /&gt;
     ServerAdmin example@csclub.uwaterloo.ca&lt;br /&gt;
 &lt;br /&gt;
     DocumentRoot /users/example/www&lt;br /&gt;
 &lt;br /&gt;
     ErrorLog /var/log/apache2/example-error.log&lt;br /&gt;
     CustomLog /var/log/apache2/example-access.log combined&lt;br /&gt;
 &amp;lt;/VirtualHost&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== acme.sh ==&lt;br /&gt;
We are using [https://github.com/acmesh-official/acme.sh acme.sh] for provisioning SSL certificates for some of our *.csclub.cloud domains. It is currently set up under /root/.acme.sh on biloba.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;NOTE&amp;lt;/b&amp;gt;: acme.sh has a cron job which automatically renews certificates before they expire and reloads NGINX, so you do not have to do anything after issuing and installing a certificate (i.e. &amp;quot;set-and-forget&amp;quot;).&lt;br /&gt;
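&lt;br /&gt;
You can confirm the renewal cron job with &amp;lt;code&amp;gt;crontab -l&amp;lt;/code&amp;gt; as root on biloba; it should contain an entry along these lines (the exact minute is randomized at install time):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
56 0 * * * &amp;quot;/root/.acme.sh&amp;quot;/acme.sh --cron --home &amp;quot;/root/.acme.sh&amp;quot; &amp;gt; /dev/null&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;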
&lt;br /&gt;
=== How to add a new SSL cert for a custom domain on CSC cloud ===&lt;br /&gt;
Let&#039;s say user &amp;lt;code&amp;gt;ctdalek&amp;lt;/code&amp;gt; wants &amp;lt;code&amp;gt;mydomain.com&amp;lt;/code&amp;gt; to point to a VM on CSC cloud.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
TLDR:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Obtain the cert.&lt;br /&gt;
# If a subdomain was also requested, pass the -d option multiple times, e.g.&lt;br /&gt;
# `-d mydomain.com -d sub.mydomain.com`. Make sure the &amp;quot;main&amp;quot; domain is specified first.&lt;br /&gt;
acme.sh --issue -d mydomain.com -w /var/www&lt;br /&gt;
&lt;br /&gt;
# Install the cert.&lt;br /&gt;
# If a subdomain was also requested, only specify the &amp;quot;main&amp;quot; domain.&lt;br /&gt;
acme.sh --install-cert -d mydomain.com \&lt;br /&gt;
    --key-file /etc/nginx/ceod/member-ssl/mydomain.com.key \&lt;br /&gt;
    --fullchain-file /etc/nginx/ceod/member-ssl/mydomain.com.chain \&lt;br /&gt;
    --reloadcmd &amp;quot;/root/bin/reload-nginx.sh&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Create a vhost file.&lt;br /&gt;
# Look at the other files in the same directory for inspiration.&lt;br /&gt;
# Make sure the file starts with the username and an underscore, e.g. &amp;quot;ctdalek_&amp;quot;,&lt;br /&gt;
# because this is how ceod keeps track of the vhosts.&lt;br /&gt;
# Make sure to set the custom domain name(s) and paths to the SSL key/cert.&lt;br /&gt;
vim /etc/nginx/ceod/member-vhosts/ctdalek_mydomain.com&lt;br /&gt;
&lt;br /&gt;
# Finally, reload NGINX on both biloba and chamomile. The /etc/nginx/ceod directory&lt;br /&gt;
# is shared between them.&lt;br /&gt;
/root/bin/reload-nginx.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd /opt&lt;br /&gt;
git clone --depth 1 https://github.com/acmesh-official/acme.sh&lt;br /&gt;
cd acme.sh&lt;br /&gt;
./acme.sh --install -m syscom@csclub.uwaterloo.ca&lt;br /&gt;
. &amp;quot;/root/.acme.sh/acme.sh.env&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;b&amp;gt;Important&amp;lt;/b&amp;gt;: If invoking acme.sh from another program, it needs the environment variables set in acme.sh.env. Currently, that is just&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
LE_WORKING_DIR=&amp;quot;/root/.acme.sh&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For testing purposes, make sure to use the Let&#039;s Encrypt test server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh --set-default-ca --server letsencrypt_test&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== NGINX setup ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir -p /var/www/.well-known/acme-challenge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Add the following snippet to your default NGINX file (e.g. /etc/nginx/sites-enabled/default):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  # For Let&#039;s Encrypt&lt;br /&gt;
  location /.well-known/acme-challenge/ {&lt;br /&gt;
    alias /var/www/.well-known/acme-challenge/;&lt;br /&gt;
  }&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now assuming that biloba has the IP address for *.csclub.cloud, you can test that everything is working:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh --issue -d app.merenber.csclub.cloud -w /var/www&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To install a certificate after it&#039;s been issued:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh --install-cert -d app.merenber.csclub.cloud \&lt;br /&gt;
    --key-file /etc/nginx/ceod/member-ssl/app.merenber.csclub.cloud.key \&lt;br /&gt;
    --fullchain-file /etc/nginx/ceod/member-ssl/app.merenber.csclub.cloud.chain \&lt;br /&gt;
    --reloadcmd &amp;quot;/root/bin/reload-nginx.sh&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
At this point, you should add your NGINX vhost file which uses that SSL certificate.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
To remove a certificate:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh --remove -d app.merenber.csclub.cloud&lt;br /&gt;
rm -r /root/.acme.sh/app.merenber.csclub.cloud&lt;br /&gt;
rm /etc/nginx/ceod/member-ssl/app.merenber.csclub.cloud.chain&lt;br /&gt;
rm /etc/nginx/ceod/member-ssl/app.merenber.csclub.cloud.key&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Don&#039;t forget to remove the NGINX vhost file too.&lt;br /&gt;
&lt;br /&gt;
Once you think you&#039;re ready, use a real ACME provider, e.g.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh --set-default-ca --server letsencrypt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since we have a [https://zerossl.com ZeroSSL] account, and ZeroSSL has no rate limit, we are going to use that instead:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
acme.sh  --register-account  --server zerossl \&lt;br /&gt;
        --eab-kid  xxxxxxxxxxxx  \&lt;br /&gt;
        --eab-hmac-key  xxxxxxxxx&lt;br /&gt;
acme.sh --set-default-ca  --server zerossl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DNS challenge ===&lt;br /&gt;
To obtain a wildcard certificate (e.g. *.k8s.csclub.cloud), you will need to perform the DNS-01 challenge. We are going to use nsupdate to interact with our BIND9 server on dns1.&lt;br /&gt;
&lt;br /&gt;
On dns1, run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
tsig-keygen csc-cloud&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Paste the output into the appropriate section in /etc/bind/named.conf.local. Also paste it into a file somewhere on biloba, e.g. /etc/csc/csc-cloud-tsig.key.&lt;br /&gt;
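&lt;br /&gt;
The output of &amp;lt;code&amp;gt;tsig-keygen&amp;lt;/code&amp;gt; looks something like this (secret elided):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
key &amp;quot;csc-cloud&amp;quot; {&lt;br /&gt;
        algorithm hmac-sha256;&lt;br /&gt;
        secret &amp;quot;...&amp;quot;;&lt;br /&gt;
};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;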
&lt;br /&gt;
Add the following to the csclub.cloud zone block:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  allow-update {&lt;br /&gt;
    !{&lt;br /&gt;
      !127.0.0.1;&lt;br /&gt;
      !::1;&lt;br /&gt;
      !129.97.134.0/24;&lt;br /&gt;
      !2620:101:f000:4901::/64;&lt;br /&gt;
      any;&lt;br /&gt;
    };&lt;br /&gt;
    key csc-cloud;&lt;br /&gt;
  };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(We&#039;re basically trying to restrict updates to the given IP ranges. See https://serverfault.com/a/417229.)&lt;br /&gt;
&lt;br /&gt;
The &#039;bind&#039; user can&#039;t write to files under /etc/bind, so we&#039;re going to move our zone file to /var/lib/bind instead.&lt;br /&gt;
Comment out &#039;file &amp;quot;/etc/bind/db.csclub.cloud&amp;quot;;&#039; from named.conf.local and add this line below it:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  file &amp;quot;/var/lib/bind/db.csclub.cloud&amp;quot;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then run:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cp /etc/bind/db.csclub.cloud /var/lib/bind/db.csclub.cloud&lt;br /&gt;
  chown bind:bind /var/lib/bind/db.csclub.cloud&lt;br /&gt;
  rndc reload&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On biloba, check that everything&#039;s working:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  nsupdate -k /etc/csc/csc-cloud-tsig.key -v &amp;lt;&amp;lt;EOF&lt;br /&gt;
  update add test.csclub.cloud 300 A 0.0.0.0&lt;br /&gt;
  send&lt;br /&gt;
  EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Use a tool such as &amp;lt;code&amp;gt;dig&amp;lt;/code&amp;gt; to make sure that the update was successful.&lt;br /&gt;
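For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  dig -t a @dns1 test.csclub.cloud&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;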
If it worked, you can delete the record:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  nsupdate -k /etc/csc/csc-cloud-tsig.key -v &amp;lt;&amp;lt;EOF&lt;br /&gt;
  delete test.csclub.cloud&lt;br /&gt;
  send&lt;br /&gt;
  EOF&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now we are ready to actually perform the challenge with acme.sh:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  export NSUPDATE_SERVER=&amp;quot;dns1.csclub.uwaterloo.ca&amp;quot;&lt;br /&gt;
  export NSUPDATE_KEY=&amp;quot;/etc/csc/csc-cloud-tsig.key&amp;quot;&lt;br /&gt;
  acme.sh --issue --dns dns_nsupdate -d &#039;k8s.csclub.cloud&#039; -d &#039;*.k8s.csclub.cloud&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(If something goes wrong, use the &amp;lt;code&amp;gt;--debug&amp;lt;/code&amp;gt; flag.)&lt;br /&gt;
&lt;br /&gt;
If all went well, just install the certificate as usual:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  acme.sh --install-cert -d k8s.csclub.cloud \&lt;br /&gt;
    --key-file /etc/nginx/ceod/syscom-ssl/k8s.csclub.cloud.key \&lt;br /&gt;
    --fullchain-file /etc/nginx/ceod/syscom-ssl/k8s.csclub.cloud.chain \&lt;br /&gt;
    --reloadcmd &#039;systemctl reload nginx&#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5051</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Main_Page&amp;diff=5051"/>
		<updated>2023-07-31T04:30:00Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This is the Wiki of the [[Computer Science Club]]. Feel free to start adding pages and information.&lt;br /&gt;
&lt;br /&gt;
[[Special:AllPages]]&lt;br /&gt;
&lt;br /&gt;
== Member/Club Rep Documentation ==&lt;br /&gt;
To access our Linux machines, see [[How to SSH]] and select one of the general-use machines from [[Machine List#General-Use Servers]].&lt;br /&gt;
&lt;br /&gt;
To host a website, see [[Web Hosting]]. If you are trying to host websites for clubs, see [[Club Hosting]].&lt;br /&gt;
&lt;br /&gt;
To use our VPS services (similar to Linode and Amazon EC2), see [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]. Note that you&#039;ll need to activate your account on one of CSC&#039;s machines before using the management panel.&lt;br /&gt;
&lt;br /&gt;
To view instructions on playing music in the office, see [[Music]].&lt;br /&gt;
&lt;br /&gt;
To use our Nextcloud instance (similar to Google Drive and Dropbox), go to [https://files.csclub.uwaterloo.ca CSC Files].&lt;br /&gt;
&lt;br /&gt;
=== Guides ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[New Member Guide]]&lt;br /&gt;
* [[Club Hosting]]&lt;br /&gt;
* [[Web Hosting]]&lt;br /&gt;
* [[Git Hosting]]&lt;br /&gt;
* [[How to IRC]]&lt;br /&gt;
* [[How to SSH]]&lt;br /&gt;
* [[MySQL]]&lt;br /&gt;
* [https://docs.cloud.csclub.uwaterloo.ca/ CSC Cloud Documentation]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== News and Events ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Meetings]]&lt;br /&gt;
* [[Talks]]&lt;br /&gt;
* [[Projects]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Committees Documentation ==&lt;br /&gt;
=== Club Operation ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Budget Guide]]&lt;br /&gt;
* [[ceo]]&lt;br /&gt;
* [[Exec Manual]]&lt;br /&gt;
* [[MEF Guide]]&lt;br /&gt;
* [[Office Policies]]&lt;br /&gt;
* [[Office Staff]]&lt;br /&gt;
* [[Sysadmin Guide]]&lt;br /&gt;
* [[Imapd Guide]]&lt;br /&gt;
* [[SCS Guide]]&lt;br /&gt;
* [[Kerberos | Password Reset ]]&lt;br /&gt;
* [[Keys and Fobs]]&lt;br /&gt;
&lt;br /&gt;
* [[Talks Guide]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Hardware Infrastructure (the bare metals) ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Disk Drive RMA Process]]&lt;br /&gt;
* [[Hardware]]&lt;br /&gt;
* [[Machine List]]&lt;br /&gt;
* [[IPMI101]]&lt;br /&gt;
* [[New NetApp]]&lt;br /&gt;
* [[Switches]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Software Infrastructure ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[ADFS]]&lt;br /&gt;
* [[Authentication]]&lt;br /&gt;
* [[Backups]]&lt;br /&gt;
* [[DNS]]&lt;br /&gt;
* [[Debian Repository]]&lt;br /&gt;
* [[Firewall]]&lt;br /&gt;
* [[Kerberos]]&lt;br /&gt;
* [[Keycloak]]&lt;br /&gt;
* [[KVM]]&lt;br /&gt;
* [[LDAP]]&lt;br /&gt;
* [[Network]]&lt;br /&gt;
* [[New CSC Machine]]&lt;br /&gt;
* [[NFS/Kerberos]]&lt;br /&gt;
* [[Observability]]&lt;br /&gt;
* [[OID Assignment]]&lt;br /&gt;
* [[Podman]]&lt;br /&gt;
* [[Scratch]]&lt;br /&gt;
* [[SNMP]]&lt;br /&gt;
* [[SSL]]&lt;br /&gt;
* [[Syscom Todo]]&lt;br /&gt;
* [[Systemd-nspawn]]&lt;br /&gt;
* [[Two-Factor Authentication]]&lt;br /&gt;
* [[UID/GID Assignment]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Services ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Application List]]&lt;br /&gt;
* [[BigBlueButton]]&lt;br /&gt;
* [[Mail]]&lt;br /&gt;
* [[Mailing Lists]]&lt;br /&gt;
* [[Mirror]]&lt;br /&gt;
* [[Music]]&lt;br /&gt;
* [[Nextcloud]]&lt;br /&gt;
* [[Printing]]&lt;br /&gt;
* [[Pulseaudio]]&lt;br /&gt;
* [[Webmail]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== CSC Cloud ===&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Ceph]]&lt;br /&gt;
* [[Cloud Networking]]&lt;br /&gt;
* [[CloudStack]]&lt;br /&gt;
* [[CloudStack Templates]]&lt;br /&gt;
* [[Kubernetes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Miscellaneous ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Acronyms]]&lt;br /&gt;
* [[Budget]]&lt;br /&gt;
* [[Executive]]&lt;br /&gt;
* [[Past Executive]]&lt;br /&gt;
* [[History]]&lt;br /&gt;
* [[Library]]&lt;br /&gt;
* [[MEF Proposals]]&lt;br /&gt;
* [[Proposed Constitution Changes]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Historical or Obsolete Pages ==&lt;br /&gt;
&amp;lt;div style=&amp;quot;-webkit-column-count:3; -moz-column-count:3; column-count:3;&amp;quot;&amp;gt;&lt;br /&gt;
* [[Robot Arm]]&lt;br /&gt;
* [[Webcams]]&lt;br /&gt;
* [[Website]]&lt;br /&gt;
* [[Digital Cutter]]&lt;br /&gt;
* [[Electronics]]&lt;br /&gt;
* [[NetApp]]&lt;br /&gt;
* [[Frosh]]&lt;br /&gt;
* [[Virtualization (LXC Containers)]]&lt;br /&gt;
* [[Serial Connections]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
	<entry>
		<id>https://wiki.csclub.uwaterloo.ca/index.php?title=Debian_12_Transition&amp;diff=5050</id>
		<title>Debian 12 Transition</title>
		<link rel="alternate" type="text/html" href="https://wiki.csclub.uwaterloo.ca/index.php?title=Debian_12_Transition&amp;diff=5050"/>
		<updated>2023-07-31T03:57:15Z</updated>

		<summary type="html">&lt;p&gt;Y266shen: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Upgrade steps ==&lt;br /&gt;
1. Create the /etc/apt/keyrings folder.&lt;br /&gt;
&lt;br /&gt;
2. Download the CSC keyring into it:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -O /etc/apt/keyrings/csclub.gpg http://debian.csclub.uwaterloo.ca/csclub.gpg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Make sure that the CSC keyring is the only one in /etc/apt/trusted.gpg:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
gpg --no-options --show-keys /etc/apt/trusted.gpg&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Delete /etc/apt/trusted.gpg and its backup file:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
rm -f /etc/apt/trusted.gpg /etc/apt/trusted.gpg~&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Replace the old-style /etc/apt/sources.list and /etc/apt/sources.list.d/*.list files with the new Deb822 &amp;quot;sources&amp;quot; style (see /etc/apt/sources.list.d/*.sources on sorbitol; don&#039;t copy the one for the Dell repo). Add a helpful note in /etc/apt/sources.list for other syscom members:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# See /etc/apt/sources.list.d/*.sources&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
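For reference, a Deb822-style &amp;quot;.sources&amp;quot; file has roughly the shape below. The suites and components are illustrative placeholders; use the actual *.sources files on sorbitol as the template rather than copying this.&lt;br /&gt;

```text
# /etc/apt/sources.list.d/debian.sources (illustrative sketch only)
Types: deb
URIs: http://deb.debian.org/debian
Suites: bookworm bookworm-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.gpg
```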
&lt;br /&gt;
6. apt update &amp;amp;&amp;amp; apt dist-upgrade&lt;br /&gt;
&lt;br /&gt;
7. apt autoremove --purge&lt;br /&gt;
&lt;br /&gt;
8. During the upgrade, accept the new configuration files (choose the &#039;Y&#039; option)&lt;br /&gt;
for the following files:&lt;br /&gt;
* /etc/fail2ban/fail2ban.conf&lt;br /&gt;
* /etc/fail2ban/jail.conf&lt;br /&gt;
* /etc/fail2ban/filter.d/sshd.conf&lt;br /&gt;
Everything else should keep the old file.&lt;br /&gt;
&lt;br /&gt;
9. Copy the following files from sorbitol:&lt;br /&gt;
* /etc/fail2ban/fail2ban.local&lt;br /&gt;
* /etc/fail2ban/jail.local&lt;br /&gt;
* /etc/fail2ban/filter.d/sshd.local&lt;br /&gt;
Then restart fail2ban.&lt;br /&gt;
&lt;br /&gt;
10. If the &#039;ntp&#039; package is installed, purge it and install systemd-timesyncd instead. Enable the systemd-timesyncd service and copy /etc/systemd/timesyncd.conf.d/csclub.conf from sorbitol. Start the service and make sure it&#039;s working.&lt;br /&gt;
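A timesyncd drop-in has this general shape (the server below is a placeholder, not CSC&#039;s; copy the real csclub.conf from sorbitol):&lt;br /&gt;

```text
# /etc/systemd/timesyncd.conf.d/csclub.conf -- illustrative placeholder
[Time]
NTP=ntp.example.org
```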
&lt;br /&gt;
11. Get rid of python2 if it&#039;s still installed:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
apt purge python2.7-minimal&lt;br /&gt;
apt autoremove --purge&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Pending machines ==&lt;br /&gt;
Machines/containers that have yet to be upgraded to Debian 12. Remove each entry once its upgrade is done.&lt;br /&gt;
&lt;br /&gt;
=== General-use servers ===&lt;br /&gt;
&lt;br /&gt;
* corn-syrup: low on disk space (&amp;amp;lt;10G)&lt;br /&gt;
&lt;br /&gt;
=== Syscom Only ===&lt;br /&gt;
&lt;br /&gt;
* xylitol: later?&lt;br /&gt;
** xylitol runs all sorts of critical services&lt;br /&gt;
* phosphoric-acid: later?&lt;br /&gt;
** phosphoric-acid runs web&lt;br /&gt;
* yerba-mate&lt;br /&gt;
* cobalamin&lt;br /&gt;
* potassium-benzoate: ugh, Ubuntu, and we can&#039;t shut down the mirror&lt;br /&gt;
&lt;br /&gt;
=== Cloud ===&lt;br /&gt;
&lt;br /&gt;
Everything. We will need to wait until Ceph supports bookworm.&lt;br /&gt;
&lt;br /&gt;
=== Containers ===&lt;br /&gt;
&lt;br /&gt;
* on xylitol&lt;br /&gt;
** auth1&lt;br /&gt;
** mail&lt;br /&gt;
** chat&lt;br /&gt;
* on phosphoric-acid&lt;br /&gt;
** caffeine&lt;br /&gt;
** coffee&lt;br /&gt;
** prometheus&lt;/div&gt;</summary>
		<author><name>Y266shen</name></author>
	</entry>
</feed>