Ceph

We are running a three-node Ceph cluster on riboflavin, ginkgo and biloba for the purpose of cloud storage. Most Ceph services are running on riboflavin or ginkgo; biloba is just providing a tiny bit of extra storage space.

Official documentation: https://docs.ceph.com/en/latest/

At the time this page was written, the latest release of Ceph was 'Pacific'; check the documentation site above for the current release.

Bootstrap

The instructions below were adapted from https://docs.ceph.com/en/pacific/cephadm/install/.

riboflavin was used as the bootstrap host, since it has the most storage.

Add the following to /etc/apt/sources.list.d/ceph.list:

deb http://mirror.csclub.uwaterloo.ca/ceph/debian-pacific/ bullseye main

Download the Ceph release key for the Debian packages:

wget -O /etc/apt/trusted.gpg.d/ceph.release.gpg https://download.ceph.com/keys/release.gpg

Now run:

apt update
apt install cephadm podman
cephadm bootstrap --mon-ip 172.19.168.25

For the rest of the instructions below, the ceph command can be run inside a Podman container by running cephadm shell. Alternatively, you can install the ceph-common package to run ceph directly on the host.
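
For example, either of the following should show the cluster status, which also serves as a quick check that the bootstrap succeeded:

cephadm shell -- ceph -s

or, with ceph-common installed:

apt install ceph-common
ceph -s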

Add the disks for that host:

ceph orch daemon add osd riboflavin:/dev/sdb
ceph orch daemon add osd riboflavin:/dev/sdc

Note: Unfortunately, Ceph didn't like it when I used one of the /dev/disk/by-id paths, so I had to use the /dev/sdX paths instead. I'm not sure what will happen if the device names change at boot. Let's just cross our fingers and pray.
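
To see which disks cephadm has discovered on each host, and which of them it considers available for OSDs, run:

ceph orch device ls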

Add more hosts:

ceph orch host add ginkgo 172.19.168.22 --labels _admin
ceph orch host add biloba 172.19.168.23
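
To confirm that the hosts were added and that ginkgo carries the _admin label:

ceph orch host ls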

Add each available disk on each of the additional hosts.
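
A sketch of what this looks like; the device paths below are placeholders, and the real ones should be taken from the ceph orch device ls output:

ceph orch daemon add osd ginkgo:/dev/sdb
ceph orch daemon add osd ginkgo:/dev/sdc
ceph orch daemon add osd biloba:/dev/sdb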

Disable unnecessary services:

ceph orch rm alertmanager
ceph orch rm grafana
ceph orch rm node-exporter
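
Afterwards, the removed services should no longer appear in the service list:

ceph orch ls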

Set the autoscale profile to scale-up instead of scale-down:

ceph osd pool set autoscale-profile scale-up
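
The autoscaler's view of each pool, including the PG counts it has computed, can be checked with:

ceph osd pool autoscale-status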

Set the default pool replication factor to 2 instead of 3:

ceph config set global osd_pool_default_size 2
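
This only affects pools created from now on; existing pools keep their current size. To confirm the setting is stored in the monitor config database:

ceph config dump | grep osd_pool_default_size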

Deploy the Managers and Monitors on riboflavin and ginkgo only:

ceph orch apply mon --placement '2 riboflavin ginkgo'
ceph orch apply mgr --placement '2 riboflavin ginkgo'
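
To confirm where the monitor and manager daemons actually landed:

ceph orch ps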

CloudStack Primary Storage

We are using RBD (RADOS Block Device) for CloudStack primary storage. The instructions below were adapted from https://docs.ceph.com/en/pacific/rbd/rbd-cloudstack/.

Create and initialize a pool:

ceph osd pool create cloudstack
rbd pool init cloudstack
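
The rbd-cloudstack documentation also recommends giving CloudStack its own restricted CephX user rather than using client.admin. A sketch, where the name client.cloudstack is just a convention and should match whatever user CloudStack is configured with:

ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=cloudstack'

CloudStack will ask for this user and its key when the RBD pool is added as primary storage.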