Observability

There are three pillars of observability: metrics, logging and tracing. We are only interested in the first two.

Metrics

All of our machines are, or at least should be, running the Prometheus node exporter. This exposes machine metrics (e.g. RAM usage, disk space) which are scraped by the Prometheus server running at https://prometheus.csclub.uwaterloo.ca (currently a VM on phosphoric-acid). A few specialized exporters run on other machines: a Postfix exporter on mail, an Apache exporter on caffeine, and an NGINX exporter on potassium-benzoate. There is also a custom exporter, written by syscom, running on potassium-benzoate for mirror stats.
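
For a quick overview of the exporters, you can group the built-in up metric by scrape job; the result is the number of targets in each job which are currently up (the job names are whatever is configured in the Prometheus scrape config):

sum by (job) (up)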

Most of the exporters use mutual TLS authentication with the Prometheus server. I set the expiration date for the TLS certs to 10 years. If you are reading this and it is 2031 or later, then go update the certs.

I highly suggest becoming familiar with PromQL, the query language for Prometheus. You can run and visualize some queries at https://prometheus.csclub.uwaterloo.ca/prometheus. For example, here is a query to determine which machines are up or down:

up{job="node_exporter"}


Here's how we determine whether a machine has NFS mounted. The query below returns 1 for machines which have NFS mounted, and no records at all for machines which do not. (We ignore the actual value of node_filesystem_device_error because it returns 1 for machines using Kerberized NFS.)

count by (instance) (node_filesystem_device_error{mountpoint="/users", fstype="nfs"})


Next is a more complicated expression, which can return one of three values:

  • 0: the machine is down
  • 1: the machine is up, but NFS is not mounted
  • 2: the machine is up and NFS is mounted

The or operator in PromQL is key here.

sum by (instance) (
  (count by (instance) (node_filesystem_device_error{mountpoint="/users", fstype="nfs"}))
  or up{job="node_exporter"}
)


We also use AlertManager to send email alerts from Prometheus metrics. We should figure out how to also send messages to IRC or similar.
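
As a rough illustration (this is not necessarily what our alerting rules actually look like), the PromQL expression behind a 'machine is down' alert could be as simple as:

up{job="node_exporter"} == 0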

We also use the Blackbox prober exporter to check if some of our web-based services are up.
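
The blackbox exporter exposes a probe_success metric (1 if the probe succeeded, 0 if it failed) for each probed target, so something like the following should list any web services which are currently failing their checks:

probe_success == 0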

We make some pretty charts on Grafana (https://prometheus.csclub.uwaterloo.ca) from PromQL queries. Grafana also has an 'Explore' page where you can test out queries before making chart panels from them.

Logging

We use a combination of Elastic Beats, Logstash and Loki for collecting, storing and querying our logs; for visualization, we use Grafana. Logstash and Loki are currently both running in the prometheus VM.

I chose Loki over Elasticsearch because Loki is much more space-efficient with its storage, and it also consumes far less RAM and CPU. This means that we can collect a lot of logs without worrying too much about resource usage.

We have Journalbeat and/or Filebeat running on some of our machines to collect logs from sshd, Apache and NGINX. The Beats send these logs to Logstash, which does some pre-processing. The most useful contribution by Logstash is its GeoIP plugin, which allows us to enrich the logs with some geographical information from IP addresses (e.g. add city and country). Logstash sends these logs to Loki, and we can then view these from Grafana.

Note: Sometimes the Loki output plugin for Logstash disappears after a reboot or an upgrade. If you see Logstash complaining about this in the journald logs, run this:

cd /usr/share/logstash
bin/logstash-plugin install logstash-output-loki
systemctl restart logstash

See here for details.

The language for querying logs in Loki is LogQL, which, syntactically, is very similar to PromQL. If you have already learned PromQL, then you should be able to pick up LogQL very easily. You can try out some LogQL queries from the 'Explore' page on Grafana; make sure you toggle the data source to 'Loki' in the top left corner. For the 'topk' queries, you will also want to toggle 'Query type' to 'Instant' rather than 'Range'.

LogQL examples

Here is the number of failed SSH login attempts for each host over a given time range:

sum by (hostname) (
  count_over_time(
    {job="logstash-sshd"} [$__range]
  )
)

Note that $__range is a special global variable in Grafana which is equal to the time range in the top right corner of a chart.

Here are the top 10 IP addresses from which failed SSH login attempts arrived, for a given host and time range:

topk(10,
  sum by (ip_address) (
    count_over_time(
      {job="logstash-sshd",hostname="$hostname"} | json | __error__ = ""
      [$__range]
    )
  )
)

$hostname is a chart variable, which can be configured from a chart's settings.
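
Since Logstash enriches these logs with GeoIP data, you can also break the failed attempts down by country instead of IP address. The country label below is a guess (the exact field name depends on what our Logstash GeoIP filter adds), so check the parsed labels on the 'Explore' page first:

topk(10,
  sum by (geoip_country_name) (
    count_over_time(
      {job="logstash-sshd",hostname="$hostname"} | json | __error__ = ""
      [$__range]
    )
  )
)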

I configured Logstash to send logs to Loki as JSON, but it's a rather hacky solution, so occasionally invalid JSON gets sent; this is why these queries filter out parse errors with __error__ = "".
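
If you're curious how often that happens, you can flip the error filter around; this shows only the log lines which failed to parse as JSON:

{job="logstash-sshd"} | json | __error__ != ""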

Here is the number of HTTP requests for the top 15 distros on our mirror over the last hour:

topk(15,
  sum by (distro) (
    count_over_time(
      {job="logstash-nginx"} | json | __error__ = "" | distro != "server-status"
      [1h]
    )
  )
)



Here is the total number of bytes sent over HTTP for the top 15 distros over the last hour. Note the use of the unwrap operator.

topk(15,
  sum by (distro) (
    sum_over_time(
      {job="logstash-nginx"} | json | __error__ = "" | distro != "server-status" | unwrap bytes
      [1h]
    )
  )
)

You can see more examples on the Mirror Requests dashboard on Grafana.

Some more LogQL examples (webcom)

Here are some queries which the Web Committee may find interesting. Try these out from the 'Explore' page in Grafana after setting the data source to 'Loki' (top left corner). You may optionally create a new dashboard if you think you've written some good queries.

Here's a query to just view the raw logs, parsed as JSON (click on each log to view its labels):

{job="logstash-apache"} | json



For the 'topk' queries below, make sure you toggle 'Query type' to 'Instant' rather than 'Range'.

Here's the number of requests for each of the top 15 User-Agents:

topk(15,
  sum by (agent) (
    count_over_time(
      {job="logstash-apache"} | json
      [$__range]
    )
  )
)



Let's say you want to exclude bots from those results:

topk(15,
  sum by (agent) (
    count_over_time(
      {job="logstash-apache"} | json | agent !~ "(?i).*bot.*"
      [$__range]
    )
  )
)

You can change 'agent' to 'request', 'ip_address', etc.
See the LogQL documentation for more details.

Avoid high cardinality

For both Prometheus and Loki, you must avoid high cardinality labels at all costs. By high cardinality, I mean labels which can take on a very large number of values; for example, using a label to store IP addresses would be a very bad idea. This is because every distinct combination of label values creates a new time series in Prometheus (or a new log stream in Loki), and each series or stream is compressed and stored separately; a high cardinality label therefore blows up the number of series/streams, and with it the storage space usage.
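
To keep an eye on this for Prometheus itself, you can watch Prometheus's own storage metrics; for example, this shows how many time series are currently in the head block (a steadily growing value here is usually a sign that some label has high cardinality):

prometheus_tsdb_head_series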

With Loki, you can instead extract labels from your logs dynamically inside your query. One way to do this is with the json operator; there are other ways as well (see the LogQL docs). This effectively gives us unbounded cardinality from our logs, the tradeoff being that queries may take longer to execute.
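
For instance, if our Loki version supports the json parser's expression syntax, you can extract just the one field you need instead of every JSON field (ip_address here is the same field used in the earlier sshd queries):

sum by (ip) (
  count_over_time(
    {job="logstash-sshd"} | json ip="ip_address" | __error__ = ""
    [1h]
  )
)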

Also, be very careful about what you send to Loki from Logstash - every field in a Logstash message becomes a Loki label. Usage of the prune filter in Logstash is highly recommended.