Dashboards in Ceph have always been a bit of a problem. In the past, I first tried to deploy and run Calamari, but it was a complete failure. I talked about my misadventures in this blog post, where I also suggested a far better solution: Ceph Dash. But now, with the release of Luminous, Ceph is trying again to have its own dashboard. Will it be good this time?
The Ceph manager service (ceph-mgr) was introduced in the Kraken release, and in Luminous it has been extended with a number of new Python modules. One of these is a monitoring web page, simply called “dashboard”. The dashboard module is included in the ceph-mgr package, so if you’ve upgraded to Luminous you already have it! Enabling the dashboard takes a single command:
ceph mgr module enable dashboard
The dashboard module runs on port 7000 by default. The output of “ceph status” will tell you which of your mgr daemons is currently active:
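If port 7000 is already taken on your servers, the listening address and port can be changed through the cluster’s config-key store before enabling the module. The key names below are the ones I know from the Luminous-era documentation, so verify them against your exact release; the mgr daemon only picks up the change after a restart:

```shell
# Bind the dashboard to a specific address and port (Luminous-era key names;
# check "mgr/dashboard/server_addr" and "mgr/dashboard/server_port" against
# the docs for your Ceph release before relying on them).
ceph config-key set mgr/dashboard/server_addr 0.0.0.0
ceph config-key set mgr/dashboard/server_port 8080

# Restart the active mgr so the new settings take effect
# (mon1 is the active mgr in the example cluster of this post).
systemctl restart ceph-mgr@mon1
```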
[cephuser@ceph ceph-deploy]$ ceph status
  cluster:
    id:     cc15a4bb-b4f7-4f95-af17-5b3e796cb8d5
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon1,mon2,mon3
    mgr: mon1(active), standbys: mon2, mon3
    osd: 12 osds: 12 up, 12 in

  data:
    pools:   1 pools, 256 pgs
    objects: 16221 objects, 64746 MB
    usage:   143 GB used, 1055 GB / 1198 GB avail
    pgs:     256 active+clean
In my case, it’s mon1 (my mgr daemons are colocated with the monitors). So, to view the dashboard I simply point my browser at http://mon1:7000/. In general, the active mgr daemon is the one serving the dashboard at any point in time; on a mgr failover, the dashboard moves with it. This is how it appears upon opening the page:
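If you want to find the active mgr from a script instead of eyeballing the ceph status output, the JSON dump can be parsed. This is just a sketch: it assumes jq is installed and that the dump of my Luminous cluster exposes an active_name field, so check the raw output of ceph mgr dump on yours first:

```shell
# Print the name of the active mgr daemon and build the dashboard URL.
# Assumes 'jq' is available and 'ceph mgr dump' includes 'active_name'.
ACTIVE_MGR=$(ceph mgr dump | jq -r '.active_name')
echo "Dashboard should be at http://${ACTIVE_MGR}:7000/"
```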
The dashboard is read-only, but it also has no authentication. If you need to publish it, it’s a much better idea to protect it: put it behind a reverse proxy that answers only to specific IP addresses, rather than exposing port 7000 directly.
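As a sketch of that setup, here is a minimal nginx server block that proxies the dashboard and answers only to a single trusted address. The host name and IP below are placeholders of my own, and the dashboard itself still runs unauthenticated behind the proxy:

```nginx
# Hypothetical nginx vhost publishing the Ceph dashboard to one trusted IP.
server {
    listen 80;
    server_name ceph-dashboard.example.com;   # placeholder name

    # Only this source address may reach the dashboard.
    allow 203.0.113.10;   # replace with your admin workstation IP
    deny  all;

    location / {
        # mon1:7000 is the active mgr from the example in this post.
        proxy_pass http://mon1:7000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

For real deployments you would also terminate TLS here and, if possible, add HTTP basic authentication at the proxy level.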
A quick walk-through
The welcome screen is comparable to the ceph status command: it shows the overall health, the status of the individual components such as monitors and OSDs, and the usage of the cluster. We can drill down into the available menus for more detail, like the components and versions running on each server:
Another nice thing we can see is the real-time performance of the OSDs; this screenshot, for example, was taken during a Veeam backup being written to an RBD volume:
We can clearly observe the scale-out system in action here: multiple write operations are ingested by the cluster, and many OSDs are writing data in parallel.
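The same per-OSD activity the dashboard graphs can also be sampled from the CLI, which is handy when no browser is around. Both commands below need to run against a live cluster:

```shell
# One snapshot of per-OSD commit/apply latency.
ceph osd perf

# Continuous cluster status stream, including client I/O rates.
ceph -w
```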
Even if there is not much information available yet, the native dashboard is a really nice and welcome addition to Ceph. It’s simple, but it has all the basic information we may need. And finally, the pain of installing and configuring a dashboard for Ceph is a thing of the past!