My adventures with Ceph Storage. Part 10: Upgrade the cluster


Like any software solution, Ceph is subject to minor and major releases. My entire series of posts has been written using the Giant release (0.87), but by the time I completed the series, Hammer (0.94) had been released. Note that Ceph, like other Linux software, names its major releases alphabetically, so Giant is the 7th major release. Since Ceph publishes both minor and major versions, it's important to know how to upgrade it.

Also available in this series:
Part 1: Introduction
Part 2: Architecture for Dummies
Part 3: Design the nodes
Part 4: deploy the nodes in the Lab
Part 5: install Ceph in the lab
Part 6: Mount Ceph as a block device on linux machines
Part 7: Add a node and expand the cluster storage
Part 8: Veeam clustered repository
Part 9: failover scenarios during Veeam backups

Rolling Ceph Storage upgrades

Ceph Storage has been designed to be always up and running, no matter what happens to single components or entire nodes. The upgrade procedure has been developed with the same concept in mind: Ceph allows rolling upgrades of all the nodes and components while the overall cluster keeps serving its users. There is a specific order that needs to be followed when doing an upgrade:

Monitor servers
OSD (Object Storage Daemon) servers
Metadata servers
RADOS Gateway servers

In my cluster I'm using only monitors and OSD servers, so I will show you how to upgrade these two. The Ceph website has a great page about upgrades, but in this post I've tried to highlight the needed commands and explain them a bit. If you are upgrading from or to a different version, check the upgrade page first: there is some version-specific information, for example if you are running an older release. During the process the cluster will run for a while in a mixed-version state; be sure to complete the procedure and don't leave the cluster in this condition for a long time.

Prepare ceph-deploy

In Part 5 I've described ceph-deploy and how to use it to deploy an entire Ceph cluster. Even if the tool doesn't have a specific "upgrade" command, it can also be used for upgrades. With ceph-deploy we will upgrade the installed packages on all the managed nodes, both monitors and OSDs, and on the ceph-deploy machine itself (my admin console). Upgrading the tool itself on my CentOS machines is pretty easy:

sudo yum install ceph-deploy python-pushy

One of the issues I've found on the nodes is the EPEL repository. If you have this repository enabled on a node (and it's enabled and configured by ceph-deploy itself, so...) you will hit an error that prevents a monitor from being updated. This is because EPEL has split python-ceph into separate python-rados and python-rbd packages. These packages claim to obsolete python-ceph, which propagates back to the official Ceph repo, so yum no longer considers python-ceph an installation candidate. The fix seems to be in the latest EPEL packages but has not yet reached the general repo, so by using the testing repository you can work around the issue. To enable the repo, go onto each node, edit /etc/yum.repos.d/epel-testing.repo, and change the line "enabled=0" to "enabled=1". Ignore the Debug and Source repositories.
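Editing the file by hand works fine; if the node has yum-utils installed, the same change can probably be done with a single command (yum-config-manager is part of yum-utils, so take this as a sketch of an alternative rather than the required way):

sudo yum -y install yum-utils
# enable only the main epel-testing repo, leaving the Debug and Source ones disabled
sudo yum-config-manager --enable epel-testing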

Also, remember that in Part 5 we added the Ceph repository to yum. In the config file I've used the "giant" release. Since we are now going to install "hammer", you need to edit /etc/yum.repos.d/ceph.repo and replace "giant" with "hammer" on every line where it appears. This needs to be done on ALL the nodes. You can quickly do it on each node with:

sed -i 's/giant/hammer/g' /etc/yum.repos.d/ceph.repo
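
Since ceph-deploy already relies on passwordless SSH and sudo from the admin node to every cluster node, a small loop can apply the same change everywhere; the hostnames below are the ones of my lab, adjust them to yours:

for node in mon1 mon2 mon3 osd1 osd2 osd3 osd4; do
  ssh $node "sudo sed -i 's/giant/hammer/g' /etc/yum.repos.d/ceph.repo"
done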

Finally, since Ceph is also installed on the admin node to run commands remotely, you need to upgrade the package on this node too. Simply run:

ceph-deploy install --release hammer ceph-admin

And verify the local machine has the latest version of Ceph binaries:

[ceph@ceph-admin ceph-deploy]$ ceph -v
ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)

Upgrade monitors

The second upgrade step is about monitors. In my lab I use 3 monitors, and this is just the bare minimum to live with: monitors should always be an odd number, and if anything happens to one of them while we are upgrading another, the quorum would be lost and the cluster would be unavailable until the node being upgraded comes back. So, in production a better number of monitors would probably be 5. Anyway, let's hope nothing happens to the monitors during the upgrade, and let's proceed.

As said, ceph-deploy doesn't have an upgrade command, but the install command can in fact also be used to upgrade nodes. So, we will do this:

ceph-deploy install --release hammer mon1 mon2 mon3

At the end of the update, restart each monitor, one at a time (see the example for mon3; the syntax is mon.hostname):

[root@mon3 ~]# /etc/init.d/ceph restart mon.mon3
=== mon.mon3 ===
=== mon.mon3 ===
Stopping Ceph mon.mon3 on mon3...kill 1902...done
=== mon.mon3 ===
Starting Ceph mon.mon3 on mon3...
Running as unit run-22714.service.
Starting ceph-create-keys on mon3...

and verify that all monitors are up and participating in the quorum:

[ceph@ceph-admin ceph-deploy]$ ceph mon stat
e1: 3 mons at {mon1=10.2.50.211:6789/0,mon2=10.2.50.212:6789/0,mon3=10.2.50.213:6789/0}, election epoch 14, quorum 0,1,2 mon1,mon2,mon3
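
If you prefer to script the monitor restarts, remember to go strictly one monitor at a time so the quorum is never at risk. A minimal sketch, again assuming passwordless SSH from the admin node and my lab hostnames:

for mon in mon1 mon2 mon3; do
  ssh $mon "sudo /etc/init.d/ceph restart mon.$mon"
  # give the monitor some time to rejoin the quorum before touching the next one
  sleep 30
  ceph mon stat
done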

Upgrade OSD daemons

During the upgrade of an OSD, some placement groups will go into a degraded state, since some of their replicas will be temporarily missing. To avoid unneeded rebalancing activity in the cluster, first of all run this command:

ceph osd set noout

This tells the monitors not to mark OSDs as out of the cluster when they go down, so placement groups will not be rebalanced during the upgrade, avoiding unneeded I/O on the storage. With the cluster in this sort of maintenance mode, it's time to upgrade all the OSD daemons:

ceph-deploy install --release hammer osd1 osd2 osd3 osd4

Once the upgrade is finished, go onto each node and restart all its OSDs. First, retrieve the OSD numbers of each node:

[ceph@ceph-admin ceph-deploy]$ ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.19989 root default
-2 0.29997     host osd1
 0 0.09999         osd.0       up  1.00000          1.00000
 1 0.09999         osd.1       up  1.00000          1.00000
 2 0.09999         osd.2       up  1.00000          1.00000
-3 0.29997     host osd2
 3 0.09999         osd.3       up  1.00000          1.00000
 4 0.09999         osd.4       up  1.00000          1.00000
 5 0.09999         osd.5       up  1.00000          1.00000
-4 0.29997     host osd3
 6 0.09999         osd.6       up  1.00000          1.00000
 7 0.09999         osd.7       up  1.00000          1.00000
 8 0.09999         osd.8       up  1.00000          1.00000
-5 0.29997     host osd4
 9 0.09999         osd.9       up  1.00000          1.00000
10 0.09999         osd.10      up  1.00000          1.00000
11 0.09999         osd.11      up  1.00000          1.00000

So, for example on osd1, to restart the OSDs you will have to run:

/etc/init.d/ceph restart osd.0
/etc/init.d/ceph restart osd.1
/etc/init.d/ceph restart osd.2
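
The same three commands can be wrapped in a small loop; this sketch assumes it's run on osd1, whose OSD IDs are 0 to 2 according to the tree above (adjust the range for the other nodes):

for id in 0 1 2; do
  /etc/init.d/ceph restart osd.$id
done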

And so on for each node. Once all the OSDs have been restarted, ensure each upgraded Ceph OSD daemon has rejoined the cluster:

[ceph@ceph-admin ceph-deploy]$ ceph osd stat
osdmap e181: 12 osds: 12 up, 12 in
flags noout

If all the OSDs are up and in, you can remove the "maintenance mode" and bring the cluster back to a healthy state:

ceph osd unset noout

At this point the cluster is completely upgraded. Note that the same procedure can also be used for minor upgrades.
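
If you want to double-check that every daemon is really running the new release before calling it done, you can ask the daemons themselves; the OSD check works remotely from the admin node, while the monitor check below uses the local admin socket, so it has to run on the monitor node (hostnames and IDs are the ones of my lab):

ceph tell osd.0 version
# repeat for osd.1 ... osd.11
ssh mon1 "sudo ceph daemon mon.mon1 version"
# repeat for mon2 and mon3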

Upgrade clients

There's still one operation left to do: upgrade the different clients accessing the cluster. In my lab I have two clients (the two Linux nodes acting as Veeam repositories that I created in the previous parts), so those two machines need to be upgraded as well. If you installed all the components following the installation guides, you will need to run:

yum install ceph-common librados2 librbd1 python-rados python-rbd

If instead you deployed only the minimum set of components, as I explained in Part 5, visit the page https://ceph.com/rpm-testing/rhel7/x86_64/ and look for the kmod-rbd and kmod-libceph RPMs. Check the version of the RPMs on your system:

[root@repo2 ~]# rpm -q kmod-rbd
 kmod-rbd-3.10-0.1.20140702gitdc9ac62.el7.x86_64
[root@repo2 ~]# rpm -q kmod-libceph
 kmod-libceph-3.10-0.1.20140702gitdc9ac62.el7.x86_64

If there's a newer version on the website, simply update the packages using yum:

yum -y install https://ceph.com/rpm-testing/rhel7/x86_64/kmod-rbd-3.10-0.1.20140702gitdc9ac62.el7.x86_64.rpm https://ceph.com/rpm-testing/rhel7/x86_64/kmod-libceph-3.10-0.1.20140702gitdc9ac62.el7.x86_64.rpm

This is the original command I used for Ceph 0.87. If there's a newer package, replace the URLs with the complete paths to the newer packages and run the command.
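
One last thing to keep in mind: upgrading the kmod packages does not replace the rbd and libceph modules already loaded in the running kernel, so the client keeps using the old code until the modules are reloaded or the machine is rebooted. A rough sketch of a manual reload, where the mount point, device and image names are only placeholders and not necessarily the ones used in the previous parts:

# stop any running Veeam job first, then on the client:
umount /mnt/veeamrepo          # placeholder mount point
rbd unmap /dev/rbd0            # placeholder device
modprobe -r rbd libceph        # unload the old modules
modprobe rbd                   # rbd pulls in libceph as a dependency
rbd map rbd/veeamrepo          # placeholder pool/image, remap as in Part 6
mount /dev/rbd0 /mnt/veeamrepo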

8 thoughts on "My adventures with Ceph Storage. Part 10: Upgrade the cluster"

  1. Thanks for the wonderful tutorial. I have a question regarding the upgrade. When you upgrade the monitors all together, is the cluster unavailable at that moment? I mean, can you access any data until the upgrade finishes? How long does it take? Thank you

    • No, if you only upgrade one monitor at a time, the overall cluster stays up. That's the beauty of a scale-out system.

      • Thank you for your reply. That is nice. The command
        "ceph-deploy install --release hammer mon1 mon2 mon3"
        confused me, I thought all of them were being upgraded at the same time. So, I assume there is backward compatibility between the old and the new version, right? Since at some point we can have 2 monitors on the new version and one on the old, they need to be able to work together, and with a mixed set of OSD versions as well.

        • Yes, usually the monitors can work across different releases; with OSDs there may sometimes be a disk format change. In any case, it's better to read the release notes of each release and check the upgrade section.

  2. Hi @dellock6,

    thanks for your well-organized tutorial. I'd like to upgrade the kernel version. Is there anything to consider when compiling the kernel for Ceph? I mean, is there something we definitely have to check in the kernel configuration?

    Thank you in advance 🙂

    • There's no specific flag or option needed by Ceph in the kernel; just make sure you always have the minimum kernel version required by the running version of Ceph. But since Ceph would already be installed during an upgrade, I believe the requirements would have been satisfied in the first place.

      • Thank you for your response! It's really helpful to me!
