Mount MicroCeph-backed Block Devices

Ceph RBDs (RADOS Block Devices) are virtual block devices backed by the Ceph storage cluster. This tutorial will guide you through mounting block devices using MicroCeph.

We will achieve this by creating an RBD image on the MicroCeph-deployed Ceph cluster, mapping it on the client machine, and then mounting it.

Warning

MicroCeph, as an isolated snap, cannot perform certain elevated operations such as mapping an RBD image to the host. Therefore, it is recommended to use the client tools as described in this documentation, even if the client machine is a MicroCeph node itself.

MicroCeph Operations:

Check the Ceph cluster’s status:

$ sudo ceph -s
cluster:
    id:     90457806-a798-47f2-aca1-a8a93739941a
    health: HEALTH_OK

services:
    mon: 1 daemons, quorum workbook (age 36m)
    mgr: workbook(active, since 50m)
    osd: 3 osds: 3 up (since 17m), 3 in (since 47m)

data:
    pools:   2 pools, 33 pgs
    objects: 21 objects, 13 MiB
    usage:   94 MiB used, 12 GiB / 12 GiB avail
    pgs:     33 active+clean
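
The snap also ships its own status command, which is a handy optional cross-check of the same cluster from MicroCeph's point of view (nodes, services and disks):

$ sudo microceph status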

Create a pool for RBD images:

$ sudo ceph osd pool create block_pool
pool 'block_pool' created

$ sudo ceph osd lspools
1 .mgr
2 block_pool

$ sudo rbd pool init block_pool
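
Initialising the pool tags it for use by RBD. As an optional check, the following should list the 'rbd' application on the pool:

$ sudo ceph osd pool application get block_pool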

Create an RBD image:

$ sudo rbd create bd_foo --size 8192 --image-feature layering -p block_pool
$ sudo rbd list -p block_pool
bd_foo
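
If you want to double-check the image parameters (size, object size, enabled features), rbd info prints them; this step is optional:

$ sudo rbd info bd_foo -p block_pool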

Client Operations:

Install the ‘ceph-common’ package:

$ sudo apt install ceph-common

This step is required even if the client machine is a MicroCeph node itself.
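
As an optional sanity check, confirm that the client tools are installed and on the PATH:

$ ceph --version
$ rbd --version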

Fetch the ceph.conf and ceph.keyring files:

A keyring for any CephX user that has access to RBD devices will work; for the sake of simplicity, we are using the admin keys in this example.

$ cat /var/snap/microceph/current/conf/ceph.conf
# # Generated by MicroCeph, DO NOT EDIT.
[global]
run dir = /var/snap/microceph/1039/run
fsid = 90457806-a798-47f2-aca1-a8a93739941a
mon host = 192.168.X.Y
public_network = 192.168.X.Y/24
auth allow insecure global id reclaim = false
ms bind ipv4 = true
ms bind ipv6 = false

$ cat /var/snap/microceph/current/conf/ceph.keyring
# Generated by MicroCeph, DO NOT EDIT.
[client.admin]
    key = AQCNTXlmohDfDRAAe3epjquyZGrKATDhL8p3og==

These files are located at the paths shown above on any MicroCeph node. Moving forward, we will assume that they are available at the same paths on the client.
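
If the client is a separate machine, copy the two files over from any MicroCeph node before proceeding. A minimal sketch, assuming a reachable node named 'microceph-node' (a placeholder hostname) and keeping the same destination paths on the client:

$ sudo mkdir -p /var/snap/microceph/current/conf
$ sudo scp microceph-node:/var/snap/microceph/current/conf/ceph.conf /var/snap/microceph/current/conf/
$ sudo scp microceph-node:/var/snap/microceph/current/conf/ceph.keyring /var/snap/microceph/current/conf/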

Map the RBD image on the client:

$ sudo rbd map bd_foo \
    --name client.admin \
    -m 192.168.X.Y \
    -k /var/snap/microceph/current/conf/ceph.keyring \
    -c /var/snap/microceph/current/conf/ceph.conf \
    -p block_pool
/dev/rbd0
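
Before creating a filesystem, you can optionally confirm that the kernel has the image mapped; rbd device list (also available as rbd showmapped) should show bd_foo from block_pool mapped to /dev/rbd0:

$ sudo rbd device list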

$ sudo mkfs.ext4 -m0 /dev/rbd0
mke2fs 1.46.5 (30-Dec-2021)
Discarding device blocks: done
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: 1deeef7b-ceaf-4882-a07a-07a28b5b2590
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the device at a suitable path:

$ sudo mkdir /mnt/new-mount
$ sudo mount /dev/rbd0 /mnt/new-mount
$ cd /mnt/new-mount

With this, you now have a block device mounted at /mnt/new-mount on your client machine, ready for IO.
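
Optionally, confirm the mount and its capacity (roughly the 8192 MiB we chose when creating the image) with a standard df:

$ df -h /mnt/new-mount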

Perform IO and observe the ceph cluster:

Write a file on the mounted device:

$ sudo dd if=/dev/zero of=random.img count=1 bs=10M
...
10485760 bytes (10 MB, 10 MiB) copied, 0.0176554 s, 594 MB/s

$ ls -l
...
-rw-r--r-- 1 root root 10485760 Jun 24 17:02 random.img

Ceph cluster state post IO:

$ sudo ceph -s
cluster:
    id:     90457806-a798-47f2-aca1-a8a93739941a
    health: HEALTH_OK

services:
    mon: 1 daemons, quorum workbook (age 37m)
    mgr: workbook(active, since 51m)
    osd: 3 osds: 3 up (since 17m), 3 in (since 48m)

data:
    pools:   2 pools, 33 pgs
    objects: 24 objects, 23 MiB
    usage:   124 MiB used, 12 GiB / 12 GiB avail
    pgs:     33 active+clean

Comparing the ceph status output before and after writing the file shows that the MicroCeph cluster's usage has grown by 30 MiB, which is three times the size of the file we wrote (10 MiB). This is because MicroCeph configures 3-way replication by default.
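
The replication factor behind this arithmetic can be read straight from the pool; on a default MicroCeph deployment the following optional check should report size: 3:

$ sudo ceph osd pool get block_pool size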