Release notes

The following provides details on major MicroCeph releases, from the squid release onward. The most recent release is listed first.

MicroCeph Tentacle

The Ceph team is happy to announce the release of MicroCeph v20 (tentacle). This is the first stable release in the Tentacle series.

The MicroCeph tentacle release can be installed from the tentacle/stable track.
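
For example, on a host with snapd available, a fresh install from that track looks like this:

    sudo snap install microceph --channel=tentacle/stable

To avoid unattended updates between planned maintenance windows, the snap can then be held with sudo snap refresh --hold microceph.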

Highlights

  • Uses Ceph 20.2.0 (tentacle)

  • Built on Ubuntu 24.04 (core24 snap base)

  • Upgraded to microcluster v3

  • Log rotation for the logs/ directory via logrotate

  • Reference architecture documentation

  • Consolidated charm-microceph and MicroCeph documentation

  • Includes all features and fixes from the squid stable cycle

Important changes

The snap is now built on the core24 base. Hosts must be running an Ubuntu release that supports core24 snaps.

The microcluster dependency has been upgraded from v2 to v3. This is an internal change but affects the underlying cluster database schema.

Known issues

Upgrades from quincy directly to tentacle are not supported. Upgrade to squid first, then to tentacle.
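
A two-step upgrade of the snap might therefore look like the following sketch; wait for the cluster to report healthy before the second step:

    sudo snap refresh microceph --channel=squid/stable
    # verify cluster health before continuing, for example:
    sudo microceph.ceph status
    sudo snap refresh microceph --channel=tentacle/stable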

Users of erasure-coded pools are advised that upstream Ceph developers have identified a bug in which OSDs crash when allow_ec_optimizations is set on a pool, regardless of the allow_ec_overwrites setting, rendering the cluster unusable. Read Tracker Issue 74813 before enabling allow_ec_optimizations on Ceph 20.2.0.
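
For context, such a pool flag is normally toggled with a pool-set command along the lines of the sketch below (the pool name is illustrative). On Ceph 20.2.0, avoid running it until the tracker issue is resolved:

    # shown for recognition only; do not enable on Ceph 20.2.0 (Tracker Issue 74813)
    sudo microceph.ceph osd pool set my-ec-pool allow_ec_optimizations true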

List of pull requests

  • #718: test: upgrade squid to tentacle

  • #714: feat: add log rotation for the logs directory

  • #713: docs: Add reference architecture

  • #707: docs: consolidate MicroCeph charm documentation with MicroCeph docs

  • #706: feat: move to MicroCeph Tentacle, upgrade cluster library to v3, and build on core 24

  • #698: ci: add weekly health report workflow

MicroCeph Squid

The Ceph team is happy to announce the release of MicroCeph v19 (squid). This is the first stable release in the Squid series.

The MicroCeph squid release can be installed from the squid/stable track.

Highlights

  • Uses Ceph 19.2.0 (squid)

  • Support for RBD remote replication

  • CephFS remote replication (enable/disable, status, listing)

  • NFS service support via NFS Ganesha

  • Adopt existing (non-MicroCeph) Ceph clusters into MicroCeph management

  • Availability Zone support for OSDs

  • Cluster maintenance mode with monitor-quorum protection

  • MicroCeph orchestrator module shipped in the snap

  • DSL-based device matching for OSD, WAL, and DB selection

  • Support for modifying RGW SSL certificates at runtime

  • microceph waitready command to verify cluster readiness (see the example after this list)

  • stripingv2 enabled by default in the RBD feature set

  • OSD support for many additional block device types, such as NVMe devices, partitions, and LVM volumes

  • Improved IPv6 support

  • Updated dependencies; now built on Ubuntu 24.04

  • Various fixes and documentation improvements
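
As a quick sketch of the readiness check mentioned above, microceph waitready blocks until the local daemon reports that it is ready, which is handy in provisioning scripts:

    # block until the local MicroCeph daemon is ready to serve requests
    sudo microceph waitready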

Important changes

For added security, MicroCeph now verifies hostnames when a node joins the cluster. This means that the name used when running microceph cluster add <name> must match the hostname of the node where microceph cluster join is run. If the names do not match, the join fails and the message Joining server certificate SAN does not contain join token name is logged to syslog.
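
For example, in a hypothetical two-node setup where the joining host's hostname is node2:

    # on an existing cluster member: generate a join token for the name node2
    sudo microceph cluster add node2
    # on the joining host, whose hostname must be node2
    # (<token> is the output of the previous command)
    sudo microceph cluster join <token>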

Monitors now enforce the v2 (msgr2) protocol. Clients that only support v1 will not be able to connect.
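
For illustration, a msgr2 monitor endpoint in a client's ceph.conf uses the v2 address form shown below; the IP address is a placeholder and 3300 is the default msgr2 port:

    [global]
    mon host = [v2:203.0.113.10:3300]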

The joiner address is now auto-detected from the join token peers when running microceph cluster join; manual address overrides remain supported.

Known issues

iSCSI users are advised that upstream Ceph developers encountered a bug during an upgrade from Ceph 19.1.1 to Ceph 19.2.0. Read Tracker Issue 68215 before attempting an upgrade to 19.2.0.

List of pull requests

Squid stable updates (post-v19.2.0):

  • #711: fix: update Go module dependencies

  • #710: fix: auto-detect joiner address from join token peers

  • #708: fix: resolve references to stale paths

  • #703: fix: increase disk operation timeout

  • #702: fix: resolve monitor refresh loop

  • #700: fix: resolve all Go static check failures and drop the previous linter

  • #699: feat: add support for declarative WAL and DB device usage with execution, cleanup, and validation

  • #697: feat: add support for modifying the RGW SSL certificate

  • #696: fix: wait for RBD mirror health before testing disable operations

  • #695: ci: cache the built snap between jobs

  • #691: fix: re-enable services after a snap disable and enable cycle

  • #688: docs: add a database schema update guide to the developer docs

  • #687: feat: add Availability Zone support

  • #684: refactor: maintenance mode quality improvements

  • #683: feat: add a wait-ready command to verify the cluster is ready

  • #682: docs: fix a documentation link

  • #681: fix: resolve “no disks present” error when adding all disks

  • #680: fix: resolve missing unlock of encrypted WAL and DB at OSD start

  • #679: feat: close inactive issues automatically

  • #677: ci: avoid building Sphinx from source

  • #676: fix: resolve unexpected loop device behaviour

  • #672: fix: make device-node matching conform to the device DSL spec

  • #668: feat: add declarative device matching for OSD selection

  • #661: tests: functional test helper housekeeping

  • #659: fix: amend the command parameter

  • #657: fix: multi-monitor adopt bootstrap

  • #656: fix: add content attributes for content plugs

  • #650: feat: add v2 striping to the default RBD feature set

  • #646: feat: add a format flag to cluster list

  • #643: docs: add HTML meta descriptions

  • #642: docs: add a how-to document for MicroCeph CephFS replication

  • #641: docs: use a ref target for the cluster network how-to

  • #638: docs: refine the get-started tutorial

  • #637: docs: add remote replication explanations

  • #635: fix: pin dqlite to the LTS release

  • #633: docs: create a redirect for a renamed file

  • #632: docs: split up the security overview

  • #631: docs: add a how-to for reporting security issues

  • #630: docs: split up the full-disk encryption documentation

  • #628: feat: add support for enabling and disabling CephFS replication

  • #627: docs: fix a documentation title

  • #626: docs: move the architecture documentation

  • #625: ci: increase wait time for OSDs

  • #624: ci: make the OSD check more robust

  • #622: feat: adopt existing Ceph clusters using MicroCeph

  • #621: fix: improve pristine disk check with ceph-bluestore-tool validation

  • #619: feat: expose useful Ceph tools

  • #616: fix: use ceph-bluestore-tool for wiping disks

  • #607: docs: correct command invocation in client config docs

  • #606: docs: fix section headings

  • #604: fix: implement structured logging with persistent configuration

  • #601: feat: add support for fetching CephFS mirroring status and lists to the replication framework

  • #600: fix: remove unnecessary references to the client from the command

  • #599: test: speed up tests

  • #594: fix: list virtio block disk devices

  • #591: refactor: move sub-process handling to a common package

  • #590: fix: check if disks are pristine before attempting to use them

  • #588: fix: add checks before adding OSD, WAL, or DB devices

  • #585: feat: create only one OSD pool for NFS Ganesha

  • #584: feat: add the MicroCeph orchestrator module to the snap build

  • #583: docs: update documentation to include information about enabling NFS

  • #582: refactor: add an OSD manager to improve testing

  • #578: feat: add CephFS mirror to the service placement interface

  • #575: fix: prevent enabling snapshot replication on RBD pools

  • #574: feat: add NFS support

  • #573: docs: update disk add documentation

  • #572: docs: migrate to the extension-based starter pack

  • #567: feat: enforce v2 for monitors

  • #565: feat: ensure a majority of monitor services remain available before entering maintenance mode

  • #545: docs: MicroCloud integration annotations

Initial v19.2.0 release:

  • #467: Fix: increase timings for osd release

  • #466: Adjust ‘verify_health’ iterations

  • #464: Test: upgrade update

  • #463: Fix: add python3-packaging

  • #462: Test: upgrade reef to local build

  • #461: Test: add reef to squid upgrade test

  • #460: Improve require-osd-release

  • #459: Set the ‘require-osd-release’ option on startup

  • #458: Updated readme.md

  • #457: Modify post-refresh hook to set OSD-release

  • #456: Make remote replication CLI conformant to CLI guidelines

  • #454: Pin LXD and use microcluster with dqlite LTS

  • #447: Update mods, build from noble

  • #443: Bootstrap: wait for daemon

  • #441: Build from noble-proposed

  • #440: Remove tutorial section

  • #438: MicroCeph Remote Replication (3/3): Site Failover/Failback

  • #437: MicroCeph Remote Replication (2/3): RBD Mirroring

  • #433: Docs: fix indexes

  • #432: Use square brackets around IPv6 in ceph.conf

  • #430: Adds support for RO cluster configs

  • #429: Move mounting CephFS shares tutorial to how-to section

  • #428: Move mounting RBD tutorial to how-to section

  • #427: Move multi-node tutorial to how-to section

  • #426: Move multi-node tutorial to how-to section

  • #422: Change tutorial landing page

  • #419: Change explanation landing page

  • #418: Add CephFS to wordlist

  • #417: Move MicroCeph charm to explanation section

  • #416: Fix reference landing page

  • #415: Move single-node tutorial to how-to section

  • #409: Fetch current OSD pool configuration over the API

  • #407: Add interfaces: rbd kernel module and support

  • #405: MicroCeph Remote Replication (1/3): Remote Awareness

  • #401: doc: remove woke-install as prereq for building the docs

  • #400: doc: remove woke-install as prereq for building the docs

  • #398: MicroCeph Remote Replication (2/3): RBD Mirroring

  • #395: Use LTS microcluster