.. _crimson_doc:

======================
Crimson (Tech Preview)
======================

Crimson is the code name of ``crimson-osd``, the next-generation ``ceph-osd``.
It is designed to deliver enhanced performance on fast network and storage devices by leveraging modern technologies, including DPDK and SPDK.

Crimson is intended to be a drop-in replacement for the classic Object Storage Daemon (OSD),
aiming to allow seamless migration from existing ``ceph-osd`` deployments.

The second phase of the project introduces :ref:`seastore`, a complete redesign of the object storage backend built around Crimson's native architecture.
SeaStore is optimized for high-performance storage devices such as NVMe and may not be suitable for traditional HDDs.
Crimson will continue to support BlueStore, ensuring compatibility with HDDs and slower SSDs.

See `ceph.io/en/news/crimson <https://ceph.io/en/news/crimson/>`_.

Crimson is in the tech-preview stage.
See :ref:`Crimson's Developer Guide <crimson_dev_doc>` for developer information.

.. highlight:: console

Deploying Crimson with cephadm
==============================

.. note::
   Cephadm SeaStore support is in `early stages <https://tracker.ceph.com/issues/71946>`_.

The Ceph CI/CD pipeline builds containers with ``crimson-osd`` replacing the standard ``ceph-osd``.

Once a branch at commit <sha1> has been built and is available in
Shaman / Quay, you can deploy it using the cephadm instructions outlined
in :ref:`cephadm` with the following adaptations.

The latest ``main`` branch is built `daily <https://shaman.ceph.com/builds/ceph/main>`_
and the images are available in `quay <https://quay.ceph.io/repository/ceph-ci/ceph?tab=tags>`_
(filter ``crimson-release``).
We recommend using one of the latest available builds, as Crimson evolves rapidly.

Use the ``--image`` flag to specify a Crimson build:

.. prompt:: bash #

   cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1>-crimson-release --allow-mismatched-release bootstrap ...


.. note::
   Crimson builds are available in two variants: ``crimson-debug`` and ``crimson-release``.
   For testing purposes the ``release`` variant should be used.
   The ``debug`` variant is intended primarily for development.

You'll likely need to include the ``--allow-mismatched-release`` flag to use a non-release branch.
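
For example, a minimal bootstrap invocation might look like the following sketch;
``<sha1>`` and ``<mon-ip>`` are placeholders for your chosen build and the
bootstrap host's IP address:

.. prompt:: bash #

   cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1>-crimson-release --allow-mismatched-release bootstrap --mon-ip <mon-ip>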

Crimson CPU allocation
======================

.. note::

   #. Allocation options **cannot** be changed after deployment.
   #. :ref:`vstart.sh <dev_crimson_vstart>` sets these options using the ``--crimson-smp`` flag.

The ``crimson_cpu_num`` parameter defines the number of CPUs used to serve Seastar reactors.
Each reactor is expected to run on a dedicated CPU core.

This parameter **does not have a default value**.
Admins must configure it at the OSD level based on system resources and cluster requirements **before** deploying the OSDs.

We recommend setting ``crimson_cpu_num`` to a value less than the host's
number of CPU cores (``nproc``) divided by the **number of OSDs on that host**.

For example, to allocate eight CPU cores per OSD:

.. prompt:: bash #

   ceph config set osd crimson_cpu_num 8
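
You can confirm the value afterwards with the standard config query command:

.. prompt:: bash #

   ceph config get osd crimson_cpu_num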

Note that ``crimson_cpu_num`` does **not** pin threads to specific CPU cores.
To explicitly assign CPU cores to Crimson OSDs, use the ``crimson_cpu_set`` parameter.
This enables CPU pinning, which *may* improve performance.
However, this option requires manually setting the CPU set for each OSD,
and is generally discouraged due to its complexity.
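
For illustration, a pinning setup for a host running two OSDs might look like
the following sketch; the core ranges are hypothetical and depend on your CPU
topology:

.. prompt:: bash #

   ceph config set osd.0 crimson_cpu_set 0-7
   ceph config set osd.1 crimson_cpu_set 8-15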

.. _crimson-required-flags:

Crimson Required Flags
======================

.. note::
   Crimson is in a tech preview stage and is **not suitable for production use**.

After starting your cluster, and before deploying OSDs, you must configure the
`Crimson CPU allocation`_ and enable Crimson so that the default pools are
created as Crimson pools. Once the cluster is running, proceed as follows:

.. prompt:: bash #

   ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson
   ceph osd set-allow-crimson --yes-i-really-mean-it
   ceph config set mon osd_pool_default_crimson true

The first command enables the ``crimson`` experimental feature.

The second enables the ``allow_crimson`` OSDMap flag. The monitor will
not allow ``crimson-osd`` to boot without that flag.

The last causes pools to be created by default with the ``crimson`` flag.
Crimson pools are restricted to operations supported by Crimson.
``crimson-osd`` won't instantiate PGs from non-Crimson pools.
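
To verify the configuration, you can create a pool and inspect its flags for
``crimson`` (the pool name here is arbitrary):

.. prompt:: bash #

   ceph osd pool create testpool
   ceph osd pool ls detail | grep testpool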

.. _crimson-backends:

Object Store Backends
=====================

``crimson-osd`` supports two categories of object store backends: **native** and **non-native**.

Native Backends
---------------

Native backends perform I/O operations using the **Seastar reactor**. These are tightly integrated with the Seastar framework and follow its design principles:

.. describe:: seastore

   SeaStore is the primary native object store for Crimson OSD. It is built with the Seastar framework and adheres to its asynchronous, shard-based architecture.

.. describe:: cyanstore

   CyanStore is inspired by ``memstore`` from the classic OSD, offering a lightweight, in-memory object store model.
   CyanStore **does not store data** and should be used only for measuring OSD overhead, without the cost of actually storing data.

Non-Native Backends
-------------------

Non-native backends operate through a **thread pool proxy**, which interfaces with object stores running in **alien threads** (worker threads not managed by Seastar).
These backends allow Crimson to interact with legacy or external object store implementations:

.. describe:: bluestore

   The default object store used by the classic ``ceph-osd``. It provides robust, production-grade storage capabilities.

   The ``crimson_bluestore_num_threads`` option needs to be set according to the available CPU set.
   It defines the number of threads dedicated to serving the BlueStore ObjectStore on each OSD.

   If CPU pinning is configured via ``crimson_cpu_set`` (see `Crimson CPU allocation`_),
   the counterpart ``crimson_bluestore_cpu_set`` should also be set so that
   the two CPU sets are mutually exclusive, as in the sketch below.
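
   For example, on a 16-core host running a single OSD, the reactors and the
   BlueStore threads might be pinned to disjoint ranges; the core numbers
   below are illustrative only:

   .. prompt:: bash #

      ceph config set osd crimson_cpu_set 0-11
      ceph config set osd crimson_bluestore_num_threads 4
      ceph config set osd crimson_bluestore_cpu_set 12-15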

.. describe:: memstore

   An in-memory object store backend, primarily used for testing and development purposes.
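
The backend is selected per OSD before deployment. As a sketch, assuming the
``crimson_osd_objectstore`` configuration option selects the Crimson backend
(check your build's available options to confirm the exact name):

.. prompt:: bash #

   ceph config set osd crimson_osd_objectstore seastore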

Metrics and Tracing
===================

Crimson offers three ways to report stats and metrics.

PG stats reported to the Manager
--------------------------------

Crimson collects the per-PG, per-pool, and per-OSD stats in an ``MPGStats``
message, which is sent to the Ceph Managers. Manager modules can query
them using the ``MgrModule.get()`` method.
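
These are the same stats that back the manager-fed status commands, so a quick
way to inspect them from the command line is, for example:

.. prompt:: bash #

   ceph pg dump pgs_brief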

Asock command
-------------

An admin socket command is offered for dumping metrics:

.. prompt:: bash #

   ceph tell osd.0 dump_metrics
   ceph tell osd.0 dump_metrics reactor_utilization

Here ``reactor_utilization`` is an optional string allowing us to filter
the dumped metrics by prefix.

Prometheus text protocol
------------------------

The listening port and address can be configured using the
``--prometheus_port`` command-line option;
see `Prometheus`_ for more details.
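
Once an OSD is running, the endpoint can be scraped directly. As a sketch,
assuming Seastar's default Prometheus port ``9180`` (adjust if you passed
``--prometheus_port``):

.. prompt:: bash #

   curl http://localhost:9180/metrics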

.. _Prometheus: https://github.com/scylladb/seastar/blob/master/doc/prometheus.md