
Commit b2ec2af

Merge pull request ceph#50651 from rosinL/cleanup

Cleanup the LevelDB residue

Reviewed-by: Radoslaw Zarzynski <[email protected]>

2 parents b554518 + 2671fad

33 files changed (+111 -202 lines)

README.md

Lines changed: 1 addition & 2 deletions
@@ -94,8 +94,7 @@ defaulted to ON. To build without the RADOS Gateway:
 Another example below is building with debugging and alternate locations
 for a couple of external dependencies:
 
-    cmake -DLEVELDB_PREFIX="/opt/hyperleveldb" \
-    -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
+    cmake -DCMAKE_INSTALL_PREFIX=/opt/ceph -DCMAKE_C_FLAGS="-Og -g3 -gdwarf-4" \
     ..
 
 Ceph has several bundled dependencies such as Boost, RocksDB and Arrow. By

doc/install/install-storage-cluster.rst

Lines changed: 1 addition & 1 deletion
@@ -63,7 +63,7 @@ To install Ceph with RPMs, execute the following steps:
 
 #. Install pre-requisite packages::
 
-	sudo yum install snappy leveldb gdisk python-argparse gperftools-libs
+	sudo yum install snappy gdisk python-argparse gperftools-libs
 
 
 Once you have added either release or development packages, or added a

doc/man/8/ceph-kvstore-tool.rst

Lines changed: 2 additions & 2 deletions
@@ -9,14 +9,14 @@
 Synopsis
 ========
 
-| **ceph-kvstore-tool** <leveldb|rocksdb|bluestore-kv> <store path> *command* [args...]
+| **ceph-kvstore-tool** <rocksdb|bluestore-kv> <store path> *command* [args...]
 
 
 Description
 ===========
 
 :program:`ceph-kvstore-tool` is a kvstore manipulation tool. It allows users to manipulate
-leveldb/rocksdb's data (like OSD's omap) offline.
+RocksDB's data (like OSD's omap) offline.
 
 Commands
 ========
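With LevelDB dropped from the synopsis, the tool now accepts only the `rocksdb` and `bluestore-kv` backends. A usage sketch follows; the store paths are hypothetical examples, and the owning daemon must be stopped before the tool touches its store:

```shell
# List all keys in a stopped monitor's RocksDB store (example path).
ceph-kvstore-tool rocksdb /var/lib/ceph/mon/ceph-a/store.db list

# Compact an OSD's BlueStore-managed key/value data offline (example path).
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact
```

Both `list` and `compact` are existing ceph-kvstore-tool subcommands; only the backend argument changes with this commit.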

doc/man/8/ceph.rst

Lines changed: 1 addition & 1 deletion
@@ -161,7 +161,7 @@ Usage::
 compact
 -------
 
-Causes compaction of monitor's leveldb storage.
+Causes compaction of monitor's RocksDB storage.
 
 Usage::
 
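The `compact` command described above can be issued cluster-wide or against a single monitor through its admin socket. A sketch, with `mon.a` as an example daemon id:

```shell
# Trigger compaction of the monitor's RocksDB store via the CLI.
ceph compact

# Or target one monitor's admin socket directly (example id).
ceph daemon mon.a compact
```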

doc/rados/configuration/mon-config-ref.rst

Lines changed: 2 additions & 2 deletions
@@ -38,7 +38,7 @@ the monitor services are written by the Ceph Monitor to a single Paxos
 instance, and Paxos writes the changes to a key/value store for strong
 consistency. Ceph Monitors are able to query the most recent version of the
 cluster map during sync operations, and they use the key/value store's
-snapshots and iterators (using leveldb) to perform store-wide synchronization.
+snapshots and iterators (using RocksDB) to perform store-wide synchronization.
 
 .. ditaa::
  /-------------\               /-------------\
@@ -265,7 +265,7 @@ Data
 
 Ceph provides a default path where Ceph Monitors store data. For optimal
 performance in a production Ceph Storage Cluster, we recommend running Ceph
-Monitors on separate hosts and drives from Ceph OSD Daemons. As leveldb uses
+Monitors on separate hosts and drives from Ceph OSD Daemons. As RocksDB uses
 ``mmap()`` for writing the data, Ceph Monitors flush their data from memory to disk
 very often, which can interfere with Ceph OSD Daemon workloads if the data
 store is co-located with the OSD Daemons.

doc/rados/operations/health-checks.rst

Lines changed: 2 additions & 2 deletions
@@ -127,8 +127,8 @@ Monitor databases might grow in size when there are placement groups that have
 not reached an ``active+clean`` state in a long time.
 
 This alert might also indicate that the monitor's database is not properly
-compacting, an issue that has been observed with some older versions of leveldb
-and rocksdb. Forcing a compaction with ``ceph daemon mon.<id> compact`` might
+compacting, an issue that has been observed with some older versions of
+RocksDB. Forcing a compaction with ``ceph daemon mon.<id> compact`` might
 shrink the database's on-disk size.
 
 This alert might also indicate that the monitor has a bug that prevents it from

doc/rados/troubleshooting/log-and-debug.rst

Lines changed: 0 additions & 2 deletions
@@ -318,8 +318,6 @@ to their default level or to a level suitable for normal operations.
 +--------------------------+-----------+--------------+
 | ``rocksdb``              | 4         | 5            |
 +--------------------------+-----------+--------------+
-| ``leveldb``              | 4         | 5            |
-+--------------------------+-----------+--------------+
 | ``fuse``                 | 1         | 5            |
 +--------------------------+-----------+--------------+
 | ``mgr``                  | 2         | 5            |
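The table above pairs each subsystem with a log level and a memory level, written as `<log>/<memory>`. A sketch of raising and restoring the `rocksdb` level at runtime, with `osd.0` as an example daemon:

```shell
# Raise rocksdb logging on one OSD while investigating (example id).
ceph tell osd.0 config set debug_rocksdb 20/20

# Restore the default level from the table afterwards.
ceph tell osd.0 config set debug_rocksdb 4/5
```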

doc/rados/troubleshooting/troubleshooting-mon.rst

Lines changed: 1 addition & 1 deletion
@@ -438,7 +438,7 @@ Monitor Store Failures
 Symptoms of store corruption
 ----------------------------
 
-Ceph monitor stores the :term:`Cluster Map` in a key/value store such as LevelDB. If
+Ceph monitor stores the :term:`Cluster Map` in a key/value store such as RocksDB. If
 a monitor fails due to the key/value store corruption, following error messages
 might be found in the monitor log::
 
doc/radosgw/layout.rst

Lines changed: 0 additions & 2 deletions
@@ -132,8 +132,6 @@ Footnotes
 to how Extended Attributes associate with a POSIX file. An object's omap
 is not physically located in the object's storage, but its precise
 implementation is invisible and immaterial to RADOS Gateway.
-In Hammer, LevelDB is used to store omap data within each OSD; later releases
-default to RocksDB but can be configured to use LevelDB.
 
 [2] Before the Dumpling release, the 'bucket.instance' metadata did not
 exist and the 'bucket' metadata contained its information. It is possible

install-deps.sh

Lines changed: 0 additions & 1 deletion
@@ -351,7 +351,6 @@ if [ x$(uname)x = xFreeBSDx ]; then
 	devel/libtool \
 	devel/google-perftools \
 	lang/cython \
-	databases/leveldb \
 	net/openldap24-client \
 	archivers/snappy \
 	archivers/liblz4 \
