Commit fba2d51

Merge pull request ceph#61209 from Kushal-deb/fix_issue-2253832-osd_service_issue

cephadm: link new OSDs to existing managed services

Reviewed-by: Adam King <[email protected]>

2 parents 65f2e27 + 5b8c155

File tree: 5 files changed (+43, -8 lines)


doc/cephadm/services/osd.rst

Lines changed: 7 additions & 5 deletions
@@ -208,17 +208,19 @@ There are multiple ways to create new OSDs:
 
 .. warning:: When deploying new OSDs with ``cephadm``, ensure that the ``ceph-osd`` package is not installed on the target host. If it is installed, conflicts may arise in the management and control of the OSD that may lead to errors or unexpected behavior.
 
-* OSDs created via ``ceph orch daemon add`` are by default not added to the orchestrator's OSD service. To attach an OSD to a different, existing OSD service, issue a command of the following form:
+* New OSDs created using ``ceph orch daemon add osd`` are added under ``osd.default`` as managed OSDs with a valid spec.
 
-  .. prompt:: bash *
+  To attach an existing OSD to a different managed service, ``ceph orch osd set-spec-affinity`` command can be used:
 
-     ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
+  .. prompt:: bash #
+
+     ceph orch osd set-spec-affinity <service_name> <osd_id(s)>
 
   For example:
 
   .. prompt:: bash #
-
-     ceph orch osd set-spec-affinity osd.default_drive_group 0 1
+
+     ceph orch osd set-spec-affinity osd.default_drive_group 0 1
 
 Dry Run
 -------
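Read together, the updated documentation describes a workflow along the following lines. This is a sketch only: the host name host01, device /dev/sdb, OSD id 0, and target service osd.my_drive_group are illustrative placeholders, not values taken from the commit.

    # Create an OSD directly on a host; after this change it is attached to the
    # managed osd.default service instead of being left without a spec.
    ceph orch daemon add osd host01:/dev/sdb

    # Verify that the OSD is listed under the osd.default service.
    ceph orch ls osd

    # Optionally move OSD 0 under a different managed OSD service.
    ceph orch osd set-spec-affinity osd.my_drive_group 0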

qa/suites/orch/cephadm/smoke-small/start.yaml

Lines changed: 1 addition & 1 deletion
@@ -20,4 +20,4 @@ tasks:
     - ceph orch host ls
     - ceph orch device ls
     - ceph orch ls --format yaml
-    - ceph orch ls | grep '^osd '
+    - ceph orch ls | grep 'osd'
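The same relaxation of the grep pattern is applied in the two QA suites that follow. A minimal shell sketch of why it is needed, using a made-up `ceph orch ls` row rather than real output: the anchored pattern only matches a service literally named `osd`, while the looser pattern also matches the `osd.default` service that `ceph orch daemon add osd` now creates.

    # Hypothetical `ceph orch ls` row (placeholder, not real output):
    echo 'osd.default   3   5m ago   label:osds' | grep '^osd '   # no match: the name is not a bare "osd"
    echo 'osd.default   3   5m ago   label:osds' | grep 'osd'     # matches osd.default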

qa/suites/orch/cephadm/smoke/start.yaml

Lines changed: 1 addition & 1 deletion
@@ -25,4 +25,4 @@ tasks:
     - ceph orch host ls
     - ceph orch device ls
     - ceph orch ls --format yaml
-    - ceph orch ls | grep '^osd '
+    - ceph orch ls | grep 'osd'

qa/suites/orch/cephadm/upgrade/4-wait.yaml

Lines changed: 1 addition & 1 deletion
@@ -13,4 +13,4 @@ tasks:
     - ceph health detail
     - ceph versions | jq -e '.overall | length == 1'
     - ceph versions | jq -e '.overall | keys' | grep $sha1
-    - ceph orch ls | grep '^osd '
+    - ceph orch ls | grep 'osd'

src/pybind/mgr/cephadm/module.py

Lines changed: 33 additions & 0 deletions
@@ -43,6 +43,7 @@
     MgmtGatewaySpec,
     NvmeofServiceSpec,
 )
+from ceph.deployment.drive_group import DeviceSelection
 from ceph.utils import str_to_datetime, datetime_to_str, datetime_now
 from cephadm.serve import CephadmServe
 from cephadm.services.cephadmservice import CephadmDaemonDeploySpec
@@ -2937,12 +2938,44 @@ def apply_drivegroups(self, specs: List[DriveGroupSpec]) -> List[str]:
         """
         return [self._apply(spec) for spec in specs]
 
+    def create_osd_default_spec(self, drive_group: DriveGroupSpec) -> None:
+        # Create the default osd and attach a valid spec to it.
+
+        drive_group.unmanaged = False
+
+        host_pattern_obj = drive_group.placement.host_pattern
+        host = str(host_pattern_obj.pattern)
+        device_list = [d.path for d in drive_group.data_devices.paths] if drive_group.data_devices else []
+        devices = [{"path": d} for d in device_list]
+
+        osd_default_spec = DriveGroupSpec(
+            service_id="default",
+            placement=PlacementSpec(host_pattern=host),
+            data_devices=DeviceSelection(paths=devices),
+            unmanaged=False,
+            objectstore="bluestore"
+        )
+
+        self.log.info(f"Creating OSDs with service ID: {drive_group.service_id} on {host}:{device_list}")
+        self.spec_store.save(osd_default_spec)
+        self.apply([osd_default_spec])
+
     @handle_orch_error
     def create_osds(self, drive_group: DriveGroupSpec) -> str:
         hosts: List[HostSpec] = self.inventory.all_specs()
         filtered_hosts: List[str] = drive_group.placement.filter_matching_hostspecs(hosts)
         if not filtered_hosts:
             return "Invalid 'host:device' spec: host not found in cluster. Please check 'ceph orch host ls' for available hosts"
+
+        if not drive_group.service_id:
+            drive_group.service_id = "default"
+
+        if drive_group.service_id not in self.spec_store.all_specs:
+            self.log.info("osd.default does not exist. Creating it now.")
+            self.create_osd_default_spec(drive_group)
+        else:
+            self.log.info("osd.default already exists.")
+
         return self.osd_service.create_from_spec(drive_group)
 
     def _preview_osdspecs(self,
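From the CLI, the new branch in create_osds is reached whenever an OSD is added without an explicit service spec. The sketch below is an assumption-laden illustration, not part of the commit: host01, /dev/sdb, and /dev/sdc are placeholders, and the --export listing is one plausible way to inspect the saved spec.

    # Adding an OSD without naming a service builds a DriveGroupSpec with no
    # service_id; the module now fills it in as "default" and saves an
    # osd.default spec before creating the OSD.
    ceph orch daemon add osd host01:/dev/sdb

    # The persisted osd.default spec (placement host pattern plus data devices
    # recorded by create_osd_default_spec) should appear in the exported specs.
    ceph orch ls osd --export

    # A second add on the same host takes the "osd.default already exists."
    # branch and reuses the existing spec instead of creating a new one.
    ceph orch daemon add osd host01:/dev/sdc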
