`ceph-devstack`'s default configuration is [here](./ceph_devstack/config.toml). It can be extended by placing a file at `~/.config/ceph-devstack/config.toml` or by using the `--config-file` flag.
`ceph-devstack config dump` will output the current configuration.
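For example (the second invocation is an assumption about flag placement, not confirmed usage; check `ceph-devstack --help`):

```console
# Show the effective configuration (defaults merged with any overrides):
ceph-devstack config dump

# Assumed usage of --config-file; exact flag placement may differ:
ceph-devstack --config-file ./my-config.toml config dump
```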
#### Evaluation Tasks

##### Task 1

1. Set up ceph-devstack locally (you can see the supported operating systems here: https://github.com/zmc/ceph-devstack/tree/main)
2. Test your setup by making sure that you can run the following command without any issues:

```
ceph-devstack start
```

Once you have this running, share a screenshot with the mentors.

##### Task 2

Right now, we cannot determine whether a test run was successful from the output of the "teuthology" container logs. We need to look at the log archive (particularly the `teuthology.log` file) to see whether the test passed.

Implement a new ceph-devstack command to locate or display the `teuthology.log` file of a test run. By default, test logs are found at `~/.local/share/ceph-devstack`, but this path should be configurable. Log archives are stored as `<run-name>/<job-id>/teuthology.log`.

By default, this command should locate the logs of the most recent test run and dump them if the run contains only one job. If multiple jobs are found in a run, alert the user and ask them to choose a job.

We can determine the "latest run" by parsing the datetime in the run name.

Also add a flag to this command to output the filename (full path) instead of the contents of the log file.
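
A minimal sketch of how such a command might behave, assuming the archive layout described above and a run-name timestamp of the form `2025-03-01_12:34:56`; every name here is illustrative, not an existing ceph-devstack API:

```python
# Illustrative sketch only: not an existing ceph-devstack command.
# Assumes <archive>/<run-name>/<job-id>/teuthology.log, with a timestamp
# like "2025-03-01_12:34:56" embedded in each run name.
import re
from datetime import datetime
from pathlib import Path

RUN_TS = re.compile(r"\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}")

def run_time(run_dir: Path) -> datetime:
    """Parse the datetime embedded in a run directory's name."""
    match = RUN_TS.search(run_dir.name)
    # Runs without a parseable timestamp sort as oldest.
    if match is None:
        return datetime.min
    return datetime.strptime(match.group(), "%Y-%m-%d_%H:%M:%S")

def show_log(archive: Path, path_only: bool = False) -> None:
    runs = [p for p in archive.iterdir() if p.is_dir()]
    if not runs:
        raise SystemExit(f"no test runs found under {archive}")
    latest = max(runs, key=run_time)  # "latest run" via the run-name datetime
    jobs = sorted(p for p in latest.iterdir() if p.is_dir())
    if not jobs:
        raise SystemExit(f"run {latest.name} has no jobs")
    if len(jobs) > 1:
        # Multiple jobs: alert the user and ask them to choose one.
        names = ", ".join(j.name for j in jobs)
        raise SystemExit(f"run {latest.name} has several jobs ({names}); pick one")
    log = jobs[0] / "teuthology.log"
    # The flag from the task: print the full path instead of the contents.
    print(log if path_only else log.read_text())

if __name__ == "__main__":
    show_log(Path("~/.local/share/ceph-devstack").expanduser())
```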

##### BONUS

Write unit tests for the above feature.
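
For example, tests for the hypothetical helpers sketched above (placed in the same module as that sketch, or imported from wherever you define them) might look like:

```python
# Hypothetical pytest sketch for the run_time()/show_log() helpers above.
from datetime import datetime
from pathlib import Path

def test_run_time_parses_embedded_datetime():
    run = Path("user-2025-03-01_12:34:56-suite-main")
    assert run_time(run) == datetime(2025, 3, 1, 12, 34, 56)

def test_path_only_flag_prints_log_path(tmp_path, capsys):
    job = tmp_path / "user-2025-03-01_12:34:56-b" / "42"
    job.mkdir(parents=True)
    (job / "teuthology.log").write_text("all green\n")
    show_log(tmp_path, path_only=True)
    assert capsys.readouterr().out.strip() == str(job / "teuthology.log")
```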

#### Problem Statement

Implement a feature that allows ceph-devstack to be configured to use an arbitrary number of storage devices per testnode container. This will enable us to deploy multiple [Ceph OSDs](https://docs.ceph.com/en/latest/glossary/#term-Ceph-OSD) per testnode, bringing us closer to how we use teuthology in production. Right now, ceph-devstack supports one OSD per testnode.

If you have extra time, you might consider also allowing the _size_ of the storage devices to be configurable. The same size can be used for all.
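
A hypothetical sketch of what this could look like in `config.toml` (the key names below are made up for illustration and are not part of the current schema):

```toml
# Illustrative only: these keys are not part of ceph-devstack's current schema.
[containers.testnode]
device_count = 3      # number of storage (loop) devices per testnode container
device_size = "10G"   # one size, applied to every device
```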
In the future, we may also want to implement a feature that allows ceph-devstack to discover and directly consume unused storage devices on the host machine, as opposed to using loop devices. This would enable more performance-sensitive testing.

#### Connect

Feel free to reach out to us on the [#gsoc-2025-teuthology](https://ceph-storage.slack.com/archives/C08GR4Q8YS0) Slack channel under ceph-storage.slack.com. Use the Slack invite link at the bottom of [this page](https://ceph.io/en/community/connect/) to join the ceph-storage.slack.com workspace.