To start off, we would like you to familiarise yourself with this project. This involves understanding the basics of [Teuthology](https://github.com/ceph/teuthology) as well.
#### Evaluation Tasks

##### Task 1
1. Set up ceph-devstack locally (you can see the supported operating systems here: https://github.com/zmc/ceph-devstack/tree/main)
2. Test your setup by making sure that you can run the following command without any issues. Once you have it running, share a screenshot with the mentors.
3. Set up the remote repo as upstream (this will prevent creating additional branches)
4. Create a virtual environment in the root directory of ceph-devstack and install the Python dependencies:
   ```bash
   python3 -m venv venv
   ./venv/bin/pip3 install -e .
   ```
5. Activate the venv:
   ```bash
   source venv/bin/activate
   ```
6. Run the doctor command to check and fix the dependencies that you need for ceph-devstack:
   ```bash
   ceph-devstack -v doctor --fix
   ```
7. Build, create, and start all the containers in ceph-devstack:
   ```bash
   ceph-devstack -v build
   ceph-devstack -v create
   ceph-devstack -v start
   ```
8. Test the containers by waiting for teuthology to finish and printing the logs.

##### Task 2

Right now, we cannot determine whether a test run was successful from the output of the "teuthology" container logs. We need to look at the log archive (particularly the `teuthology.log` file) to see whether the test passed.

Implement a new ceph-devstack command to locate and display the `teuthology.log` file of a test run. By default, test logs are found under `~/.local/share/ceph-devstack`, but this path should be configurable. Log archives are stored as `<run-name>/<job-id>/teuthology.log`.

By default, this command should locate the logs of the most recent test run and dump the log if there is only one job. If multiple jobs are found in a run, alert the user and ask them to choose a job.

We can determine the "latest run" by parsing the datetime in the run name.

Also add a flag to this command that outputs the filename (full path) instead of the contents of the log file.

##### BONUS

Write unit tests for the above feature.

#### Problem Statement

Implement a feature that allows ceph-devstack to be configured to use an arbitrary number of storage devices per testnode container. This will enable us to deploy multiple [Ceph OSDs](https://docs.ceph.com/en/latest/glossary/#term-Ceph-OSD) per testnode, bringing us closer to how we use teuthology in production. Right now, ceph-devstack supports 1 OSD per testnode.

If you have extra time, you might also consider allowing the _size_ of the storage devices to be configurable. The same size can be used for all of them.

In the future, we may also want to implement a feature that allows ceph-devstack to discover and directly consume unused storage devices on the host machine, as opposed to using loop devices. This would enable more performance-sensitive testing.

#### Connect

Feel free to reach out to us on the [#gsoc-2025-teuthology](https://ceph-storage.slack.com/archives/C08GR4Q8YS0) Slack channel under ceph-storage.slack.com. Use the Slack invite link at the bottom of [this page](https://ceph.io/en/community/connect/) to join the ceph-storage.slack.com workspace.
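As a starting point for Task 2 above, here is a minimal sketch of the run-selection logic. It assumes the run name embeds a timestamp of the form `YYYY-MM-DD_HH:MM:SS`; that format, and all helper names here, are assumptions for illustration, not part of ceph-devstack:

```python
import re
from datetime import datetime
from pathlib import Path

# Default log root from the task description; callers may override it.
DEFAULT_LOG_ROOT = Path("~/.local/share/ceph-devstack").expanduser()

# Assumed: run names embed a timestamp like "2025-03-01_12:30:45".
RUN_TS = re.compile(r"\d{4}-\d{2}-\d{2}_\d{2}:\d{2}:\d{2}")

def run_datetime(run_name: str) -> datetime:
    """Parse the datetime embedded in a run directory name."""
    match = RUN_TS.search(run_name)
    if match is None:
        raise ValueError(f"no timestamp found in run name: {run_name!r}")
    return datetime.strptime(match.group(0), "%Y-%m-%d_%H:%M:%S")

def latest_run(log_root: Path = DEFAULT_LOG_ROOT) -> Path:
    """Return the run directory with the most recent embedded timestamp."""
    runs = [d for d in log_root.iterdir()
            if d.is_dir() and RUN_TS.search(d.name)]
    if not runs:
        raise FileNotFoundError(f"no test runs found under {log_root}")
    return max(runs, key=lambda d: run_datetime(d.name))

def job_log_files(run_dir: Path) -> list[Path]:
    """List the <job-id>/teuthology.log files inside a run directory."""
    return sorted(run_dir.glob("*/teuthology.log"))
```

A CLI wrapper around this would dump the single log's contents when `job_log_files(latest_run())` finds exactly one job, prompt the user to choose a job when it finds several, and print the path instead of the contents when the filename flag is set.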