
Commit 2045302

Revert "docs: convert auto-generated documentation to from pydoc to sphinx"
This reverts commit 5606f84.
1 parent 30588b2 · commit 2045302

34 files changed (+8821 -896 lines)

.github/workflows/release.yaml

Lines changed: 1 addition & 4 deletions
@@ -51,10 +51,7 @@ jobs:
       - name: Run poetry install
         run: poetry install --with docs
       - name: Create new documentation
-        run: |
-          sphinx-apidoc -o docs/sphinx src/codeflare_sdk "**/*test_*" --force
-          make clean -C docs/sphinx
-          make html -C docs/sphinx
+        run: poetry run pdoc --html -o docs/detailed-documentation src/codeflare_sdk && pushd docs/detailed-documentation && rm -rf cluster job utils && mv codeflare_sdk/* . && rm -rf codeflare_sdk && popd && find docs/detailed-documentation -type f -name "*.html" -exec bash -c "echo '' >> {}" \;
       - name: Copy demo notebooks into SDK package
         run: cp -r demo-notebooks src/codeflare_sdk/demo-notebooks
       - name: Run poetry build

docs/authentication.md

Lines changed: 40 additions & 0 deletions
@@ -0,0 +1,40 @@
# Authentication via the CodeFlare SDK
Currently there are four ways of authenticating to your cluster via the SDK.<br>
Authenticating with your cluster allows you to perform actions such as creating Ray Clusters and submitting jobs.

## Method 1: Token Authentication
This is how a typical user would authenticate to their cluster using `TokenAuthentication`.
```python
from codeflare_sdk import TokenAuthentication

auth = TokenAuthentication(
    token = "XXXXX",
    server = "XXXXX",
    skip_tls=False,
    # ca_cert_path="/path/to/cert"
)
auth.login()
# log out with auth.logout()
```
Setting `skip_tls=True` allows interaction with an HTTPS server while bypassing the server certificate checks, although this is not secure.<br>
You can pass a custom certificate to `TokenAuthentication` by using `ca_cert_path="/path/to/cert"` when authenticating, provided `skip_tls=False`. Alternatively, you can set the environment variable `CF_SDK_CA_CERT_PATH` to the path of your custom certificate.
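
For illustration, here is a minimal sketch of authenticating with a custom certificate (the token, server, and certificate path values are placeholders, as in the example above):

```python
from codeflare_sdk import TokenAuthentication

# Keep TLS verification enabled and point the SDK at a custom CA bundle;
# setting the CF_SDK_CA_CERT_PATH environment variable is an alternative.
auth = TokenAuthentication(
    token = "XXXXX",
    server = "XXXXX",
    skip_tls=False,
    ca_cert_path="/path/to/cert",
)
auth.login()
```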

## Method 2: Kubernetes Config File Authentication (Default location)
If a user has authenticated to their cluster by alternate means, e.g. by running a login command such as `oc login --token=<token> --server=<server>`, their Kubernetes config file should have been updated.<br>
If the user has not specifically authenticated through the SDK by other means such as `TokenAuthentication`, the SDK will try to use their default Kubernetes config file located at `$HOME/.kube/config`.
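
As a sketch of what this looks like in practice (assuming `oc login` has already been run; the cluster name and namespace below are placeholders), no explicit authentication object is needed before interacting with the cluster:

```python
from codeflare_sdk import Cluster, ClusterConfiguration

# No TokenAuthentication or KubeConfigFileAuthentication object is created;
# the SDK falls back to the default kubeconfig at $HOME/.kube/config.
cluster = Cluster(ClusterConfiguration(
    name="ray-example",
    namespace="default",
))
cluster.up()
```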

## Method 3: Specifying a Kubernetes Config File
A user can specify a config file via a different authentication class, `KubeConfigFileAuthentication`, for authenticating with the SDK.<br>
This is what loading a custom config file would typically look like.
```python
from codeflare_sdk import KubeConfigFileAuthentication

auth = KubeConfigFileAuthentication(
    kube_config_path="/path/to/config",
)
auth.load_kube_config()
# log out with auth.logout()
```

## Method 4: In-Cluster Authentication
If a user does not authenticate by any of the means detailed above and does not have a config file at `$HOME/.kube/config`, the SDK will try to authenticate with the in-cluster configuration file.

docs/cluster-configuration.md

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
# Ray Cluster Configuration

To create Ray Clusters using the CodeFlare SDK, a cluster configuration needs to be created first.<br>
This is what a typical cluster configuration would look like. Note: the values for CPU and memory are the minimum requirements for creating the Ray Cluster.

```python
from codeflare_sdk import Cluster, ClusterConfiguration

cluster = Cluster(ClusterConfiguration(
    name='ray-example', # Mandatory Field
    namespace='default', # Default None
    head_cpu_requests=1, # Default 2
    head_cpu_limits=1, # Default 2
    head_memory_requests=1, # Default 8
    head_memory_limits=1, # Default 8
    head_extended_resource_requests={'nvidia.com/gpu':0}, # Default 0
    worker_extended_resource_requests={'nvidia.com/gpu':0}, # Default 0
    num_workers=1, # Default 1
    worker_cpu_requests=1, # Default 1
    worker_cpu_limits=1, # Default 1
    worker_memory_requests=2, # Default 2
    worker_memory_limits=2, # Default 2
    # image="", # Optional Field
    machine_types=["m5.xlarge", "g4dn.xlarge"],
    labels={"exampleLabel": "example", "secondLabel": "example"},
))
```
Note: `quay.io/modh/ray:2.35.0-py39-cu121` is the default image used by the CodeFlare SDK for creating a RayCluster resource. If you have your own Ray image which suits your purposes, specify it in the `image` field to override the default image. If you are using ROCm-compatible GPUs you can use `quay.io/modh/ray:2.35.0-py39-rocm61`. You can also find documentation on building a custom image [here](https://github.com/opendatahub-io/distributed-workloads/tree/main/images/runtime/examples).

The `labels={"exampleLabel": "example"}` parameter can be used to apply additional labels to the RayCluster resource.

After creating their `cluster`, a user can call `cluster.up()` and `cluster.down()` to respectively create or remove the Ray Cluster, as in the sketch below.
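
A brief sketch combining the points above (the name and namespace are placeholders; the image shown is the ROCm-compatible image mentioned in the note):

```python
from codeflare_sdk import Cluster, ClusterConfiguration

# Override the default image via the optional `image` field.
cluster = Cluster(ClusterConfiguration(
    name='ray-example',
    namespace='default',
    image='quay.io/modh/ray:2.35.0-py39-rocm61',
))

cluster.up()    # create the Ray Cluster
# ... run distributed workloads against the cluster ...
cluster.down()  # remove the Ray Cluster when finished
```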

## Deprecated Parameters
The following parameters of the `ClusterConfiguration` are being deprecated in release `v0.22.0`; a migration sketch follows the table below. <!-- TODO: When removing deprecated parameters update this statement -->

| Deprecated Parameter | Replaced By |
| :------------------- | :---------- |
| `head_cpus` | `head_cpu_requests`, `head_cpu_limits` |
| `head_memory` | `head_memory_requests`, `head_memory_limits` |
| `min_cpus` | `worker_cpu_requests` |
| `max_cpus` | `worker_cpu_limits` |
| `min_memory` | `worker_memory_requests` |
| `max_memory` | `worker_memory_limits` |
| `head_gpus` | `head_extended_resource_requests` |
| `num_gpus` | `worker_extended_resource_requests` |
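
As a sketch of the migration (the numeric values are placeholders), a configuration that previously used the deprecated parameters maps onto their replacements like this:

```python
from codeflare_sdk import Cluster, ClusterConfiguration

# Previously: head_cpus=1, head_memory=8, min_cpus=1, max_cpus=1,
#             min_memory=2, max_memory=2, head_gpus=0, num_gpus=0
cluster = Cluster(ClusterConfiguration(
    name='ray-example',
    namespace='default',
    head_cpu_requests=1,      # replaces head_cpus
    head_cpu_limits=1,        # replaces head_cpus
    head_memory_requests=8,   # replaces head_memory
    head_memory_limits=8,     # replaces head_memory
    worker_cpu_requests=1,    # replaces min_cpus
    worker_cpu_limits=1,      # replaces max_cpus
    worker_memory_requests=2, # replaces min_memory
    worker_memory_limits=2,   # replaces max_memory
    head_extended_resource_requests={'nvidia.com/gpu': 0},   # replaces head_gpus
    worker_extended_resource_requests={'nvidia.com/gpu': 0}, # replaces num_gpus
))
```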
