|**Customize Blueprints**| Tailor existing OCI AI Blueprints to suit your exact AI workload needs—everything from hyperparameters to node counts and hardware. |[Read More](../custom_blueprints/README.md)|
|**Updating OCI AI Blueprints**| Keep your OCI AI Blueprints environment current with the latest control plane and portal updates. |[Read More](../installing_new_updates/README.md)|
|**Shared Node Pool**| Use longer-lived resources (e.g., bare metal nodes) across multiple blueprints or to persist resources after a blueprint is undeployed. |[Read More](../shared_node_pools/README.md)|
|**File Storage Service**| Store and supply model weights using OCI File Storage Service for blueprint deployments. |[Read More](../fss/README.md)|
|**Auto-Scaling**| Automatically adjust resource usage based on infrastructure or application-level metrics to optimize performance and costs. |[Read More](../auto_scaling/README.md)|

---

A: Deploy a vLLM blueprint, then use a tool like LLMPerf to run benchmarking against your inference endpoint. Contact us for more details.

**Q: Where can I see the full list of blueprints?**

A: All available blueprints are listed [here](../sample_blueprints/README.md). If you need something custom, please let us know.

**Q: How do I check logs for troubleshooting?**

A: Use `kubectl` to inspect pod logs in your OKE cluster.

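The `kubectl` workflow above can be sketched as follows. The namespace, pod, and container names here are placeholders, not actual OCI AI Blueprints resource names; substitute the values from your own deployment:

```shell
# List pods in the namespace where the blueprint runs
# (the namespace is a placeholder -- adjust for your deployment)
kubectl get pods -n <namespace>

# Stream logs from a specific pod (replace the pod name)
kubectl logs -f <pod-name> -n <namespace>

# If the pod runs multiple containers, target one explicitly
kubectl logs <pod-name> -c <container-name> -n <namespace>

# For scheduling or startup failures, check the pod's events
kubectl describe pod <pod-name> -n <namespace>
```

`kubectl logs` only covers running or recently restarted containers; `kubectl describe pod` is usually the faster route for pods stuck in `Pending` or `CrashLoopBackOff`.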
**Q: Does OCI AI Blueprints support auto-scaling?**

A: Yes, we leverage KEDA for application-driven auto-scaling. See [documentation](../auto_scaling/README.md).

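To verify that KEDA-driven auto-scaling is active on your cluster, you can inspect its custom resources directly. This is a generic KEDA inspection sketch, assuming KEDA is installed as the auto-scaling documentation describes; the resource names are placeholders:

```shell
# KEDA expresses scaling rules as ScaledObject custom resources
kubectl get scaledobjects --all-namespaces

# Inspect a specific ScaledObject's triggers and current status
# (the name and namespace are placeholders)
kubectl describe scaledobject <scaledobject-name> -n <namespace>

# KEDA drives a standard HorizontalPodAutoscaler under the hood;
# check its current vs. target metrics
kubectl get hpa --all-namespaces
```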
**Q: Which GPUs are compatible?**

A: Any NVIDIA GPUs available in your OCI region (A10, A100, H100, etc.).

A: Yes, though testing on clusters running other workloads is ongoing. We recommend a clean cluster for best stability.

**Q: How do I run multiple blueprints on the same node?**

A: Enable shared node pools. [Read more here](../shared_node_pools/README.md).