diff --git a/README.md b/README.md
index 7a2f9728c..9c18d08f0 100644
--- a/README.md
+++ b/README.md
@@ -140,7 +140,7 @@ the Apache License Version 2.0.
| 🗺 **[Roadmap]** | See where ZenML is working to build new features. |
| 🙋‍♀️ **[Contribute]** | How to contribute to the ZenML project and code base. |
-[ZenML 101]: https://docs.zenml.io/user-guide/starter-guide
+[ZenML 101]: https://docs.zenml.io/user-guides/starter-guide
[Core Concepts]: https://docs.zenml.io/getting-started/core-concepts
[Our latest release]: https://github.com/zenml-io/zenml/releases
[Vote for Features]: https://zenml.io/discussion
diff --git a/databricks-production-qa-demo/steps/deployment/deployment_deploy.py b/databricks-production-qa-demo/steps/deployment/deployment_deploy.py
index 2353ba933..b7407dcfb 100644
--- a/databricks-production-qa-demo/steps/deployment/deployment_deploy.py
+++ b/databricks-production-qa-demo/steps/deployment/deployment_deploy.py
@@ -45,7 +45,7 @@ def deployment_deploy() -> Annotated[
In this example, the step can be configured to use different input data.
See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
dataset_inf: The inference dataset.
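
The configuration-files guide now linked throughout these docstrings covers overriding step parameters from a run configuration YAML. A minimal sketch of that pattern, assuming ZenML's `@step` decorator and `Pipeline.with_options` (the step name and parameters here are illustrative, not this demo's actual code):

```python
import pandas as pd
from typing_extensions import Annotated
from zenml import step


@step
def train_data_splitter_example(
    dataset: pd.DataFrame, test_size: float = 0.2
) -> Annotated[pd.DataFrame, "train_subset"]:
    """Hypothetical step whose test_size can be overridden from a config file."""
    return dataset.sample(frac=1.0 - test_size, random_state=42)


# A run configuration YAML overrides step parameters without touching code, e.g.:
#
#   steps:
#     train_data_splitter_example:
#       parameters:
#         test_size: 0.3
#
# applied when running the pipeline:
#   my_pipeline.with_options(config_path="config.yaml")()
```
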
diff --git a/databricks-production-qa-demo/steps/etl/train_data_splitter.py b/databricks-production-qa-demo/steps/etl/train_data_splitter.py
index c38ab3f43..ae5414593 100644
--- a/databricks-production-qa-demo/steps/etl/train_data_splitter.py
+++ b/databricks-production-qa-demo/steps/etl/train_data_splitter.py
@@ -41,7 +41,7 @@ def train_data_splitter(
In this example, the step can be configured to use different test
set sizes. See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
dataset: Dataset read from source.
diff --git a/databricks-production-qa-demo/steps/hp_tuning/hp_tuning_single_search.py b/databricks-production-qa-demo/steps/hp_tuning/hp_tuning_single_search.py
index e7c1dc561..b4c3eb0dd 100644
--- a/databricks-production-qa-demo/steps/hp_tuning/hp_tuning_single_search.py
+++ b/databricks-production-qa-demo/steps/hp_tuning/hp_tuning_single_search.py
@@ -50,7 +50,7 @@ def hp_tuning_single_search(
to use different input datasets and also have a flag to fall back to the default
model architecture. See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
model_package: The package containing the model to use for hyperparameter tuning.
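
For readers unfamiliar with this step, the "single search" it performs is a hyperparameter search over one model family. A rough sketch of that idea using scikit-learn's `RandomizedSearchCV` (the estimator and search space are hypothetical, not the demo's actual configuration):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV


def single_search_example(X_train, y_train):
    """Hypothetical single hyperparameter search over one model family."""
    search = RandomizedSearchCV(
        estimator=RandomForestClassifier(random_state=42),
        param_distributions={
            "n_estimators": [50, 100, 200],
            "max_depth": [3, 5, 10, None],
        },
        n_iter=5,
        cv=3,
        scoring="accuracy",
        random_state=42,
    )
    search.fit(X_train, y_train)
    # A tuning step would log these and return the winning model.
    return search.best_estimator_, search.best_score_
```
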
diff --git a/databricks-production-qa-demo/steps/inference/inference_predict.py b/databricks-production-qa-demo/steps/inference/inference_predict.py
index 77333b9b0..d08540657 100644
--- a/databricks-production-qa-demo/steps/inference/inference_predict.py
+++ b/databricks-production-qa-demo/steps/inference/inference_predict.py
@@ -43,7 +43,7 @@ def inference_predict(
In this example, the step can be configured to use different input data.
See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
dataset_inf: The inference dataset.
diff --git a/databricks-production-qa-demo/steps/promotion/compute_performance_metrics_on_current_data.py b/databricks-production-qa-demo/steps/promotion/compute_performance_metrics_on_current_data.py
index ec637b6ad..a9722c0ab 100644
--- a/databricks-production-qa-demo/steps/promotion/compute_performance_metrics_on_current_data.py
+++ b/databricks-production-qa-demo/steps/promotion/compute_performance_metrics_on_current_data.py
@@ -44,7 +44,7 @@ def compute_performance_metrics_on_current_data(
and target environment stage for promotion.
See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
dataset_tst: The test dataset.
diff --git a/databricks-production-qa-demo/steps/promotion/promote_with_metric_compare.py b/databricks-production-qa-demo/steps/promotion/promote_with_metric_compare.py
index 29ff1b927..d23a0c371 100644
--- a/databricks-production-qa-demo/steps/promotion/promote_with_metric_compare.py
+++ b/databricks-production-qa-demo/steps/promotion/promote_with_metric_compare.py
@@ -46,7 +46,7 @@ def promote_with_metric_compare(
and target environment stage for promotion.
See the documentation for more information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
latest_metric: Recently trained model metric results.
diff --git a/databricks-production-qa-demo/steps/training/model_trainer.py b/databricks-production-qa-demo/steps/training/model_trainer.py
index ababb9f44..03edae2eb 100644
--- a/databricks-production-qa-demo/steps/training/model_trainer.py
+++ b/databricks-production-qa-demo/steps/training/model_trainer.py
@@ -72,7 +72,7 @@ def model_trainer(
hyperparameters to the model constructor. See the documentation for more
information:
- https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
dataset_trn: The preprocessed train dataset.
diff --git a/huggingface-sagemaker/README.md b/huggingface-sagemaker/README.md
index 672ff985d..cc9d600ea 100644
--- a/huggingface-sagemaker/README.md
+++ b/huggingface-sagemaker/README.md
@@ -71,7 +71,7 @@ make setup
Connect to a deployed ZenML and register secrets
After this, you should have ZenML and all the requirements of the project installed locally.
-Next thing to do is to connect to a [deployed ZenML instance](https://docs.zenml.io/deploying-zenml/). You can
+The next thing to do is to connect to a [deployed ZenML instance](https://docs.zenml.io/user-guides/production-guide/deploying-zenml). You can
create a free trial using [ZenML Pro](https://cloud.zenml.io) to get set up quickly.
Once your deployed ZenML instance is ready, you can connect to it using:
@@ -93,7 +93,7 @@ zenml secret create huggingface_creds --username=HUGGINGFACE_USERNAME --token=HU
Set up your local stack
-To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks) with the required components to run the pipelines.
+To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guides/production-guide/understand-stacks) with the required components to run the pipelines.
```shell
make install-stack
diff --git a/huggingface-sagemaker/steps/promotion/promote_metric_compare_promoter.py b/huggingface-sagemaker/steps/promotion/promote_metric_compare_promoter.py
index 0938d73d8..aab26ed03 100644
--- a/huggingface-sagemaker/steps/promotion/promote_metric_compare_promoter.py
+++ b/huggingface-sagemaker/steps/promotion/promote_metric_compare_promoter.py
@@ -48,7 +48,7 @@ def promote_metric_compare_promoter(
In this example, the step can be configured to use different input data.
See the documentation for more information:
- https://docs.zenml.io/user-guide/advanced-guide/configure-steps-pipelines
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
latest_metrics: Recently trained model metrics results.
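
The step relinked here promotes a model only if its metrics beat the currently promoted version. The core decision reduces to a comparison like the following sketch (pure Python; the actual step also updates stages in the model registry, which is omitted here):

```python
def should_promote_example(
    latest_metric: float, current_metric: float, higher_is_better: bool = True
) -> bool:
    """Hypothetical promotion check: promote the newly trained model only
    if it outperforms the model currently in the target stage."""
    if higher_is_better:
        return latest_metric > current_metric
    return latest_metric < current_metric


# e.g. promote when the new model's accuracy beats production's:
assert should_promote_example(latest_metric=0.93, current_metric=0.91)
```
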
diff --git a/huggingface-sagemaker/steps/tokenizer_loader/tokenizer_loader.py b/huggingface-sagemaker/steps/tokenizer_loader/tokenizer_loader.py
index 21c053314..513bc8d6a 100644
--- a/huggingface-sagemaker/steps/tokenizer_loader/tokenizer_loader.py
+++ b/huggingface-sagemaker/steps/tokenizer_loader/tokenizer_loader.py
@@ -46,7 +46,7 @@ def tokenizer_loader(
For more information on how to configure steps in a pipeline, refer to the
following documentation:
- https://docs.zenml.io/user-guide/advanced-guide/configure-steps-pipelines
+ https://docs.zenml.io/how-to/pipeline-development/use-configuration-files
Args:
lower_case: A boolean value indicating whether to convert the input text to
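
For context on the `lower_case` parameter in this docstring, such a flag typically decides which tokenizer variant is loaded. A minimal sketch using Hugging Face's `AutoTokenizer` (the checkpoint names are assumptions, not necessarily what this project uses):

```python
from transformers import AutoTokenizer, PreTrainedTokenizerBase


def tokenizer_loader_example(lower_case: bool = True) -> PreTrainedTokenizerBase:
    """Hypothetical loader: lower_case picks a cased or uncased checkpoint."""
    checkpoint = "distilbert-base-uncased" if lower_case else "distilbert-base-cased"
    return AutoTokenizer.from_pretrained(checkpoint)
```
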
diff --git a/llm-complete-guide/README.md b/llm-complete-guide/README.md
index cd65d75a4..403f4d724 100644
--- a/llm-complete-guide/README.md
+++ b/llm-complete-guide/README.md
@@ -11,7 +11,7 @@ concepts covered in this guide to your own projects.
Contained within this project is all the code needed to run the full pipelines.
You can follow along [in our
-guide](https://docs.zenml.io/user-guide/llmops-guide/) to understand the
+guide](https://docs.zenml.io/user-guides/llmops-guide/) to understand the
decisions and tradeoffs behind the pipeline and step code contained here. You'll
build a solid understanding of how to leverage LLMs in your MLOps workflows
using ZenML, enabling you to build powerful, scalable, and maintainable
@@ -221,7 +221,7 @@ evaluate the responses.
## Embeddings finetuning
For embeddings finetuning we first generate synthetic data and then finetune the
-embeddings. Both of these pipelines are described in [the LLMOps guide](https://docs.zenml.io/v/docs/user-guide/llmops-guide/finetuning-embeddings) and
+embeddings. Both of these pipelines are described in [the LLMOps guide](https://docs.zenml.io/v/docs/user-guides/llmops-guide/finetuning-embeddings) and
instructions for how to run them are provided below.
### Run the `distilabel` synthetic data generation pipeline
diff --git a/llm-complete-guide/steps/eval_retrieval.py b/llm-complete-guide/steps/eval_retrieval.py
index 1941e92c3..ebec42b50 100644
--- a/llm-complete-guide/steps/eval_retrieval.py
+++ b/llm-complete-guide/steps/eval_retrieval.py
@@ -58,7 +58,7 @@
},
{
"question": "How do I generate embeddings as part of a RAG pipeline when using ZenML?",
- "url_ending": "user-guide/llmops-guide/rag-with-zenml/embeddings-generation",
+ "url_ending": "user-guides/llmops-guide/rag-with-zenml/embeddings-generation",
},
{
"question": "How do I use failure hooks in my ZenML pipeline?",
diff --git a/llm-complete-guide/steps/url_scraper.py b/llm-complete-guide/steps/url_scraper.py
index 0e41cff3a..525c6d3b9 100644
--- a/llm-complete-guide/steps/url_scraper.py
+++ b/llm-complete-guide/steps/url_scraper.py
@@ -44,7 +44,7 @@ def url_scraper(
docs_urls = [
"https://docs.zenml.io/getting-started/system-architectures",
"https://docs.zenml.io/getting-started/core-concepts",
- "https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/rag-85-loc",
+ "https://docs.zenml.io/user-guides/llmops-guide/rag-with-zenml/rag-85-loc",
"https://docs.zenml.io/how-to/track-metrics-metadata/logging-metadata",
"https://docs.zenml.io/how-to/debug-and-solve-issues",
"https://docs.zenml.io/stack-components/step-operators/azureml",
diff --git a/llm-complete-guide/tests/test_url_scraping_utils.py b/llm-complete-guide/tests/test_url_scraping_utils.py
index a95c1d3c3..8f50a33bb 100644
--- a/llm-complete-guide/tests/test_url_scraping_utils.py
+++ b/llm-complete-guide/tests/test_url_scraping_utils.py
@@ -22,15 +22,15 @@
"url, expected_parent_section",
[
(
- "https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline",
+ "https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline",
"user-guide",
),
(
- "https://docs.zenml.io/v/docs/user-guide/production-guide/deploying-zenml",
+ "https://docs.zenml.io/v/docs/user-guides/production-guide/deploying-zenml",
"user-guide",
),
(
- "https://docs.zenml.io/v/0.56.1/stack-components/integration-overview",
+ "https://docs.zenml.io/stacks",
"stacks-and-components",
),
],
diff --git a/nightwatch-ai/README.md b/nightwatch-ai/README.md
index 1618178db..8cfc028d4 100644
--- a/nightwatch-ai/README.md
+++ b/nightwatch-ai/README.md
@@ -155,7 +155,7 @@ jobs:
NightWatch is built on ZenML, giving you access to a complete MLOps ecosystem:
- **Orchestration**: Scale with [Airflow](https://docs.zenml.io/stack-components/orchestrators/airflow) or [Kubeflow](https://docs.zenml.io/stack-components/orchestrators/kubeflow)
-- **Storage**: Store artifacts on [cloud storage](https://docs.zenml.io/user-guide/starter-guide/cache-previous-executions)
+- **Storage**: Store artifacts on [cloud storage](https://docs.zenml.io/user-guides/starter-guide/cache-previous-executions)
- **Tracking**: Monitor experiments with [MLflow integration](https://docs.zenml.io/stack-components/experiment-trackers/mlflow)
- **Alerting**: Customize notifications through various channels
diff --git a/sign-language-detection-yolov5/README.md b/sign-language-detection-yolov5/README.md
index 6a1b7ee81..36844427b 100644
--- a/sign-language-detection-yolov5/README.md
+++ b/sign-language-detection-yolov5/README.md
@@ -32,18 +32,18 @@ installed on your local machine:
* [Docker](https://www.docker.com/)
* [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
* [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a remote deployment of the ZenML HTTP server and database
### :rocket: Remote ZenML Server
For advanced use cases, such as using a remote orchestrator or step operators like Vertex AI,
or sharing stacks and pipeline information with a team, we need a separate, non-local ZenML Server
that is accessible from your machine as well as from all stack components that may need access to it.
-[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
+[Read more about this use case here](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
To achieve this, there are two different ways to get access to a remote ZenML Server.
-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)/
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml).
2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted
version of the ZenML Server with no setup required.
@@ -59,7 +59,7 @@ pip install -r requirements.txt
pip install -r yolov5/requirements.txt
```
-Starting with ZenML 0.20.0, ZenML comes bundled with a React-based dashboard. This dashboard allows you to observe your stacks, stack components and pipeline DAGs in a dashboard interface. To access this, you need to [launch the ZenML Server and Dashboard locally](https://docs.zenml.io/user-guide/starter-guide#explore-the-dashboard), but first you must install the optional dependencies for the ZenML server:
+Starting with ZenML 0.20.0, ZenML comes bundled with a React-based dashboard that allows you to observe your stacks, stack components, and pipeline DAGs. To access it, you need to [launch the ZenML Server and Dashboard locally](https://docs.zenml.io/user-guides/starter-guide#explore-the-dashboard), but first you must install the optional dependencies for the ZenML server:
```bash
zenml connect --url=$ZENML_SERVER_URL
diff --git a/zencoder/README.md b/zencoder/README.md
index 741238e9a..4a19d5327 100644
--- a/zencoder/README.md
+++ b/zencoder/README.md
@@ -28,10 +28,10 @@
One of the first jobs of somebody entering MLOps is to convert their manual scripts or notebooks into pipelines that can be deployed on the cloud. This job is tedious, and can take time. For example, one has to think about:
-1. Breaking down things into [step functions](https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline)
+1. Breaking things down into [step functions](https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline)
2. Type annotating the steps properly
3. Connecting the steps together in a pipeline
-4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guide/production-guide/configure-pipeline)
+4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guides/production-guide/configure-pipeline)
5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/how-to/customize-docker-builds).
Frameworks like [ZenML](https://github.com/zenml-io/zenml) go a long way in alleviating this burden by abstracting much of the complexity away. However, recent advancements in Large Language Model-based copilots offer hope that even more of the repetitive aspects of this task can be automated.
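
Items 1-3 of the list above are quick to illustrate. A minimal sketch of a script refactored into typed, connected ZenML steps (assuming the `@step`/`@pipeline` decorators; all names are illustrative):

```python
from typing import Tuple

import pandas as pd
from sklearn.linear_model import LogisticRegression
from zenml import pipeline, step


@step
def prepare_data_example() -> Tuple[pd.DataFrame, pd.Series]:
    """A typed step function in place of an inline script block."""
    df = pd.DataFrame({"x": range(100), "y": [i % 2 for i in range(100)]})
    return df[["x"]], df["y"]


@step
def train_model_example(
    features: pd.DataFrame, labels: pd.Series
) -> LogisticRegression:
    """Another typed step; the returned model is stored as an artifact."""
    return LogisticRegression().fit(features, labels)


@pipeline
def example_training_pipeline():
    # Steps are connected by passing outputs as inputs.
    features, labels = prepare_data_example()
    train_model_example(features=features, labels=labels)


if __name__ == "__main__":
    example_training_pipeline()
```
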
@@ -82,7 +82,7 @@ python run.py --deploy-pipeline --config
python run.py --deploy-pipeline --config deployment_a100.yaml
```
-The `feature_engineering` and `deployment` pipeline can be run simply with the `default` stack, but the training pipelines [stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks) will depend on the config.
+The `feature_engineering` and `deployment` pipelines can be run with the `default` stack, but the [stack](https://docs.zenml.io/user-guides/production-guide/understand-stacks) used by the training pipelines will depend on the config.
The `deployment` pipeline relies on the `training_pipeline` having been run first.
diff --git a/zenml-support-agent/README.md b/zenml-support-agent/README.md
index 6acc8363e..80e248e91 100644
--- a/zenml-support-agent/README.md
+++ b/zenml-support-agent/README.md
@@ -127,7 +127,7 @@ artifacts for your own data, you can change values as appropriate.
## ☁️ Running it on GCP
It is preferable to run a pipeline like the agent creation pipeline on a regular schedule. To achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guides/production-guide/deploying-zenml)
and set up a stack that supports
[our scheduling
feature](https://docs.zenml.io/how-to/build-pipelines/schedule-a-pipeline). If you
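
A minimal sketch of what attaching such a schedule looks like, assuming ZenML's `Schedule` config object and `with_options` (the cron expression and pipeline are placeholders):

```python
from zenml import pipeline
from zenml.config.schedule import Schedule


@pipeline
def agent_creation_pipeline_example():
    """Placeholder for the agent creation pipeline described above."""


if __name__ == "__main__":
    # Run daily at 02:00 on a stack whose orchestrator supports schedules.
    agent_creation_pipeline_example.with_options(
        schedule=Schedule(cron_expression="0 2 * * *")
    )()
```
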