Merged
2 changes: 1 addition & 1 deletion README.md
@@ -140,7 +140,7 @@ the Apache License Version 2.0.
| 🗺 **[Roadmap]** | See where ZenML is working to build new features. |
| 🙋‍♀️ **[Contribute]** | How to contribute to the ZenML project and code base. |

-[ZenML 101]: https://docs.zenml.io/user-guide/starter-guide
+[ZenML 101]: https://docs.zenml.io/user-guides/starter-guide
[Core Concepts]: https://docs.zenml.io/getting-started/core-concepts
[Our latest release]: https://github.com/zenml-io/zenml/releases
[Vote for Features]: https://zenml.io/discussion
@@ -45,7 +45,7 @@ def deployment_deploy() -> Annotated[
In this example, the step can be configured to use different input data.
See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
dataset_inf: The inference dataset.
@@ -41,7 +41,7 @@ def train_data_splitter(
In this example, the step can be configured to use different test
set sizes. See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
dataset: Dataset read from source.
@@ -50,7 +50,7 @@ def hp_tuning_single_search(
to use different input datasets and also have a flag to fall back to default
model architecture. See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
model_package: The package containing the model to use for hyperparameter tuning.
@@ -43,7 +43,7 @@ def inference_predict(
In this example, the step can be configured to use different input data.
See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
dataset_inf: The inference dataset.
@@ -44,7 +44,7 @@ def compute_performance_metrics_on_current_data(
and target environment stage for promotion.
See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
dataset_tst: The test dataset.
@@ -46,7 +46,7 @@ def promote_with_metric_compare(
and target environment stage for promotion.
See the documentation for more information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
latest_metric: Recently trained model metric results.
@@ -72,7 +72,7 @@ def model_trainer(
hyperparameters to the model constructor. See the documentation for more
information:

-https://docs.zenml.io/how-to/build-pipelines/use-pipeline-step-parameters
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
dataset_trn: The preprocessed train dataset.
4 changes: 2 additions & 2 deletions huggingface-sagemaker/README.md
@@ -71,7 +71,7 @@ make setup
<summary><h3>Connect to a deployed ZenML and register secrets</h3></summary>

After this, you should have ZenML and all the requirements of the project installed locally.
-Next thing to do is to connect to a [deployed ZenML instance](https://docs.zenml.io/deploying-zenml/). You can
+Next thing to do is to connect to a [deployed ZenML instance](https://docs.zenml.io/user-guides/production-guide/deploying-zenml). You can
create a free trial using [ZenML Pro](https://cloud.zenml.io) to get setup quickly.

Once you have your deployed ZenML ready, you can connect to it using:
@@ -93,7 +93,7 @@ zenml secret create huggingface_creds --username=HUGGINGFACE_USERNAME --token=HU
<details>
<summary><h3>Set up your local stack</h3></summary>

-To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks) with the required components to run the pipelines.
+To run this project, you need to create a [ZenML Stack](https://docs.zenml.io/user-guides/production-guide/understand-stacks) with the required components to run the pipelines.

```shell
make install-stack
@@ -48,7 +48,7 @@ def promote_metric_compare_promoter(
In this example, the step can be configured to use different input data.
See the documentation for more information:

-https://docs.zenml.io/user-guide/advanced-guide/configure-steps-pipelines
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
latest_metrics: Recently trained model metrics results.
@@ -46,7 +46,7 @@ def tokenizer_loader(
For more information on how to configure steps in a pipeline, refer to the
following documentation:

-https://docs.zenml.io/user-guide/advanced-guide/configure-steps-pipelines
+https://docs.zenml.io/how-to/pipeline-development/use-configuration-files

Args:
lower_case: A boolean value indicating whether to convert the input text to
4 changes: 2 additions & 2 deletions llm-complete-guide/README.md
@@ -11,7 +11,7 @@ concepts covered in this guide to your own projects.

Contained within this project is all the code needed to run the full pipelines.
You can follow along [in our
-guide](https://docs.zenml.io/user-guide/llmops-guide/) to understand the
+guide](https://docs.zenml.io/user-guides/llmops-guide/) to understand the
decisions and tradeoffs behind the pipeline and step code contained here. You'll
build a solid understanding of how to leverage LLMs in your MLOps workflows
using ZenML, enabling you to build powerful, scalable, and maintainable
@@ -221,7 +221,7 @@ evaluate the responses.
## Embeddings finetuning

For embeddings finetuning we first generate synthetic data and then finetune the
-embeddings. Both of these pipelines are described in [the LLMOps guide](https://docs.zenml.io/v/docs/user-guide/llmops-guide/finetuning-embeddings) and
+embeddings. Both of these pipelines are described in [the LLMOps guide](https://docs.zenml.io/v/docs/user-guides/llmops-guide/finetuning-embeddings) and
instructions for how to run them are provided below.

### Run the `distilabel` synthetic data generation pipeline
2 changes: 1 addition & 1 deletion llm-complete-guide/steps/eval_retrieval.py
@@ -58,7 +58,7 @@
},
{
"question": "How do I generate embeddings as part of a RAG pipeline when using ZenML?",
"url_ending": "user-guide/llmops-guide/rag-with-zenml/embeddings-generation",
"url_ending": "user-guides/llmops-guide/rag-with-zenml/embeddings-generation",
},
{
"question": "How do I use failure hooks in my ZenML pipeline?",
2 changes: 1 addition & 1 deletion llm-complete-guide/steps/url_scraper.py
@@ -44,7 +44,7 @@ def url_scraper(
docs_urls = [
"https://docs.zenml.io/getting-started/system-architectures",
"https://docs.zenml.io/getting-started/core-concepts",
"https://docs.zenml.io/user-guide/llmops-guide/rag-with-zenml/rag-85-loc",
"https://docs.zenml.io/user-guides/llmops-guide/rag-with-zenml/rag-85-loc",
"https://docs.zenml.io/how-to/track-metrics-metadata/logging-metadata",
"https://docs.zenml.io/how-to/debug-and-solve-issues",
"https://docs.zenml.io/stack-components/step-operators/azureml",
6 changes: 3 additions & 3 deletions llm-complete-guide/tests/test_url_scraping_utils.py
@@ -22,15 +22,15 @@
"url, expected_parent_section",
[
(
"https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline",
"https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline",
"user-guide",
),
(
"https://docs.zenml.io/v/docs/user-guide/production-guide/deploying-zenml",
"https://docs.zenml.io/v/docs/user-guides/production-guide/deploying-zenml",
"user-guide",
),
(
"https://docs.zenml.io/v/0.56.1/stack-components/integration-overview",
"https://docs.zenml.io/stacks",
"stacks-and-components",
),
],
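The parametrized test above maps docs URLs to an expected parent section, skipping version prefixes such as `/v/docs` or `/v/0.56.1`. As a rough, hypothetical sketch of the kind of helper such a test exercises (the actual implementation in `test_url_scraping_utils.py`'s module under test is not shown in this diff, and the real function name and any section-renaming logic may differ):

```python
from urllib.parse import urlparse

def get_parent_section(url: str) -> str:
    """Return the top-level docs section of a docs.zenml.io URL.

    Hypothetical sketch: strips a leading version prefix such as
    /v/docs or /v/0.56.1 before taking the first path segment.
    """
    parts = [p for p in urlparse(url).path.split("/") if p]
    if parts and parts[0] == "v":
        # Drop the "v" marker and the version/docs segment after it.
        parts = parts[2:]
    return parts[0] if parts else ""
```

For example, under these assumptions both `https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline` and `https://docs.zenml.io/v/docs/user-guides/production-guide/deploying-zenml` resolve to `user-guides`; the repo's test additionally expects legacy section names (`user-guide`, `stacks-and-components`), which suggests a mapping step not reproduced here.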
2 changes: 1 addition & 1 deletion nightwatch-ai/README.md
@@ -155,7 +155,7 @@ jobs:
NightWatch is built on ZenML, giving you access to a complete MLOps ecosystem:

- **Orchestration**: Scale with [Airflow](https://docs.zenml.io/stack-components/orchestrators/airflow) or [Kubeflow](https://docs.zenml.io/stack-components/orchestrators/kubeflow)
-- **Storage**: Store artifacts on [cloud storage](https://docs.zenml.io/user-guide/starter-guide/cache-previous-executions)
+- **Storage**: Store artifacts on [cloud storage](https://docs.zenml.io/user-guides/starter-guide/cache-previous-executions)
- **Tracking**: Monitor experiments with [MLflow integration](https://docs.zenml.io/stack-components/experiment-trackers/mlflow)
- **Alerting**: Customize notifications through various channels

8 changes: 4 additions & 4 deletions sign-language-detection-yolov5/README.md
@@ -32,18 +32,18 @@ installed on your local machine:
* [Docker](https://www.docker.com/)
* [GCloud CLI](https://cloud.google.com/sdk/docs/install) (authenticated)
* [MLFlow Tracking Server](https://mlflow.org/docs/latest/tracking.html#mlflow-tracking-servers) (deployed remotely)
-* [Remote ZenML Server](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database
+* [Remote ZenML Server](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml): a Remote Deployment of the ZenML HTTP server and database

### :rocket: Remote ZenML Server

For advanced use cases where we have a remote orchestrator or step operators such as Vertex AI
or to share stacks and pipeline information with a team we need to have a separated non-local remote ZenML Server that can be accessible from your
machine as well as all stack components that may need access to the server.
-[Read more information about the use case here](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)
+[Read more information about the use case here](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)

In order to achieve this there are two different ways to get access to a remote ZenML Server.

-1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guide/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)/
+1. Deploy and manage the server manually on [your own cloud](https://docs.zenml.io/user-guides/production-guide/deploying-zenml#connecting-to-a-deployed-zenml)/
2. Sign up for [ZenML Enterprise](https://zenml.io/pricing) and get access to a hosted
version of the ZenML Server with no setup required.

@@ -59,7 +59,7 @@ pip install -r requirements.txt
pip install -r yolov5/requirements.txt
```

-Starting with ZenML 0.20.0, ZenML comes bundled with a React-based dashboard. This dashboard allows you to observe your stacks, stack components and pipeline DAGs in a dashboard interface. To access this, you need to [launch the ZenML Server and Dashboard locally](https://docs.zenml.io/user-guide/starter-guide#explore-the-dashboard), but first you must install the optional dependencies for the ZenML server:
+Starting with ZenML 0.20.0, ZenML comes bundled with a React-based dashboard. This dashboard allows you to observe your stacks, stack components and pipeline DAGs in a dashboard interface. To access this, you need to [launch the ZenML Server and Dashboard locally](https://docs.zenml.io/user-guides/starter-guide#explore-the-dashboard), but first you must install the optional dependencies for the ZenML server:

```bash
zenml connect --url=$ZENML_SERVER_URL
6 changes: 3 additions & 3 deletions zencoder/README.md
@@ -28,10 +28,10 @@

One of the first jobs of somebody entering MLOps is to convert their manual scripts or notebooks into pipelines that can be deployed on the cloud. This job is tedious, and can take time. For example, one has to think about:

-1. Breaking down things into [step functions](https://docs.zenml.io/user-guide/starter-guide/create-an-ml-pipeline)
+1. Breaking down things into [step functions](https://docs.zenml.io/user-guides/starter-guide/create-an-ml-pipeline)
2. Type annotating the steps properly
3. Connecting the steps together in a pipeline
-4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guide/production-guide/configure-pipeline)
+4. Creating the appropriate YAML files to [configure your pipeline](https://docs.zenml.io/user-guides/production-guide/configure-pipeline)
5. Developing a Dockerfile or equivalent to encapsulate [the environment](https://docs.zenml.io/how-to/customize-docker-builds).

Frameworks like [ZenML](https://github.com/zenml-io/zenml) go a long way in alleviating this burden by abstracting much of the complexity away. However, recent advancement in Large Language Model based Copilots offer hope that even more repetitive aspects of this task can be automated.
@@ -82,7 +82,7 @@ python run.py --deploy-pipeline --config <NAME_OF_CONFIG_IN_CONFIGS_FOLDER>
python run.py --deploy-pipeline --config deployment_a100.yaml
```

-The `feature_engineering` and `deployment` pipeline can be run simply with the `default` stack, but the training pipelines [stack](https://docs.zenml.io/user-guide/production-guide/understand-stacks) will depend on the config.
+The `feature_engineering` and `deployment` pipeline can be run simply with the `default` stack, but the training pipelines [stack](https://docs.zenml.io/user-guides/production-guide/understand-stacks) will depend on the config.

The `deployment` pipelines relies on the `training_pipeline` to have run before.

2 changes: 1 addition & 1 deletion zenml-support-agent/README.md
@@ -127,7 +127,7 @@ artifacts for your own data, you can change values as appropriate.
## ☁️ Running it on GCP

It is much more ideal to run a pipeline like the agent creation pipeline on a regular schedule. In order to achieve that,
-you have to [deploy ZenML](https://docs.zenml.io/user-guide/production-guide/deploying-zenml)
+you have to [deploy ZenML](https://docs.zenml.io/user-guides/production-guide/deploying-zenml)
and set up a stack that supports
[our scheduling
feature](https://docs.zenml.io/how-to/build-pipelines/schedule-a-pipeline). If you