
Commit ddc9e32

Merge branch 'develop' into feature/dynamic-pipelines

2 parents: ec7bae4 + 6d19205
145 files changed: +5825 additions, -1631 deletions
README.md

Lines changed: 37 additions & 35 deletions
@@ -7,7 +7,7 @@
 </a>
 <br />
 <div align="center">
-  <h3 align="center">Your unified toolkit for shipping everything from decision trees to complex AI agents.</h3>
+  <h3 align="center">One AI Platform From Pipelines to Agents</h3>
 </div>

 [![PyPi][pypi-shield]][pypi-url]
@@ -36,6 +36,7 @@
 <a href="https://github.com/zenml-io/zenml/issues">Report Bug</a> •
 <a href="https://zenml.io/pro">Sign up for ZenML Pro</a> •
 <a href="https://www.zenml.io/blog">Blog</a> •
+<a href="https://docs.zenml.io/">Docs</a>
 <br />
 <br />
 🎉 For the latest release, see the <a href="https://github.com/zenml-io/zenml/releases">release notes</a>.
@@ -78,19 +79,9 @@ ZenML is used by thousands of companies to run their AI workflows. Here are some

 ## 🚀 Get Started (5 minutes)

-### 🏗️ Architecture Overview
-
-ZenML uses a [**client-server architecture**](https://docs.zenml.io/getting-started/system-architectures) with an integrated web dashboard ([zenml-io/zenml-dashboard](https://github.com/zenml-io/zenml-dashboard)):
-
-- **Local Development**: `pip install "zenml[server]"` - runs both client and server locally
-- **Production**: Deploy server separately, connect with `pip install zenml` + `zenml login <server-url>`
-
 ```bash
 # Install ZenML with server capabilities
-pip install "zenml[server]"
-
-# Install required dependencies
-pip install scikit-learn openai numpy
+pip install "zenml[server]" # pip install zenml will install a slimmer client

 # Initialize your ZenML repository
 zenml init
@@ -99,9 +90,40 @@ zenml init
 zenml login
 ```

-Here is a brief demo:
+You can then explore any of the [examples](examples/) in this repo. We recommend starting with the [quickstart](examples/quickstart/), which demonstrates core ZenML concepts: pipelines, steps, artifacts, snapshots, and deployments.
+
+### 🏗️ Architecture Overview
+
+ZenML uses a [**client-server architecture**](https://docs.zenml.io/getting-started/system-architectures) with an integrated web dashboard ([zenml-io/zenml-dashboard](https://github.com/zenml-io/zenml-dashboard)):
+
+- **Local Development**: `pip install "zenml[local]"` - runs both client and server locally
+- **Production**: Deploy server separately, connect with `pip install zenml` + `zenml login <server-url>`
+
+## 🎮 Demo
+
+Here is a short demo:
+
+[![Watch the video](https://img.youtube.com/vi/rzWmaHMaI88/0.jpg)](https://youtu.be/rzWmaHMaI88)
+
+## 🖼️ Resources
+
+The best way to learn about ZenML is through our comprehensive documentation and tutorials:
+
+- **[Documentation](https://docs.zenml.io/)** - Complete product documentation
+- **[Your First AI Pipeline](https://docs.zenml.io/getting-started/your-first-ai-pipeline)** - Build and evaluate an AI service in minutes
+- **[Starter Guide](https://docs.zenml.io/user-guides/starter-guide)** - From zero to production in 30 minutes
+- **[LLMOps Guide](https://docs.zenml.io/user-guides/llmops-guide)** - Specific patterns for LLM applications
+- **[SDK Reference](https://sdkdocs.zenml.io/)** - Complete SDK reference
+
+## 📚 More examples

-https://github.com/user-attachments/assets/edeb314c-fe07-41ba-b083-cd9ab11db4a7
+1. **[Agent Architecture Comparison](examples/agent_comparison/)** - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
+2. **[Deploying ML Models](examples/deploying_ml_model/)** - Deploy classical ML models as production endpoints with monitoring and versioning
+3. **[Deploying Agents](examples/deploying_agent/)** - Document analysis service with pipelines, evaluation, and embedded web UI
+4. **[E2E Batch Inference](examples/e2e/)** - Complete MLOps pipeline with feature engineering
+5. **[LLM RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide)** - Production RAG with evaluation loops
+6. **[Agentic Workflow (Deep Research)](https://github.com/zenml-io/zenml-projects/tree/main/deep_research)** - Orchestrate your agents with ZenML
+7. **[Fine-tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/gamesense)** - Fine-tune and deploy LLMs

 ## 🗣️ Chat With Your Pipelines: ZenML MCP Server

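A note on the quickstart referenced in the added "Get Started" section: the concepts it lists (pipelines, steps, artifacts) come down to ZenML's decorator API. Below is a minimal sketch, not taken from the quickstart itself; the step and pipeline names are illustrative.

```python
from zenml import pipeline, step


@step
def load_text() -> str:
    """Produce a small string artifact that ZenML versions automatically."""
    return "hello zenml"


@step
def count_words(text: str) -> int:
    """Consume the upstream artifact and emit a new one."""
    return len(text.split())


@pipeline
def word_count_pipeline():
    """Wire the steps together; every step output is tracked as an artifact."""
    text = load_text()
    count_words(text)


if __name__ == "__main__":
    word_count_pipeline()  # runs locally on the default stack
```

Running the script once is enough to see a tracked run and its output artifacts on the dashboard after `zenml login`.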
@@ -121,26 +143,6 @@ Stop clicking through dashboards to understand your ML workflows. The **[ZenML M

 The MCP (Model Context Protocol) integration transforms your ZenML metadata into conversational insights, making pipeline debugging and analysis as easy as asking a question. Perfect for teams who want to democratize access to ML operations without requiring dashboard expertise.

-## 📚 Learn More
-
-### 🖼️ Getting Started Resources
-
-The best way to learn about ZenML is through our comprehensive documentation and tutorials:
-
-- **[Your First AI Pipeline](https://docs.zenml.io/getting-started/your-first-ai-pipeline)** - Build and evaluate an AI service in minutes
-- **[Starter Guide](https://docs.zenml.io/user-guides/starter-guide)** - From zero to production in 30 minutes
-- **[LLMOps Guide](https://docs.zenml.io/user-guides/llmops-guide)** - Specific patterns for LLM applications
-- **[SDK Reference](https://sdkdocs.zenml.io/)** - Complete SDK reference
-
-### 📖 Production Examples
-
-1. **[Agent Architecture Comparison](examples/agent_comparison/)** - Compare AI agents with LangGraph workflows, LiteLLM integration, and automatic visualizations via custom materializers
-2. **[Minimal Agent Production](examples/minimal_agent_production/)** - Document analysis service with pipelines, evaluation, and web UI
-3. **[E2E Batch Inference](examples/e2e/)** - Complete MLOps pipeline with feature engineering
-4. **[LLM RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide)** - Production RAG with evaluation loops
-5. **[Agentic Workflow (Deep Research)](https://github.com/zenml-io/zenml-projects/tree/main/deep_research)** - Orchestrate your agents with ZenML
-6. **[Fine-tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/gamesense)** - Fine-tune and deploy LLMs
-
 ### 🎓 Books & Resources

 <div align="center">
@@ -152,7 +154,7 @@ The best way to learn about ZenML is through our comprehensive documentation and
 </a>
 </div>

-ZenML is featured in these comprehensive guides to production AI systems.
+[ZenML](https://zenml.io) is featured in these comprehensive guides to production AI systems.

 ## 🤝 Join ML Engineers Building the Future of AI

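The MCP server described in the README surfaces run metadata conversationally; the same metadata is also reachable from Python through `zenml.client.Client`. A rough sketch follows (the `size` argument and the printed fields follow the public client API, but treat the exact call shape as an assumption and check the SDK reference linked above):

```python
from zenml.client import Client

client = Client()

# List recent pipeline runs and their status - roughly the kind of
# question the MCP server answers in natural language.
page = client.list_pipeline_runs(size=5)
for run in page.items:
    print(run.name, run.status)
```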
docker/zenml-quickstart-dev.Dockerfile

Lines changed: 0 additions & 2 deletions
@@ -11,8 +11,6 @@ ARG CLOUD_PROVIDER
 # Install the Python requirements
 RUN pip install uv

-RUN uv pip install "git+https://github.com/zenml-io/zenml.git@$ZENML_BRANCH" notebook pyarrow datasets transformers transformers[torch] torch sentencepiece
-
 RUN echo "Cloud Provider: $CLOUD_PROVIDER";
 # Install cloud-specific ZenML integrations
 RUN if [ "$CLOUD_PROVIDER" = "aws" ]; then \

docker/zenml-quickstart.Dockerfile

Lines changed: 0 additions & 2 deletions
@@ -14,8 +14,6 @@ ARG CLOUD_PROVIDER
 # Install the Python requirements
 RUN pip install uv

-RUN uv pip install zenml${ZENML_VERSION:+==$ZENML_VERSION} notebook pyarrow datasets transformers transformers[torch] torch sentencepiece
-
 RUN echo "Cloud Provider: $CLOUD_PROVIDER";
 # Install cloud-specific ZenML integrations
 RUN if [ "$CLOUD_PROVIDER" = "aws" ]; then \

docs/book/getting-started/hello-world.md

Lines changed: 83 additions & 26 deletions
Original file line numberDiff line numberDiff line change
@@ -39,76 +39,131 @@ def basic_step() -> str:
3939

4040

4141
@pipeline
42-
def basic_pipeline():
42+
def basic_pipeline() -> str:
4343
"""A simple pipeline with just one step."""
44-
basic_step()
44+
greeting = basic_step()
45+
return greeting
4546

4647

4748
if __name__ == "__main__":
4849
<strong> basic_pipeline()
4950
</strong></code></pre>
5051

51-
{% hint style="success" %}
52-
Run this pipeline locally with `python run.py`. ZenML automatically tracks the execution and stores artifacts.
53-
{% endhint %}
52+
Run this pipeline in batch mode locally:
53+
54+
```bash
55+
python run.py
56+
```
57+
58+
You will see ZenML automatically tracks the execution and stores artifacts. View these on the CLI or on the dashboard.
59+
5460
{% endstep %}
5561

5662
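For orientation, the `run.py` implied by the hunk above would look roughly like this after the change. The body of `basic_step` is not shown in the diff, so its return value here is an assumption:

```python
from zenml import pipeline, step


@step
def basic_step() -> str:
    """A single step; the greeting text is assumed, only the signature appears in the diff."""
    return "Hello World!"


@pipeline
def basic_pipeline() -> str:
    """A simple pipeline with just one step."""
    greeting = basic_step()
    return greeting


if __name__ == "__main__":
    basic_pipeline()
```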
{% step %}
57-
#### Create your ZenML account
63+
#### Create a Pipeline Snapshot (Optional but Recommended)
5864

59-
Create a [ZenML Pro account](https://zenml.io/pro) with a 14-day free trial (no payment information required). It will provide you with a dashboard to visualize pipelines, manage infrastructure, and collaborate with team members.
65+
Before deploying, you can create a **snapshot** - an immutable, reproducible version of your pipeline including code, configuration, and container images:
6066

61-
<figure><img src="../.gitbook/assets/dcp_walkthrough.gif" alt="ZenML Pro Dashboard"><figcaption><p>The ZenML Pro Dashboard</p></figcaption></figure>
67+
```bash
68+
# Create a snapshot of your pipeline
69+
zenml pipeline snapshot create run.basic_pipeline --name my_snapshot
70+
```
6271

63-
First-time users will need to set up a workspace and project. This process might take a few minutes. In the meanwhile, feel free to check out the [Core Concepts](core-concepts.md) page to get familiar with the main ideas ZenML is built on. Once ready, connect your local environment:
72+
Snapshots are powerful because they:
73+
- **Freeze your pipeline state** - Ensure the exact same pipeline always runs
74+
- **Enable parameterization** - Run the same snapshot with different inputs
75+
- **Support team collaboration** - Share ready-to-use pipeline configurations
76+
- **Integrate with automation** - Trigger from dashboards, APIs, or CI/CD systems
77+
78+
[Learn more about Snapshots](../how-to/snapshots/snapshots.md)
79+
{% endstep %}
80+
81+
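The "Enable parameterization" bullet above maps onto ordinary step and pipeline parameters in code. A minimal sketch of the kind of pipeline a snapshot can re-run with different inputs (the names are illustrative, not part of the hello-world example):

```python
from zenml import pipeline, step


@step
def greet(name: str) -> str:
    """Build a greeting for the given name."""
    return f"Hello, {name}!"


@pipeline
def greeting_pipeline(name: str = "world"):
    """Pipeline parameters are the knobs a snapshot can be re-triggered with."""
    greet(name=name)


if __name__ == "__main__":
    greeting_pipeline(name="ZenML")  # same pipeline, different input
```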
+{% step %}
+#### Deploy your pipeline as a real-time service
+
+ZenML can deploy your pipeline (or snapshot) as a persistent HTTP service for real-time inference:

 ```bash
-# Log in and select your workspace
-zenml login
+# Deploy your pipeline directly
+zenml pipeline deploy run.basic_pipeline --name my_deployment
+
+# OR deploy a snapshot (if you created one above)
+zenml pipeline snapshot deploy my_snapshot --deployment my_deployment
+```
+
+Your pipeline now runs as a production-ready service! This is perfect for serving predictions to web apps, powering AI agents, or handling real-time requests.
+
+**Key insight**: When you deploy a pipeline directly with `zenml pipeline deploy`, ZenML automatically creates an implicit snapshot behind the scenes, ensuring reproducibility.

-# Activate your project
+[Learn more about Pipeline Deployments](../how-to/deployment/deployment.md)
+{% endstep %}
+
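Once a deployment such as `my_deployment` is up, it is reachable over plain HTTP. The sketch below is only illustrative: the base URL, the `/invoke` route, and the payload shape are assumptions rather than anything shown in this diff, so check the deployment details in the dashboard or the deployment docs linked above for the real endpoint and schema.

```python
import requests

# Hypothetical endpoint - replace with the URL your deployment reports.
DEPLOYMENT_URL = "http://localhost:8000"

response = requests.post(
    f"{DEPLOYMENT_URL}/invoke",   # route is an assumption
    json={"parameters": {}},      # payload schema is an assumption
    timeout=30,
)
response.raise_for_status()
print(response.json())
```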
+{% step %}
+#### Set up a ZenML Server (For Remote Infrastructure)
+
+To use remote infrastructure (cloud deployers, orchestrators, artifact stores), you need to deploy a ZenML server to manage your pipelines centrally. You can use [ZenML Pro](https://zenml.io/pro) (managed, 14-day free trial) or [deploy it yourself](../getting-started/deploying-zenml/README.md) (self-hosted, open-source).
+
+Connect your local environment:
+
+```bash
+zenml login
 zenml project set <PROJECT_NAME>
 ```
+
+Once connected, you'll have a centralized dashboard to manage infrastructure, collaborate with team members, and schedule pipeline runs.
 {% endstep %}

 {% step %}
-#### Create your first remote stack
+#### Create your first remote stack (Optional)

-A "stack" in ZenML represents the infrastructure where your pipelines run. Moving from local to cloud resources is where ZenML truly shines.
+A "stack" in ZenML represents the infrastructure where your pipelines run. You can now scale from local development to cloud infrastructure without changing any code.

 <figure><img src="../.gitbook/assets/stack-deployment-options.png" alt="ZenML Stack Deployment Options"><figcaption><p>Stack deployment options</p></figcaption></figure>

-The fastest way to create a cloud stack is through the **Infrastructure-as-Code** option. This uses Terraform to deploy cloud resources and register them as a ZenML stack.
+Remote stacks can include:
+- **[Remote Deployers](https://docs.zenml.io/stacks/stack-components/deployers)** ([AWS App Runner](https://docs.zenml.io/stacks/stack-components/deployers/aws-app-runner), [GCP Cloud Run](https://docs.zenml.io/stacks/stack-components/deployers/gcp-cloud-run), [Azure Container Instances](https://docs.zenml.io/stacks/stack-components/container-registries/azure)) - for deploying your pipelines as scalable HTTP services on the cloud
+- **[Remote Orchestrators](https://docs.zenml.io/stacks/stack-components/orchestrators)** ([Kubernetes](https://docs.zenml.io/stacks/stack-components/orchestrators/kubernetes), [GCP Vertex AI](https://docs.zenml.io/stacks/stack-components/orchestrators/vertex), [AWS SageMaker](https://docs.zenml.io/stacks/stack-components/orchestrators/sagemaker)) - for running batch pipelines at scale
+- **[Remote Artifact Stores](https://docs.zenml.io/stacks/stack-components/artifact-stores)** ([S3](https://docs.zenml.io/stacks/stack-components/artifact-stores/s3), [GCS](https://docs.zenml.io/stacks/stack-components/artifact-stores/gcp), [Azure Blob](https://docs.zenml.io/stacks/stack-components/artifact-stores/azure)) - for storing and versioning pipeline artifacts
+
+The fastest way to create a cloud stack is through the **Infrastructure-as-Code** option, which uses Terraform to deploy cloud resources and register them as a ZenML stack.

 You'll need:

 * [Terraform](https://developer.hashicorp.com/terraform/install) version 1.9+ installed locally
 * Authentication configured for your preferred cloud provider (AWS, GCP, or Azure)
 * Appropriate permissions to create resources in your cloud account

-The deployment wizard will guide you through each step.
+```bash
+# Create a remote stack using the deployment wizard
+zenml stack register <STACK_NAME> \
+  --deployer <DEPLOYER_NAME> \
+  --orchestrator <ORCHESTRATOR_NAME> \
+  --artifact-store <ARTIFACT_STORE_NAME>
+```
+
+The wizard will guide you through each step.
 {% endstep %}

 {% step %}
-#### Run your pipeline on the remote stack
-
-Now run your pipeline in the cloud without changing any code.
+#### Deploy and run on remote infrastructure

-First, activate your new stack:
+Once you have a remote stack, you can:

+1. **Deploy your service to the cloud** - Your deployment runs on managed cloud infrastructure:
 ```bash
-zenml stack set <NAME_OF_YOUR_NEW_STACK>
+zenml stack set <REMOTE_STACK_NAME>
+zenml pipeline deploy run.basic_pipeline --name my_production_deployment
 ```

-Then run the exact same script:
-
+2. **Run batch pipelines at scale** - Use the same code with a cloud orchestrator:
 ```bash
-python run.py
+zenml stack set <REMOTE_STACK_NAME>
+python run.py # Automatically runs on cloud infrastructure
 ```

-ZenML handles packaging code, building containers, orchestrating execution, and tracking artifacts automatically.
+ZenML handles packaging code, building containers, orchestrating execution, and tracking artifacts automatically across all cloud providers.

-<figure><img src="../.gitbook/assets/pipeline-run-on-the-dashboard.png" alt="Pipeline Run in ZenML Dashboard"><figcaption><p>Your pipeline in the ZenML dashboard</p></figcaption></figure>
+<figure><img src="../.gitbook/assets/pipeline-run-on-the-dashboard.png" alt="Pipeline Run in ZenML Dashboard"><figcaption><p>Your pipeline in the ZenML Pro Dashboard</p></figcaption></figure>
 {% endstep %}

 {% step %}
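After `python run.py` finishes on the remote stack, the run and its artifacts can also be inspected from Python. A short sketch using the documented fetching helpers (`last_run`, `steps[...].output.load()`); the pipeline and step names are taken from the hello-world example above:

```python
from zenml.client import Client

# Fetch the most recent run of the hello-world pipeline and load the
# greeting artifact produced by its single step.
run = Client().get_pipeline("basic_pipeline").last_run
greeting = run.steps["basic_step"].output.load()
print(run.name, run.status, greeting)
```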
@@ -131,6 +186,8 @@ To continue your ZenML journey, explore these key topics:

 **For LLMs and AI Agents:**
 * **LLMOps Guide**: Write your [first AI pipeline](your-first-ai-pipeline.md) for agent development patterns
+* **Deploying Agents**: To see an example of a deployed document extraction agent, see the [deploying agents](https://github.com/zenml-io/zenml/tree/main/examples/deploying_agent) example
+* **Agent Outer Loop**: See the [Agent Outer Loop](https://github.com/zenml-io/zenml/tree/main/examples/agent_outer_loop) example to learn about training classifiers and improving agents through feedback loops
 * **Agent Evaluation**: Learn to [systematically evaluate](https://github.com/zenml-io/zenml/tree/main/examples/agent_comparison) and compare different agent architectures
 * **Prompt Management**: Version and track prompts, tools, and agent configurations as [artifacts](../how-to/artifacts/artifacts.md)
