* Update header.png file
* Update ZenML logo alt text in README.md
* Update core concepts and steps for ML workflows and agents
* Update model registration and promotion steps
* Enhance project and workspace organization features
* Refactor ML pipeline steps for clarity and consistency
* Update dependencies and add type hints to create_dataset().
* Update create_dataset to return split data as tuples
* Add agent comparison pipeline steps
* Refactor code for better readability
* Update integration information in README.md
* Update prompts, test architectures, and generate diagrams
* Update typing annotations to Any in prompt materializer and visualizer
* Add Langfuse observability integration to LLM utility calls
* Update Langfuse integration for LiteLLM
* Update readme_problem.png asset image
* Optimised images with calibre/image-actions
* Step Status Refresh Functionality + Kubernetes Orchestrator Implementation (#3735)
* first checkpoint
* new changes
* fixes
* new changes
* small change
* deprecate old method
* new changes
* missing import
* listen to events
* linting
* loop optimization
* changed the deprecation warning
* new condition
* switching to jobs
* formatting
* handling the store
* not allowing finished steps to be updated
* docstrings
* label param name
* removed unused function
* comment and formatting
* renamed function
* moved steps outside
* removed unused input
* additional check
* docstrings and formatting
* removed status checks
* orchestrator pod updates
* new check
* Upper limit datasets version (#3824)
* Add Docker settings to pipeline and refactor data loading steps
* Update agent visualizations with automatic generation
* Update visualizations method in Agent Architecture Comparison example
* Register agent materializer import to trigger registration
* Refactor data_loading function return annotations
* Add handling for missing OpenAI library import
* Remove detailed agent workflow print statement
* Update examples/agent_comparison/agent_comparison_pipeline.py
Co-authored-by: Alexej Penner <[email protected]>
* Update pipeline script with evaluation message
* Update README.md
* Update docs/book/how-to/secrets/secrets.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Update README.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
* Remove Langfuse integration and references
* Add token counting notes for accurate counting
* Add import of "re" at the top of the file
* Update README.md
* Update imports to remove unnecessary type ignore
* Update environment variables to use None as default
* Integrate ZenML MCP Server for conversational insights
* Auto-update of LLM Finetuning template
---------
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: Barış Can Durak <[email protected]>
Co-authored-by: Michael Schuster <[email protected]>
Co-authored-by: Alexej Penner <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <[email protected]>
(cherry picked from commit 6b0047a)
docs/book/getting-started/core-concepts.md (+21 -5)
@@ -7,7 +7,7 @@ icon: lightbulb
-**ZenML** is an extensible, open-source MLOps framework for creating portable, production-ready **MLOps pipelines**. It's built for data scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. In order to achieve this goal, ZenML introduces various concepts for different aspects of an ML workflow, and we can categorize these concepts under three different threads:
+**ZenML** is a unified, extensible, open-source MLOps framework for creating portable, production-ready **MLOps pipelines**. It's built for data scientists, ML Engineers, and MLOps Developers to collaborate as they develop to production. By extending the battle-tested principles you rely on for classical ML to the new world of AI agents, ZenML serves as one platform to develop, evaluate, and deploy your entire AI portfolio - from decision trees to complex multi-agent systems. In order to achieve this goal, ZenML introduces various concepts for different aspects of ML workflows and AI agent development, and we can categorize these concepts under three different threads:
 <table data-view="cards"><thead><tr><th></th><th></th><th data-hidden></th><th data-hidden data-card-target data-type="content-ref"></th><th data-hidden data-card-cover data-type="files"></th></tr></thead><tbody><tr><td><mark style="color:purple;"><strong>1. Development</strong></mark></td><td>As a developer, how do I design my machine learning workflows?</td><td></td><td><a href="core-concepts.md#1-development">#1-development</a></td><td><a href="../.gitbook/assets/development.png">development.png</a></td></tr><tr><td><mark style="color:purple;"><strong>2. Execution</strong></mark></td><td>While executing, how do my workflows utilize the large landscape of MLOps tooling/infrastructure?</td><td></td><td><a href="core-concepts.md#2-execution">#2-execution</a></td><td><a href="../.gitbook/assets/execution.png">execution.png</a></td></tr><tr><td><mark style="color:purple;"><strong>3. Management</strong></mark></td><td>How do I establish and maintain a production-grade and efficient solution?</td><td></td><td><a href="core-concepts.md#3-management">#3-management</a></td><td><a href="../.gitbook/assets/management.png">management.png</a></td></tr></tbody></table>

@@ -17,7 +17,7 @@ If you prefer visual learning, this short video demonstrates the key concepts co
 ## 1. Development

-First, let's look at the main concepts that play a role during the development stage of an ML workflow with ZenML.
+First, let's look at the main concepts that play a role during the development stage of ML workflows and AI agent pipelines with ZenML.
 Executing the Pipeline is as easy as calling the function that you decorated with the `@pipeline` decorator.

 ```python
 if __name__ == "__main__":
     my_pipeline()
+    agent_evaluation_pipeline()
 ```
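For context, here is a minimal sketch of what the pipeline definitions invoked above might look like; the step names and bodies are illustrative placeholders, not code from this commit:

```python
from zenml import pipeline, step


@step
def load_data() -> dict:
    """Produce a tiny toy dataset (stored as an artifact)."""
    return {"features": [[1.0], [2.0]], "labels": [0, 1]}


@step
def train_model(data: dict) -> float:
    """Pretend to train and return a dummy accuracy metric."""
    return 0.95


@pipeline
def my_pipeline():
    data = load_data()
    train_model(data)


if __name__ == "__main__":
    my_pipeline()
```

An `agent_evaluation_pipeline` would be declared the same way; only its steps would call an LLM or agent framework instead of a training library.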
 #### Artifacts

 Artifacts represent the data that goes through your steps as inputs and outputs, and they are automatically tracked and stored by ZenML in the artifact store. They are produced by and circulated among steps whenever your step returns an object or a value. This means the data is not passed between steps in memory. Rather, when the execution of a step is completed, they are written to storage, and when a new step gets executed, they are loaded from storage.
+
+Artifacts can be traditional ML data (datasets, models, metrics) or AI agent components (prompt templates, agent configurations, evaluation results). The same artifact system seamlessly handles both use cases.

 The serialization and deserialization logic of artifacts is defined by [Materializers](../how-to/artifacts/materializers.md).
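To illustrate the artifact paragraph above: a single step can emit both classical ML data and agent components as separately named, versioned artifacts. This is a minimal sketch; the output names and values are made up for the example:

```python
from typing import Annotated, Tuple

from zenml import step


@step
def prepare_assets() -> Tuple[
    Annotated[dict, "training_data"],
    Annotated[str, "prompt_template"],
]:
    """Each named output is written to the artifact store and versioned."""
    training_data = {"features": [[0.1], [0.2]], "labels": [0, 1]}
    prompt_template = "You are a support agent. Answer: {question}"
    return training_data, prompt_template
```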
 #### Models

-Models are used to represent the outputs of a training process along with all metadata associated with that output. In other words: models in ZenML are more broadly defined as the weights as well as any associated information. Models are first-class citizens in ZenML and as such viewing and using them is unified and centralized in the ZenML API, client, as well as on the [ZenML Pro](https://zenml.io/pro) dashboard.
+Models are used to represent the outputs of a training process along with all metadata associated with that output. In other words: models in ZenML are more broadly defined as the weights as well as any associated information. This includes traditional ML models (scikit-learn, PyTorch, etc.) and AI agent configurations (prompt templates, tool definitions, multi-agent system architectures). Models are first-class citizens in ZenML and as such viewing and using them is unified and centralized in the ZenML API, client, as well as on the [ZenML Pro](https://zenml.io/pro) dashboard.

 #### Materializers
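Returning to the **Models** paragraph above: a minimal sketch of how a pipeline might be attached to such a Model, assuming a hypothetical `support_agent` model name. This shows the general pattern rather than code from this commit:

```python
from zenml import Model, pipeline, step


@step
def evaluate_agent() -> float:
    """Placeholder evaluation step returning a dummy score."""
    return 0.87


# Attaching a Model groups the run's artifacts, metadata, and lineage
# under one versioned entity that can later be promoted (e.g. to production).
@pipeline(model=Model(name="support_agent"))
def agent_evaluation_pipeline():
    evaluate_agent()


if __name__ == "__main__":
    agent_evaluation_pipeline()
```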
@@ -160,9 +176,9 @@ Secrets are sensitive data that you don't want to store in your code or configur
 #### Collaboration

-Collaboration is a crucial aspect of any MLOps team as they often need to bring together individuals with diverse skills and expertise to create a cohesive and effective workflow for machine learning projects. A successful MLOps team requires seamless collaboration between data scientists, engineers, and DevOps professionals to develop, train, deploy, and maintain machine learning models.
+Collaboration is a crucial aspect of any MLOps team as they often need to bring together individuals with diverse skills and expertise to create a cohesive and effective workflow for machine learning projects and AI agent development. A successful MLOps team requires seamless collaboration between data scientists, engineers, and DevOps professionals to develop, train, deploy, and maintain both traditional ML models and AI agent systems.

-With a deployed **ZenML Server**, users have the ability to create their own teams and project structures. They can easily share pipelines, runs, stacks, and other resources, streamlining the workflow and promoting teamwork.
+With a deployed **ZenML Server**, users have the ability to create their own teams and project structures. They can easily share pipelines, runs, stacks, and other resources, streamlining the workflow and promoting teamwork across the entire AI development lifecycle.
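The hunk header above references the secrets section of the same page. As a hedged aside, reading a previously registered secret inside a step looks roughly like this; the secret name and key are assumptions for the example:

```python
from zenml import step
from zenml.client import Client


@step
def call_llm_provider() -> str:
    """Fetch an API key from the ZenML secret store instead of hard-coding it."""
    secret = Client().get_secret("llm_api_key")  # assumed to be registered already
    api_key = secret.secret_values["api_key"]
    # ... use api_key with your LLM client of choice ...
    return "ok" if api_key else "missing key"
```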
docs/book/getting-started/hello-world.md (+14 -4)
@@ -7,7 +7,7 @@ icon: hand-wave
 # Hello World

-This guide will help you build and deploy your first ZenML pipeline, starting locally and then transitioning to the cloud without changing your code.
+This guide will help you build and deploy your first ZenML pipeline, starting locally and then transitioning to the cloud without changing your code. The same principles you'll learn here apply whether you're building classical ML models or AI agents.

 {% stepper %}
 {% step %}
@@ -115,14 +115,24 @@ ZenML handles packaging code, building containers, orchestrating execution, and
 Congratulations! You've just experienced the core value proposition of ZenML:

 * **Write Once, Run Anywhere**: The same code runs locally during development and in the cloud for production
-* **Separation of Concerns**: Infrastructure configuration and ML code are completely decoupled, enabling independent evolution of each
-* **Full Tracking**: Every run, artifact, and model is automatically versioned and tracked
+* **Unified Framework**: Use the same MLOps principles for both classical ML models and AI agents
+* **Separation of Concerns**: Infrastructure configuration and ML code are completely decoupled, enabling independent
+evolution of each
+* **Full Tracking**: Every run, artifact, and model is automatically versioned and tracked - whether it's a scikit-learn model or a multi-agent system

 To continue your ZenML journey, explore these key topics:

+**For All AI Workloads:**
 * **Pipeline Development**: Discover advanced features like [scheduling](../how-to/steps-pipelines/advanced_features.md#scheduling) and [caching](../how-to/steps-pipelines/advanced_features.md#caching)
 * **Artifact Management**: Learn how ZenML [stores, versions, and tracks your data](../how-to/artifacts/artifacts.md) automatically
-* **Organization**: Use [tags](../how-to/tags/tags.md) and [metadata](../how-to/metadata/metadata.md) to keep your ML projects structured
+* **Organization**: Use [tags](../how-to/tags/tags.md) and [metadata](../how-to/metadata/metadata.md) to keep your AI projects structured
+
+**For LLMs and AI Agents:**
+* **LLMOps Guide**: Follow our comprehensive [LLMOps Guide](https://docs.zenml.io/user-guides/llmops-guide) for agent development patterns
+* **Agent Evaluation**: Learn to [systematically evaluate](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) and compare different agent architectures
+* **Prompt Management**: Version and track prompts, tools, and agent configurations as [artifacts](../how-to/artifacts/artifacts.md)
+
+**Infrastructure & Deployment:**
 * **Containerization**: Understand how ZenML [handles containerization](../how-to/containerization/containerization.md) for reproducible execution
 * **Stacks & Infrastructure**: Explore the concepts behind [stacks](../how-to/stack-components/stack_components.md) and [service connectors](../how-to/stack-components/service_connectors.md) for authentication
 * **Secrets Management**: Learn how to [handle sensitive information](../how-to/secrets/secrets.md) securely
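To make the containerization and "write once, run anywhere" bullets concrete, here is a hedged sketch of attaching Docker settings to a pipeline so the same code can run on a remote stack; the pipeline name and listed requirements are illustrative:

```python
from zenml import pipeline, step
from zenml.config import DockerSettings

# Requirements here are illustrative; list whatever your steps actually import.
docker_settings = DockerSettings(requirements=["scikit-learn", "pandas"])


@step
def hello() -> str:
    return "Hello World"


@pipeline(settings={"docker": docker_settings})
def hello_world_pipeline():
    hello()


if __name__ == "__main__":
    hello_world_pipeline()
```

The pipeline body itself stays infrastructure-free; switching between a local and a cloud stack happens outside the code.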
docs/book/getting-started/zenml-pro/projects.md (+6 -2)
@@ -5,18 +5,19 @@ icon: clipboard-list
 # Projects

-Projects in ZenML Pro provide a logical subdivision within workspaces, allowing you to organize and manage your MLOps resources more effectively. Each project acts as an isolated environment within a workspace, with its own set of pipelines, artifacts, models, and access controls.
+Projects in ZenML Pro provide a logical subdivision within workspaces, allowing you to organize and manage your MLOps resources more effectively. Each project acts as an isolated environment within a workspace, with its own set of pipelines, artifacts, models, and access controls. This isolation is particularly valuable when working with both traditional ML models and AI agent systems, allowing teams to separate different types of experiments and workflows.

 ## Understanding Projects

-Projects help you organize your ML work and resources. You can use projects to separate different initiatives, teams, or experiments while sharing common resources across your workspace.
+Projects help you organize your ML work and resources. You can use projects to separate different initiatives, teams, or experiments while sharing common resources across your workspace. This includes separating traditional ML experiments from AI agent development work.

 Projects offer several key benefits:

 1. **Resource Isolation**: Keep pipelines, artifacts, and models organized and separated by project
 2. **Granular Access Control**: Define specific roles and permissions at the project level
 3. **Team Organization**: Align projects with specific teams or initiatives within your organization
 4. **Resource Management**: Track and manage resources specific to each project independently
+5. **Experiment Separation**: Isolate different types of AI development work (ML vs agents vs multi-modal systems)

 ## Using Projects with the CLI
@@ -112,14 +113,17 @@ Projects provide isolation for various MLOps resources:
 * Create projects based on logical boundaries (e.g., use cases, teams, or products)
 * Use clear naming conventions for projects
 * Document project purposes and ownership
+* Separate traditional ML and agent development where needed
 2. **Access Control**
 * Start with default roles before creating custom ones
 * Regularly audit project access and permissions
 * Use teams for easier member management
+* Implement stricter controls for production agent systems
 3. **Resource Management**
 * Monitor resource usage within projects
 * Set up appropriate quotas and limits
 * Clean up unused resources regularly
+* Track LLM API costs per project for agent development
docs/book/getting-started/zenml-pro/workspaces.md (+3 -1)
@@ -9,7 +9,7 @@ icon: briefcase
 **Note**: Workspaces were previously called "Tenants" in earlier versions of ZenML Pro. We've updated the terminology to better reflect their role in organizing MLOps resources.
 {% endhint %}

-Workspaces are individual, isolated deployments of the ZenML server. Each workspace has its own set of users, roles, projects, and resources. Essentially, everything you do in ZenML Pro revolves around a workspace: all of your projects, pipelines, stacks, runs, connectors and so on are scoped to a workspace.
+Workspaces are individual, isolated deployments of the ZenML server. Each workspace has its own set of users, roles, projects, and resources. Essentially, everything you do in ZenML Pro revolves around a workspace: all of your projects, pipelines, stacks, runs, connectors and so on are scoped to a workspace. This includes both traditional ML workflows and AI agent development projects.

 

@@ -125,10 +125,12 @@ Another approach is to create workspaces based on your organization's structure
 * Data Science Department Workspace
 * Research Department Workspace
 * Production Department Workspace
+* AI Agent Development Workspace
 2. **Team-based Separation**: Align workspaces with your organizational structure:
 * ML Engineering Team Workspace
 * Research Team Workspace
 * Operations Team Workspace
+* Agent Development Team Workspace
 3. **Data Classification**: Separate workspaces based on data sensitivity:
"""Pipeline that creates and processes artifacts."""
80
+
# Traditional ML artifacts
57
81
data = create_data() # Produces an artifact
58
82
processed_data = process_data(data) # Uses and produces artifacts
83
+
84
+
# AI agent artifacts
85
+
prompt = create_prompt_template() # Produces a prompt artifact
86
+
agent_test = test_agent_response(prompt, "Where is my order?") # Uses prompt artifact
59
87
```
60
88
61
89
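The step names in the hunk above come from the documentation snippet, but their bodies are not shown. Here is a hedged sketch of what they might look like; the prompt text and the plain-string return type are assumptions:

```python
from zenml import step


@step
def create_prompt_template() -> str:
    """Return a prompt template; ZenML stores and versions it as an artifact."""
    return (
        "You are a helpful support agent for an online shop.\n"
        "Customer question: {question}\n"
        "Answer concisely and politely."
    )


@step
def test_agent_response(prompt_template: str, question: str) -> str:
    """Fill the template with a sample question; a real step would call an LLM here."""
    return prompt_template.format(question=question)
```

This also previews the next section: the prompt flows between steps as a tracked artifact, while the literal question string is passed as a parameter.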
 ### Artifacts vs. Parameters

@@ -498,7 +526,9 @@ Artifacts are a central part of ZenML's approach to ML pipelines. They provide:
 * Visualization capabilities
 * Cross-pipeline data sharing

-By understanding how artifacts work, you can build more effective, maintainable, and reproducible ML pipelines.
+Whether you're working with traditional ML models, prompt templates, agent configurations, or evaluation datasets, ZenML's artifact system treats them all uniformly. This enables you to apply the same MLOps principles across your entire AI stack - from classical ML to complex multi-agent systems.
+
+By understanding how artifacts work, you can build more effective, maintainable, and reproducible ML pipelines and AI workflows.

 For more information on specific aspects of artifacts, see: