Commit 648fe52

Merge pull request opendatahub-io#43 from gmfrasca/dashboard-tiles
feat(manifests): ODH Dashboard tiles
2 parents 8889082 + 285d139

File tree: 4 files changed, +169 -1 lines


manifests/opendatahub/base/deployments/ml-pipeline-visualizationserver.yaml

Lines changed: 1 addition & 1 deletion

```diff
@@ -53,7 +53,7 @@ spec:
           requests:
             cpu: 30m
             memory: 500Mi
-         limits:
+          limits:
             cpu: 250m
             memory: 1Gi
       serviceAccountName: ml-pipeline-visualizationserver
```

The `-`/`+` pair is a whitespace-only change: the `limits:` key is re-indented to align with `requests:` (the exact indentation is inferred, as it is not recoverable from this rendering).
Lines changed: 8 additions & 0 deletions (new file; its path is not shown in this rendering)

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
commonLabels:
  app: odh-dashboard
  app.kubernetes.io/part-of: odh-dashboard
resources:
- ./odhapplications/data-science-pipelines-odhapplication.yaml
- ./odhquickstarts/data-science-pipelines-odhquickstart.yaml
```
Lines changed: 71 additions & 0 deletions (new file; per the kustomization above this is ./odhapplications/data-science-pipelines-odhapplication.yaml)

````yaml
apiVersion: dashboard.opendatahub.io/v1
kind: OdhApplication
metadata:
  name: data-science-pipelines
  annotations:
    opendatahub.io/categories: 'Model development,Model training,Model optimization,Data analysis,Data preprocessing'
spec:
  beta: true
  betaTitle: Data Science Pipelines
  betaText: This application is available for early access prior to official release.
  displayName: Data Science Pipelines
  description: Data Science Pipelines is a workflow platform focused on enabling machine learning operations such as model development, experimentation, orchestration, and automation.
  provider: Red Hat
  category: ODH Core
  support: Open Data Hub
  docsLink: https://www.kubeflow.org/docs/components/pipelines/
  quickStart: create-data-science-pipeline
  img: '<svg id="Layer_1" data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 192 145"><defs><style>.cls-1{fill:#e00;}</style></defs><title>RedHat-Logo-Hat-Color</title><path d="M157.77,62.61a14,14,0,0,1,.31,3.42c0,14.88-18.1,17.46-30.61,17.46C78.83,83.49,42.53,53.26,42.53,44a6.43,6.43,0,0,1,.22-1.94l-3.66,9.06a18.45,18.45,0,0,0-1.51,7.33c0,18.11,41,45.48,87.74,45.48,20.69,0,36.43-7.76,36.43-21.77,0-1.08,0-1.94-1.73-10.13Z"/><path class="cls-1" d="M127.47,83.49c12.51,0,30.61-2.58,30.61-17.46a14,14,0,0,0-.31-3.42l-7.45-32.36c-1.72-7.12-3.23-10.35-15.73-16.6C124.89,8.69,103.76.5,97.51.5,91.69.5,90,8,83.06,8c-6.68,0-11.64-5.6-17.89-5.6-6,0-9.91,4.09-12.93,12.5,0,0-8.41,23.72-9.49,27.16A6.43,6.43,0,0,0,42.53,44c0,9.22,36.3,39.45,84.94,39.45M160,72.07c1.73,8.19,1.73,9.05,1.73,10.13,0,14-15.74,21.77-36.43,21.77C78.54,104,37.58,76.6,37.58,58.49a18.45,18.45,0,0,1,1.51-7.33C22.27,52,.5,55,.5,74.22c0,31.48,74.59,70.28,133.65,70.28,45.28,0,56.7-20.48,56.7-36.65,0-12.72-11-27.16-30.83-35.78"/></svg>'
  getStartedLink: https://www.kubeflow.org/docs/started/
  enable:
    title: Enable Data Science Pipelines
    actionLabel: Enable
    description: |-
      Clicking Enable adds a card to the Enabled page, from which you can access the Data Science Pipelines interface.

      Before enabling, be sure you have installed OpenShift Pipelines and have an S3 object store configured.
    validationConfigMap: ds-pipelines-dashboardtile-validation-result
  kfdefApplications: []
  # kfdefApplications: ['data-science-pipelines'] # https://github.com/opendatahub-io/odh-dashboard/issues/625
  route: ml-pipeline-ui
  internalRoute: ml-pipeline-ui
  getStartedMarkDown: |-
    # Getting Started With Data Science Pipelines

    Below is a list of samples that currently run end to end, taking the compiled Tekton YAML and deploying it on a Tekton cluster directly. If you are interested in the larger list of pipeline samples we are testing for whether they can be compiled to Tekton format, see the [corresponding status page](https://github.com/opendatahub-io/ml-pipelines/tree/master/sdk/python/tests/README.md).

    The [DSP Tekton User Guide](https://github.com/opendatahub-io/ml-pipelines/tree/master/guides/kfp-user-guide) describes the possible ways to develop and consume Data Science Pipelines. It is recommended to work through at least one of the methods in the user guide before heading into the KFP Tekton samples.

    ## Prerequisites

    - Install the [OpenShift Pipelines Operator](https://docs.openshift.com/container-platform/4.7/cicd/pipelines/installing-pipelines.html), then connect the cluster to the current shell with `oc`.
    - Install the [kfp-tekton](https://github.com/opendatahub-io/ml-pipelines/tree/master/sdk/README.md) SDK:

      ```
      # Set up the python virtual environment
      python3 -m venv .venv
      source .venv/bin/activate

      # Install the kfp-tekton SDK
      pip install kfp-tekton
      ```

    ## Samples

    - [MNIST End to End example with DSP components](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/e2e-mnist)
    - [Hyperparameter tuning using Katib](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/katib)
    - [Trusted AI Pipeline with AI Fairness 360 and Adversarial Robustness 360 components](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/trusted-ai)
    - [Training and Serving Models with Watson Machine Learning](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/watson-train-serve#training-and-serving-models-with-watson-machine-learning)
    - [Lightweight python components example](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/lightweight-component)
    - [The flip-coin pipeline](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/flip-coin)
    - [Nested pipeline example](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/nested-pipeline)
    - [Pipeline with nested loops](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/nested-loops)
    - [Using Tekton Custom Task on DSP](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/tekton-custom-task)
    - [The flip-coin pipeline using custom task](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/flip-coin-custom-task)
    - [Retrieve DSP run metadata using Kubernetes downstream API](https://github.com/opendatahub-io/ml-pipelines/tree/master/samples/k8s-downstream-api)
````
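
The prerequisites in the getStartedMarkDown above install the kfp-tekton SDK but stop short of showing it in use. As an illustration, here is a minimal sketch of the typical workflow; the pipeline, its name, and the output filename are hypothetical, while the `TektonCompiler` import path and `compile` call are the standard kfp-tekton entry points.

```python
# Minimal sketch (hypothetical pipeline) of using the kfp-tekton SDK
# installed above: define a pipeline with the KFP v1 DSL, then compile
# it to a Tekton YAML that Data Science Pipelines can run.
import kfp.dsl as dsl
from kfp_tekton.compiler import TektonCompiler


@dsl.pipeline(name='hello-dsp', description='Hypothetical one-step pipeline')
def hello_pipeline():
    # One step that runs a stock container image and echoes a message.
    dsl.ContainerOp(
        name='echo',
        image='busybox',
        command=['echo', 'Hello, Data Science Pipelines!'],
    )


if __name__ == '__main__':
    # Produces hello-dsp.yaml, ready to upload through the Pipelines UI.
    TektonCompiler().compile(hello_pipeline, 'hello-dsp.yaml')
```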
Lines changed: 89 additions & 0 deletions (new file; per the kustomization above this is ./odhquickstarts/data-science-pipelines-odhquickstart.yaml)

````yaml
apiVersion: console.openshift.io/v1
kind: OdhQuickStart
metadata:
  name: create-data-science-pipeline
  annotations:
    opendatahub.io/categories: 'Getting started,Model development,Model training,Model optimization,Data analysis,Data preprocessing'
spec:
  displayName: Creating a Data Science Pipeline
  appName: data-science-pipelines
  durationMinutes: 5
  icon: TODO
  description: Create a simple pipeline that automatically runs tasks in a machine learning deployment workflow
  introduction: |-
    ### This quick start shows you how to create a Data Science Pipeline.
    Open Data Hub lets you run Data Science Pipelines in a scalable OpenShift hybrid cloud environment.
    This quick start shows you how to compile, create, and run a simple example pipeline using the Kubeflow Pipelines Python SDK and the Data Science Pipelines UI.
  tasks:
    - title: Launch Data Science Pipelines
      description: |-
        ### To find the Data Science Pipelines Launch action:
        1. Click **Applications** &#x2192; **Enabled**.
        2. Find the Data Science Pipelines card.
        3. Click **Launch** on the Data Science Pipelines card to access the **Pipelines dashboard**.
        A new browser tab will open displaying the **Pipelines Dashboard** page.
      review:
        instructions: |-
          #### To verify you have launched Data Science Pipelines:
          Is a new **Data Science Pipelines** browser tab visible with the **Dashboard** page open?
        failedTaskHelp: This task is not verified yet. Try the task again.
      summary:
        success: You have launched Data Science Pipelines.
        failed: Try the steps again.

    - title: Install the Python SDK and compile a sample pipeline
      description: |-
        ### Install the Kubeflow Pipelines Python SDK
        1. Follow the [Kubeflow Pipelines Tekton Python SDK installation instructions](https://github.com/opendatahub-io/ml-pipelines/blob/master/samples/README.md#prerequisites).
        2. Download, clone, or copy the [flip-coin example pipeline](https://github.com/opendatahub-io/ml-pipelines/blob/master/samples/flip-coin/condition.py).
        3. Compile the Python pipeline definition into Tekton YAML:
        ```
        python condition.py
        ```
      review:
        instructions: |-
          #### To verify you compiled the flip-coin sample pipeline:
          Is there now a `condition.yaml` file in the directory you downloaded `condition.py` to?
        failedTaskHelp: This task is not verified yet. Try the task again.
      summary:
        success: You have installed the Kubeflow Pipelines Tekton SDK and compiled a sample pipeline definition into Tekton YAML.
        failed: Try the steps again.

    - title: Create a Pipeline
      description: |-
        ### Create a simple pipeline from an example Data Science Pipeline .py file:
        1. Click the **+ Upload Pipeline** button in the top right corner.
        2. Leave the **Create a new pipeline** radio button selected.
        3. Type a pipeline name in the **Pipeline Name** field.
        4. Add a short description in the **Pipeline Description** field.
        5. Select the **Upload a file** radio button and click **Choose file** in the **File** text box.
        6. Find and select the `condition.yaml` file you compiled in the previous step.
        7. Click **Create**.
        The Data Science Pipelines **Upload Pipeline** page will redirect to a graph of the pipeline you created.
      review:
        instructions: |-
          #### To verify that you have created a pipeline:
          Do you see a flow-diagram-style graph titled with your sample pipeline's name?
        failedTaskHelp: This task is not verified yet. Try the task again.
      summary:
        success: You have successfully created a Data Science Pipeline.
        failed: Try the steps again.

    - title: Run the Pipeline
      description: |-
        ### Run the pipeline created in the previous step:
        1. Click the **+ Create run** button in the top right corner. You will be redirected to a **Start a run** form.
        2. Click the **Choose** button in the **Experiment** text field. Select the **Default** experiment.
        3. Leave all other fields the same.
        4. Click the **Start** button.
        You will be redirected to the **Default** experiment page, where you should see an execution of the pipeline you created in the **Active** list of runs.
      review:
        instructions: |-
          #### To verify that you have executed a pipeline run:
          Are you on the **Experiments** page of the Data Science Pipelines UI? Do you see an entry under **Active** runs with the name of the pipeline you created?
        failedTaskHelp: This task is not verified yet. Try the task again.
      summary:
        success: You have successfully run a Data Science Pipeline.
        failed: Try the steps again.
  conclusion: You are now able to create and run a sample Data Science Pipeline!
  nextQuickStart: []
````
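
The quick start's second task compiles the flip-coin sample with `python condition.py`. For a sense of what that file contains before downloading it, here is a hypothetical, trimmed-down sketch in the same spirit, not the actual upstream `condition.py`; it assumes the KFP v1 SDK (`create_component_from_func`, `dsl.Condition`) plus the kfp-tekton compiler installed in the prerequisites.

```python
# A minimal, hypothetical sketch in the spirit of the flip-coin sample
# (not the actual condition.py). Assumes the KFP v1 SDK and kfp-tekton,
# installed per the quick start's prerequisites.
import kfp.dsl as dsl
from kfp.components import create_component_from_func


def flip_coin() -> str:
    """Randomly return 'heads' or 'tails'."""
    import random
    return random.choice(['heads', 'tails'])


def print_msg(msg: str):
    """Print the message passed to the step."""
    print(msg)


# Turn the plain Python functions into container-backed pipeline components.
flip_coin_op = create_component_from_func(flip_coin)
print_op = create_component_from_func(print_msg)


@dsl.pipeline(name='flip-coin', description='Conditional flip-coin demo')
def flip_coin_pipeline():
    flip = flip_coin_op()
    # Branch on the step's output, mirroring the sample's conditional logic.
    with dsl.Condition(flip.output == 'heads'):
        print_op('Got heads!')
    with dsl.Condition(flip.output == 'tails'):
        print_op('Got tails!')


if __name__ == '__main__':
    # Compiling produces the condition.yaml the quick start uploads.
    from kfp_tekton.compiler import TektonCompiler
    TektonCompiler().compile(flip_coin_pipeline, 'condition.yaml')
```

Uploading the resulting `condition.yaml` is exactly what the "Create a Pipeline" task above walks through.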
