Conversation

darksapien23151

Logistic Regression model trained on iris dataset

Logistic Regression trained on iris dataset

@mzegla mzegla left a comment


Please also provide a client sample to interact with the deployed Python servable.
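
For reference, a minimal client could look roughly like the sketch below. It assumes the "inference_input" / "inference_output" stream names from graph.pbtxt, an OVMS gRPC port of 9000, and the KServe-compatible tritonclient package; adjust these to the actual deployment.

# Minimal sketch of a gRPC client for the iris_pipeline servable.
# localhost:9000 and the stream names are assumptions taken from this PR's graph.pbtxt.
import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient("localhost:9000")

# One iris sample: sepal length/width, petal length/width.
data = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)
infer_input = grpcclient.InferInput("inference_input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

response = client.infer(model_name="iris_pipeline", inputs=[infer_input])
print(response.as_numpy("inference_output"))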

ENV PYTHONPATH=/ovms/lib/python

RUN apt update && apt install -y python3-pip git \
build-essential python3-dev libatlas-base-dev

Do we need those packages? Since we only want to install a few Python packages, they don't look necessary.

RUN apt update && apt install -y python3-pip git \
build-essential python3-dev libatlas-base-dev

RUN pip3 install --break-system-packages numpy pandas scikit-learn

Why use --break-system-packages?

Comment on lines 14 to 17
COPY pipeline/graph.pbtxt /models/iris_pipeline/
COPY pipeline/ovmsmodel.py /models/iris_pipeline/
COPY model/model.pkl /models/iris_pipeline/
COPY model_config.json /model_config.json

Let's drop those lines. Eventually we would like to have a Dockerfile for multiple models, datasets, etc., so that users build the image with all required packages and mount their data when launching the container.

@@ -0,0 +1,24 @@
from sklearn.datasets import load_iris

This entire file should be wrapped in an OvmsPythonModel class, with the training done in the execute method.
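
Roughly what that could look like (a sketch only; the output tensor name, dtype, and the choice to return test accuracy are assumptions, not the required interface):

# Sketch: wrap the training script in an OVMS Python servable.
# initialize/execute follow the OVMS Python calculator convention; the rest is illustrative.
import numpy as np
from pyovms import Tensor
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

class OvmsPythonModel:
    def initialize(self, kwargs):
        # Nothing to prepare yet; hyperparameters could be read from kwargs here.
        pass

    def execute(self, inputs):
        # Training happens on each request; inputs could later carry a user dataset.
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        accuracy = np.array([model.score(X_test, y_test)], dtype=np.float32)
        # pyovms Tensor accepts buffer-protocol objects such as numpy arrays.
        return [Tensor("training_output", accuracy)]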

Comment on lines 1 to 10
{
"custom_node_library_config_list": [],
"graph_config_list": [
{
"name": "iris_pipeline",
"base_path": "/models/iris_pipeline",
"graph_path": "graph.pbtxt"
}
]
}

@@ -0,0 +1,29 @@
import pandas as pd

This file is unnecessary. After exporting the model to ONNX we can use the native KServe API support.
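
As a sketch (the output path, input name, and input shape are placeholders), the fitted scikit-learn model could be exported with skl2onnx:

# Sketch: export the trained LogisticRegression to ONNX so OVMS can serve it natively
# through the KServe API, without a Python node.
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

onnx_model = convert_sklearn(
    model,  # the fitted LogisticRegression from the training script
    initial_types=[("float_input", FloatTensorType([None, 4]))],
)
with open("models/iris/1/model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())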

Comment on lines 1 to 14
input_stream: "inference_input"
output_stream: "inference_output"

node {
calculator: "InferenceCalculator"
input_stream: "inference_input"
output_stream: "inference_output"
options: {
[type.googleapis.com/openvino.CalculatorOptions] {
model_name: "iris_pipeline"
signature_name: "serving_default"
}
}
}

@mzegla mzegla self-assigned this Jun 3, 2025
@mzegla mzegla added the GSoC label (Contributions that are part of Google Summer of Code projects) Jun 3, 2025