Interest in Project 17 – Optimized Serving Endpoint for ML Training & Inference #29301
Good day, mentors @dtrawins and @mzegla!
Dear Dariusz Trawinski and Milosz Zeglarski,
This project sounds exciting, and I'd like to understand its scope and expectations better. The idea of integrating oneDAL optimizations with scikit-learn and exposing a combined training and inference endpoint through OpenVINO Model Server is compelling, and I see real potential for making ML model serving more efficient and scalable.
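For context, here is a minimal sketch of how I currently picture the oneDAL side, assuming the project builds on the scikit-learn-intelex (`sklearnex`) patching mechanism; the estimator choice and dataset below are just placeholders:

```python
from sklearnex import patch_sklearn

# Swap in oneDAL-accelerated implementations for supported
# scikit-learn estimators; this must run before sklearn imports.
patch_sklearn()

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder dataset; a real training endpoint would presumably
# receive training data in the request payload instead.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# The standard scikit-learn API is unchanged, but this estimator now
# dispatches to oneDAL where an accelerated implementation exists.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```

If the project instead targets a different integration path than `patch_sklearn`, please correct my assumption.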
However, I’d love some clarification on a few aspects:
- Is the goal to support multiple classic ML algorithms out of the box, or is there a specific subset to focus on?
- Will the model training endpoint include hyperparameter tuning and dataset pre-processing, or should the client handle those before sending requests?
- For versioned model storage, is there an existing structure in mind (my current assumption is sketched after this list), or would defining an efficient model management scheme be part of the project scope?
- Is there flexibility to explore adaptive model selection or smart caching to further improve performance, or is the direction entirely up to the candidates?
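On the versioned-storage question, my working assumption is the directory convention OpenVINO Model Server already uses for its model repository (one numeric subdirectory per model version). Here is a hedged sketch of exporting a freshly trained scikit-learn model into that layout via skl2onnx; the repository path, model name, and the `LogisticRegression` training step are placeholders:

```python
from pathlib import Path

from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder training step standing in for the training endpoint.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X, y)

# Convert the fitted model to ONNX so OVMS can serve it.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 20]))]
)

# OVMS model repository convention: <repo>/<model_name>/<version>/...
# "models" and "classifier" below are placeholder names.
version_dir = Path("models") / "classifier" / "1"
version_dir.mkdir(parents=True, exist_ok=True)
(version_dir / "model.onnx").write_bytes(onnx_model.SerializeToString())
```

If the intended design differs (e.g., a database-backed registry rather than a filesystem layout), I'd be glad to hear how you envision it.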
I would love to hear your thoughts and look forward to collaborating.

Best regards,
Harshitha Manne