<div align="center">
  <a href="https://www.seldon.io/solutions/core/">
    <img alt="Core 2 Logo" src="/.images/core-2-logo.png" style="max-width: 100%; height: auto; width: 400px;">
  </a>
</div>

# Deploy Modular, Data-centric AI applications at scale

## 💡 About

Seldon Core 2 is an MLOps and LLMOps framework for deploying, managing, and scaling AI systems on Kubernetes - from single models to modular, data-centric applications. With Core 2 you can deploy a wide range of model types, on-prem or in any cloud, in a standardized way that is production-ready out of the box.
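
As an illustrative sketch of what "standardized" means in practice, a model deployment in Core 2 is a declarative Kubernetes custom resource. The resource names and storage URI below are placeholders, not real artifacts:

```yaml
# Hypothetical sketch: a scikit-learn model deployed as a Core 2 Model resource.
# The storageUri is a placeholder - point it at your own model artifact.
apiVersion: mlops.seldon.io/v1alpha1
kind: Model
metadata:
  name: iris
spec:
  storageUri: "gs://my-bucket/models/iris-sklearn"
  requirements:
  - sklearn
```

Once applied (e.g. with `kubectl apply -f model.yaml`), the model is scheduled onto a shared inference server that satisfies its requirements.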

<br/>
<div align="center">
  <a href="https://www.youtube.com/watch?v=ar5lSG_idh4">
    <img src="./docs-gb/images/Core-intro-thumbnail.png" alt="Introductory YouTube video" style="max-width: 100%; width: 500px; height: auto;">
  </a>
</div>
<br/>

To learn more, or to contact Seldon regarding commercial use:

👉 [Read the Documentation](https://docs.seldon.ai/seldon-core-2)
👉 [Contact Seldon](https://www.seldon.io/)

## 🧩 Features

 * **Pipelines**: Deploy composable AI applications, leveraging Kafka for real-time data streaming between components
 * **Autoscaling** for models and application components, based on native or custom logic
 * **Multi-Model Serving**: Save infrastructure costs by consolidating multiple models onto shared inference servers
 * **Overcommit**: Deploy more models than available memory allows, saving infrastructure costs for unused models
 * **Experiments**: Route data between candidate models or pipelines, with support for A/B tests and shadow deployments
 * **Custom Components**: Implement custom logic, drift & outlier detection, LLMs and more through plug-and-play integrations with the rest of Seldon's ecosystem of ML/AI products!
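
To illustrate the Pipelines feature above, here is a hedged sketch of two separately deployed Models chained into a Pipeline, with Kafka carrying data between steps. The step names are hypothetical, and the exact `inputs` wiring syntax should be checked against the documentation:

```yaml
# Hypothetical sketch: chain two Model resources into a Pipeline.
# "preprocessor" and "classifier" are assumed to be Models deployed separately.
apiVersion: mlops.seldon.io/v1alpha1
kind: Pipeline
metadata:
  name: preprocess-then-predict
spec:
  steps:
    - name: preprocessor          # receives the pipeline's input
    - name: classifier
      inputs:
      - preprocessor.outputs      # consumes the preprocessor's output
  output:
    steps:
    - classifier                  # the pipeline returns the classifier's output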

## 🔬 Research

These features are influenced by our position paper on the next generation of ML model serving frameworks, presented at the Challenges in Deploying and Monitoring ML Systems workshop at NeurIPS 2022:

👉 [Desiderata for next generation of ML model serving](http://arxiv.org/abs/2210.14665)

## 📜 License

Seldon Core 2 is distributed under the terms of the Business Source License. A complete version of the license is available in the [LICENSE file](LICENSE) in this repository. Any contribution made to this project will be licensed under the Business Source License.