# Welcome to Focoos AI

Focoos AI provides an advanced development platform designed to empower developers and businesses with efficient, customizable computer vision solutions. Whether you're working with data from cloud infrastructures or deploying on edge devices, Focoos AI enables you to select, fine-tune, and deploy state-of-the-art models optimized for your unique needs.

## SDK Overview

<!-- Unlock the full potential of Focoos AI with the Focoos Python SDK! 🚀 -->
This powerful SDK gives you seamless access to our cutting-edge computer vision models and tools, allowing you to interact effortlessly with the Focoos API. With just a few lines of code, you can **select, customize, test, and deploy** pre-trained models tailored to your specific needs.

Whether you're deploying in the cloud or on edge devices, the Focoos Python SDK integrates smoothly into your workflow, speeding up your development process.

### Key Features 🔑

1. **Select Ready-to-use Models** 🧩
   Get started quickly by selecting one of our efficient, [pre-trained models](https://focoosai.github.io/focoos/models/) that best suits your data and application needs.

2. **Personalize Your Model** ✨
   Customize the selected model for higher accuracy through [fine-tuning](https://focoosai.github.io/focoos/how_to/cloud_training/). Adapt the model to your specific use case by training it on your own dataset.

3. **Test and Validate** 🧪
   Upload a data sample to [test the model](https://focoosai.github.io/focoos/how_to/inference/)'s accuracy and efficiency. Iterate until the model performs to your expectations.

4. **Remote and Local Inference** 🖥️
   Deploy the model on your own devices or use it on our servers. Download the model to run it locally, without sending any data over the network, ensuring full privacy.

### Quickstart 🚀
Ready to dive in? Get started with the setup in just a few simple steps!

**Install** the Focoos Python SDK (for more options, see [setup](https://focoosai.github.io/focoos/setup))

**uv**
```bash linenums="0"
uv pip install 'focoos @ git+https://github.com/FocoosAI/focoos.git'
```

**pip**
```bash linenums="0"
pip install 'focoos @ git+https://github.com/FocoosAI/focoos.git'
```

**conda**
```bash linenums="0"
conda install pip # if you don't have it already
pip install 'focoos @ git+https://github.com/FocoosAI/focoos.git'
```

🚀 [Directly use](https://focoosai.github.io/focoos/how_to/inference/) our **Efficient Models**, optimized for different data, applications, and hardware.

```python
from focoos import Focoos

# Initialize the Focoos client with your API key
focoos = Focoos(api_key="<YOUR-API-KEY>")

# Get the remote model (fai-rtdetr-m-obj365) from the Focoos API
model = focoos.get_remote_model("fai-rtdetr-m-obj365")

# Run inference on an image
detections, _ = model.infer("./image.jpg", threshold=0.4)

# Output the detections
print(detections)
```
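
The returned `detections` can then be post-processed in plain Python. As a minimal sketch (the flat dict shape below is an assumption for illustration only; the SDK returns its own structured detection objects), filtering predictions by confidence could look like:

```python
# Hypothetical detection records: the real SDK returns structured
# objects, so this flat dict shape is only an illustration.
detections = [
    {"label": "person", "conf": 0.91},
    {"label": "car", "conf": 0.35},
    {"label": "dog", "conf": 0.62},
]

def keep_confident(dets, threshold=0.4):
    """Keep only predictions at or above the confidence threshold."""
    return [d for d in dets if d["conf"] >= threshold]

confident = keep_confident(detections)
print([d["label"] for d in confident])  # the car at 0.35 is dropped
```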

⚙️ **Customize** the models to your specific needs by [fine-tuning](https://focoosai.github.io/focoos/how_to/cloud_training/) them on your own dataset.

```python
from focoos import Focoos
from focoos.ports import Hyperparameters

# Initialize the client and derive a new model from a pre-trained one
focoos = Focoos(api_key="<YOUR-API-KEY>")
model = focoos.new_model(
    name="awesome",
    focoos_model="fai-rtdetr-m-obj365",
    description="An awesome model",
)

# Launch a cloud training job on your dataset
res = model.train(
    dataset_ref="<YOUR-DATASET-ID>",
    hyperparameters=Hyperparameters(
        learning_rate=0.0001,
        batch_size=16,
        max_iters=1500,
    ),
)
```
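
Note that training length is expressed in iterations, not epochs: with `batch_size=16` and `max_iters=1500`, the job processes 1500 × 16 = 24,000 samples in total. A tiny helper (illustrative only, not part of the Focoos SDK) converts an iteration budget into approximate epochs for a given dataset size:

```python
def approx_epochs(max_iters: int, batch_size: int, dataset_size: int) -> float:
    """Approximate full passes over the dataset for an
    iteration-based training schedule (illustrative helper only)."""
    return max_iters * batch_size / dataset_size

# With the hyperparameters above and, say, a 6,000-image dataset:
print(approx_epochs(1500, 16, 6000))  # -> 4.0
```

So the 1,500-iteration schedule above would traverse a 6,000-image dataset about four times.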

See more examples in the [how to](https://focoosai.github.io/focoos/how_to) section.

### Our Models 🧠
Focoos AI offers top-performing models for object detection, semantic segmentation, and instance segmentation, with more coming soon.

Using Focoos AI helps you save both time and money while delivering high-performance AI models 💪:

- **10x Faster** ⏳: Our models can process images up to ten times faster than traditional methods.
- **4x Cheaper** 💰: Our models require up to 4x less computational power, letting you save on hardware or cloud bills while ensuring high-quality results.
- **Tons of CO2 saved annually per model** 🌱: Our models are energy-efficient, helping you reduce your carbon footprint by running on less powerful hardware than mainstream models require.

These are not empty promises, but the result of years of research and development by our team 🔬:
<div style="display: flex; justify-content: space-between; margin: 20px 0;">
  <div style="flex: 1; margin-right: 10px;">
    <img src="https://raw.githubusercontent.com/FocoosAI/focoos/refs/heads/main/docs/models/fai-ade.png" alt="ADE-20k Semantic Segmentation" style="width: 100%;">
    <figcaption style="text-align: center;">ADE-20k <a href="https://focoosai.github.io/focoos/models/#semantic-segmentation">Semantic Segmentation</a> Results</figcaption>
  </div>
  <div style="flex: 1; margin-left: 10px;">
    <img src="https://raw.githubusercontent.com/FocoosAI/focoos/refs/heads/main/docs/models/fai-coco.png" alt="COCO Object Detection" style="width: 100%;">
    <figcaption style="text-align: center;">COCO <a href="https://focoosai.github.io/focoos/models/#object-detection">Object Detection</a> Results</figcaption>
  </div>
</div>

See the list of our models in the [models](https://focoosai.github.io/focoos/models/) section.

---
### Start now!
By choosing Focoos AI, you can save time, reduce costs, and achieve superior model performance, all while ensuring the privacy and efficiency of your deployments.
[Reach out to us](mailto:info@focoos.ai) to request your free API key and power your computer vision projects. 🚀