Focoos AI provides an advanced development platform designed to empower developers and businesses with efficient, customizable computer vision solutions. Whether you're working with data from cloud infrastructures or deploying on edge devices, Focoos AI enables you to select, fine-tune, and deploy state-of-the-art models optimized for your unique needs.
## Overview
<!-- Unlock the full potential of Focoos AI with the Focoos library! 🚀 -->

The Focoos library is your gateway to cutting-edge computer vision models and development tools. With just a few lines of code, you can **fine-tune** pre-trained models tailored to your specific needs.
Whether you're working in the cloud or on edge devices, the Focoos library seamlessly integrates into your workflow, accelerating development and simplifying the implementation of computer vision solutions.
### Key Features 🔑
Ready to dive in? Get started with the setup in just a few simple steps!
## Installation
**Install** the Focoos library (for more options, see [setup](https://focoosai.github.io/focoos/setup)).
Using Focoos AI helps you save both time and money while delivering high-performance results:
- **4x Cheaper** 💰: Our models require up to 4x less computational power, letting you save on hardware or cloud bills while ensuring high-quality results.
- **Tons of CO2 saved annually per model** 🌱: Our models are energy-efficient, helping you reduce your carbon footprint by running on less powerful hardware than mainstream models require.
See the list of our models in the [models](https://focoosai.github.io/focoos/models/models) section.

**docs/concepts.md**

The `FocoosModel` class is the main interface for working with computer vision models.
### Loading Strategies
The primary method for loading models is `ModelManager.get()` (see [`ModelManager`](/focoos/api/model_manager/#focoos.model_manager.ModelManager)). It supports multiple loading strategies based on the input parameters. The return value is a [FocoosModel](#focoosmodel).
The ModelManager employs different loading strategies based on the input:
#### 4. From ModelInfo Object
The [`ModelInfo`](/focoos/api/ports/#focoos.ports.ModelInfo) class represents comprehensive model metadata including architecture specifications, training configuration, class definitions, and performance metrics. This method provides the most programmatic control over model instantiation.
**When to use**: Programmatically construct models, work with dynamic configurations, integrate with custom model management systems, or when you need fine-grained control over model instantiation.
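
Taken together, the loading strategies can be pictured as a simple dispatch on the input type. The sketch below is illustrative only: the `hub://` prefix, the strategy names, and the simplified `ModelInfo` stand-in are assumptions for this example, not the library's actual internals.

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass
class ModelInfo:
    # Toy stand-in for focoos.ports.ModelInfo, reduced to a single field.
    name: str


def pick_strategy(source) -> str:
    """Return which (hypothetical) loading strategy a ModelManager-style
    loader would choose for a given input."""
    if isinstance(source, ModelInfo):
        # A fully specified metadata object: build the model from it directly.
        return "model_info"
    if isinstance(source, str) and source.startswith("hub://"):
        # A reference to a model stored on the Focoos HUB.
        return "hub"
    if isinstance(source, (str, Path)) and Path(source).is_dir():
        # A local directory containing weights and model metadata.
        return "local_dir"
    # Otherwise, treat the string as a pretrained model name.
    return "pretrained_registry"
```

A name like `"fai-detr-m-coco"` would fall through to the pretrained registry, while `ModelInfo(...)` or a local checkpoint directory selects the more explicit strategies.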
### Inference

Performs end-to-end inference on input images with automatic preprocessing and postprocessing. Supported input formats include:
- PIL images (`PIL.Image.Image`)
- NumPy arrays (`numpy.ndarray`)
- PyTorch tensors (`torch.Tensor`)
The input images are automatically preprocessed to the correct size and format required by the model. After inference, the raw model outputs are postprocessed into a standardized [`FocoosDetections`](/focoos/api/ports/#focoos.ports.FocoosDetections) format that provides easy access to:
- Detected object classes and confidence scores
- Bounding box coordinates
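
To make the result structure concrete, here is a toy stand-in for a standardized detections container. The field names (`cls_id`, `conf`, `bbox`) are assumptions for illustration; the authoritative definitions live in `focoos.ports.FocoosDetections`.

```python
from dataclasses import dataclass, field


@dataclass
class Detection:
    # Illustrative fields only; see focoos.ports for the real schema.
    cls_id: int          # detected class index
    conf: float          # confidence score in [0, 1]
    bbox: list           # [x1, y1, x2, y2] box coordinates


@dataclass
class Detections:
    detections: list = field(default_factory=list)

    def above(self, threshold: float) -> list:
        """Keep only detections with confidence above a threshold."""
        return [d for d in self.detections if d.conf > threshold]


results = Detections([
    Detection(cls_id=0, conf=0.92, bbox=[10, 20, 50, 80]),
    Detection(cls_id=3, conf=0.41, bbox=[5, 5, 30, 30]),
])
```

Filtering by confidence (`results.above(0.5)`) then keeps only the first detection, which is the typical post-inference step before visualization.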
This provides a simple, unified interface for running inference regardless of the underlying model architecture.
- `inputs`: Input images in various supported formats (`PIL.Image.Image`, `numpy.ndarray`, `torch.Tensor`)
- `**kwargs`: Additional arguments passed to postprocessing
### Training
Trains the model on provided datasets. The training function accepts:
- `args`: Training configuration ([TrainerArgs](/focoos/api/ports/#focoos.ports.TrainerArgs)) specifying the main hyperparameters, including:
  - `run_name`: Name for the training run
  - `output_dir`: Name for the output folder
  - `num_gpus`: Number of GPUs to use (must be >= 1)
- `data_val`: Validation dataset (MapDataset)
- `hub`: Optional FocoosHUB instance for experiment tracking

The data can be obtained using the [AutoDataset](/focoos/api/auto_dataset/#focoos.data.auto_dataset.AutoDataset) helper.
After training completes, the model has updated weights and can be used for inference or export. The `output_dir` will also contain the model metadata (`model_info.json`) and the PyTorch weights (`model_final.pth`).
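
As a rough sketch, the configuration object above can be pictured as a dataclass carrying the listed hyperparameters. This is a minimal stand-in assuming only the fields named in this section; the real `focoos.ports.TrainerArgs` carries many more options.

```python
from dataclasses import dataclass


@dataclass
class TrainerArgs:
    # Toy stand-in for focoos.ports.TrainerArgs, limited to the fields
    # mentioned above; field defaults here are assumptions.
    run_name: str
    output_dir: str = "./experiments"
    num_gpus: int = 1

    def __post_init__(self) -> None:
        # The docs state that num_gpus must be >= 1.
        if self.num_gpus < 1:
            raise ValueError("num_gpus must be >= 1")


args = TrainerArgs(run_name="my-run", num_gpus=2)
```

Validating constraints (such as `num_gpus >= 1`) at construction time surfaces configuration errors before an expensive training run starts.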
### Export

Exports the model to different runtime formats for optimized inference. The main function arguments are:
- `runtime_type`: specifies the target runtime; must be one of the supported values (see [RuntimeType](/focoos/api/ports/#focoos.ports.RuntimeType))
- `out_dir`: the destination folder for the exported model
- `image_size`: the target image size, as an optional integer

## InferModel

The `InferModel` class represents an optimized model for inference, typically created by exporting a [FocoosModel](#focoosmodel).
### Initialization
InferModel instances are typically created through the `export()` method of a [FocoosModel](#focoosmodel), which handles the model optimization and conversion process. This method lets you specify the target runtime (see the available options in [`RuntimeType`](/focoos/api/ports/#focoos.ports.RuntimeType)) and the output directory for the exported model. `export()` returns an `InferModel` instance optimized for fast and efficient inference.
This provides a simple, unified interface for running inference regardless of the underlying model architecture.
- `inputs`: Input images in various supported formats (`PIL.Image.Image`, `numpy.ndarray`, `torch.Tensor`)
- `**kwargs`: Additional arguments passed to postprocessing

**docs/hub/overview.md**

# 🚀 Focoos HUB Overview
The Focoos HUB is a cloud-based platform that provides seamless integration between your local development environment and the Focoos AI ecosystem. It enables you to manage models and datasets, perform remote inference, and monitor training progress through a unified API.
[Open in Colab](https://colab.research.google.com/github/FocoosAI/focoos/blob/main/tutorials/hub.ipynb)

**docs/inference.md**

In this section, you'll run a model on Focoos' servers instead of on your machine.
```python
model_ref = "<YOUR-MODEL-REF>"
model = hub.get_remote_model(model_ref)
```
Using the model is as simple as it gets: just call it with an image.
In the following cells, we will export the previous model for one of these runtimes and run inference with it.
### TorchScript
We already provide multiple inference runtimes, which you can see in the [`RuntimeType`](/focoos/api/ports/#focoos.ports.RuntimeType) enum. Let's select TorchScript as an example.
```python
from focoos.ports import RuntimeType
```
Let's look at its latency, which should be substantially lower than that of the pure PyTorch model.

You can use different runtimes that may better fit your device, such as TensorRT. See the list of available runtimes at [`RuntimeType`](/focoos/api/ports/#focoos.ports.RuntimeType). Note that you need to install the corresponding packages to use the ONNX and TensorRT runtimes.