**`docs/inference.md`** (+1 −1)
# Inference with Focoos Models
Focoos provides a powerful inference framework that makes it easy to deploy and use state-of-the-art computer vision models in production. Whether you're working on object detection, image classification, or other vision tasks, Focoos offers flexible deployment options that adapt to your specific needs.
[Open in Colab](https://colab.research.google.com/github/FocoosAI/focoos/blob/main/tutorials/inference.ipynb)
**`docs/models/fai_cls.md`** (+56 −18)
## Overview
Fai-cls is a versatile image classification model developed by FocoosAI that can utilize any backbone architecture for feature extraction. This model is designed for both single-label and multi-label image classification tasks, offering flexibility in architecture choices and training configurations.
The model employs a simple yet effective approach: a configurable backbone extracts features from input images, followed by a classification head that produces class predictions. This design enables easy adaptation to different domains and datasets while maintaining high performance and computational efficiency.
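The backbone-plus-head design described above can be sketched in plain Python. This is an illustrative toy only, not the actual FAI-CLS implementation: the function names, feature dimension, and weights are all invented for the example.

```python
# Illustrative sketch of the backbone + classification head pattern
# (NOT the actual FAI-CLS code; all names and dimensions are invented).
def backbone(image, feature_dim=4):
    # Stand-in feature extractor; real FAI-CLS models use a configurable
    # CNN or transformer backbone here.
    flat = [px for row in image for px in row]
    return [sum(flat[i::feature_dim]) for i in range(feature_dim)]

def classification_head(features, weights, biases):
    # One dot product per class: logits[c] = features . weights[c] + biases[c]
    return [sum(f * w for f, w in zip(features, ws)) + b
            for ws, b in zip(weights, biases)]

image = [[0.1, 0.2], [0.3, 0.4]]  # toy 2x2 "image"
features = backbone(image)        # feature extraction
logits = classification_head(
    features,
    weights=[[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]],  # 2 toy classes
    biases=[0.0, 0.5],
)
predicted = max(range(len(logits)), key=lambda c: logits[c])
```

Swapping the backbone changes only the feature extractor; the head and the rest of the pipeline stay the same, which is what makes the architecture choice configurable.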
## Available Models
Currently, you can find three fai-cls models on the Focoos Hub, all trained on the COCO dataset for image classification.
| Model Name | Architecture | Domain (Classes) | Dataset | Metric | FPS Nvidia-T4 |
This flexible architecture makes FAI-CLS suitable for a wide range of image classification applications, from simple binary classification to complex multi-label scenarios, while maintaining computational efficiency and ease of use.
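The single-label and multi-label cases mentioned above differ mainly in the decision rule applied to the class logits. A minimal sketch of the two rules (illustrative only, not the Focoos implementation; the 0.5 threshold is an assumed value):

```python
import math

def softmax(logits):
    # Normalized exponentials: probabilities over mutually exclusive classes.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    # Independent per-class probability for the multi-label case.
    return 1.0 / (1.0 + math.exp(-x))

logits = [2.0, 0.5, -1.0]

# Single-label: softmax over all classes, predict the argmax.
probs = softmax(logits)
single_label = max(range(len(probs)), key=lambda c: probs[c])

# Multi-label: independent sigmoid per class, keep every class above a threshold.
threshold = 0.5  # assumption for illustration
multi_label = [c for c, x in enumerate(logits) if sigmoid(x) >= threshold]
```

With these logits, the single-label rule picks only the top class, while the multi-label rule can keep several classes at once, which is the essential difference between the two task settings.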
## Example Usage
### Quick Start with Pre-trained Model
```python
from focoos import ASSETS_DIR, ModelManager
from PIL import Image

# Load a pre-trained model
model = ModelManager.get("fai-cls-m-coco")

image = ASSETS_DIR / "federer.jpg"
result = model.infer(image, threshold=0.5, annotate=True)

# Display the annotated result
Image.fromarray(result.image)
```
Quantization of Focoos models is currently a work in progress. It is tested and working for **classification models**.
## Example
```python
from focoos import ModelManager, ASSETS_DIR, MODELS_DIR, RuntimeType
from focoos.infer.quantizer import OnnxQuantizer, QuantizationCfg
from PIL import Image
import os

image_size = 224  # 224px input size
model_name = "fai-cls-m-coco"  # you can also take a model from the Focoos Hub with "hub://YOUR_MODEL_REF"
im = ASSETS_DIR / "federer.jpg"

model = ModelManager.get(model_name)

exported_model = model.export(
    runtime_type=RuntimeType.ONNX_CPU,  # optimized for edge or CPU
    image_size=image_size,
    dynamic_axes=False,  # quantization needs static axes!
    simplify_onnx=False,  # simplify and optimize the ONNX model graph
    onnx_opset=18,
    out_dir=os.path.join(MODELS_DIR, "my_edge_model"),  # save to the models dir
)

# Benchmark the ONNX model
exported_model.benchmark(iterations=100)

# Test the ONNX model
result = exported_model.infer(im, annotate=True)
Image.fromarray(result.image)

quantization_cfg = QuantizationCfg(
    size=image_size,  # input size: must be the same as the exported model
    calibration_images_folder=str(ASSETS_DIR),  # calibration images folder: it is strongly
    # recommended to use the validation split of the dataset the model was trained on.
    # Here, for example, we use the assets folder.
    format="QDQ",  # QO (QOperator): every quantized operator has its own ONNX definition, like QLinearConv, MatMulInteger, etc.
    # QDQ (Quantize-DeQuantize): inserts DeQuantizeLinear(QuantizeLinear(tensor)) between the
    # original operators to simulate the quantization and dequantization process.
    per_channel=True,  # per-channel quantization: each channel has its own scale/zero-point → more accurate,
    # especially for convolutions, at the cost of extra memory and computation.
    normalize_images=True,  # normalize images during preprocessing: some models apply normalization outside the model's forward pass
)

quantizer = OnnxQuantizer(
    input_model_path=exported_model.model_path,
    cfg=quantization_cfg,
)
model_path = quantizer.quantize(
    benchmark=True  # benchmark both the fp32 and int8 models
)
```
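For context on what the scale/zero-point machinery behind `per_channel` means: affine INT8 quantization maps each float value to an 8-bit integer via a scale and a zero point, and with `per_channel=True` each channel gets its own pair. A worked sketch of the quantize/dequantize round trip (illustrative math only, not the Focoos or ONNX Runtime implementation):

```python
def quantize(values, scale, zero_point):
    # q = clamp(round(x / scale) + zero_point, -128, 127)  (affine int8 quantization)
    return [max(-128, min(127, round(x / scale) + zero_point)) for x in values]

def dequantize(qvalues, scale, zero_point):
    # x_hat = (q - zero_point) * scale
    return [(q - zero_point) * scale for q in qvalues]

channel = [0.25, -0.5, 1.0, 0.1]             # toy weights for one channel
scale = max(abs(v) for v in channel) / 127   # per-channel scale, symmetric range
zero_point = 0                               # symmetric quantization: zero point is 0

q = quantize(channel, scale, zero_point)
recovered = dequantize(q, scale, zero_point)
max_err = max(abs(a - b) for a, b in zip(channel, recovered))
```

The round-trip error stays below half a quantization step per value, which is why choosing the scale from representative calibration data (the validation split, as recommended above) matters: a poorly chosen scale inflates that step size.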