modes/export/ #7933
Replies: 90 comments 241 replies
- Where can we find working examples of a tf.js exported model?
- How can I use an exported .engine file to run inference on a directory of images?
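For the directory-inference question above, a minimal sketch (the engine path and image folder are hypothetical; the `ultralytics` call is shown in comments because it assumes the package and an exported `.engine` file are present):

```python
from pathlib import Path

# Hypothetical paths -- adjust to your setup.
ENGINE_PATH = "runs/detect/train/weights/best.engine"
IMAGE_DIR = "images"

def list_images(directory, exts=(".jpg", ".jpeg", ".png", ".bmp")):
    """Collect image files in a directory (non-recursive)."""
    return sorted(p for p in Path(directory).iterdir()
                  if p.suffix.lower() in exts)

# With the ultralytics package installed, an exported engine can be run
# on a whole directory at once, since predict() accepts a folder source:
#   from ultralytics import YOLO
#   model = YOLO(ENGINE_PATH)
#   results = model.predict(source=IMAGE_DIR)
# or you can loop over list_images(IMAGE_DIR) and predict per image.
```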
- I trained a custom model using yolov8n.pt as the backbone, and I want to register the model in MLflow in the .engine format. Is that possible directly, without the export step? Has anyone dealt with something similar? Thanks for your help!
- Hi, I really appreciate the awesome work within Ultralytics. I have a simple question: what is the difference between
- Hello @pderrenger, can you please help me with how to use the PaddlePaddle format to extract text from images? Your response is very important to me; I'm waiting for your reply.
- My code:

  ```python
  from ultralytics import YOLO

  model = YOLO('yolov8n_web_model/yolov8n.pt')  # load an official model
  model = YOLO('/path_to_model/best.pt')
  ```

  I got this error:

  ```
  ERROR: The trace log is below. What you should do instead is wrap
  ERROR: input_onnx_file_path: /home/ubuntu/Python/runs/detect/train155/weights/best.onnx
  TensorFlow SavedModel: export failure ❌ 7.4s: SavedModel file does not exist at: /home/ubuntu/Python/runs/detect/train155/weights/best_saved_model/{saved_model.pbtxt|saved_model.pb}
  ```

  What is wrong, and what do I need to do to fix it? Thanks a lot.
- Hello! The error I get is: `TypeError: Model.export() takes 1 positional argument but 2 were given`.
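That TypeError typically means the format was passed positionally, e.g. `model.export('onnx')`, while the export method only accepts keyword arguments, e.g. `model.export(format='onnx')`. A tiny stand-in class (not the real Ultralytics API, just an illustration of the mechanism) reproduces the same message:

```python
class DemoModel:
    """Stand-in whose export(), like a keyword-only API, rejects
    positional arguments (it takes only self plus **kwargs)."""

    def export(self, **kwargs):
        return kwargs.get("format", "torchscript")

model = DemoModel()

# Passing the format positionally raises:
#   TypeError: export() takes 1 positional argument but 2 were given
try:
    model.export("onnx")
except TypeError as e:
    print("raised:", e)

print(model.export(format="onnx"))  # the keyword form works
```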
- Are there any examples of getting the output of a pose estimation model in C++ using a TorchScript file? I'm getting an output of shape (1, 56, 8400) for an input of size (1, 3, 640, 640) with two people in the sample picture. How should I interpret/post-process this output?
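Although the question above is about C++, the decoding logic is easier to sketch in Python/NumPy. Assuming the standard YOLOv8-pose layout (56 = 4 box values cx, cy, w, h + 1 confidence + 17 keypoints × (x, y, visibility), over 8400 anchor columns), the post-processing before NMS might look like:

```python
import numpy as np

def decode_pose(output, conf_thres=0.5):
    """Decode a (1, 56, 8400) YOLOv8-pose output.

    Layout assumption: rows 0-3 = cx, cy, w, h; row 4 = confidence;
    rows 5-55 = 17 keypoints as (x, y, visibility) triplets.
    Returns (boxes_xyxy, scores, keypoints) for columns above threshold.
    NMS still has to be applied afterwards.
    """
    pred = output[0]                       # (56, 8400)
    scores = pred[4]                       # (8400,)
    keep = scores > conf_thres
    cx, cy, w, h = pred[0, keep], pred[1, keep], pred[2, keep], pred[3, keep]
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    kpts = pred[5:, keep].T.reshape(-1, 17, 3)   # (n, 17, [x, y, vis])
    return boxes, scores[keep], kpts

# Synthetic check: one confident column among 8400.
out = np.zeros((1, 56, 8400), dtype=np.float32)
out[0, :4, 7] = [320, 320, 100, 200]   # cx, cy, w, h
out[0, 4, 7] = 0.9                     # confidence
boxes, scores, kpts = decode_pose(out)
print(boxes, scores, kpts.shape)
```

The same slicing translates line-for-line to C++ over the raw tensor data pointer.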
- I trained a YOLOv5 detection model a little while ago and successfully converted it to TensorFlow.js. That tfjs model works as expected in code only slightly modified from the example available at https://github.com/zldrobit/tfjs-yolov5-example. My version of the relevant section:

  I have now trained a YOLOv8 detection model on very similar data. The comments in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py#L45-L49 However, that does not seem to be the case. The v5 model output is a 4-length array of tensors (which is why the destructuring assignment works), but the v8 model output is a single tensor of shape [1, X, 8400], so the example code errors with a complaint that the model result is non-iterable when attempting to destructure.

  From what I understand, [1, X, 8400] is the expected output shape of the v8 model. Is further processing of the v8 model required, or did I do something wrong during the pt -> tfjs export?
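On the [1, X, 8400] shape in the comment above: YOLOv8 exports emit a single raw prediction tensor where X = 4 box values + per-class scores, with no separate objectness row as in YOLOv5, so the v5 destructuring pattern no longer applies and decoding plus NMS must be done by hand. A reference sketch of that decoding in Python/NumPy (layout assumed from the shapes quoted; the same arithmetic ports to tfjs):

```python
import numpy as np

def decode_v8_detect(output, conf_thres=0.25):
    """Decode a (1, 4 + nc, 8400) YOLOv8 detect output.

    Rows 0-3 are cx, cy, w, h; the remaining rows are per-class scores
    (no separate objectness row, unlike YOLOv5). Returns
    (boxes_xyxy, scores, class_ids); NMS is still required afterwards.
    """
    pred = output[0]                         # (4 + nc, 8400)
    cls_scores = pred[4:]                    # (nc, 8400)
    class_ids = cls_scores.argmax(axis=0)
    scores = cls_scores.max(axis=0)
    keep = scores > conf_thres
    cx, cy, w, h = pred[:4, keep]
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    return boxes, scores[keep], class_ids[keep]
```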
- I was wondering if anyone could help me with this code: I exported my custom-trained yolov8n.pt model to .onnx with `model.export(format='onnx', int8=True, dynamic=True)`, but now my code is not working. I am having trouble using the outputs after running inference.

  My code:

  ```python
  def load_image(image_path):

  def draw_bounding_boxes(image, detections, confidence_threshold=0.5):

  def main(model_path, image_path):

  if __name__ == "__main__":
  ```

  Error:
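For the comment above, once the ONNX outputs are decoded into boxes and scores, a plain NumPy non-maximum suppression (a generic greedy sketch, not the Ultralytics implementation) filters the overlapping detections:

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS on xyxy boxes; returns kept indices, best score first."""
    order = np.argsort(scores)[::-1]   # descending by score
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of the current best box with all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thres]   # drop boxes overlapping too much
    return keep
```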
- Is "batch_size" no longer in the arguments, as it was in previous versions?
- I converted the model I trained on custom data to TFLite format. Before converting, I set the int8 argument to true, but when I examined the TFLite file on the Netron website, I saw that the input is still float32. Is this normal, or is it a bug? Also, thank you very much for answering every question without getting bored.
- Running:

  ```
  !yolo export model=/content/drive/MyDrive/best-1-1.pt format=tflite
  ```

  fails with: `export failure ❌ 33.0s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined`
- Hi, I have tried all the TFLite export formats to convert best.pt to .tflite, but none is working. I have checked my runtime and all the latest imports (`pip install -U ultralytics`), and I have also tried the code you gave to someone in the comments, but the issue is not resolving:

  ```python
  # Step 1: Export to TensorFlow SavedModel
  !yolo export model='/content/drive/MyDrive/best-1-1.pt' format=saved_model

  # Step 2: Convert the exported SavedModel to TensorFlow Lite
  import tensorflow as tf

  # Save the TFLite model
  with open('/content/drive/MyDrive/yolov8_model.tflite', 'wb') as f:
  ```

  But the same error comes back.
- Can we export a SAM/MobileSAM model to TensorRT or ONNX?
- Glenn, thank you for the reply. The process of elimination continues. Kevin.

  Quoted reply from Glenn Jocher (December 4, 2024):

  > @KPLogan 1) YOLO models are automatically set to evaluation mode during the export process, so calling model.eval() before export is unnecessary. 2) A dummy input is internally managed by the library during ONNX export, so you don't need to provide it explicitly.
  > If the inference outputs remain unexpected, ensure you've used the correct preprocessing and postprocessing steps in your C#/.NET environment. If the issue persists, you can revisit the ONNX documentation (https://docs.ultralytics.com/integrations/onnx/) or check your ONNX Runtime pipeline. Let us know if you encounter further issues!
- Hi, I loaded a model with `model = YOLO("yolov8m-pose.pt")` and even changed some arguments, but without success. How should I do that?
- Hey there. I'm exporting a YOLOv11 model, trained on a custom dataset, to TensorFlow Lite with this code: I got a warning when exporting the model from .pt to .tflite. Is this okay?
- Issue with incorrect predictions from a quantized YOLOv8m-obb model (TFLite) in the TensorFlow framework.

  I exported my YOLOv8m-obb model to TFLite format with INT8 quantization enabled, using an image size of 640x640 and a data.yaml for my dataset. When I use the quantized model for inference with the Ultralytics framework (Oriented Bounding Boxes), the predictions are correct. However, when I use the same model in the TensorFlow framework, I encounter several issues with the output:

  I suspect there might be an issue with how the quantization parameters (scale and zero-point) are being applied in TensorFlow, or possibly with how I'm handling the model's output or how I exported the model. I would appreciate guidance on how to correctly handle the quantized model in TensorFlow and resolve the incorrect predictions.
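On the scale/zero-point suspicion above: an INT8 tensor value q maps to a real value via real = scale * (q - zero_point), with scale and zero_point read from the tensor's quantization parameters (in TFLite, via the interpreter's input/output details). A NumPy sketch of both directions, with made-up parameter values:

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """INT8 -> float: real = scale * (q - zero_point)."""
    return scale * (q.astype(np.float32) - zero_point)

def quantize(x, scale, zero_point):
    """float -> INT8, rounding and clamping to the int8 range."""
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

# Example parameters (made up -- read the real ones from
# interpreter.get_output_details()[i]["quantization"] in TFLite).
scale, zero_point = 0.02, -5
x = np.array([0.0, 0.5, 1.0], dtype=np.float32)
q = quantize(x, scale, zero_point)
print(q, dequantize(q, scale, zero_point))
```

If predictions are correct in Ultralytics but wrong in raw TensorFlow, applying (or skipping) this mapping on the output tensor is the first thing to check.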
- I have a .pth model (not sure exactly what it is). How can I convert it to a .pt model?
- What should the 'opset' value be for a YOLOv8 export to ONNX 1.20.1, per the TensorRT 10.7 support matrix? Thanks.
- Is exporting YOLOv8 models with Oriented Bounding Boxes (OBB) to TensorFlow Lite (TFLite) supported or unsupported?
- Hi, I'm trying to use a YOLO model on the web by exporting it to the TensorFlow.js format.
- When I use nms=True while exporting the yolo11n-seg model to TorchScript, I get 38 features for each box. The first 36 are the 4 box coordinates and 32 mask coefficients; what are the last 2 features?
- Dear all, I would like to export my YOLOv11 .pt model to TFLite:

  ```
  yolo export model=yolo11n.pt format=tflite
  ```

  I got the following error (related to "onnx.serialization"):

  ```
  PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
  TensorFlow SavedModel: starting export with tensorflow 2.15.0...
  ```

  System:

  (Note: export to other formats, e.g. NCNN, OpenVINO, ONNX, works fine.) If you have encountered this problem, what is the solution?
- Hi, I trained a model using the HUB, made sure it worked in Python, and exported it from the HUB in various formats, including CoreML. I can also load it in the app, but nothing happens.
- Hi, I'm trying to convert yolo_v12_nano from .pt to .tflite, but I'm running into the error shown below. To check whether the issue is specific to YOLOv12 being relatively new, I also tried the same process with yolo_v10, but encountered the same error:

  ```
  ERROR:root:Internal Python error in the inspect module.
  ERROR: The trace log is below.
  Dimensions must be equal, but are 32 and 16 for '{{node tf.math.add_61/Add}} = AddV2[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [1,116,232,32], [1,116,232,16].
  Call arguments received by layer "tf.math.add_61" (type TFOpLambda):
  ERROR: input_onnx_file_path: [redacted]
  ```

  Has anyone run into this before or found a workaround? Thank you!
- Does using dynamic=True on Nvidia's Jetson development board not include the batch dimension? It seems that only w and h become dynamic dimensions, while batch is fixed at 1.
- Title: int8 argument not supported when exporting to ONNX in Ultralytics 8.3.162

  Description: I encountered an error when trying to export a trained YOLOv8 model to ONNX format with the following code:

  ```python
  model = YOLO('/kb210/wrs/tennis-analysis/tennis_analysis/training/runs/detect/train27/weights/best.pt')
  ```

  ```
  Traceback (most recent call last):
  ```

  Question: Is there any plan to support INT8 during ONNX export? If not, what would be the recommended way to apply INT8 quantization after exporting to ONNX?

  Environment: Ultralytics version 8.3.162
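One common post-export route for the question above is ONNX Runtime's quantization tooling. A sketch of dynamic weight quantization (file names are hypothetical, and note this is weight-only dynamic quantization, not the calibrated INT8 path that `int8=True` enables for other export formats, so accuracy/latency trade-offs differ):

```python
# Post-export INT8 quantization of an ONNX model using ONNX Runtime's
# quantization utilities. Assumes `onnxruntime` is installed and that
# best.onnx is the FP32 model previously exported by Ultralytics.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="best.onnx",        # hypothetical path to the FP32 model
    model_output="best.int8.onnx",  # quantized copy written alongside it
    weight_type=QuantType.QInt8,    # quantize weights to signed INT8
)
```

For activation quantization as well, ONNX Runtime's static quantization with a calibration data reader is the heavier alternative.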
- I have a trained model for instance segmentation and I am trying to export it to run on a DPU. Here is the code:

  ```python
  from ultralytics import YOLO

  best_weights_path = "../weights/yolo11_s/yolo11_s.pt"
  model = YOLO(best_weights_path)

  # Export for DPU (INT8)
  model.export(
  ```

  I get the following error:

  ```
  UnpicklingError                           Traceback (most recent call last)
  File ~/testing_env/lib/python3.10/site-packages/ultralytics/models/yolo/model.py:23, in YOLO.__init__(self, model, task, verbose)
  File ~/testing_env/lib/python3.10/site-packages/ultralytics/engine/model.py:151, in Model.__init__(self, model, task, verbose)
  File ~/testing_env/lib/python3.10/site-packages/ultralytics/engine/model.py:240, in Model._load(self, weights, task)
  File ~/testing_env/lib/python3.10/site-packages/ultralytics/nn/tasks.py:806, in attempt_load_one_weight(weight, device, inplace, fuse)
  File ~/testing_env/lib/python3.10/site-packages/ultralytics/nn/tasks.py:732, in torch_safe_load(weight)
  File ~/testing_env/lib/python3.10/site-packages/torch/serialization.py:1524, in load(f, map_location, pickle_module, weights_only, mmap, **pickle_load_args)
  UnpicklingError: Weights only load failed. This file can still be loaded; to do so you have two options. Do those steps only if you trust the source of the checkpoint.
  ```

modes/export/
Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML, and more for deployment. Explore now!
https://docs.ultralytics.com/modes/export/