index #8012
Replies: 106 comments 194 replies
-
Hey everyone, Glenn here! 🚀 Dive into our comprehensive guide on YOLOv8, the pinnacle of real-time object detection and image segmentation technology. Whether you're just starting out or you're deep into the machine learning world, this page is your go-to resource for installing, predicting, and training with YOLOv8. Got questions or insights? This is the perfect spot to share your thoughts and learn from others in the community. Let's make the most of YOLOv8 together! 💡👥
-
Hello, I am super new to computer vision, and I want to know if there is a way to isolate the detected text regions (as Roboflow does, where every detected text area is split out) so I can feed them into my text extraction model. Thank you.
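For cropping each detected region out of the source image (similar to Roboflow's split view), here is a minimal sketch assuming the ultralytics results API; `crop_boxes` is a hypothetical helper:

```python
import numpy as np

def crop_boxes(image, boxes_xyxy):
    """Return one crop per (x1, y1, x2, y2) box, clipped to the image bounds."""
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes_xyxy:
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        crops.append(image[y1:y2, x1:x2])
    return crops

# With ultralytics (assumed installed), the boxes come straight from a result:
# from ultralytics import YOLO
# result = YOLO("yolov8n.pt").predict("page.jpg")[0]
# crops = crop_boxes(result.orig_img, result.boxes.xyxy.tolist())
```

Each crop can then be passed to the text extraction model individually.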
-
If I pass the model an image, how can I extract the class ID and class name from the result?
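A short sketch of pulling class ids and names out of a result, assuming the ultralytics API; `ids_and_names` is a hypothetical helper:

```python
def ids_and_names(cls_ids, names):
    """Pair each integer class id with its name from the model's names dict."""
    return [(int(c), names[int(c)]) for c in cls_ids]

# With ultralytics (assumed installed):
# from ultralytics import YOLO
# model = YOLO("yolov8n.pt")
# result = model.predict("image.jpg")[0]
# print(ids_and_names(result.boxes.cls.tolist(), model.names))
```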
-
Hello,
-
I tried using result.show() but it says it has no attribute 'show', and using your code, it says the list has no attribute 'pred'.
On Sun, Mar 10, 2024, 3:34 AM Glenn Jocher ***@***.***> wrote:
@Zulkazeem <https://github.com/Zulkazeem> hey there! 👋 It looks like
your code is almost there, but if you're not getting any detections, there
might be a few things to check:
1. *Model Confidence:* Ensure your model's confidence threshold isn't set too high, which might prevent detections. Try lowering the conf argument in your predict call.
2. *Image Path:* Double-check the image path to ensure it's correct and the image is accessible.
3. *Model Compatibility:* Make sure the model you're using is appropriate for the task. If it's trained on a very different dataset or for a different task, it might not perform well on your images.
4. *Looping Through Results:* The way you're iterating through pred and then r seems a bit off. After calling predict, access the detections through result.boxes, like so:

```python
results = pred_model.predict(source=img_path)
for result in results:
    for box in result.boxes:
        # each detection carries box.xyxy, box.conf, and box.cls
        ...
```

5. *Visualization:* Before trying to save or further process detections, simply try visualizing them with result.show() to ensure detections are being made.
If you've checked these and still face issues, it might be helpful to
share more details or error messages you're encountering. Keep
experimenting, and don't hesitate to reach out for more help! 🚀
-
Hi, I'm new to running models myself, and the last time I did any image training was about 15 years ago, though I am a Python veteran. I'd like to try running the building footprint models. Do you have a video series that can take me through setting up YOLOv8 and then running the model to extract footprints?
-
Hi, Can someone please help? Thanks in advance
-
Hi,
-
Hi, I'm a student and I'm doing this for my undergraduate thesis. I'm implementing a YOLOv8 model in an Android gallery's search mechanism; its purpose is to scan media files and return images whose bounding-box labels match the search query. I can make it work with yolov5s.torchscript.ptl using org.pytorch:pytorch_android_lite:1.10.0 and org.pytorch:pytorch_android_torchvision_lite:1.10.0, but it won't work with yolov8s.torchscript. The yolov5s.torchscript.ptl setup has a function to load the model and the classes.txt; does the YOLOv8 model not need that?
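In case it helps: YOLOv8 TorchScript files are produced by the ultralytics export call, and in Python the class names live on the model object rather than in a bundled classes.txt. A sketch, where `write_classes_txt` is a hypothetical helper for Android loaders that still expect that file:

```python
def write_classes_txt(names, path):
    """Write one class name per line in index order, as many Android demos expect."""
    with open(path, "w") as f:
        for i in sorted(names):
            f.write(f"{names[i]}\n")

# Exporting the TorchScript model itself (assumed ultralytics API):
# from ultralytics import YOLO
# model = YOLO("yolov8s.pt")
# model.export(format="torchscript")             # writes yolov8s.torchscript
# write_classes_txt(model.names, "classes.txt")  # only if your loader expects it
```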
-
When I am training a YOLOv8 model, how can I store the current epoch number in a variable that can be used wherever I want?
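One way to do this is with a training callback, assuming the ultralytics callback API; the `state` dict and `remember_epoch` helper are illustrative:

```python
state = {"epoch": 0}

def remember_epoch(trainer):
    """Callback: copy the trainer's current epoch into a shared variable."""
    state["epoch"] = trainer.epoch

# Registering it (assumed ultralytics callback API):
# from ultralytics import YOLO
# model = YOLO("yolov8n.pt")
# model.add_callback("on_train_epoch_end", remember_epoch)
# model.train(data="coco8.yaml", epochs=3)
# print(state["epoch"])  # last completed epoch index
```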
-
Hey Glenn, I just want to obtain the metrics for the lower half of the images. I tried modifying the label and annotation files of the validation split to contain only the bounding boxes in the lower half, but this doesn't seem to work. Any suggestions?
-
Hi, I have a question about the YOLOv8 model. In the pre-trained model, there are labels like "person" and others, but if I create a new model with only the "person" label, will there be a performance difference on my computer between the pre-trained model and the model I create?
-
In YOLO v8.1 I can't find the confusion matrix and results.png. Where are they stored? This is how I started my training:

```shell
%cd /kaggle/working/HOME/YOLO_V8_OUTPUT
!yolo train model=yolov8l.pt data=/kaggle/working/HOME/FRUITS_AND_VEGITABLES_NITHIN-6/data.yaml epochs=100 imgsz=640 patience=10 device=0,1 project=/kaggle/working/HOME/YOLO_V8_OUTPUT
```
-
I am tasked with developing a shelf management system tailored for a specific brand. This system aims to automate the process of sales personnel visiting stores to assess product stock levels and required replenishments. Utilizing object detection, I intend to accurately count the products on the shelves and inform the salesperson of the quantities needed to refill. One major challenge to address is product occlusion, where items may partially or fully obscure others, complicating accurate counting. I'm particularly interested in exploring how YOLOv8, a popular object detection model, can be employed to tackle this problem effectively. Any guidance or insights on implementing such a solution would be greatly appreciated.
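For the counting part, a minimal sketch assuming the ultralytics results API; the model file name is a placeholder, and occlusion handling (e.g. a lower confidence threshold or depth-stacked annotations) would sit on top of this:

```python
from collections import Counter

def count_per_class(cls_ids, names):
    """Tally detections per class name; refill amount is target minus this count."""
    return Counter(names[int(c)] for c in cls_ids)

# With ultralytics (assumed installed; model name is a placeholder):
# from ultralytics import YOLO
# result = YOLO("shelf_model.pt").predict("shelf.jpg")[0]
# print(count_per_class(result.boxes.cls.tolist(), result.names))
```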
-
Hi @glenn-jocher, could you please have a look at this Google Doc? I have tried to explain, through screenshots, the problem I am facing after fine-tuning the model. I would really appreciate your kind guidance. https://docs.google.com/document/d/1WJ5SBdunWSqyd3FjgYgrZn2KjeYxezlel2LeXspmWAQ/edit?usp=sharing
-
Hello team Ultralytics,
-
Thank you, I got it. So using different YOLO models involves loading different YAML files (yolov8n.yaml, yolo11n.yaml, and so on), doesn't it?
Sent from my iPhone
------------------ Original ------------------
From: Glenn Jocher ***@***.***>
Date: Tue,Nov 19,2024 2:30 AM
To: ultralytics/ultralytics ***@***.***>
Cc: xiuyuan mao ***@***.***>, Mention ***@***.***>
Subject: Re: [ultralytics/ultralytics] index (Discussion #8012)
@maoxiuyuan hello, thank you for your question. YOLOv8 has not been merged into YOLOv11 files; they are separate models. You can find YOLOv8 in the Ultralytics repository, and I recommend checking the Ultralytics documentation for instructions on downloading and using YOLOv8 models. If you have further questions, feel free to check the YOLOv8 documentation.
-
I really appreciate your reply; it's very useful to me.
Sent from my iPhone
------------------ Original ------------------
From: Glenn Jocher ***@***.***>
Date: Wed,Nov 20,2024 2:44 AM
To: ultralytics/ultralytics ***@***.***>
Cc: xiuyuan mao ***@***.***>, Mention ***@***.***>
Subject: Re: [ultralytics/ultralytics] index (Discussion #8012)
@maoxiuyuan yes, that's correct. Different YOLO models require loading different YAML files (e.g., yolov8n.yaml for YOLOv8 or yolo11n.yaml for YOLO11) to configure the model architecture and parameters. For detailed guidance, you can refer to the respective model documentation.
-
I want to use YOLO for object detection in images. Is there a ready pre-trained model with many classes (at least 500), rather than the standard COCO model with only 90 classes? Also, I would prefer a model that can detect objects in images of any size.
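Ultralytics does publish YOLOv8 weights pretrained on Open Images V7, which covers roughly 600 classes; a sketch, where the exact weight file name and the `classes_matching` helper are assumptions:

```python
def classes_matching(names, keyword):
    """Find class ids whose name contains a keyword, handy with ~600-class models."""
    kw = keyword.lower()
    return {i: n for i, n in names.items() if kw in n.lower()}

# Loading the Open Images V7 weights (assumed ultralytics API and file name):
# from ultralytics import YOLO
# model = YOLO("yolov8x-oiv7.pt")
# print(len(model.names), classes_matching(model.names, "dog"))
# YOLO letterboxes inputs to the requested imgsz, so arbitrary image sizes work.
```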
-
Thank you for your help. My problem is that if I go to the YOLOv8 page here
https://docs.ultralytics.com/models/yolov8/#supported-tasks-and-modes I can
download the models, but once I click the "Detection Docs" link in the
sentence "See Detection Docs for usage examples", I am redirected to the
YOLO11 documentation, and I cannot really find the YOLOv8 documentation.
On Thu, 21 Nov 2024 at 07:06, Glenn Jocher ***@***.***> wrote:
@zlelik <https://github.com/zlelik> for YOLO11, pre-trained models are
primarily trained on the COCO dataset. However, Ultralytics YOLO models can
be fine-tuned on other datasets to meet specific requirements. Regarding
the YOLOv8 model trained with Open Images v7, the output shape you
mentioned indicates a prediction for multiple classes and bounding boxes.
You can find documentation on interpreting these outputs in the Ultralytics
documentation. If you need further details, please refer to the YOLOv8
documentation <https://docs.ultralytics.com/models/yolov8/>.
-
Hi, I believe there is a link misdirection in Datasets. After clicking on ImageNet in the docs I am getting redirected to
-
Is it possible to train YOLO segmentation models using the Cityscapes dataset?
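Cityscapes ships pixel masks rather than YOLO-style labels, so its annotations must first be converted to normalized polygon label files; a sketch of formatting one label line, with the training call and dataset YAML name as assumptions:

```python
def polygon_to_yolo_seg(class_id, polygon, img_w, img_h):
    """Format one YOLO segmentation label line: class id then normalized x y pairs."""
    coords = []
    for x, y in polygon:
        coords += [x / img_w, y / img_h]
    return " ".join([str(class_id)] + [f"{c:.6f}" for c in coords])

# After writing such label files plus a dataset YAML (file name assumed):
# from ultralytics import YOLO
# YOLO("yolov8n-seg.pt").train(data="cityscapes.yaml", epochs=100, imgsz=640)
```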
-
Hey everyone, I encountered a situation where I have only 1 instance, and I get the YOLO results as follows:
-
Hello, is there any method to log the .pt file of a YOLO model in MLflow?
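One common approach, assuming mlflow's artifact API: log the weights file produced under the training run directory. The `best_weights` helper is illustrative:

```python
from pathlib import Path

def best_weights(run_dir):
    """Locate best.pt under an Ultralytics run directory, or None if absent."""
    hits = sorted(Path(run_dir).rglob("best.pt"))
    return hits[0] if hits else None

# Logging it as an MLflow artifact (mlflow assumed installed):
# import mlflow
# with mlflow.start_run():
#     mlflow.log_artifact(str(best_weights("runs/detect/train")))
```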
-
Understanding YOLOv8 Coordinates in Label Files
I have a question regarding the YOLOv8 annotation format:
Thank you for your help!
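For reference, YOLO detection labels store one object per line as `class cx cy w h`, with all four coordinate values normalized to [0, 1]; a minimal sketch of the pixel-to-label conversion (the helper name is mine):

```python
def xyxy_to_yolo(box, img_w, img_h):
    """Convert pixel (x1, y1, x2, y2) to the normalized 'cx cy w h' YOLO stores."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2 / img_w,   # x center
            (y1 + y2) / 2 / img_h,   # y center
            (x2 - x1) / img_w,       # width
            (y2 - y1) / img_h)       # height
```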
-
Hi YOLO Team, I have a few questions regarding the YOLOv8 annotation format:
Understanding this would greatly help in handling different formats and ensuring annotation consistency across my projects. Thank you for your insights!
-
Process Process-14: I am facing this error when I try to run multiprocessing.
-
Could you please help me? I used this for my YOLOv8 inferencing; it works well and uses the GPU, but my CPU usage jumps to 9-10 percent. Is it possible to bring it down to 2-5 percent with CPU optimization?

```python
import cv2
import torch
from ultralytics import YOLO

class YOLOVideoProcessor:
    def __init__(self, video_path, new_model, use_tracking=True):
        self.cap = cv2.VideoCapture(video_path)
        self.model = YOLO(new_model).to(torch.device("cuda"))
        print(self.model.names)
        # self.use_tracking = use_tracking  # toggle for tracking
        # if self.use_tracking:
        #     self.tracker = Sort()  # initialize tracker if tracking is enabled

    def process_video(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                break
            frame = cv2.resize(frame, (640, 480))
            detections = self.model.predict(frame, iou=0.5)[0]
            for row in detections.boxes.data.tolist():
                a1, b1, a2, b2 = map(int, row[:4])  # bounding box
                conf = row[4]  # confidence score
                cls_id = int(row[5])  # class id
                print(cls_id)
                # if cls_id == 0:  # only process a specific class (e.g., 0)
                #     dets.append([a1, b1, a2, b2, conf])
```
-
The "Quick Start Guide: Raspberry Pi with Ultralytics YOLO11" (https://docs.ultralytics.com/guides/raspberry-pi/) has instructions for setting up Ultralytics on the Raspberry Pi either by using Docker or without using Docker. If I follow the instructions for installing Ultralytics without using Docker, it fails the install process with a "resolution-too-deep" error when I do the "pip install ultralytics[export]" step. This problem was first reported by Dave294448 in the comments of https://core-electronics.com.au/guides/raspberry-pi/getting-started-with-yolo-object-and-animal-recognition-on-the-raspberry-pi/ back on May 3. Various people tried to help him solve the problem, but the best that anyone could offer was to do
-
Hello, Ultralytics Team. Is it possible to adjust the inference workflow to output a binary mask instead of a polygon? Thanks in advance.
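The segmentation results expose per-instance masks, which can be collapsed into a single binary mask; a sketch assuming the ultralytics `result.masks.data` tensor, with `merge_masks` as a hypothetical helper:

```python
import numpy as np

def merge_masks(masks):
    """Collapse per-instance binary masks shaped (N, H, W) into one 0/1 mask."""
    return (np.asarray(masks).sum(axis=0) > 0).astype(np.uint8)

# With ultralytics (assumed installed), instance masks come from a -seg result:
# from ultralytics import YOLO
# result = YOLO("yolov8n-seg.pt").predict("image.jpg")[0]
# binary = merge_masks(result.masks.data.cpu().numpy())  # model-input resolution
```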
-
index
Explore a complete guide to Ultralytics YOLOv8, a high-speed, high-accuracy object detection & image segmentation model. Installation, prediction, training tutorials and more.
https://docs.ultralytics.com/