guides/hyperparameter-tuning/ #9536
Replies: 37 comments 110 replies
-
```python
from ultralytics import YOLO

# Initialize the YOLO model
model = YOLO('yolov8n.pt')

# Tune hyperparameters on COCO8 for 30 epochs
model.tune(data='coco8.yaml', epochs=30, iterations=300, optimizer='AdamW', plots=False, save=False, val=False)
```

Running this portion of code, I am facing the following error: `local variable 'ckpt_file' referenced before assignment`
Beta Was this translation helpful? Give feedback.
-
Hello, comp sci student here! I just wanted to ask how to improve accuracy. I trained on a custom dataset involving plants, and after training I used the tune method because the accuracy wasn't good enough. I used best.pt from the tune folder as the chosen model, but the results were the same as before tuning. Is there anything else I need to be doing or changing to see results?
-
What optimizers can I use besides AdamW?
-
Hello, I got stuck on an issue related to the GPU workflow.
-
Hello, I'm trying to find the best hyperparameters using grid search, but I'm having trouble comparing the mAP of each run. Is there a solution or reference for using grid search to find the best hyperparameters?
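A minimal, pure-Python sketch of the grid-search bookkeeping the question describes. The `evaluate` function here is a hypothetical stand-in for a real train-plus-validate run that returns mAP50-95 (in Ultralytics you would call `model.train(...)` and read `metrics.box.map` from `model.val()`); the parameter names and dummy scoring are illustrative assumptions.

```python
import itertools

# Hypothetical search grid; the keys mirror Ultralytics train() arguments
param_grid = {
    "lr0": [0.001, 0.01],
    "momentum": [0.9, 0.95],
}

def evaluate(lr0, momentum):
    """Stand-in for a real train + val run that returns mAP50-95.

    Replace this with your actual training/validation call; the formula
    below is a dummy score purely so the sketch runs end to end.
    """
    return 0.5 + lr0 * 10 + (momentum - 0.9)

# Evaluate every combination, keeping scores keyed by the parameter settings
results = {}
for combo in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), combo))
    results[tuple(params.items())] = evaluate(**params)

# The combination with the highest mAP wins
best = max(results, key=results.get)
print(dict(best))
```

Keying the results dict by a tuple of `(name, value)` pairs makes each run's mAP directly comparable and lets a single `max(...)` pick the winner.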
-
Hello, I recently tuned the … So far so good. However, I wanted to replicate these results, which I believe is possible since the … In my …
-
I have a question about hyperparameter tuning, which I illustrate with two pieces of code.

Running 500 iterations at once:

```python
model = YOLO("yolov8m.yaml")
…
```

Running 250 iterations, then 25 iterations:

```python
model = YOLO("yolov8m.yaml")
…
```

So, should I get the same results in scenario one and scenario two?
-
Hi, I'm new to AI, and I want to fine-tune my YOLOv10 model. I have already trained it on my own dataset. Now I want to find the best hyperparameters, but I don't know how to write the code.
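A minimal sketch of how a tuning run on an already-trained checkpoint is usually set up, following the pattern from the hyperparameter-tuning guide. The checkpoint and dataset paths are placeholders, and the budget values are illustrative assumptions, not recommendations; call `run_tuning()` to actually start.

```python
# Hypothetical paths — replace with your own trained checkpoint and dataset YAML
CHECKPOINT = "runs/detect/train/weights/best.pt"
DATA = "data.yaml"

# A modest tuning budget; raise epochs/iterations for a more thorough search
tune_kwargs = dict(
    data=DATA,
    epochs=30,        # epochs trained per tuning iteration
    iterations=100,   # number of hyperparameter candidates to try
    optimizer="AdamW",
    plots=False,
    save=False,
    val=False,
)

def run_tuning():
    # Imported lazily so the sketch can be read without ultralytics installed
    from ultralytics import YOLO

    model = YOLO(CHECKPOINT)  # start tuning from your already-trained weights
    model.tune(**tune_kwargs)
```

Starting from your trained `best.pt` rather than a fresh `.yaml` means each tuning iteration begins from weights that already know your data.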
-
Hi there, I'm encountering the following error message: `TypeError: 'SegmentMetrics' object is not subscriptable`

```python
param_grid = {
    …
}
param_combinations = list(itertools.product(*param_grid.values()))

# Find the parameters with the best mAP
best_params = max(results_dict, key=results_dict.get)
```
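This error typically means the validation metrics object is being indexed like a dict (e.g. `metrics["mAP"]`), but `SegmentMetrics` exposes its results as attributes. A hedged sketch of the accessor pattern (the attribute names follow the Ultralytics results API; `map_for_combo` is a hypothetical helper, not part of the library):

```python
def map_for_combo(model, data_yaml):
    """Hypothetical helper: validate once and return the mask mAP50-95.

    SegmentMetrics is not subscriptable; read results via attributes:
      metrics.box.map  -> box mAP50-95
      metrics.seg.map  -> mask mAP50-95
    or use the metrics.results_dict mapping for dict-style access.
    """
    metrics = model.val(data=data_yaml)
    return metrics.seg.map
```

Collecting this return value per parameter combination gives the scalar your `max(results_dict, key=results_dict.get)` line expects.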
-
Only tuning a set of hyperparameters

My name is Mario, and I am currently conducting research on atherosclerosis detection in coronary angiography medical images using YOLOv8. Due to specific requirements of my project, I have developed a custom data augmentation class, and therefore I am not utilizing any of the YOLOv8 augmentation parameters. I would like to ask whether there is a method by which I can selectively fine-tune a specific set of hyperparameters using the genetic algorithm (GA). For instance, I wish to optimize parameters such as … Your assistance and guidance on this matter would be greatly appreciated!
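The tuning guide describes passing a custom `space` dict to `model.tune()`, in which case the genetic algorithm only mutates the keys you list; augmentation parameters simply left out of the dict keep their defaults. A sketch under that assumption (the parameter names and ranges below are illustrative, and you should verify that your Ultralytics version accepts the `space` argument); call `tune_subset()` to start:

```python
# Illustrative search space restricted to optimizer hyperparameters only;
# augmentation parameters are omitted, so the GA never touches them
search_space = {
    "lr0": (1e-5, 1e-1),         # initial learning rate
    "lrf": (0.01, 1.0),          # final learning-rate fraction
    "momentum": (0.6, 0.98),
    "weight_decay": (0.0, 0.001),
}

def tune_subset():
    # Lazy import keeps the sketch readable without ultralytics installed
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    # Only the keys present in search_space are mutated by the GA
    model.tune(data="data.yaml", epochs=30, iterations=100, space=search_space)
```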
-
Can I use this for YOLOv4?
-
How can I choose the best parameters for my custom model? Which parameters impact the model's performance? Is using Optuna a good option for finding the best parameter values?
-
Greetings, All
-
Hello. I would like to use YOLOv8 hyperparameter tuning, but I also want to optimize the copy-paste augmentation. I noticed that by default the copy_paste augmentation is set to zero across all iterations and is never explored. How can I enable this?
-
Hello, I am having trouble running the fine-tuning code below:

```python
from ultralytics import YOLO

# Initialize the YOLO model
model = YOLO("yolov8n.pt")

# Tune hyperparameters on GlobalWheat2020 for 1 epoch
model.tune(data="GlobalWheat2020.yaml", epochs=1, iterations=4, optimizer="AdamW", plots=True, save=True, val=True)
```

The paths are correct for both the model and the data. The output is:

```
Tuner: Initialized Tuner instance with 'tune_dir=C:\Users\msi\runs\detect\tune'
Tuner: 1/4 iterations complete ✅ (2.54s)
Printing 'C:\Users\msi\runs\detect\tune\best_hyperparameters.yaml'
```

It's as if the fine-tuning is not doing any training at all, and there is no train folder. Why is that, and how can I solve it? Thank you.
-
Issue: Missing Results for Early-Stopped Iterations in YOLOv8 Tuning

Hi Ultralytics Team, I'm currently running a hyperparameter tuning session for the YOLOv8n-seg model with 300 iterations of 600 epochs each. To speed up the process, I included early stopping (patience=50) in my script, expecting that tuning would be shortened while still producing meaningful results. However, at the end of the tuning session, I noticed that not all iterations produced results. Specifically:
My Questions
My Tuning Script:

```python
from ultralytics import YOLO

if __name__ == "__main__":
    # Define parameters
    epochs = 600
    iterations = 300
    optimizer = "Adam"

    # Output directory with dynamic naming
    project_path = f"D:/Bachelorarbeit_Brüning/runs/segment/Tune_Full_Rotor_v5/nano/tune_ep{epochs}_it{iterations}_opt{optimizer}"

    # Initialize YOLOv8 model
    model = YOLO("yolov8n-seg.pt")

    # Perform hyperparameter tuning
    model.tune(
        data="D:/Bachelorarbeit_Brüning/Datensätze/Full_Rotor_v5/data.yaml",  # Dataset configuration
        epochs=epochs,            # Number of epochs
        iterations=iterations,    # Number of tuning iterations
        optimizer=optimizer,      # Optimizer
        plots=False,              # Disable plots
        save=False,               # Disable intermediate saving
        val=True,                 # Enable validation
        project=project_path,     # Dynamic storage location
        patience=50               # Early stopping after 50 epochs without improvement
    )

    print(f"Tuning completed. Results saved at: {project_path}")
```

Any insights or suggestions on how to resolve this would be greatly appreciated! Thanks in advance,
-
Hello, I have a question. I successfully merged pose and segmentation into a single model, but the results are not very satisfactory. While the segmentation results are acceptable, the keypoints are not very accurate. Is there a way to search for hyperparameters that can improve their performance? The tuning function is based on a fitness function that is calculated solely from box metrics. Thanks for your reply.
-
Hello,
-
I have a question,
-
Training with tune and Ray works fine for me, but when I try model.add_callback("on_train_start", self.on_train_start), having declared the functions beforehand, they are never called. Training proceeds fine, but since the callbacks never fire, I am losing functionality in my application. Could you help me, please?
-
Hello, I have a question regarding the following code for hyperparameter tuning in YOLO:

```python
from ultralytics import YOLO

# Initialize the YOLO model
model = YOLO("yolo11s.pt")

# Tune hyperparameters
model.tune(
    …
)
```

As I understand it, this code can fine-tune hyperparameters such as epochs, batch size, learning rate, and others. It mutates (makes small, random changes to) the hyperparameter values in order to find the optimal set. However, I would like to clarify what the initial values are for each hyperparameter. If they start randomly, there is a risk if the base values are not well chosen. Additionally, when I pass, for example, the argument epochs=30, what exactly does that mean? Does it set the number of epochs for the entire tuning process or for each individual tuning iteration? Looking forward to your response. Best regards,
-
Hello, I hope this message finds you well. I am interested in understanding the inner workings of the genetic algorithm used in Ultralytics YOLO, particularly the role of mutation in the hyperparameter optimization process. As mentioned, mutation helps explore the hyperparameter space by introducing small, random changes to existing hyperparameters, creating new candidates for evaluation. My question is as follows: …

Additionally, I have another question: what is the purpose of including the loss weights (box, cls, dfl) inside the search space with a defined range, and how do these values influence the optimization process during tuning? Specifically, why are they treated as tunable parameters with ranges such as box (0.02, 0.2) and cls (0.2, 4.0), even though these values are learned from the data during training and validation? I appreciate your insights and look forward to your response. Best regards,
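The mutation step the question asks about can be sketched in a few lines of pure Python: each hyperparameter of a parent candidate is multiplied by a small random factor around 1.0 and clipped back into its allowed range. This is only an illustration of the idea, not the Tuner's actual code; the real implementation also selects parents by fitness and applies per-gene mutation probabilities and gains, and the bounds below are illustrative.

```python
import random

# Illustrative bounds, in the same (min, max) form the tuner's search space uses
SPACE = {
    "lr0": (1e-5, 1e-1),
    "momentum": (0.6, 0.98),
    "box": (0.02, 0.2),
}

def mutate(parent, space, sigma=0.2, rng=random):
    """Perturb each hyperparameter by a random factor around 1.0, then clip.

    A sketch of GA-style mutation: small random changes produce a new
    candidate near the parent, while the clip keeps it inside the bounds.
    """
    child = {}
    for key, (lo, hi) in space.items():
        factor = 1.0 + rng.gauss(0.0, sigma)        # small random change
        child[key] = min(max(parent[key] * factor, lo), hi)  # stay in bounds
    return child

parent = {"lr0": 0.01, "momentum": 0.9, "box": 0.1}
child = mutate(parent, SPACE, rng=random.Random(0))
```

Each call yields a nearby candidate to train and evaluate; over many iterations the best-scoring candidates become the parents of the next mutations.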
-
Hello, I am training a model using YOLOv11, and the dataset consists of traffic signs. There are 22 classes, and each class has 400 labeled images. I’m not sure what parameters I should use for training. What parameters should I set to prevent overfitting, and what are the correct values? Could you help me with this? |
-
Hello, I hope this message finds you well. I fine-tuned YOLOv11s on my custom dataset, which is relatively small (1,156 images split into 75% training, 15% validation, and 10% test). The dataset consists of 41 classes, with each class containing between 200 and 300 images. I experimented with three different sets of hyperparameters: the default values, those optimized using Ray Tune, and others obtained through a Genetic Algorithm. In all cases, I observed oscillations in the validation dfl and box losses, which may indicate signs of overfitting. I would appreciate any suggestions or insights on how to address this issue, particularly given the small size of the dataset, and how to improve model generalization. Best regards,
-
Thank you for your feedback. I trained my dataset using YOLOv11s. Although I didn't experience severe overfitting with the current parameters, I couldn't achieve optimal performance either. Some traffic signs were misclassified or not detected at all.

While searching for the best hyperparameters, I realized that I hadn't applied data augmentation, except for the flip parameter. I have an example parameter file, and I plan to include all relevant augmentation options except for flipping.

I would really appreciate your help in finding the most suitable parameters for my dataset, as I'm planning to participate in a competition soon. I need to enhance my model by adjusting the parameters properly to achieve the best possible performance.

Thank you in advance for your support.
-
Hello again,

Additionally, the models sometimes confuse left and right directional signs. Why might these errors be occurring, and how can I fix them?
-
When running the code below, the model.tune() call does not recognize my model specification. Could someone explain what I am doing wrong? I am using model.overrides so that I can also tune the shear and perspective parameters, which are otherwise initialized at 0.

```python
from ultralytics import YOLO
import argparse

def main():
    …

if __name__ == "__main__":
    …
```
-
Hi, I have used the above code to apply the tuning, but neither the plots nor the models are saved. I only get a single tune folder with all the tuning details. May I ask if someone else has the same problem?
-
You've shown the box range as 0.02 to 0.2, but the default value appears to be 7.5. Is there a mistake here? Could you please check?




Uh oh!
There was an error while loading. Please reload this page.
-
guides/hyperparameter-tuning/
Dive into hyperparameter tuning in Ultralytics YOLO models. Learn how to optimize performance using the Tuner class and genetic evolution.
https://docs.ultralytics.com/guides/hyperparameter-tuning/