zh/modes/train/ #8251
Replies: 69 comments 89 replies
-
Isn't this translation rather off? 'train' was rendered as '火车' (the railway vehicle)? |
-
I'd like to know whether an AMD 7900 XTX can be used for training. I've noticed that PyTorch now offers ROCm 5.7 and ROCm 6.0 support. |
-
Thank you for developing such a polished YOLOv8 🙏 |
-
Hello, how can I retain the pretrained model's ability to recognize certain classes when training a new model? For example, the new model should keep the old model's (e.g. yolov8n.pt) ability to recognize a person. |
-
Hello, if I use multiple datasets during train and val, how should I configure the specified labels paths in a YAML file like the following? |
-
Hello! Can YOLOv8 detect lane lines? |
-
Hello, why are the best.pt and last.pt model files saved during my training (200 MB) much larger than the official pretrained model yolov8m.pt (50 MB)? Also, if I want to use my trained best.pt for prediction, can I simply do model = YOLO('best.pt')? Looking forward to your reply. |
-
Hello, when predicting on a test set for object detection, can I only obtain per-image results and box coordinates? Is there anything that can compute evaluation metrics for the test set, like the validation-set metrics reported during training? That is, how do I calculate precision, recall, and so on for the test set? Hoping for an answer, thanks. |
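For background, test-set precision and recall come from counting matched and unmatched detections at a chosen IoU/confidence threshold. A minimal, framework-free sketch of the formulas (illustrative only; Ultralytics computes these per class with IoU matching internally):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision and recall from detection counts.

    tp: detections matched to a ground-truth box (IoU above threshold)
    fp: detections with no matching ground truth
    fn: ground-truth boxes with no matching detection
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 80 correct detections, 20 spurious, 40 missed
p, r = precision_recall(tp=80, fp=20, fn=40)
print(p, r)  # precision 0.8, recall ≈ 0.667
```

In practice, one common route (verify against your installed version) is to run validation with the data YAML pointing at the test images, e.g. `model.val(data=..., split='test')`, which reports these metrics without manual counting.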
-
Hello, when I use python -m torch.distributed.run --nproc_per_node x train.py for multi-GPU training, does each GPU receive batch-size/num_gpus or the full batch-size? I ask because, with the same batch-size setting, 4-GPU training seems to reach lower accuracy than 2-GPU training. |
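As far as I know (worth verifying against your installed version), YOLOv5/Ultralytics DDP training treats the batch argument as the total global batch and splits it evenly across ranks, so the global batch is unchanged but each rank sees fewer images. A sketch of the arithmetic:

```python
def per_gpu_batch(total_batch: int, num_gpus: int) -> int:
    """Share of the global batch each DDP rank processes per step."""
    return total_batch // num_gpus

# With batch=32 the global batch is the same in both setups, but
# per-rank statistics (e.g. unsynchronized BatchNorm) see fewer images,
# which is one plausible source of the 4-GPU vs 2-GPU accuracy gap.
print(per_gpu_batch(32, 2), per_gpu_batch(32, 4))  # 16 8
```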
-
model = YOLO('yolov8n.pt')  # load a pretrained model (recommended for training), as the official docs describe. On my first 100-epoch run, the results.png training curves looked normal; after retraining for another 100 epochs, results.png shows overfitting. |
-
Hello, I'd like to modify the model's loss function. My dataset has a class-imbalance problem, so I want to use focal loss. Where should I make the change? |
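For context, focal loss down-weights well-classified examples via a modulating factor (1 - p)^gamma. A minimal, framework-free sketch of the formula; in recent Ultralytics releases the loss classes live in `ultralytics/utils/loss.py` (an assumption worth checking against your installed copy):

```python
import math

def focal_loss(p: float, alpha: float = 0.25, gamma: float = 2.0) -> float:
    """Focal loss for one positive example with predicted probability p.

    With gamma == 0 and alpha == 1 this reduces to plain cross-entropy;
    larger gamma suppresses the loss of easy (high-p) examples, shifting
    emphasis to hard ones, which helps with class imbalance.
    """
    return -alpha * (1.0 - p) ** gamma * math.log(p)

easy, hard = focal_loss(0.9), focal_loss(0.1)
print(easy, hard)  # the easy example contributes far less loss
```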
-
How can I freeze the weights for existing classes and train only the weights for new classes on the training set? |
-
How can I use yolov8 to train only on new-class datasets, achieving incremental learning without forgetting previous detection capabilities? |
-
Hi! How can I use the Ultralytics library in Python to train a YOLO11 model on grayscale images in a simple way? My dataset consists of full 3-channel color images; is there a convenient way to do this? |
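One common workaround (a sketch, not an official Ultralytics feature) is to keep the standard 3-channel pipeline untouched and simply replace each image with its grayscale version replicated across all three channels, so neither the model definition nor the data loader needs changes:

```python
import numpy as np

def to_gray3(img: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB image to grayscale replicated over 3 channels."""
    # ITU-R BT.601 luma weights for the grayscale conversion
    gray = img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114
    return np.repeat(gray[..., None], 3, axis=-1).astype(img.dtype)

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
g3 = to_gray3(rgb)
assert g3.shape == rgb.shape                 # same layout as the input
assert (g3[..., 0] == g3[..., 1]).all()      # all channels identical
```

Running this over the dataset once (saving the converted images) lets training proceed exactly as with color data.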
-
The documentation says erasing is a classification-only parameter, yet erasing: 0.4 appears in the args.yaml automatically saved at the end of my detection training run. I want to confirm whether this parameter actually takes effect for detection tasks. |
-
Hello! When training a multi-class segmentation task with YOLO, I only see overall metrics like Precision (P) and Recall (R) in the results. How can I modify the code to display accuracy for each individual class? Additionally, I'm unable to configure parameters for the seg_loss during training setup; how can I resolve this? |
-
Can I run YOLOv7 training through a script? If so, how? |
-
from ultralytics import YOLO
Hello, I'm using the following configuration to train a bird-detection model, but recognition isn't accurate: in real scenes a bird occupies very few pixels, like a tiny black dot, which causes many false detections (grass on the ground, small street lamps, etc.). I'd like to strengthen the model with motion-detection capability, but I don't know how to add such a parameter. Could you give some suggestions? |
-
May I ask how to modify the strategy for saving the best weights during segmentation model training? By default it seems only P, R, mAP50, mAP50-95 contribute to the weighting. I want to save the model weights that perform best on IoU during validation; what should I do? |
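For background: the checkpoint choice is driven by a scalar "fitness" which, in Ultralytics releases I have seen, is a weighted sum of [P, R, mAP50, mAP50-95] with weights [0.0, 0.0, 0.1, 0.9] (see `ultralytics/utils/metrics.py`; verify against your version). A sketch of that combination, which could in principle be reweighted toward a custom criterion by overriding the metrics class:

```python
def fitness(p: float, r: float, map50: float, map50_95: float,
            w=(0.0, 0.0, 0.1, 0.9)) -> float:
    """Scalar used to rank checkpoints; best.pt is the epoch maximizing it."""
    return p * w[0] + r * w[1] + map50 * w[2] + map50_95 * w[3]

print(fitness(0.8, 0.7, 0.65, 0.45))  # 0.1*0.65 + 0.9*0.45 = 0.47
```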
-
What does the description of the imgsz parameter here mean? |
-
For multi-class object detection, is it possible to set class_weights or cls_pw parameters, and if so, how? |
-
from ultralytics import YOLO
# Load a model
model = YOLO("/home/objdet/Ultralytics/ultralytics/cfg/models/26/yolo26l.yaml").load("/home/objdet/Ultralytics/yolo26l.pt")
# Train the model with 2 GPUs
results = model.train(data="/home/objdet/data/Utralytics/dataset.yaml",
This is my training config, and I get WARNING messages about my rect input shape (1024, 512). I'd like to know whether yolo26 can be trained with rect=True on both multi-GPU and single-GPU setups? |
-
Why does training on a cloud GPU (Linux) still automatically download the latest weight file even with pretrained=False? Does training still use existing weights after all? |
-
It's probably because you specified weights to load. You can manually download the pretrained weights to your local machine instead. |
-
With the same training code, the Linux cloud environment auto-downloads the latest weights and automatically creates a runs/detect/predict folder, while the Windows environment downloads no weights and creates no predict folder.
mytrain.py:
model = YOLO('yolov8n.yaml')
results = model.train(
data='SSLAD.yaml',
epochs=100,
imgsz=640,
workers=2,
pretrained=False,
batch=32
)
SSLAD.yaml:
path: SSLAD # dataset root dir
train: images/train # train images (relative to 'path') 5985 images
val: images/val # val images (relative to 'path') 1496 images
names:
0: Pedestrian
1: Cyclist
2: Car
3: Truck
4: Tram
5: Tricycle
Output when running on Windows:
(base) PS D:\Desktop\ultralytics-main> python mytrain.py
New https://pypi.org/project/ultralytics/8.4.6 available, Update with 'pip install -U ultralytics'
Ultralytics 8.4.3 Python-3.12.7 torch-2.9.1+cpu CPU (Intel Core Ultra 7 155H)
engine\trainer: ... model=yolov8n.yaml, pretrained=False, workers=2, batch=32, ... (full argument dump omitted)
Overriding model.yaml nc=80 with nc=6
YOLOv8n summary: 130 layers, 3,012,018 parameters, 3,012,002 gradients, 8.2 GFLOPs
Freezing layer 'model.22.dfl.conv.weight'
train: Scanning D:\Desktop\ultralytics-main\SSLAD\labels\train... 89 images, 1 backgrounds, 0 corrupt: 2%
(no weight download occurs in this log)
Output when running in the Linux cloud environment:
(torch) ***@***.***:/data/coding/ultralytics-main# python mytrain.py
New https://pypi.org/project/ultralytics/8.4.6 available 😃 Update with 'pip install -U ultralytics'
Ultralytics 8.4.3 🚀 Python-3.10.16 torch-2.7.1+cu126 CUDA:0 (Tesla P4, 8109MiB)
engine/trainer: ... model=yolov8n.yaml, pretrained=False, workers=2, batch=32, ... (full argument dump omitted)
Overriding model.yaml nc=80 with nc=6
YOLOv8n summary: 130 layers, 3,012,018 parameters, 3,012,002 gradients, 8.2 GFLOPs
Freezing layer 'model.22.dfl.conv.weight'
AMP: running Automatic Mixed Precision (AMP) checks...
Downloading https://github.com/ultralytics/assets/releases/download/v8.4.0/yolo26n.pt to 'yolo26n.pt': 100% 5.3MB 9.1MB/s 0.6s
AMP: checks passed ✅
train: Scanning /data/coding/ultralytics-main/SSLAD/labels/train.cache... 5000 images, 33 backgrounds, 0 corrupt: 100%
In a fresh Linux cloud environment, if you pass model=yolo26n.pt (or model=yolo26n, which resolves to .pt automatically), that is essentially a pretrained weight file: if it isn't present locally, it is downloaded automatically before training starts. pretrained=False only controls whether pretrained weights are loaded when the network is built from model=*.yaml; it does not prevent downloading a .pt model file you specified. For parameter precedence and merge rules, see the Configuration reference.
yolo detect train data=your.yaml model=yolo26n.yaml pretrained=False
from ultralytics import YOLO
model = YOLO("yolo26n.yaml")
model.train(data="your.yaml", pretrained=False)
If you are already using a .yaml but still see a download, please post the full command you actually ran (as text) and the output of yolo checks, so we can pinpoint which step triggers the download. |
-
Do different versions of the ultralytics library themselves differ? If I need to train and tune different YOLO versions, do I need to download each of their repositories? |
-
While exploring the platform, why can't I upload my already-annotated dataset (it already includes the txt files)? Also, before uploading, my images were in order, which made annotation convenient, but after uploading the image set becomes shuffled and annotation is inconvenient. |
-
The docs say the automatic selection algorithm prioritizes GPUs with, among other characteristics: 1. "更低的电流利用率" ("lower electrical-current utilization"). I think this wording can easily mislead readers; "GPUs with a lower power load", or some other phrasing, might be better. |
-
zh/modes/train/
A step-by-step guide to training YOLOv8 models with Ultralytics YOLO, including single-GPU and multi-GPU training examples
https://docs.ultralytics.com/zh/modes/train/?h=%E8%8B%B9%E6%9E%9C