👋 Hello @Bstrum36, thank you for the detailed report and code sample — this is very helpful 🚀. It looks like you may be reporting a SAM3 behavior issue with negative exemplars, so this is an automated response to help gather the right debugging details while an Ultralytics engineer also assists soon 🤝 We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered. If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it. Since you’ve already shared code and environment details, please also include:
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the Ultralytics community where it suits you best. For real-time chat, head to Discord 🎧. Prefer in-depth discussions? Check out Discourse. Or dive into threads on our Subreddit to share knowledge with the community.

Upgrade

Upgrade to the latest with `pip install -U ultralytics`.

Environments

YOLO may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLO Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
For SAM3, you need to use
I am getting confusing results when trying to apply negative exemplars.

The image shows the source image with the bounding boxes of the exemplars. I expected the opposite of what the image shows. When I run with `labels = [1, 1]` I get the same results. Please advise.

Here is the prediction call:

```python
results = predictor(bboxes=bboxes,
                    labels=[0, 0],
                    show=False,
                    boxes=False,
                    )
```
Here is the environment:
Ultralytics 8.4.21 Python-3.12.11 torch-2.9.0.dev20250716+cu129 CUDA:0 (NVIDIA GeForce RTX 3080 Ti, 12288MiB)
Windows 11
And finally, here is all the code:
```python
from ultralytics.models.sam import SAM3SemanticPredictor
import os
import cv2

bboxes = [[269.0, 145.0, 313.0, 192.0], [370.0, 190.0, 414.0, 237.0]]
img_path = os.path.join("Images", "baggage_claim.jpg")
img_src = cv2.imread(img_path)
if img_src is None:
    raise FileNotFoundError(f"Image not found: {img_path}")

img = img_src.copy()  # draw boxes on a copy for display
for bbox in bboxes:
    x1, y1, x2, y2 = map(int, bbox)
    cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)  # green rectangle, thickness 2

# Initialize predictor
overrides = dict(conf=0.25, task="segment", mode="predict", model="sam3.pt", half=True,
                 save=True, name="sam3_bbox_test", project="Output_Images")
predictor = SAM3SemanticPredictor(overrides=overrides)

# Set image
predictor.set_image(img_path)

# Provide bounding box exemplars to segment similar objects
results = predictor(bboxes=bboxes,
                    labels=[0, 0],
                    show=False,
                    boxes=False,
                    )
img_out = results[0].plot(boxes=False)

def side_by_side(left, right):
    # Scale the right image to match the left image's height before concatenating
    if left.shape[0] != right.shape[0]:
        scale = left.shape[0] / right.shape[0]
        new_w = int(right.shape[1] * scale)
        right = cv2.resize(right, (new_w, left.shape[0]), interpolation=cv2.INTER_AREA)
    return cv2.hconcat([left, right])

combined = side_by_side(img, img_out)
# cv2.namedWindow("Result", cv2.WND_PROP_AUTOSIZE)
cv2.imshow("Result", combined)  # use one consistent window name throughout
print("Showing result. Press spacebar to continue...")
while True:
    key = cv2.waitKey(0)
    if key == 32:  # 32 is the ASCII code for spacebar
        break
cv2.destroyAllWindows()
```
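Since the question is about negative exemplars, a minimal sketch of mixing one positive and one negative box may help frame it. This assumes the common SAM prompt convention that label `1` marks a positive exemplar and `0` a negative one; the actual predictor call is commented out because it needs `sam3.pt` and the image on disk:

```python
# Hedged sketch: pairing labels with exemplar boxes, assuming the SAM
# convention that 1 = positive exemplar and 0 = negative exemplar.
bboxes = [[269.0, 145.0, 313.0, 192.0], [370.0, 190.0, 414.0, 237.0]]
labels = [1, 0]  # keep objects like the first box, suppress ones like the second

# One label per box; mismatched lengths are an easy mistake to make.
assert len(labels) == len(bboxes)

# results = predictor(bboxes=bboxes, labels=labels)  # requires sam3.pt
for box, lab in zip(bboxes, labels):
    kind = "positive" if lab == 1 else "negative"
    print(f"{kind} exemplar: {box}")
```

With `labels=[0, 0]` every exemplar is negative, so whether that should return "everything except these objects" or nothing at all is exactly the ambiguity the question raises.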