Description
App Version
v3.25.4
API Provider
OpenAI Compatible
Model Used
Qwen3-235B-A22B
Roo Code Task Links (Optional)
Requests sent through the OpenAI Compatible provider to Qwen3-235B-A22B fail with the following error, which is returned by the backend inference engine (a HIP/ROCm out-of-memory on the serving GPU); Roo Code then retries automatically:

Engine loop is not running. Inspect the stacktrace to find the original error: OutOfMemoryError('HIP out of memory. Tried to allocate 1.63 GiB. GPU 0 has a total capacity of 63.98 GiB of which 1.18 GiB is free. Of the allocated memory 57.09 GiB is allocated by PyTorch, with 26.00 MiB allocated in private pools (e.g., HIP Graphs), and 1.23 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)').
Retry attempt 1
Retrying now...
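The error message itself suggests setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True on the serving side. Below is a minimal sketch of applying that suggestion together with tighter memory settings; it assumes the backend is vLLM's Python entry point (the "Engine loop is not running" wording matches vLLM, but the backend is not named in this report), and all parameter values are illustrative only.

```python
import os

# Per the suggestion in the error message: must be set before PyTorch initializes
# the HIP allocator, to reduce fragmentation of reserved-but-unallocated memory.
os.environ["PYTORCH_HIP_ALLOC_CONF"] = "expandable_segments:True"

from vllm import LLM  # assumption: the backend is vLLM

llm = LLM(
    model="Qwen/Qwen3-235B-A22B",   # assumed model ID; use whatever the server actually loads
    tensor_parallel_size=8,         # illustrative: spread weights across more GPUs
    gpu_memory_utilization=0.85,    # illustrative: leave more headroom than the 0.90 default
    max_model_len=8192,             # illustrative: a smaller context window shrinks the KV cache
)
```

The same knobs are exposed as command-line flags (--gpu-memory-utilization, --max-model-len, --tensor-parallel-size) if the server is launched with vllm serve.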
Steps to Reproduce
1. Configure the OpenAI Compatible provider with an endpoint serving Qwen3-235B-A22B.
2. Start a task so Roo Code sends a request to the model.
3. The backend returns the HIP out-of-memory error shown above; Roo Code retries ("Retry attempt 1", "Retrying now...") and hits the same error.
Outcome Summary
Every request fails with the same HIP out-of-memory error from the serving backend, and Roo Code's automatic retries do not recover, so the task cannot proceed.
Relevant Logs or Errors (Optional)