Added torchrun compatibility for distributed training across multiple GPUs in a single node (single instance) #1568
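The feature under test in this run is torchrun-based distributed training on a single multi-GPU instance. The run page itself contains no code, so the following is a minimal sketch of how such a job is typically launched through the SageMaker Python SDK's torch_distributed distribution; the script name, IAM role, instance type, and S3 path are placeholders, and the exact interface added by this PR may differ.

```python
# Hedged sketch: single-node, multi-GPU training launched with torchrun
# via the SageMaker PyTorch estimator. All names below are placeholders.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",          # hypothetical training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder IAM role
    framework_version="2.0.0",
    py_version="py310",
    instance_count=1,                # single node (single instance)
    instance_type="ml.p4d.24xlarge", # one instance with multiple GPUs
    # torch_distributed tells the SDK to launch the script with torchrun,
    # spawning one worker process per GPU on the instance.
    distribution={"torch_distributed": {"enabled": True}},
)

estimator.fit({"training": "s3://my-bucket/train"})  # placeholder S3 input
```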

Triggered via pull request August 8, 2024 22:42
@brunopistone synchronize #4766
Status: Cancelled
Total duration: 59m 43s
codebuild-ci.yml
on: pull_request_target

collab-check (3s)
wait-for-approval (2s)
Matrix: unit-tests

Deployment protection rules

Reviewers, timers, and other rules protecting deployments in this run

Event: sage-maker approved Aug 8, 2024
Environments: manual-approval

Annotations

3 errors

codestyle-doc-tests: Build status: FAILED
integ-tests: Canceling since a higher priority waiting request for 'PR Checks-4766' exists
integ-tests: The operation was canceled.