Add support for running inferences with bfloat16 #787

Open

paspf wants to merge 1 commit into facebookresearch:main from paspf:main

Conversation

@paspf commented Oct 28, 2024

When running inference with bfloat16, the script crashes inside the SamPredictor.predict() function while converting the tensors to numpy arrays. This is caused by numpy's lack of support for bfloat16.

PyTorch's .to() function is used to explicitly convert the tensor to float32, which addresses the problem.
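
A minimal sketch of the failure and the fix; the tensor name and shape below are illustrative assumptions, not the actual code inside SamPredictor.predict():

```python
import torch

# Hypothetical low-res mask logits, as a SAM predictor might produce them
# when the model runs in bfloat16 (names and shape are illustrative).
low_res_masks = torch.rand(1, 3, 256, 256, dtype=torch.bfloat16)

# Calling .numpy() directly on a bfloat16 tensor raises a TypeError,
# because numpy has no bfloat16 dtype:
#   low_res_masks.detach().cpu().numpy()  # crashes

# Explicitly converting to float32 with .to() before .numpy() avoids the crash:
masks_np = low_res_masks.detach().to(torch.float32).cpu().numpy()
print(masks_np.dtype)  # float32
```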

@facebook-github-bot added the CLA Signed label on Oct 28, 2024