Replies: 2 comments
-
The GPU device on M2 is called "mps", so you have to use:
Device device = Device.of("mps", -1);
Currently it has many limitations with Torch, so we cannot make it the default. See: #2044
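Frank's one-liner can be sketched in a fuller form as follows. This is a minimal sketch assuming the `ai.djl:api` artifact is on the classpath; the class name and the CPU fallback are illustrative additions, not part of the original reply:

```java
import ai.djl.Device;

public class MpsExample {
    public static void main(String[] args) {
        // On Apple Silicon, the PyTorch engine exposes the GPU as the "mps"
        // device type. Device.of takes a device type and a device id;
        // -1 means "use the default id for this device type".
        Device mps = Device.of("mps", -1);
        System.out.println(mps.getDeviceType()); // "mps"

        // MPS support in the Torch engine is still limited (see #2044),
        // so a CPU fallback is a reasonable default on other platforms.
        Device device = "aarch64".equals(System.getProperty("os.arch"))
                ? mps
                : Device.cpu();
        System.out.println(device);
    }
}
```

A model can then be loaded onto this device, e.g. via `Criteria.builder().optDevice(device)` in DJL's model zoo API.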
-
Thanks Frank. I tried it (mps) to calculate embeddings with the Sentence Transformers all_mini_lm model, and the performance was actually a bit worse than using the CPU. Could this be because of the limitations you mentioned above? I did not get any exceptions, just slower performance.
…On Mon, May 6, 2024 at 10:26 PM Frank Liu wrote:
The GPU device on M2 is called "mps", you have to use:
Device device = Device.of("mps", -1);
Currently it has many limitations with Torch. We cannot make it default.
see: #2044
-
Hello,
I am trying to use this library. I have added the following dependencies:
and this code:
which gives me
on a Mac with an M2 chip. So my question is: how can I leverage the GPU with this library on a Mac?
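The dependency and code snippets in this post did not survive the page scrape. For orientation only, a typical DJL + PyTorch dependency set looks like the following; the version number is an illustrative assumption, not the poster's actual file:

```xml
<!-- Hypothetical example; check the DJL releases for the current version -->
<dependency>
    <groupId>ai.djl</groupId>
    <artifactId>api</artifactId>
    <version>0.28.0</version>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-engine</artifactId>
    <version>0.28.0</version>
</dependency>
```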