pytorch custom dataloader support #7487
Unanswered
externalsupplierstaff asked this question in Q&A
Replies: 1 comment
In terms of devices, what is the difference between the following? What do the type and dtype of each one end up being?

```python
import numpy as np
import jax.numpy as jnp

arrays = [np.array([3.14]),
          jnp.array([3.14]),
          np.array([3.14], dtype=jnp.float32),
          jnp.array([3.14], dtype=jnp.float32)]

for arr in arrays:
    print(arr)
    print(type(arr))
    print(arr.dtype)
    print('-' * 50)
```
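
A minimal sketch of how to check where each of these actually ends up, assuming JAX 0.4+ (where `jnp` results are `jax.Array` objects); the device repr shown in the comments may differ by backend:

```python
import numpy as np
import jax
import jax.numpy as jnp

np_arr = np.array([3.14], dtype=jnp.float32)   # jnp.float32 is just an alias for np.float32
jnp_arr = jnp.array([3.14])                    # lives on JAX's default device

print(type(np_arr), np_arr.dtype)    # <class 'numpy.ndarray'> float32: plain host memory
print(type(jnp_arr), jnp_arr.dtype)  # a jax.Array; float32 by default (x64 disabled)
print(jnp_arr.devices())             # e.g. {CudaDevice(id=0)} if a GPU is visible
print(jax.devices())                 # every device JAX can see
```

In short: the `np.array(...)` constructions never leave host memory regardless of the dtype argument, while the `jnp.array(...)` ones are placed on JAX's default device, which is the GPU when one is visible.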
Hi! I love JAX because it's very transparent. However, when implementing a custom PyTorch Dataset and the subsequent DataLoader, I have the question of where the data is loaded to. More specifically, in the docs' tutorial about data loading there is no mention of `device`, `cuda`, or `gpu`, and I also see in the documentation that the (pre)allocation is done automagically. My questions are the following:

1. Should I use `pin_memory=True` in the DataLoader?
2. The arrays will be of `jnp.float32` type as long as a GPU is available: true or false?
3. Do I need an equivalent of `to(device)`? (See the sketch below.)
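
A minimal sketch of the overall pattern, assuming `torch` is installed and JAX can see a GPU; `ToyDataset` and `numpy_collate` are hypothetical names for illustration, not part of either library. The DataLoader here only ever yields host-side NumPy arrays (so `pin_memory` is not doing the device transfer for you), and `jax.device_put` plays the role of `.to(device)`; arrays passed into a jit-compiled function are moved to the default device automatically anyway:

```python
import numpy as np
import jax
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):          # hypothetical dataset, for illustration only
    def __init__(self, n=1024):
        self.x = np.random.rand(n, 32).astype(np.float32)
        self.y = np.random.randint(0, 2, size=(n,))

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

def numpy_collate(batch):
    """Stack samples into NumPy arrays instead of torch.Tensors."""
    xs, ys = zip(*batch)
    return np.stack(xs), np.asarray(ys)

loader = DataLoader(ToyDataset(), batch_size=64, shuffle=True,
                    collate_fn=numpy_collate)  # no pin_memory used here

device = jax.devices()[0]           # e.g. the first GPU if one is visible
for x_np, y_np in loader:
    # Explicit analogue of .to(device): copy the host batch onto the device.
    x = jax.device_put(x_np, device)
    y = jax.device_put(y_np, device)
    print(x.dtype, x.devices())     # float32, {the chosen device}
    break
```

Note that the resulting dtype is whatever the NumPy arrays carry, subject to JAX's default dtype canonicalization (float64 becomes float32 unless `jax_enable_x64` is set); it does not depend on whether a GPU is present.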