Cache input data in GPU memory between consecutive queries #13435
-
The FAQ page https://docs.nvidia.com/spark-rapids/user-guide/latest/faq.html states that:
May I ask if anyone could confirm that I understand this correctly: does the input data always need to be transferred from host memory to device memory again between consecutive queries?
-
Yes, your understanding is correct. Caching data in GPU memory is not currently supported, and input data will need to be transferred from host memory to device memory between consecutive queries, even if the RAPIDS Cache Serializer is used.
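For reference, below is a minimal sketch of what "using the RAPIDS Cache Serializer" looks like, assuming a PySpark session with the spark-rapids plugin jar on the classpath; the config keys follow the spark-rapids documentation, while the input path and column names are hypothetical. Even with this setup, the cached batches live in host memory, so each query still copies the data to the device.

```python
# Minimal sketch, assuming the spark-rapids plugin jar is available on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rapids-cache-example")
    # Enable the RAPIDS Accelerator plugin.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    # Use the RAPIDS Cache Serializer for df.cache() / CACHE TABLE.
    # Cached batches are kept (compressed) in host memory, not GPU memory,
    # so each query still transfers the data host -> device.
    .config("spark.sql.cache.serializer",
            "com.nvidia.spark.ParquetCachedBatchSerializer")
    .getOrCreate()
)

df = spark.read.parquet("/data/input")  # hypothetical input path
df.cache()

# Both queries reuse the cached host-side batches, but the data is copied to
# device memory again for each query's GPU execution.
df.groupBy("col_a").count().show()      # hypothetical column name
df.filter("col_b > 0").count()          # hypothetical column name
```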