These methods provide direct support for storing and retrieving PyTorch tensors. They automatically handle serialization and metadata, and include built-in support for **Tensor Parallelism (TP)**, splitting and reconstructing tensor shards automatically.
⚠️ **Note**: These methods require `torch` to be installed and available in the environment.
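The shard handling described above can be pictured with a small sketch. The helper names and the choice of split dimension here are illustrative assumptions, not the store's actual policy:

```python
import torch

# Hypothetical sketch of TP shard handling: split a tensor into one
# shard per TP rank, and reconstruct it by concatenating in rank order.
# The real store's split policy (e.g. which dimension) may differ.
def split_for_tp(tensor: torch.Tensor, tp_size: int) -> list:
    # One shard per TP rank, split along the last dimension (assumption).
    return list(torch.chunk(tensor, tp_size, dim=-1))

def reconstruct_from_tp(shards: list) -> torch.Tensor:
    # Concatenate shards back together in rank order.
    return torch.cat(shards, dim=-1)

full = torch.arange(8, dtype=torch.float32).reshape(2, 4)
shards = split_for_tp(full, tp_size=2)
assert shards[0].shape == (2, 2)
assert torch.equal(reconstruct_from_tp(shards), full)
```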
#### get_tensor_into()
Get a PyTorch tensor from the store directly into a pre-allocated buffer.
- `base_keys` (List[str]): List of base identifiers.
- `buffer_ptrs` (List[int]): List of pointers to the pre-allocated tensor buffers; each buffer must be registered.
- `sizes` (List[int]): List of the buffer sizes.
**Returns:**
- `List[torch.Tensor]`: List of retrieved tensors (or shards); contains `None` for missing keys.
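The contract above — copying stored bytes into caller-provided, pre-allocated buffers and returning `None` for missing keys — can be illustrated with a minimal in-memory mock. `MockStore` and its method shapes are hypothetical; the real API takes raw registered buffer pointers, whereas this sketch stands them in with `uint8` tensors:

```python
import torch

class MockStore:
    """Hypothetical in-memory stand-in illustrating get_tensor_into()."""

    def __init__(self):
        self._data = {}

    def put_tensor(self, key: str, tensor: torch.Tensor) -> None:
        self._data[key] = tensor.contiguous().clone()

    def get_tensor_into(self, base_keys, buffers, sizes):
        out = []
        for key, buf, size in zip(base_keys, buffers, sizes):
            src = self._data.get(key)
            if src is None:
                out.append(None)  # missing key -> None, per the contract
                continue
            nbytes = src.numel() * src.element_size()
            assert nbytes <= size, "pre-allocated buffer too small"
            # Copy the tensor's bytes into the caller's buffer, then view
            # that region with the original dtype and shape.
            buf[:nbytes].copy_(src.reshape(-1).view(torch.uint8))
            out.append(buf[:nbytes].view(src.dtype).reshape(src.shape))
        return out

store = MockStore()
store.put_tensor("k", torch.ones(2, 3))
buf = torch.empty(64, dtype=torch.uint8)  # pre-allocated buffer
result = store.get_tensor_into(["k"], [buf], [buf.numel()])
assert torch.equal(result[0], torch.ones(2, 3))
```

Retrieving into a pre-allocated, registered buffer avoids an extra allocation and copy on the hot path, which is the point of this variant over a plain get.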
#### get_tensor_into_with_tp()
Get a PyTorch tensor from the store directly into a pre-allocated buffer, retrieving only the shard that corresponds to the given Tensor Parallel rank.
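A minimal sketch of the rank-selection behavior, assuming shards are split along the last dimension; the function signature and `torch.chunk`-based split are illustrative assumptions, not the real API:

```python
import torch

def get_tensor_into_with_tp(stored: torch.Tensor, tp_rank: int,
                            tp_size: int, buffer: torch.Tensor) -> torch.Tensor:
    # Select only the shard for this TP rank (split policy is assumed).
    shard = torch.chunk(stored, tp_size, dim=-1)[tp_rank]
    nbytes = shard.numel() * shard.element_size()
    # Copy the shard's bytes into the pre-allocated uint8 buffer and
    # view that region with the shard's dtype and shape.
    buffer[:nbytes].copy_(shard.contiguous().reshape(-1).view(torch.uint8))
    return buffer[:nbytes].view(shard.dtype).reshape(shard.shape)

full = torch.arange(8, dtype=torch.float32).reshape(2, 4)
buf = torch.empty(64, dtype=torch.uint8)  # pre-allocated buffer
shard = get_tensor_into_with_tp(full, tp_rank=1, tp_size=2, buffer=buf)
assert torch.equal(shard, full[:, 2:])
```

Each TP rank thus copies only its own slice of the stored tensor, rather than fetching the full tensor and discarding the rest.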