The `Tensor` initializer lets you manually specify a device for placement. For example, you should be able to place a `Tensor` explicitly on the first CPU in eager mode with the following:
```swift
let cpu0 = Device(kind: .CPU, ordinal: 0, backend: .TF_EAGER)
let cpuTensor1 = Tensor([0.0, 1.0, 2.0], on: cpu0)
```
However, this does not work for the `.TF_EAGER` backend: the Tensor is created on the default accelerator regardless of the device specified. If a GPU is present, attempting to force a Tensor onto the CPU has no effect.
As a workaround, wrapping eager Tensor operations in `withDevice(.cpu) { ... }` forces the eager Tensors within that closure to run on the first device of the specified kind, as in the sketch below.
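A minimal sketch of that workaround, assuming the standard `TensorFlow` module from Swift for TensorFlow; the specific tensor values and operation are only illustrative:

```swift
import TensorFlow

// Workaround sketch: run eager ops inside withDevice(.cpu) so they are
// pinned to the first CPU even when a GPU is available.
let result = withDevice(.cpu) { () -> Tensor<Float> in
    let a = Tensor<Float>([0.0, 1.0, 2.0])
    let b = Tensor<Float>([3.0, 4.0, 5.0])
    return a + b  // executed on the CPU because of the enclosing device scope
}
print(result)
```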
For X10, explicit placement does work and allows correct manual device placement (see the sketch below). Eager-mode Tensors should be modified to support this in the same way that X10 Tensors do.
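For comparison, a minimal sketch of explicit placement on the X10 backend; the `.XLA` backend case follows the standard `Device` API, and the printed device is what one would expect rather than verified output:

```swift
import TensorFlow

// X10 honors explicit placement: this Tensor lives on the first CPU
// even when an accelerator is present.
let cpu0 = Device(kind: .CPU, ordinal: 0, backend: .XLA)
let x = Tensor<Float>([0.0, 1.0, 2.0], on: cpu0)
print(x.device)  // expected to report the CPU device, not the default accelerator
```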