When I train AgentDDPG, this error happens.
In `elegant_finrl/agent.py`, line 278, function `select_action`:
```python
def select_action(self, state) -> np.ndarray:
    states = torch.as_tensor((state,), dtype=torch.float32, device=self.device).detach_()
    action = self.act(states)[0].cpu().numpy()
    return (action + self.ou_noise()).ratio_clip(-1, 1)
```
`action + self.ou_noise()` is a `np.ndarray`, which has no attribute named `ratio_clip`, so the call raises an `AttributeError`.
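A minimal sketch of a possible fix: `np.ndarray` does provide a standard `clip` method (equivalently `np.clip`), which bounds every element to the given range. The arrays below are hypothetical stand-ins for the actor output and the OU noise, just to show the behavior:

```python
import numpy as np

# Hypothetical stand-ins for self.act(states)[0].cpu().numpy() and self.ou_noise()
action = np.array([0.5, -1.7, 2.3], dtype=np.float32)
noise = np.array([0.1, -0.2, 0.05], dtype=np.float32)

# ndarray.clip exists (unlike ratio_clip) and bounds each element to [-1, 1]
clipped = (action + noise).clip(-1, 1)
print(clipped)  # every element now lies in [-1, 1]
```

So replacing `.ratio_clip(-1, 1)` with `.clip(-1, 1)` on the final line would keep the intended clamping behavior, assuming clamping to [-1, 1] is what `ratio_clip` was meant to do.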