-
There are very few cases where we have tried reinforcement learning with ORT. Ideally, you can wrap ORTModule around any nn.Module class in your model and see if it works. Some examples that have been tried are here for reference. However, since we haven't tested this before, it might be error-prone, but feel free to file bugs with onnxruntime if you see issues. Also tagging @SherlockNoMad in case there is interest in this.
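For concreteness, a minimal sketch of the wrapping described above, assuming the onnxruntime-training package is installed; the critic architecture and sizes are purely illustrative, and whether a training-enabled ORT build is available for a given target (such as a Raspberry Pi) is a separate question:

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

# An ordinary PyTorch module, e.g. a small DDPG-style critic (sizes are illustrative).
class Critic(torch.nn.Module):
    def __init__(self, obs_dim=3, act_dim=1):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim + act_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

critic = ORTModule(Critic())  # wrap the nn.Module; forward/backward now run through ORT
optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3)

obs, act = torch.randn(32, 3), torch.randn(32, 1)
target_q = torch.randn(32, 1)

q = critic(obs, act)
loss = torch.nn.functional.mse_loss(q, target_q)
loss.backward()   # gradients computed through ORT's training graph
optimizer.step()  # standard PyTorch optimizer update
```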
-
Dear ONNX Runtime Team,
I am trying to deploy an RL algorithm on a Raspberry Pi. In addition to evaluating the policy, I also want to continue the training process (based on a DDPG algorithm). The original algorithm is defined in Simulink/MATLAB.
I am able to install onnxruntime on the Raspberry Pi. Is it possible to define a DDPG-based RL algorithm with onnxruntime so that I can do policy evaluation and training on the Raspberry Pi? Are there other ways of running a DDPG-based RL algorithm on Raspberry Pi hardware, perhaps with C++? I am grateful for any tip that might point me in the right direction.
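For the policy-evaluation part, what I picture with plain onnxruntime is roughly the following (a minimal sketch; the model file name, the observation shape, and how the actor was exported are placeholders from my setup):

```python
import numpy as np
import onnxruntime as ort

# Actor network exported from MATLAB/Simulink as ONNX (file name is a placeholder).
session = ort.InferenceSession("ddpg_actor.onnx")
input_name = session.get_inputs()[0].name

obs = np.zeros((1, 3), dtype=np.float32)  # placeholder observation

# Policy evaluation: one forward pass through the actor network.
action = session.run(None, {input_name: obs})[0]
print(action)
```

Inference like this seems straightforward; the open question for me is the training half, i.e. updating the actor/critic weights on the device.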
I have already tried different ways of achieving my goal, which I will outline here:
1:
Since MATLAB allows deploying the policy on the hardware but no further training (see https://stackoverflow.com/questions/68412817/matlab-to-onnx-to-tensorflow), I exported the neural nets (agent and critic) to ONNX and from there to TensorFlow, since I assumed it might be more ML-friendly and might allow for a training process.
This process worked fine on my laptop (export the MATLAB nets to ONNX, import the ONNX nets into Keras, run the RL script with the training process; see the sketch after the links below). However, I am not able to install onnx on my Raspberry Pi: building the wheel fails. The alternatives of building the wheel in Docker all fail for me as well, and I am sadly not able to debug the error :/.
Build the onnx wheel in Docker:
https://stackoverflow.com/questions/59715507/how-to-load-or-infer-onnx-models-in-edge-devices-like-raspberry-pi
https://dev.to/mshr_h/how-to-build-onnx-onnx-for-your-armv7l-devices-3cm5
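For reference, the laptop-side ONNX-to-Keras step mentioned above looks roughly like this (a minimal sketch assuming the onnx2keras converter; the file name and the ONNX input name are placeholders from my setup):

```python
import onnx
from onnx2keras import onnx_to_keras

# Network exported from MATLAB as ONNX (file name is a placeholder).
onnx_model = onnx.load("ddpg_actor.onnx")
onnx.checker.check_model(onnx_model)

# Convert to Keras; the input name must match the ONNX graph input.
k_model = onnx_to_keras(onnx_model, ["input"])

# Save the Keras model so it can be copied to the Raspberry Pi (see option 2).
k_model.save("ddpg_actor_keras.h5")
```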
2:
Alternatively, I thought about doing the export from MATLAB to ONNX to Keras on my laptop and then uploading the Keras models to the Raspberry Pi. Again, this works fine on my laptop, but on the Raspberry Pi I am not able to load the Keras models with tf.keras.models.load_model(). I assume this is because the OS on the Raspberry Pi is 32-bit. I thought about changing the OS, but I am hesitant, considering that I might run into further issues along the way, that I would lose all programs installed on the Raspberry Pi, and that it might still not be enough to do what I want in the end.
This site states that I will not be able to train my model:
https://qengineering.eu/install-tensorflow-2.5-on-raspberry-64-os.html
To be honest, I do not quite understand why not; I would be grateful for any explanation.
Sure, the code execution might be too slow, but if the new weights are calculated and updated on the Raspberry Pi, that would mean training, no?
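Just to make concrete what I mean by calculating and updating weights, a single training step is essentially the following (a minimal TensorFlow sketch; the network, loss, and data are placeholders, not my actual DDPG update):

```python
import numpy as np
import tensorflow as tf

# Placeholder critic and batch; the real ones would come from the DDPG replay buffer.
critic = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.keras.optimizers.Adam(1e-3)

x = np.random.randn(32, 4).astype(np.float32)
target_q = np.random.randn(32, 1).astype(np.float32)

with tf.GradientTape() as tape:
    q = critic(x, training=True)
    loss = tf.reduce_mean(tf.square(q - target_q))

# Compute gradients and apply the weight update: this is the "training" part.
grads = tape.gradient(loss, critic.trainable_variables)
optimizer.apply_gradients(zip(grads, critic.trainable_variables))
```

If a loop like this can run on the Pi, that would be training as far as I understand it.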
3:
I am able to install onnxruntime on the Raspberry Pi, but I cannot find documentation that shows me if/how it would be possible to do training according to the DDPG RL algorithm.
I hope this is the right place for a discussion. It seems like a fairly relevant topic, but there is not a lot out there concerning RL algorithms on microcomputers.
Thank you and BR,
V