# **Playing trading games with deep reinforcement learning**
### **Deep reinforcement learning for time series: playing idealized trading games**
This repo contains the code for this [paper](https://arxiv.org/abs/1803.03916). Deep reinforcement learning is used to find optimal strategies in two scenarios:
* Momentum trading: capture the underlying dynamics
* Arbitrage trading: utilize the hidden relation among the inputs
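As a toy illustration of the momentum game (a sketch of my own, not code from this repo; the series generator and reward definition are assumptions), a noisy periodic "price" series can be generated and an agent's position scored against the next price move:

```python
import numpy as np

def toy_price_series(n_steps=200, period=20, noise=0.1, seed=0):
    """Generate a noisy sine-wave 'price' series -- a toy stand-in for
    an idealized market with underlying dynamics to capture."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps)
    return np.sin(2 * np.pi * t / period) + noise * rng.standard_normal(n_steps)

def momentum_reward(prices, actions):
    """Per-step reward: position (+1 long, 0 flat, -1 short) times the
    next price change -- one simple way to score a trading agent."""
    returns = np.diff(prices)
    return actions[:-1] * returns

prices = toy_price_series()
actions = np.sign(np.gradient(prices))  # an oracle momentum policy, for illustration
total_reward = momentum_reward(prices, actions).sum()
```

An agent that has captured the dynamics earns a positive cumulative reward; a random policy hovers around zero.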
Several neural networks are compared:
* Recurrent Neural Networks (GRU/LSTM)
* Convolutional Neural Network (CNN)
* Multi-Layer Perceptron (MLP)

In these experiments the stacked GRU performs best; however, as these are idealized worlds for the agent to play in, further investigation is warranted.
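Whichever network is used as the function approximator, a deep Q-learning agent regresses toward Bellman targets. A minimal sketch of that target computation for a batch of transitions (my own illustration; variable names and the exact training setup are assumptions, not taken from this repo):

```python
import numpy as np

def q_targets(rewards, q_next, done, gamma=0.99):
    """Bellman targets y = r + gamma * max_a' Q(s', a'),
    with zero future value at episode end (done == 1)."""
    return rewards + gamma * (1.0 - done) * q_next.max(axis=1)

rewards = np.array([1.0, 0.5])
q_next  = np.array([[0.2, 0.8],    # Q-values of the next state, per action
                    [0.1, 0.3]])
done    = np.array([0.0, 1.0])     # second transition ends the episode
targets = q_targets(rewards, q_next, done)  # [1.0 + 0.99 * 0.8, 0.5]
```

The network is then fit with a mean-squared-error loss between its predicted Q-values for the taken actions and these targets.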
### Dependencies
You can get all dependencies via the [Anaconda environment](https://conda.io/docs/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file) file, [env.yml](https://github.com/golsun/deep-RL-time-series/blob/master/env.yml):
```
conda env create -f env.yml
```
Just call the main function:
```
python main.py
```
You can choose the model (MLP, CNN, or GRU) and its parameters in main.py, and play with sampler.py to generate different artificial input datasets for training and testing. If you get good results or run into any trouble, please contact me at gxiang1228@gmail.com