
Commit bd70ab2 (merge of 2 parents: dd044ec + 1919233)

1 file changed: +10 -11 lines


README.md

Lines changed: 10 additions & 11 deletions
@@ -1,19 +1,18 @@
 
-This repo is the source code for:
+# **Playing trading games with deep reinforcement learning**
 
-### **Deep reinforcement learning for time series: playing idealized trading games**
+This repo is the code for this [paper](https://arxiv.org/abs/1803.03916). Deep reinforcement learning is used to find optimal strategies in these two scenarios:
+* Momentum trading: capture the underlying dynamics
+* Arbitrage trading: utilize the hidden relations among the inputs
 
-You can find the paper [here](https://arxiv.org/abs/1803.03916); please reach me at gxiang1228@gmail.com with any questions or comments!
-
-In this paper I explored deep reinforcement learning as a method to find optimal trading strategies. I compared several neural networks: stacked Gated Recurrent Unit (GRU), stacked Long Short-Term Memory (LSTM), stacked Convolutional Neural Network (CNN), and Multi-Layer Perceptron (MLP). I designed two simple trading games to test whether the trained agent can:
-* capture the underlying dynamics (to be used in momentum trading)
-* utilize the hidden relations among the inputs (to be used in arbitrage trading)
-
-It turns out that GRU performs best. However, as these are simplified worlds for the agent to play in, further investigation is warranted.
+Several neural networks are compared:
+* Recurrent Neural Networks (GRU/LSTM)
+* Convolutional Neural Network (CNN)
+* Multi-Layer Perceptron (MLP)
 
 ### Dependencies
 
-After you get this repo, you need packages like keras, tensorflow, and h5py. But don't worry about these dependencies: I've created an [Anaconda environment](https://conda.io/docs/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file) file, env.yml, for you to run the repo. Simply do this in your terminal:
+You can get all dependencies via the [Anaconda](https://conda.io/docs/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file) environment file, [env.yml](https://github.com/golsun/deep-RL-time-series/blob/master/env.yml):
 
 conda env create -f env.yml
 
@@ -22,4 +21,4 @@ Just call the main function
 
 python main.py
 
-But you can choose the model (MLP, CNN, or GRU) and parameters in the main function, and you can play with sampler.py to generate different artificial input datasets for training and testing.
+You can play with the model parameters (specified in main.py). If you get good results or run into any trouble, please contact me at gxiang1228@gmail.com.
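The "idealized trading game" the README describes can be sketched as a toy environment. Everything below (the `MomentumGame` class, its `drift`/`noise` parameters, the action encoding) is a hypothetical illustration of the momentum scenario, not the repo's actual sampler.py or game logic:

```python
import random

class MomentumGame:
    """Hypothetical sketch of an idealized momentum-trading game:
    the price carries a persistent trend plus noise, and the agent
    earns the price change only while holding a long position."""

    def __init__(self, drift=1.0, noise=0.5, seed=0):
        self.rng = random.Random(seed)
        self.drift = drift    # persistent trend component (the momentum to capture)
        self.noise = noise    # standard deviation of per-step Gaussian noise
        self.price = 100.0

    def step(self, action):
        """action: 1 = hold long, 0 = stay flat. Returns (reward, new_price)."""
        change = self.drift + self.rng.gauss(0.0, self.noise)
        self.price += change
        reward = change if action == 1 else 0.0
        return reward, self.price

game = MomentumGame()
total = sum(game.step(1)[0] for _ in range(100))  # an always-long agent
```

An agent that learns the underlying dynamics would go long when the recent trend is positive; in this toy world the always-long policy accumulates roughly `drift` per step.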

0 commit comments
