
Commit f0912dc

Update Readme
1 parent 60d7949 commit f0912dc

1 file changed (+18, -18)


README.md

Lines changed: 18 additions & 18 deletions
@@ -1,31 +1,31 @@
-## Installation
+# Axelrod Evolvers

-Install axelrod:
+This repository contains training code for the strategies LookerUp, PSOGambler, and EvolvedANN (feed-forward neural network).
+There are three scripts, one for each strategy:
+* looker_evolve.py
+* pso_evolve.py
+* ann_evolve.py

-```
-pip install axelrod numpy cloudpickle docopt
-```
-
-Clone this repository
-
-## Some Changes
-
-In the original repository the strategies were run against all the default strategies in the Axelrod library. This is slow and probably not necessary. For example the Meta* players are just combinations of the other players, and very computationally intensive; it's probably ok to remove those.
+In the original iteration the strategies were run against all the default strategies in the Axelrod library. This is slow and probably not necessary. For example the Meta players are just combinations of the other players, and very computationally intensive; it's probably ok to remove those.

-This fork uses a subset of about 90 strategies, excluding the most computationally intensives (e.g. the hunters).
-
-## The strategies
+## The Strategies

The LookerUp strategies are based on lookup tables with two parameters:
* n, the number of rounds of trailing history to use and
* m, the number of rounds of initial opponent play to use

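The n and m parameters above can be illustrated with a toy table. This is a minimal sketch, not the project's actual implementation: keys combine the opponent's first m plays with the last n rounds of joint history, and values are the next action. The names `make` of the key tuple, `table`, and `lookup_move` are hypothetical.

```python
# Toy illustration of a LookerUp-style table with n = 1 (trailing round)
# and m = 1 (round of initial opponent play). 'C' = cooperate, 'D' = defect.
from itertools import product

n, m = 1, 1
actions = "CD"

# Enumerate every possible key: 2^m * 2^n * 2^n = 8 entries at this toy size.
keys = list(product(
    ("".join(p) for p in product(actions, repeat=m)),  # opponent's first m plays
    ("".join(p) for p in product(actions, repeat=n)),  # our last n plays
    ("".join(p) for p in product(actions, repeat=n)),  # opponent's last n plays
))

# A hypothetical filled-in table: tit-for-tat-like, echo the opponent's last move.
table = {k: k[2][-1] for k in keys}

def lookup_move(table, opp_first, own_hist, opp_hist):
    """Return the table's action for the current state of play."""
    key = (opp_first[:m], own_hist[-n:], opp_hist[-n:])
    return table[key]
```

Training then amounts to searching over the 2^(2^m * 2^n * 2^n) possible assignments of actions to keys, which is why an evolutionary search is used rather than enumeration.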
+PSOGambler is a stochastic version of LookerUp, trained with a particle swarm algorithm.
+
+EvolvedANN is an algorithm based on a feed-forward neural network with one hidden layer.
+
+All three strategies are trained with an evolutionary algorithm and are examples of reinforcement learning.
+
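The mutate-score-select cycle behind this kind of training can be sketched generically. This is an illustrative skeleton only, not the repository's scripts: there, candidates are scored by tournament payoff against the opponent pool, whereas `score` and `mutate` below work on a toy problem.

```python
import random

def evolve(initial, score_fn, mutate, population=10, generations=50, keep=2, seed=0):
    """Generic evolutionary loop: score a population, keep the best few,
    and refill the next generation with mutants of the survivors."""
    rng = random.Random(seed)
    pop = [mutate(initial, rng) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=score_fn, reverse=True)
        survivors = pop[:keep]
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(population - keep)]
    return max(pop, key=score_fn)

# Toy fitness: maximize the number of 'C' entries in a fixed-length string.
def score(s):
    return s.count("C")

def mutate(s, rng):
    i = rng.randrange(len(s))  # rewrite one random position
    return s[:i] + rng.choice("CD") + s[i + 1:]

best = evolve("D" * 8, score, mutate)
```

Because survivors are carried over unmutated, the best score never decreases between generations; the same skeleton applies whether the genome is a lookup table, a vector of probabilities, or network weights.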
### Open questions

-* What's the best table for n, m?
-* What's the best table against parameterized strategies. For example, if the opponents are `[RandomPlayer(x) for x in np.arange(0, 1, 0.01)], what lookup table is best? Is it much different from the generic table?
+* What's the best table for n, m for LookerUp and PSOGambler?
+* What's the best table against parameterized strategies? For example, if the opponents are `[RandomPlayer(x) for x in np.arange(0, 1, 0.01)]`, what lookup table is best? Is it much different from the generic table?
* Can we separate n into n1 and n2 where different amounts of history are used for the player and the opponent?
-* Incorporate @GKDO's swarm model that makes the tables non-deterministic, for the same values of n and m. Does this produce better results for all n and m?
+* Are there other features that would improve the performance of EvolvedANN?
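The parameterized opponent pool from the second open question can be built without NumPy. `RandomPlayer` here is a hypothetical stand-in for the library's random strategy (cooperate with probability p), used only to make the pool concrete; the actual Axelrod class may differ.

```python
import random

class RandomPlayer:
    """Hypothetical stand-in for a library random strategy:
    cooperates with probability p, defects otherwise."""
    def __init__(self, p):
        self.p = p

    def play(self, rng):
        return "C" if rng.random() < self.p else "D"

# The pool from the open question, np.arange(0, 1, 0.01) spelled out:
# 100 players with cooperation probabilities 0.00, 0.01, ..., 0.99.
opponents = [RandomPlayer(x / 100) for x in range(100)]
```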

## Running
@@ -62,4 +62,4 @@ python lookup_evolve.py -p 4 -s 4 -g 100000 -k 20 -u 0.002 -b 20 -i 4 -o evolve
```
## Analyzing

-The output files `evolve{n}-{m}.csv` can be easily sorted by `analyze_data.py`, which will output the best performing tables. These can be added back into Axelrod.
+The output files `evolve{n}-{m}.csv` can be easily sorted by `analyze_data.py`, which will output the best performing tables. These can be added back into Axelrod.
