In the original repository the evolved strategies were run against all the default strategies in the Axelrod library. This is slow and probably not necessary: the Meta* players, for example, are just combinations of the other players and are very computationally intensive, so it is probably fine to remove them.
This fork uses a subset of about 90 strategies, excluding the most computationally intensive ones (e.g. the hunters).
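
A minimal sketch of how such a subset might be built from the library's full strategy list, assuming the `axelrod.strategies` list of strategy classes (the exact exclusion rules used by this fork may differ):

```python
import axelrod

def filtered_strategies():
    """Drop the Meta* players and the hunters by class name (illustrative rule only)."""
    return [
        s for s in axelrod.strategies
        if not (s.__name__.startswith("Meta") or "Hunter" in s.__name__)
    ]

opponents = [strategy() for strategy in filtered_strategies()]
print(len(opponents), "opponents selected")
```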
## The strategies
The LookerUp strategies are based on lookup tables with two parameters (sketched below):
* n, the number of rounds of trailing history to use and
* m, the number of rounds of initial opponent play to use
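
As a rough illustration of the mechanics (a toy sketch, not the exact Axelrod implementation), a table for n = 1 and m = 1 keys the next move on the opponent's opening play plus the last round of play from both sides:

```python
# Toy lookup-table player with n = 1 round of trailing history and
# m = 1 round of initial opponent play. The values below are arbitrary;
# an evolved table would have these optimized by the genetic algorithm.
C, D = "C", "D"

lookup_table = {
    # (opponent's first m plays, my last n plays, opponent's last n plays): my next play
    ((C,), (C,), (C,)): C,
    ((C,), (C,), (D,)): D,
    ((C,), (D,), (C,)): C,
    ((C,), (D,), (D,)): D,
    ((D,), (C,), (C,)): C,
    ((D,), (C,), (D,)): D,
    ((D,), (D,), (C,)): D,
    ((D,), (D,), (D,)): D,
}

def next_move(my_history, opponent_history, m=1, n=1):
    """Look up the next play from the opening and trailing history."""
    if len(my_history) < max(m, n):
        return C  # cooperate until there is enough history to index the table
    key = (
        tuple(opponent_history[:m]),   # opponent's initial plays
        tuple(my_history[-n:]),        # my trailing plays
        tuple(opponent_history[-n:]),  # opponent's trailing plays
    )
    return lookup_table[key]
```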
### Open questions
* What's the best table for given values of n and m?
* What's the best table against parameterized strategies? For example, if the opponents are `[RandomPlayer(x) for x in np.arange(0, 1, 0.01)]`, what lookup table is best? Is it much different from the generic table? (See the sketch after this list.)
* Can we separate n into n1 and n2 where different amounts of history are used for the player and the opponent?
* Incorporate @GKDO's swarm model, which makes the tables non-deterministic for the same values of n and m. Does this produce better results for all n and m?
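
For the parameterized-opponents question above, a minimal sketch of the setup, assuming the library's `Random` player (which takes a cooperation probability; `RandomPlayer` in the note above is the same idea) and the standard `Tournament` interface; the turn and repetition counts here are arbitrary placeholders:

```python
import numpy as np
import axelrod

# Hypothetical opponent pool: one random player per cooperation probability.
# axelrod.Random(p) is assumed to cooperate with probability p.
opponents = [axelrod.Random(p) for p in np.arange(0, 1, 0.01)]

# A candidate table would be scored against this pool, e.g. via a round-robin
# tournament. TitForTat stands in here for an evolved LookerUp player.
players = opponents + [axelrod.TitForTat()]
tournament = axelrod.Tournament(players, turns=100, repetitions=5)
results = tournament.play()
print(results.ranked_names[:5])
```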
## Running
`python lookup-evolve.py -h`
will display help. There are a number of options, and you'll want to set the mutation rate.
The output files `evolve{n}-{m}.csv` can be sorted with `analyze_data.py`, which will output the best-performing tables. These can then be added back into Axelrod.