See [LICENSE.txt](https://github.com/roycoding/slots/blob/master/LICENSE.txt)
### Introduction
slots is a Python library designed to allow the user to explore and use simple multi-armed bandit (MAB) strategies. The basic concept behind the multi-armed bandit problem is that you are faced with *n* choices (e.g. slot machines, medicines, or UI/UX designs), each of which results in a "win" with some unknown probability. Multi-armed bandit strategies are designed to let you quickly determine which choice will yield the highest payout over time, while reducing the number of tests (or arm pulls) needed to make this determination. Typically, MAB strategies attempt to strike a balance between "exploration", testing different arms in order to find the best, and "exploitation", using the best known choice. There are many variations of this problem; see [here](https://en.wikipedia.org/wiki/Multi-armed_bandit) for more background.
slots provides a hopefully simple API to allow you to explore, test, and use these strategies. Basic usage looks like this:
```Python
import slots

# Try 3 bandits with arbitrary win probabilities
b = slots.MAB()
b.run()
```
To inspect the results and compare the estimated win probabilities versus the true win probabilities:
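The code for this step is elided in this excerpt. A minimal sketch of what inspection might look like, assuming `best()` and `est_payouts()` methods and a `bandits.probs` attribute on the `MAB` object (these names are assumptions, not confirmed by this excerpt):

```Python
# Index of the arm currently estimated to be best (assumed method name)
print(b.best())

# Estimated payout probability for each arm (assumed method name)
print(b.est_payouts())

# True win probabilities the simulation drew from (assumed attribute)
print(b.bandits.probs)
```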
For "real world" (online) usage, test results can be sequentially fed into an `MAB` object. The tests will continue until a stopping criterion is met.
Using slots to determine the best of 3 variations on a live website:
```Python
mab = slots.MAB(live=True, payouts=[0]*3)
```
Make the first choice randomly, record the response, and input the reward. Here, bandit 2 was chosen and paid out 1. Run online trials (feeding in the most recent result) until the stopping criterion is met.
```Python
mab.online_trial(bandit=2, payout=1)
```
The response of `mab.online_trial()` is a dict describing the state of the test (see the sketch below). Among its fields:

- `best` is the current best estimate of the highest-payout arm.
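A minimal sketch of the serving loop this enables, assuming the response dict also carries `new_trial` and `choice` keys (only `best` is documented in this excerpt); `serve_and_observe` and its conversion rates are hypothetical stand-ins for your site's own serving and tracking code:

```Python
import random

def serve_and_observe(arm):
    """Hypothetical stand-in: serve variation `arm` to a visitor and
    observe whether it converted (1) or not (0)."""
    return int(random.random() < [0.10, 0.15, 0.12][arm])

choice = random.randrange(3)  # make the first choice randomly
while True:
    payout = serve_and_observe(choice)
    resp = mab.online_trial(bandit=choice, payout=payout)
    if not resp['new_trial']:  # stopping criterion met (key name assumed)
        break
    choice = resp['choice']    # arm to serve next (key name assumed)

print('Winning variation:', resp['best'])
```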
By default, slots uses the epsilon-greedy strategy. Besides epsilon-greedy, the softmax and upper confidence bound (UCB) strategies are also implemented.
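A minimal sketch of selecting a strategy, assuming `run()` accepts `strategy` and `trials` arguments and that the strategies are registered under names like `'eps_greedy'`, `'softmax'`, and `'ucb'` (these names and parameters are assumptions, not confirmed by this excerpt):

```Python
# Run 1000 trials with the softmax strategy (argument names assumed)
b = slots.MAB(probs=[0.4, 0.9, 0.8])
b.run(trials=1000, strategy='softmax')
```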
#### Regret analysis
A common metric used to evaluate the relative success of a MAB strategy is "regret". This reflects the fraction of payouts (wins) that have been lost by using the sequence of pulls versus always pulling the currently best known arm. The current regret value can be calculated by calling the `mab.regret()` method.
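As a rough sketch of the idea (an illustration of the definition above, not necessarily slots' exact implementation):

```Python
def regret(payouts, best_prob):
    """Fraction of expected payout lost versus always pulling the
    best-known arm: (T * p_best - actual winnings) / (T * p_best).
    Illustrative only; see slots' source for its exact definition."""
    trials = len(payouts)
    optimal = trials * best_prob  # expected winnings from the best arm
    return (optimal - sum(payouts)) / optimal

# e.g. 100 pulls yielding 70 wins when the best arm pays out 90% of the time
print(regret([1] * 70 + [0] * 30, 0.9))  # 0.222...
```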
For example, the regret curves for several different MAB strategies can be generated as follows:
```Python
import matplotlib.pyplot as plt
import seaborn as sns
import slots

# Test multiple strategies for the same bandit probabilities
probs = [0.4, 0.9, 0.8]

ba = slots.MAB(probs=probs)
bb = slots.MAB(probs=probs)
bc = slots.MAB(probs=probs)

# Run trials and calculate the regret after each trial
```
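The remainder of this example is cut off in the excerpt above. A minimal sketch of how the regret curves might be completed, assuming `run()` accepts `strategy` and `trials` arguments and that repeated `run()` calls continue the same experiment (all of which are assumptions about the slots API; strategy names follow the ones assumed earlier):

```Python
# Pair each MAB instance with an assumed strategy name
strategies = {'eps_greedy': ba, 'softmax': bb, 'ucb': bc}
regret_curves = {name: [] for name in strategies}

# Run one trial at a time, recording regret after each
for _ in range(1000):
    for name, mab in strategies.items():
        mab.run(trials=1, strategy=name)
        regret_curves[name].append(mab.regret())

for name, curve in regret_curves.items():
    plt.plot(curve, label=name)
plt.xlabel('Trials')
plt.ylabel('Regret')
plt.legend()
plt.show()
```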