151 changes: 24 additions & 127 deletions README.md
@@ -1,138 +1,35 @@
# Mesa Examples
## Core Mesa examples
The core Mesa examples are available at the main Mesa repository: https://github.com/mesa/mesa/tree/main/mesa/examples
# Luck vs Skill in Short-Term Gambling
It seems you edited the wrong README. This is the general README for all example models. You probably want to add your own README in a separate folder.


Those core examples are fully tested, updated and guaranteed to work with the Mesa release that they are included with. They are also included in the Mesa package, so you can access them directly from your Python environment.
This example demonstrates how short-term gambling success is dominated by luck,
even when agents differ in skill.

## Mesa user examples
This repository contains user examples and showcases that illustrate different features of Mesa. For more information on each model, see its own Readme and documentation.
## Model Description

- Mesa examples that work on the Mesa and Mesa-Geo main development branches are available here on the [`main`](https://github.com/mesa/mesa-examples) branch.
- Mesa examples that work with Mesa 2.x releases and Mesa-Geo 0.8.x releases are available here on the [`mesa-2.x`](https://github.com/mesa/mesa-examples/tree/mesa-2.x) branch.
Each agent has a fixed skill level between 0 and 1. Skill slightly increases the
probability of winning a bet, but outcomes are still stochastic.
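
The win rule can be sketched directly (a minimal illustration of the linear form `p_win = 0.5 + alpha * skill` used in `agent.py`; `win_probability` is a hypothetical helper, and `alpha = 0.05` is the default used in the app):

```python
def win_probability(skill: float, alpha: float = 0.05) -> float:
    """Probability of winning one bet: 0.5 plus a small skill bonus."""
    return 0.5 + alpha * skill

# Even a maximally skilled agent wins only ~55% of bets at the
# default alpha, so any single outcome is still mostly luck.
worst = win_probability(0.0)  # exactly 0.5
best = win_probability(1.0)   # ≈ 0.55
```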

To contribute to this repository, see [CONTRIBUTING.rst](https://github.com/mesa/mesa-examples/blob/main/CONTRIBUTING.rst).
Agents repeatedly place fixed-size bets. After a small number of rounds, agents
are ranked by wealth.
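
A back-of-the-envelope calculation (not part of the model code) shows why short horizons are noisy: with bet size b and win probability p, the expected gain after n rounds is n·b·(2p−1), while its standard deviation is 2b·√(n·p·(1−p)):

```python
import math

def drift_and_noise(n: int, p: float, b: float = 1.0) -> tuple[float, float]:
    """Expected wealth change and its standard deviation after n ±b bets."""
    drift = n * b * (2 * p - 1)
    noise = 2 * b * math.sqrt(n * p * (1 - p))
    return drift, noise

# A highly skilled agent (p = 0.55) after 100 rounds: the expected
# edge (10.0) is barely larger than one standard deviation (≈ 9.95).
drift, noise = drift_and_noise(100, 0.55)
```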

This repo also contains a package that readily lets you import and run some of the examples:
```console
$ # This will install the "mesa_models" package
$ pip install -U -e git+https://github.com/mesa/mesa-examples#egg=mesa-models
```
For Mesa 2.x examples, install:
```console
$ # This will install the "mesa_models" package
$ pip install -U -e git+https://github.com/mesa/mesa-examples@mesa-2.x#egg=mesa-models
```
```python
from mesa_models.boltzmann_wealth_model.model import BoltzmannWealthModel
```

The model compares the average skill of:
- The top 10% of agents by wealth
- The bottom 10% of agents by wealth
You can see the available models at [setup.cfg](https://github.com/mesa/mesa-examples/blob/main/setup.cfg).
## Key Insight

Table of Contents
=================
Over short horizons, the skill distributions of winners and losers differ only slightly.
Early success is therefore a poor indicator of true skill.
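
The same experiment can be sketched with the standard library alone (an illustrative rewrite, independent of the Mesa implementation in this PR; parameter defaults mirror the app's):

```python
import random
from statistics import fmean

def decile_skills(num_agents=200, rounds=20, alpha=0.05, seed=42):
    """Simulate short-term betting; return (top, bottom) decile mean skill."""
    rng = random.Random(seed)
    agents = [{"skill": rng.random(), "wealth": 100} for _ in range(num_agents)]
    for _ in range(rounds):
        for a in agents:
            # Same win rule as the model: skill nudges p slightly above 0.5.
            if rng.random() < 0.5 + alpha * a["skill"]:
                a["wealth"] += 1
            else:
                a["wealth"] -= 1
    agents.sort(key=lambda a: a["wealth"])
    k = max(1, num_agents // 10)
    return (
        fmean(a["skill"] for a in agents[-k:]),
        fmean(a["skill"] for a in agents[:k]),
    )

top, bottom = decile_skills()
# With alpha = 0.05 and only 20 rounds, both deciles sit near the
# population mean skill of ~0.5: early rank says little about skill.
```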

* [Grid Space Examples](#grid-space-examples)
* [Continuous Space Examples](#continuous-space-examples)
* [Network Examples](#network-examples)
* [Visualization Examples](#visualization-examples)
* [GIS Examples](#gis-examples)
* [Other Examples](#other-examples)
This illustrates why gambling success, early trading profits, or beginner's luck
are often misattributed to ability rather than chance.

## Grid Space Examples
This model demonstrates:
1. Beginner’s luck is a real statistical phenomenon
2. Early winners are not reliable indicators of skill
3. Skill emerges only over long horizons
4. Human inference from short samples is systematically biased

### [Bank Reserves Model](https://github.com/mesa/mesa-examples/blob/main/examples/bank_reserves)
## How to Run

A highly abstracted, simplified model of an economy, with only one type of agent and a single bank representing all banks in an economy.

### [Color Patches Model](https://github.com/mesa/mesa-examples/tree/main/examples/color_patches)

A cellular automaton model where agents' opinions are influenced by those of their neighbors. As the model evolves, color patches representing the prevailing opinion in a given area expand, contract, and sometimes disappear.

### [Conway's Game Of "Life" Model (Fast)](https://github.com/mesa/mesa-examples/tree/main/examples/conways_game_of_life_fast)

A very fast, performance-optimized version of Conway's Game of Life using the Mesa [`PropertyLayer`](https://github.com/mesa/mesa/pull/1898). About 100x as fast as the regular version, but with limited visualisation (for [now](https://github.com/mesa/mesa/issues/2138)).

### [Conway's Game Of "Life" Model on a Hexagonal Grid](https://github.com/mesa/mesa-examples/tree/main/examples/hex_snowflake)

Conway's game of life on a hexagonal grid.

### [Hexagonal Ant Foraging Model](https://github.com/mesa/mesa-examples/tree/main/examples/hex_ant)

A simulation of ant foraging behavior on a hexagonal grid using pheromone trails and property layers.

### [Forest Fire Model](https://github.com/mesa/mesa-examples/tree/main/examples/forest_fire)

A simple cellular automaton model of a fire spreading through a forest of cells on a grid, based on the NetLogo [Fire](http://ccl.northwestern.edu/netlogo/models/Fire) model.

### [Hotelling's Law Model](https://github.com/mesa/mesa-examples/tree/main/examples/hotelling_law)

This project is an agent-based model implemented using the Mesa framework in Python. It simulates market dynamics based on Hotelling's Law, exploring the behavior of stores in a competitive market environment. Stores adjust their prices and locations when doing so increases market share, providing insights into the effects of competition and customer behavior on market outcomes.

### [Emperor's Dilemma](https://github.com/mesa/mesa-examples/tree/main/examples/emperor_dilemma)

This project simulates how unpopular norms can dominate a society even when the vast majority of individuals privately reject them. It demonstrates the "illusion of consensus" where agents, driven by a fear of appearing disloyal, not only comply with a rule they hate but also aggressively enforce it on their neighbors. This phenomenon creates a "trap" of False Enforcement, where the loudest defenders of a norm are often its secret opponents.
### [Humanitarian Aid Distribution Model](https://github.com/mesa/mesa-examples/tree/main/examples/humanitarian_aid_distribution)

This model simulates a humanitarian aid distribution scenario using a needs-based behavioral architecture. Beneficiaries have dynamic needs (water, food) and trucks distribute aid using a hybrid triage system.
### [Rumor Mill Model](https://github.com/mesa/mesa-examples/tree/main/examples/rumor_mill)

A simple agent-based simulation showing how rumors spread through a population based on the spread chance and initial knowing percentage, implemented with the Mesa framework and adapted from NetLogo [Rumor mill](https://www.netlogoweb.org/launch#https://www.netlogoweb.org/assets/modelslib/Sample%20Models/Social%20Science/Rumor%20Mill.nlogox).


## Continuous Space Examples
_No user examples available yet._


## Network Examples

### [Boltzmann Wealth Model with Network](https://github.com/mesa/mesa-examples/tree/main/examples/boltzmann_wealth_model_network)

This is the same [Boltzmann Wealth](https://github.com/mesa/mesa-examples/tree/main/examples/boltzmann_wealth_model) Model, but with a network grid implementation.

### [Ant System for Traveling Salesman Problem](https://github.com/mesa/mesa-examples/tree/main/examples/aco_tsp)

This is based on Dorigo's Ant System "Swarm Intelligence" algorithm for generating solutions for the Traveling Salesman Problem.

### [Dining Philosophers Model](https://github.com/mesa/mesa-examples/tree/main/examples/dining_philosophers)

A classic synchronization problem demonstrating resource contention, deadlock, and starvation on a network graph.



## Visualization Examples

### [Charts Example](https://github.com/mesa/mesa-examples/tree/main/examples/charts)

A modified version of the [Bank Reserves](https://github.com/mesa/mesa-examples/tree/main/examples/bank_reserves) example made to provide examples of Mesa's charting tools.

### [Shape Example](https://github.com/mesa/mesa-examples/tree/main/examples/shape_example)

Example of a grid display in which agents are drawn as arrow-head shapes indicating their direction.

## GIS Examples

### Vector Data

- [GeoSchelling Model (Polygons)](https://github.com/mesa/mesa-examples/tree/main/gis/geo_schelling)
- [GeoSchelling Model (Points & Polygons)](https://github.com/mesa/mesa-examples/tree/main/gis/geo_schelling_points)
- [GeoSIR Epidemics Model](https://github.com/mesa/mesa-examples/tree/main/gis/geo_sir)
- [Agents and Networks Model](https://github.com/mesa/mesa-examples/tree/main/gis/agents_and_networks)

### Raster Data

- [Rainfall Model](https://github.com/mesa/mesa-examples/tree/main/gis/rainfall)
- [Urban Growth Model](https://github.com/mesa/mesa-examples/tree/main/gis/urban_growth)

### Raster and Vector Data Overlay

- [Population Model](https://github.com/mesa/mesa-examples/tree/main/gis/population)

## Other Examples

### [El Farol Model](https://github.com/mesa/mesa-examples/tree/main/examples/el_farol)

This folder contains an implementation of the El Farol restaurant model. Agents (restaurant customers) decide whether to go to the restaurant based on their memory of, and reward from, previous visits. Implications from the model have been used to explain how individual decision-making affects overall performance and fluctuation.

### [Schelling Model with Caching and Replay](https://github.com/mesa/mesa-examples/tree/main/examples/caching_and_replay)

This example applies caching to the Mesa [Schelling](https://github.com/mesa/mesa-examples/tree/main/examples/schelling) example. It enables a simulation run to be "cached", i.e. recorded. The recorded simulation run is persisted on the local file system and can be replayed at any later point.
```bash
solara run app.py
```
Empty file added __init__.py
Empty file.
16 changes: 16 additions & 0 deletions agent.py
@@ -0,0 +1,16 @@
from mesa import Agent


class GamblingAgent(Agent):
    """An agent with a fixed skill level that repeatedly places fixed-size bets."""

    def __init__(self, model, skill, wealth, bet_size):
        super().__init__(model)
        self.skill = skill  # fixed skill level in [0, 1]
        self.wealth = wealth
        self.bet_size = bet_size

    def step(self):
        # Skill nudges the win probability slightly above the fair 0.5.
        p_win = 0.5 + self.model.alpha * self.skill
        if self.model.random.random() < p_win:
            self.wealth += self.bet_size
        else:
            self.wealth -= self.bet_size
39 changes: 39 additions & 0 deletions app.py
@@ -0,0 +1,39 @@
from mesa.visualization import Slider, SolaraViz, make_plot_component

from model import LuckVsSkillModel


def post_process_lines(ax):
    ax.set_xlabel("Simulation Step")
    ax.set_ylabel("Average True Skill")
    ax.set_title("Luck vs Skill in Short-Term Gambling")
    ax.legend()


COLORS = {
    "Top 10": "#d62728",
    "Bottom 10": "#1f77b4",
}


lineplot_component = make_plot_component(
    COLORS,
    post_process=post_process_lines,
)

model = LuckVsSkillModel()

model_params = {
    "num_agents": 200,
    "alpha": Slider("Skill impact (α)", 0.05, 0.0, 0.2, 0.01),
    "initial_wealth": 100,
    "bet_size": 1,
}

page = SolaraViz(
    model,
    components=[lineplot_component],
    model_params=model_params,
    name="Luck vs Skill: Short-Term Gambling",
)
1 change: 1 addition & 0 deletions examples/caching_and_replay/cacheablemodel.py
@@ -1,4 +1,5 @@
from mesa_replay import CacheableModel, CacheState

Why did you change all these files?

from model import Schelling


1 change: 1 addition & 0 deletions examples/caching_and_replay/server.py
@@ -1,6 +1,7 @@
"""This file was copied over from the original Schelling mesa example."""

import mesa

from model import Schelling


1 change: 1 addition & 0 deletions examples/conways_game_of_life_fast/app.py
@@ -1,4 +1,5 @@
from mesa.visualization import SolaraViz, make_plot_component, make_space_component

from model import GameOfLifeModel

propertylayer_portrayal = {
1 change: 1 addition & 0 deletions gis/geo_schelling/app.py
@@ -1,6 +1,7 @@
import solara
from mesa.visualization import Slider, SolaraViz, make_plot_component
from mesa_geo.visualization import make_geospace_component

from model import GeoSchelling


57 changes: 57 additions & 0 deletions model.py
@@ -0,0 +1,57 @@
from mesa import Model
from mesa.datacollection import DataCollector

from agent import GamblingAgent


class LuckVsSkillModel(Model):
    """Compares the true skill of the wealthiest and poorest agents."""

    def __init__(
        self,
        num_agents=200,
        alpha=0.05,
        initial_wealth=100,
        bet_size=1,
        seed=None,
    ):
        super().__init__(seed=seed)

        self.alpha = alpha

        # Create agents with uniformly random skill levels; Mesa 3
        # registers each agent with the model automatically.
        for _ in range(num_agents):
            GamblingAgent(
                model=self,
                skill=self.random.random(),
                wealth=initial_wealth,
                bet_size=bet_size,
            )

        self.datacollector = DataCollector(
            model_reporters={
                "Step": lambda m: m.steps,
                "Top 10": self.top_10_skill,
                "Bottom 10": self.bottom_10_skill,
            }
        )

    def step(self):
        # Mesa 3 increments ``self.steps`` automatically; activate all
        # agents in random order, then record the decile statistics.
        self.agents.shuffle_do("step")
        self.datacollector.collect(self)

    def top_10_skill(self):
        agents = sorted(self.agents, key=lambda a: a.wealth)
        k = max(1, int(0.1 * len(agents)))
        return sum(a.skill for a in agents[-k:]) / k

    def bottom_10_skill(self):
        agents = sorted(self.agents, key=lambda a: a.wealth)
        k = max(1, int(0.1 * len(agents)))
        return sum(a.skill for a in agents[:k]) / k
3 changes: 2 additions & 1 deletion rl/boltzmann_money/server.py
@@ -3,9 +3,10 @@
import mesa
from mesa.visualization.ModularVisualization import ModularServer
from mesa.visualization.modules import ChartModule
from model import BoltzmannWealthModelRL
from stable_baselines3 import PPO

from model import BoltzmannWealthModelRL


# Modify the MoneyModel class to take actions from the RL model
class MoneyModelRL(BoltzmannWealthModelRL):
3 changes: 2 additions & 1 deletion rl/boltzmann_money/train.py
@@ -1,9 +1,10 @@
import argparse

from model import NUM_AGENTS, BoltzmannWealthModelRL
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import EvalCallback

from model import NUM_AGENTS, BoltzmannWealthModelRL


def rl_model(args):
# Create the environment
3 changes: 2 additions & 1 deletion rl/epstein_civil_violence/model.py
@@ -1,10 +1,11 @@
import gymnasium as gym
import mesa
import numpy as np
from agent import CitizenRL, CopRL
from mesa.examples.advanced.epstein_civil_violence.model import EpsteinCivilViolence
from ray.rllib.env import MultiAgentEnv

from agent import CitizenRL, CopRL

from .utility import create_initial_agents, grid_to_observation


3 changes: 2 additions & 1 deletion rl/epstein_civil_violence/train_config.py
@@ -1,9 +1,10 @@
import os

from model import EpsteinCivilViolenceRL
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.policy.policy import PolicySpec

from model import EpsteinCivilViolenceRL


# Configuration for the PPO algorithm
# You can change the configuration as per your requirements
3 changes: 2 additions & 1 deletion rl/wolf_sheep/app.py
@@ -7,10 +7,11 @@
make_plot_component,
make_space_component,
)
from model import WolfSheepRL
from ray import tune
from ray.rllib.algorithms.algorithm import Algorithm

from model import WolfSheepRL

model_params = {
"width": 20,
"height": 20,
3 changes: 2 additions & 1 deletion rl/wolf_sheep/train_config.py
@@ -1,9 +1,10 @@
import os

from model import WolfSheepRL
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.policy.policy import PolicySpec

from model import WolfSheepRL


# Configuration to train the model
# Feel free to adjust the configuration as necessary