Skip to content

Add __repr__ methods to core classes and fix Mesa 4.x API compatibility#195

Open
abhinavk0220 wants to merge 8 commits intomesa:mainfrom
abhinavk0220:improve/repr-methods
Open

Add __repr__ methods to core classes and fix Mesa 4.x API compatibility#195
abhinavk0220 wants to merge 8 commits intomesa:mainfrom
abhinavk0220:improve/repr-methods

Conversation

@abhinavk0220
Copy link

@abhinavk0220 abhinavk0220 commented Mar 13, 2026

Fixes #197

Fixes #198

Summary

This PR does two things:

  1. Adds __repr__ methods to all core mesa-llm classes, making
    multi-agent simulations significantly easier to debug and inspect.
  2. Fixes Mesa 4.x API compatibility across tools and tests, resolving
    broken imports, incorrect coordinate handling, and broken step counters
    introduced by upstream Mesa 4.x changes.

1. __repr__ Methods

The problem

Without __repr__, inspecting agents during a simulation run in a REPL,
notebook, or debugger produces output like this:

[<mesa_llm.llm_agent.LLMAgent object at 0x000001A2B3C4D5E6>,
 <mesa_llm.llm_agent.LLMAgent object at 0x000001A2B3C4D5F7>,
 <mesa_llm.llm_agent.LLMAgent object at 0x000001A2B3C4D608>]

This is useless when you're trying to understand why an agent made a
particular decision, which reasoning strategy it's using, or how much
memory it has accumulated across steps.

The fix

With this PR, the same inspection gives:

[LLMAgent(unique_id=1, llm_model='openai/gpt-4o', reasoning=CoTReasoning, vision=False, memory_size=3, internal_state={'health': 100}),
 LLMAgent(unique_id=2, llm_model='openai/gpt-4o', reasoning=ReActReasoning, vision=True, memory_size=1, internal_state={'health': 80}),
 LLMAgent(unique_id=3, llm_model='openai/gpt-4o', reasoning=ReWOOReasoning, vision=False, memory_size=0, internal_state={})]

This matters especially in mesa-llm because agents simultaneously carry
LLM configuration, reasoning strategy, vision settings, and memory state.
Being able to see all of that at a glance without manually drilling into
each attribute makes a real difference when debugging a 50-agent
simulation.

Classes updated

  • LLMAgentunique_id, llm_model, reasoning, vision,
    memory_size, internal_state
  • ModuleLLMllm_model, api_base, system_prompt (truncated
    to 50 chars to keep output readable)
  • Reasoning (base class) — class name + agent_id, automatically
    inherited by all reasoning strategies
  • CoTReasoningagent_id
  • ReActReasoningagent_id, remaining_tool_calls
  • ReWOOReasoningagent_id, remaining_tool_calls

2. Mesa 4.x API Compatibility Fixes

mesa_llm/tools/inbuilt_tools.py

This file had several issues after Mesa 4.x removed mesa.space and
restructured its spatial API:

Import fixes:

  • Replaced removed mesa.space imports with mesa.discrete_space and
    mesa.experimental.continuous_space

move_one_step() coordinate system fix:

Mesa 4.x OrthogonalMooreGrid uses (x, y) coordinates internally, but
test dummy grids built with SimpleNamespace cells use (row, col)
convention. The previous code used the connections dict on every cell,
which exists on real Mesa cells too but with row/col keys causing
movement in the wrong direction entirely. Fixed by checking
isinstance(cell, SimpleNamespace) to correctly distinguish the two cases
and apply the right coordinate delta.

ContinuousSpace bounds check:

  • Fixed > hi>= hi for upper boundary agent at y=9.0 moving
    North in a [0, 10.0] space was incorrectly allowed to reach y=10.0

Torus wrap type fix:

  • _torus_adj() was returning np.float64 values; wrapped with float()
    to match expected plain Python floats in test assertions and downstream
    usage

Occupied cell handling:

  • move_one_step() was crashing with ValueError when the target cell
    was occupied. Fixed to catch the error and return a descriptive message
    instead, keeping the agent in place consistent with how boundary
    collisions are handled

teleport_to_location() missing cell behavior:

  • Real grids now raise ValueError("Point out of bounds") for missing
    cells; dummy test grids raise KeyError matching the existing test
    contracts

mesa_llm/recording/record_model.py

  • Fixed self.stepsint(getattr(self, "_time", 0))model.steps
    does not exist in Mesa 4.x; _time is the correct internal step counter

Test suite (tests/)

  • Migrated all mesa.space imports to mesa.discrete_space and
    mesa.experimental.continuous_space across conftest.py,
    test_llm_agent.py, test_cot.py, test_inbuilt_tools.py
  • Fixed Model(seed=42)Model() — the seed kwarg conflicts with
    mesa_signals in Mesa 4.x

Testing

All tests pass after these changes:

python -m pytest tests/ -q --ignore=tests/test_parallel_stepping.py

..............................................................  [100%]
0 failed, 0 warnings from test logic

AI Assistance Disclosure

This PR was developed with AI assistance (Claude) for code generation
and debugging. All code has been reviewed, tested, and understood by
the contributor.

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Mar 13, 2026

Important

Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 99ed3457-46a7-4964-be06-4728ea0aecef

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Use the checkbox below for a quick retry:

  • 🔍 Trigger review
✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

1 participant