Releases: google-deepmind/concordia
v2.3.1
v2.3.0
[2.3.0] - 2026-02-04
Changed
- Deleted all deprecated files.
- Support overriding the default language model either for agents or game masters in the generic simulation prefab.
- Add tests for event resolution filtering
- Replace print() statements with absl.logging
- Add analyze-logs skill
- Add option on generic simulation.play to produce new-style structured logs. No change to default behavior; the new log style is off by default (for now).
- Improve the new structured logging library and remove functions used only for comparison to the old format.
- Update the default constants that suggest action spec formats to recommend the JSON approach
- Improve observation queue handling and add scene-aware event delivery.
- Add an allow_duplicates option to AssociativeMemoryBank. This fixes a bug where identical actions across rounds were incorrectly deduplicated, causing EventResolution to pick up stale data. Adds an allow_duplicates constructor parameter to AssociativeMemoryBank and enables allow_duplicates for the game_master memory_bank in generic.py.
- Fix the search for the last event generated by the player
- Remove the concordia/prefabs/configurator directory.
- Correct several lingering references to deprecated types.
- Add physically situated and dramaturgic game master prefab. This prefab is similar to the situated_in_time_and_place prefab, but with scenes added in.
- Add support for extra_components in the dialogic_and_dramaturgic game master.
- Change action spec parsing to JSON, which is more robust than string matching
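The JSON-based parsing change above can be sketched as follows. This is a minimal illustration, not Concordia's actual parser: the `action` key and the brace-scanning heuristic are assumptions.

```python
import json

def parse_action(response: str) -> dict:
    """Extract the outermost JSON object from a model response.

    A toy sketch of why JSON parsing beats string matching; the 'action'
    key and this brace-scanning heuristic are hypothetical, not
    Concordia's actual action-spec schema.
    """
    start, end = response.find('{'), response.rfind('}')
    if start == -1 or end <= start:
        raise ValueError('no JSON object found in response')
    return json.loads(response[start:end + 1])

# Quotes and commas inside the value no longer confuse the parser the
# way naive substring matching could:
parsed = parse_action('Sure! {"action": "say \\"hi, there\\""}')
```

Because the JSON grammar delimits strings explicitly, punctuation inside an option or utterance never has to be escaped by ad-hoc rules in the prompt format.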
Added
- Add support for deepseek in the together_ai llm wrapper and fix its gemma support.
- README.md for thought_chains
- README.md for the environment
- typing README
- Add a README.md to the components folder, documenting the use of components
- Add documentation for the interactive_document
- Add PuppetActComponent and Puppet entity prefab.
- Add a rational actor entity
- Create SceneBasedTerminator and update GameMaster prefabs.
- README for language_model
- README for prefabs
- Add structured logging
Fixed
- Add dramaturgic formative memories initializer to prevent bug in which the premise of the first scene was not delivered.
- Fix event resolution to correctly filter putative events by active player and add test for puppet_act
- Fix scene-based termination
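The allow_duplicates fix in this release can be illustrated with a minimal, embedding-free sketch. This toy class is not Concordia's AssociativeMemoryBank; only the deduplication switch is modeled.

```python
class MemoryBank:
    """Toy stand-in for AssociativeMemoryBank; only the dedup switch is modeled."""

    def __init__(self, allow_duplicates: bool = False):
        self._allow_duplicates = allow_duplicates
        self._memories: list[str] = []

    def add(self, memory: str) -> None:
        # Without allow_duplicates, an identical entry from a later round
        # is silently dropped, so downstream consumers see stale data.
        if not self._allow_duplicates and memory in self._memories:
            return
        self._memories.append(memory)

    def retrieve_recent(self, k: int = 5) -> list[str]:
        return self._memories[-k:]

dedup = MemoryBank(allow_duplicates=False)
dedup.add('Alice waves.')
dedup.add('Alice waves.')  # dropped: round 2 looks like it had no event

allowed = MemoryBank(allow_duplicates=True)
allowed.add('Alice waves.')
allowed.add('Alice waves.')  # kept: the repeated action remains visible
```

This is why the game master's memory_bank enables the flag: an event-resolution step that reads the most recent memory must see the current round's action even when its text is identical to a previous round's.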
v2.2.0
[2.2.0] - 2026-01-12
Changed
- Allow commas in the options of the multiple-choice action_spec by using "|" as the option delimiter
- Move prefix_entity_name to config parameters
- Refactor questionnaire components logging
- Update README.md with concordia.contrib.language_models changes
- Replace the unmaintained retry dependency with tenacity
- Remove dependency on typing_extensions
- Move language models to concordia.contrib.language_models
- Require Python >= 3.12
- For the simultaneous engine, skip entity.observe if make observation emits an empty observation
- Skip entity.observe if make observation emits an empty observation
- Improve typing of OutputType
- Improved formatting of multiple-choice questionnaire observations
- Add absl logging in the questionnaire simulation and engine
- More explicitly check questionnaire type. Raise error if type not found.
- Change GPT model verbosity to 'medium' for GPT-4o, since 'low' is apparently no longer allowed for it. Otherwise use the specified verbosity.
- Remove top_p from GPT models (deprecated as of GPT-5)
- Improve the robustness of the base questionnaire's default aggregation
- Fix ollama client temperature, top_p and top_k args
- Add top_p and top_k parameters to clients to fix build errors
- Update default sampling temperature from 0.5 to 1.0
- Expose temperature, add top_p and top_k args
- Raise error when preloading memories fails
- Update switch_act default for invalid float response to match concat_act
- Return 'nan' instead of '0.0' on float conversion error for float_action_spec in ConcatActComponent
- Small changes to prompts in the conversational entity prefab.
- Prevent YOLO termination during the formative memories init GM and optionally prevent the same thing in the dialogic GM (default behavior remains unchanged). Also update the dialog example to set the conversation GM to avoid stopping in this way. Note: the reason for this change is that we noticed some language models are more prone than others to deciding they want to terminate in YOLO mode. This lets the user have finer control in such cases.
- Correct the name of the Depression Anxiety Stress Scale questionnaire
- Move FormativeMemoriesInitializer to its own .py file for clarity
Added
- Add HuggingFace language model wrapper
- Add DayInTheLifeInitializer component that generates "day in the life" observations for two agents to set up a conversation scenario. It generates personal daily events for each agent and a shared event to bring them together.
- Add get_context_concat_order function on ConcatAct and SwitchAct
- Add dimension ranges abstract method to questionnaires
- Add CombinedPerception component
- Add README to the language_models page encouraging users to add language model wrappers for additional models and APIs, and to help maintain them. This README also explains how to implement the two necessary functions on each language model wrapper.
- Add context argument in questionnaires
- Add Questionnaire component
- Generalize the together_ai language model wrapper to allow more models.
Fixed
- Fix bugs in contrib.language_models
- Fix a bug where ActionSpec choice options containing commas were incorrectly split by the parser; the option delimiter was replaced with "|"
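The delimiter fix above can be sketched in a few lines. This is a minimal illustration; the real parsing lives in Concordia's action_spec handling and may differ.

```python
def split_options(raw: str, delimiter: str = '|') -> list[str]:
    """Split multiple-choice options on '|' so option text may contain commas.

    A sketch of the delimiter change described above, not Concordia's
    actual parser.
    """
    return [opt.strip() for opt in raw.split(delimiter)]

# Each option keeps its internal comma intact:
options = split_options('Yes, definitely|No, thanks|Maybe')
```

Using "|" works because it is far less likely than "," to appear inside natural-language option text.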
v2.1.0
[2.1.0] - 2025-08-18
Changed
- Set randomize choices to false in questionnaires
- Added configurable number of sentences per episode in formative memory generators
- Add exponential backoff in the retry wrapper
- Increase DEFAULT_MAX_TOKENS.
- Add fixed acting order to the dialogic GM
- Move configurable component pre-act values above recent observations in the question_of_recent_memories prompt so the former can contextualize the latter, e.g. this is the sensible ordering if you pass instructions.
- Small improvement to a prompt in the GenerativeClock component
- Modernize the situation_representation_via_narrative component.
- Use verbosity and reasoning_effort parameters inside the OpenAI wrapper
- Make formative memories generator throw an error if passed wrong shape parameters
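The exponential-backoff retry change noted above can be sketched generically. This helper is hypothetical, not Concordia's actual wrapper, which also controls which exceptions are retried.

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=1.0, max_delay=30.0,
                       sleep=time.sleep):
    """Retry fn with exponential backoff plus jitter.

    A generic sketch of the pattern; not Concordia's actual retry wrapper.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt, capped at max_delay, with jitter
            # so that many clients do not retry in lockstep.
            delay = min(base_delay * 2 ** attempt, max_delay)
            sleep(delay + random.uniform(0, delay / 10))

attempts = {'n': 0}

def flaky():
    attempts['n'] += 1
    if attempts['n'] < 3:
        raise RuntimeError('transient API error')
    return 'ok'

result = retry_with_backoff(flaky, sleep=lambda _: None)  # no real sleeping
```

Injecting `sleep` makes the wrapper testable without waiting out real delays.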
Added
- Add option to remove duplicates when extracting data from the logs
- Add acting component flag to randomize choices
- Create non-deprecated no_op_context_processor
- Add actor and game master prefabs, plus the required components, for running a simulation that follows a strict script. This can be used for generating fine-tuning data.
- Parallel stateless questionnaire
- Add a callback to get the state of the simulation after every step, which can be used to implement custom checkpointing
- Enable loading presaved memory states from agent config
- Marketplace component that handles logic for buyers and sellers trading goods
- Support loading memories in the questionnaire simulation
- Added a death component
- Add situated_in_time_and_place game master prefab
- Add support for open weights OpenAI models via Together AI.
- Implement a multi-step questionnaire that can handle both open-ended and multiple-choice questions.
- Add option to return raw log from simulation.play
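The per-step state callback added in this release can be illustrated with a toy loop. The callback signature here is an assumption; Concordia's actual hook may differ.

```python
def run_simulation(steps: int, on_step=None) -> dict:
    """Toy loop showing a per-step state callback for custom checkpointing.

    A sketch only; not Concordia's simulation API.
    """
    state = {'step': 0}
    for _ in range(steps):
        state['step'] += 1
        if on_step is not None:
            on_step(dict(state))  # pass a copy so callers can store snapshots
    return state

snapshots = []
run_simulation(3, on_step=snapshots.append)
```

A user-supplied callback like `snapshots.append` can just as easily write each state to disk, which is the custom-checkpointing use case the changelog entry describes.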
Fixed
- Fix dummy language model options
- Create game_master module in contrib to fix typecheck error
- Make OutputTypes explicit strings and add conversion to and from dictionaries. This enables serialisation.
- Fix serialisation to handle action_spec correctly
- Fix action_spec serialization for death GM component
- Fix SendEventToRelevantPlayers serialisation to handle action_spec correctly
- Fix and improve MakeObservation and SendEventToRelevantPlayers by replacing certain LLM calls with simple string editing and fixing logic.
- Prevent premature termination in the default make_observation component
- Minor fix of next acting component, which makes sure the fixed random order starts with the first actor
- OpenAI models no longer support terminators; also remove their hardcoded output limit.
- Always use temperature 1.0 for OpenAI, since GPT-5 fails for all other values.
- Use max_completion_tokens instead of max_tokens in base_gpt_model.
- Make calling get_currently_active_player on NextActingAllEntities throw a legible error.
v2.0.1
v2.0.0
[2.0.0] - 2025-07-04
Changed
- Game masters are now entities.
- Entities and components no longer require clocks.
- Simplified the way components interact with memory.
Added
- The concept of a "prefab", which replaces the now-deprecated "factory" concept.
- The concept of an "engine" to structure the interaction between agent and game master.
- Two specific engines, "sequential" and "simultaneous", for turn-based games and simultaneous-move games respectively.
- All components need to implement the abstract methods get_state() and set_state(state), and potentially overwrite the default copy(self) method. Together with the prefab parameters, this enables serialization and deserialization of entities.
- Simulations can now periodically save checkpoints at the end of each interaction turn, enabled by passing a checkpoint directory path via simulation.play(checkpoint_path).
- Simulations can be loaded from checkpoints via simulation.load_from_checkpoint(checkpoint).
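The get_state()/set_state() contract above is what makes checkpointing possible. A minimal sketch, assuming a JSON-serializable state dict: save_checkpoint and load_checkpoint are hypothetical helper names, and the real entry points are simulation.play(checkpoint_path) and simulation.load_from_checkpoint.

```python
import json
import pathlib
import tempfile

class CounterComponent:
    """Toy component following the get_state()/set_state() contract."""

    def __init__(self):
        self._state = {'turns': 0}

    def step(self) -> None:
        self._state['turns'] += 1

    def get_state(self) -> dict:
        return dict(self._state)

    def set_state(self, state: dict) -> None:
        self._state = dict(state)

def save_checkpoint(component, path: pathlib.Path) -> None:
    # Hypothetical helper: serialize the component's state to disk.
    path.write_text(json.dumps(component.get_state()))

def load_checkpoint(component, path: pathlib.Path) -> None:
    # Hypothetical helper: restore a component from a saved state.
    component.set_state(json.loads(path.read_text()))

with tempfile.TemporaryDirectory() as tmp:
    ckpt = pathlib.Path(tmp) / 'turn_3.json'
    a = CounterComponent()
    for _ in range(3):
        a.step()
    save_checkpoint(a, ckpt)
    b = CounterComponent()   # a fresh instance...
    load_checkpoint(b, ckpt)  # ...resumes from the saved turn
    restored = b.get_state()
```

Because every component round-trips through plain dicts, an entire entity can be rebuilt from its prefab parameters plus the saved states.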
v1.8.10
v1.8.9
[1.8.9] - 2024-11-25
Changed
- Update launch and eval scripts for the eval phase of the contest
- Further improve alternative baseline agents
- Improve a few baseline agents
Added
- Add another time and place module for reality show
- Add support for scene-wise computation of metrics
- Add alternative versions of basic and rational agent factories
Fixed
- Catch another type of together API exception
- Fix time serialization in associative_memory