Commit 7dfcbf2 (1 parent: 23d9758)

Upgrade to gemini-3-flash-preview (#1)

* CLAUDE.md intro
* Fix broken documentation rendering
* Upgrade to gemini-3-flash-preview

File tree: 6 files changed (+792 / −90 lines)

CLAUDE.md (4 additions, 0 deletions)

```diff
@@ -1,3 +1,7 @@
+# CLAUDE.md
+
+This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
 ## Overview
 
 The `group_sense` package is a library for detecting patterns in group chat message streams and transforming them into self-contained queries for downstream AI systems. This enables existing single-user AI agents to participate in group conversations based on configurable criteria, without requiring training on multi-party conversations.
```
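The overview describes turning a multi-party message stream into a single self-contained query. A toy, stdlib-only illustration of that idea (all names below are hypothetical stand-ins, not the actual group_sense API, which is not shown in this diff):

```python
from dataclasses import dataclass


@dataclass
class Message:
    # Hypothetical minimal message shape for illustration only.
    sender: str
    text: str


def to_self_contained_query(messages: list[Message]) -> str:
    """Fold a group-chat fragment into one query a single-user agent can answer.

    Toy sketch of the transformation described in the overview; the real
    group_sense interfaces are not part of this commit's diff.
    """
    transcript = "\n".join(f"{m.sender}: {m.text}" for m in messages)
    return (
        "Given this group conversation:\n"
        f"{transcript}\n"
        "Answer the latest request."
    )


query = to_self_contained_query(
    [Message("ana", "what time is the demo?"), Message("ben", "@bot please check")]
)
print(query)
```

The point of the transformation is that the downstream agent never sees raw multi-party turns, only one prompt that already carries the conversational context.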

examples/chat/assistant.py (1 addition, 1 deletion)

```diff
@@ -26,7 +26,7 @@ def __init__(self):
         self._history: list[ModelMessage] = []
         self._agent = Agent(
             system_prompt=SYSTEM_PROMPT,
-            model="gemini-2.5-flash",
+            model="google-gla:gemini-2.5-flash",
             model_settings=GoogleModelSettings(
                 google_thinking_config={
                     "thinking_budget": 0,
```

group_sense/reasoner/default.py (8 additions, 4 deletions)

```diff
@@ -47,7 +47,7 @@ def __init__(
         Args:
             system_prompt: System prompt that defines the reasoner's behavior and
                 decision-making criteria. Should not contain an {owner} placeholder.
-            model: Optional AI model to use. Defaults to "gemini-2.5-flash".
+            model: Optional AI model to use. Defaults to "google-gla:gemini-3-flash-preview".
                 Can be a model name string or a pydantic-ai Model instance.
             model_settings: Optional model-specific settings. Defaults to
                 GoogleModelSettings with thinking enabled.
@@ -58,11 +58,11 @@ def __init__(
         self._agent = Agent(
             system_prompt=system_prompt,
             output_type=NativeOutput(Response),
-            model=model or "gemini-2.5-flash",
+            model=model or "google-gla:gemini-3-flash-preview",
             model_settings=model_settings
             or GoogleModelSettings(
                 google_thinking_config={
-                    "thinking_budget": -1,
+                    "thinking_level": "high",
                     "include_thoughts": True,
                 }
             ),
@@ -99,7 +99,11 @@ async def process(self, updates: list[Message]) -> Response:
         result = await self._agent.run(reasoner_prompt, message_history=self._history)
         self._history = result.all_messages()
         self._processed += len(updates)
-        return result.output
+
+        response = result.output
+        if response.receiver == "":
+            response.receiver = None
+        return response
 
     def get_serialized(self) -> dict[str, Any]:
         """Serialize the reasoner's state for persistence.
```

mkdocs.yml (2 additions, 5 deletions)

```diff
@@ -5,7 +5,7 @@ site_url: https://gradion-ai.github.io/group-sense/
 repo_name: gradion-ai/group-sense
 repo_url: https://github.com/gradion-ai/group-sense
 
-copyright: Copyright © 2025 Gradion AI
+copyright: Copyright © 2025-2026 Gradion AI
 
 theme:
   name: material
@@ -87,10 +87,7 @@ plugins:
   - api/reasoner.md: Reasoner interfaces and implementations
 
 markdown_extensions:
-  - pymdownx.highlight:
-      anchor_linenums: true
-      line_spans: __span
-      pygments_lang_class: true
+  - pymdownx.highlight
   - pymdownx.inlinehilite
   - pymdownx.snippets:
       dedent_subsections: true
```

pyproject.toml (2 additions, 1 deletion)

```diff
@@ -9,7 +9,8 @@ requires-python = ">=3.11,<3.15"
 readme = "README.md"
 license = "Apache-2.0"
 dependencies = [
-    "pydantic-ai>=1.3.0",
+    "pydantic-ai>=1.36.0",
+    "google-genai>=1.56.0"
 ]
 
 [tool.uv]
```

0 commit comments