FWIW, I asked /bmad in the session and it suggested dumping out the docs and starting a new session... so I did that and I'm back on a roll. I don't know where it got the response below from; the data.txt#bmad-kb doesn't seem to contain anything about confusion and context windows... Mind blown: "That's a very astute question, and it touches upon a practical aspect of working with context-limited AI agents. As BMad, your Orchestrator, here's my perspective:"
I'm using Gemini Pro 2.5 GEM with the BMAD v3 web orchestrator, and ran into an issue during a long design session.
The LLM was acting as the architect and had just completed Section 1 of the Architect Checklist. I explicitly approved it. However, in its very next response, it unexpectedly jumped to:
“Okay, that completes Section 8 (Cross-Functional Requirements) of the PM Checklist…”
It appeared to regress or lose track of context, possibly a glitch in the chat-turn handling on the Gemini side. I didn't realize what it was doing, so I accidentally triggered a regeneration of the PRD, which likely consumed additional context.
When I later reminded it that I had already involved the UX architect Jane and the design architect Fred, the model slowly started to recover; for example, it reasoned in response: "John's PRD (output #131) recommended engaging Jane and Fred."
When it offered "Are you ready to proceed with the 'Key Reference Documents' section for this Architecture Document?" I said proceed, and it replied: "Okay, the 'Overall Testing Strategy,' including the AI Agent Guidance, is confirmed. The next section in our Architecture Document is 'Security Best Practices.'" So it was still out of sync somehow.
It spent a couple of turns offering sections for review that I had already reviewed and approved, so I let it know. It finally responded: "We can then proceed to the final step: Validate Architecture Against Checklist & Finalize Output."
When I said "proceed to final step" (I thought I was answering Fred, the architect), another persona (not Fred?) reflected:
“I've just been confirmed on the Security Best Practices section. My earlier attempt, output turn #169, was actually Fred's presentation of it.”
Then, later in the same thinking session:
“I've just confirmed the user's desire to move to the final step of the Create Architecture task. To finalize the document, I'll briefly outline Key Reference Documents, Change Log, and the Prompt for Design Architect. Once done, we'll shift focus to validating the architecture against the checklist.”
So while the model was still slightly behind where the derailment happened (end of Section 1), it did manage to get mostly back on track.
This was a long session; the back-and-forth alone felt extensive. I don't know the exact token count at that point, but I doubt it was anywhere near the 1M-token limit, and I certainly wasn't reviewing anything close to 500k–1M tokens of output. Still, the session length seemed to start affecting coherence.
For those who’ve hit similar issues:
When the model starts to get confused like this, do you usually:
- Keep going and try to guide it back on track?
- Or extract the work in progress (e.g., documents) and restart in a fresh session?
Curious what others have found works best when managing long design workflows.
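For anyone leaning toward the second option, here is a minimal sketch of what "extracting the work in progress" could look like as a script: it concatenates the in-progress documents into a single handoff file you can paste into a fresh session. The function name, file names, and handoff layout are all my own assumptions for illustration; nothing here is part of BMAD itself.

```python
from pathlib import Path

def build_handoff(doc_paths, out_path):
    """Concatenate in-progress docs into one handoff file for seeding
    a fresh session. Each doc gets its own heading so the new session
    can tell the documents apart. (Illustrative helper, not BMAD API.)"""
    parts = ["# Session Handoff\n"]
    for p in doc_paths:
        p = Path(p)
        # Heading per document, then its full current contents.
        parts.append(f"\n## {p.name}\n\n{p.read_text()}\n")
    out_path = Path(out_path)
    out_path.write_text("".join(parts))
    return out_path

# Example usage (hypothetical file names):
# build_handoff(["prd.md", "architecture.md"], "handoff.md")
```

In a new session you would then paste handoff.md plus a one-line note about where you left off (e.g., "Section 1 of the Architect Checklist is approved; resume from Section 2"), which is essentially what /bmad's "dump out docs and start a new session" advice amounts to.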