
Commit dfdf56c

Merge pull request #29 from MachineWisdomAI/docs/sweep-remaining-factual-errors
docs: sweep remaining factual errors across all docs and protocol modules
2 parents 076ea1d + 682b2f4 commit dfdf56c

9 files changed: 21 additions & 20 deletions


AGENTS_SETUP_INSTRUCTIONS.md

Lines changed: 2 additions & 2 deletions
@@ -22,7 +22,7 @@ git clone https://github.com/MachineWisdomAI/fava-trails.git && cd fava-trails &
 
 ### LLM Configuration (for Trust Gate)
 
-The Trust Gate reviews thoughts before promotion using an LLM. By default, FAVA Trails uses [OpenRouter](https://openrouter.ai/) for unified access to 100+ models.
+The Trust Gate reviews thoughts before promotion using an LLM. By default, FAVA Trails uses [OpenRouter](https://openrouter.ai/) for unified access to 300–500+ models.
 
 **OpenRouter (default, recommended):**
 
@@ -338,7 +338,7 @@ FAVA Trails ships with protocol hook modules that can be enabled via `module:` e
 | Protocol | Install | Description |
 |----------|---------|-------------|
 | **SECOM** | `pip install fava-trails[secom]` | Extractive compression at promote time via LLMLingua-2 ([docs](../src/fava_trails/protocols/secom/README.md)) |
-| **ACE** | included | Playbook-driven reranking and anti-pattern detection (Stanford/SambaNova ACE) |
+| **ACE** | included | Playbook-driven reranking and anti-pattern detection (Stanford, UC Berkeley, and SambaNova ACE) |
 | **RLM** | included | MapReduce orchestration hooks for batch workflows (MIT RLM) |
 
 **Quickest way to add a protocol** — use the CLI setup command:

README.md

Lines changed: 3 additions & 3 deletions
@@ -223,7 +223,7 @@ FAVA Trails supports optional **lifecycle protocols** — hook modules that run
 
 ### SECOM — Compression at Promote Time
 
-Extractive token-level compression via [LLMLingua-2](https://github.com/microsoft/LLMLingua), based on [Microsoft's ICLR 2025 SECOM paper](https://arxiv.org/abs/2502.05589). Thoughts are compressed once at promote time (WORM pattern), reducing storage and boosting recall density. Zero hallucination — only original tokens survive.
+Extractive token-level compression via [LLMLingua-2](https://github.com/microsoft/LLMLingua), based on the [SECOM paper](https://arxiv.org/abs/2502.05589) (Tsinghua University and Microsoft, ICLR 2025). Thoughts are compressed once at promote time (WORM pattern), reducing storage and boosting recall density. Purely extractive — only original tokens survive, no paraphrasing or rewriting.
 
 ```bash
 pip install fava-trails[secom]
@@ -267,9 +267,9 @@ fava-trails secom setup --write
 fava-trails secom warmup
 ```
 
-### ACE — Agentic Context Engine
+### ACE — Agentic Context Engineering
 
-Playbook-driven reranking and anti-pattern detection, based on [Stanford/SambaNova ACE (arXiv:2510.04618)](https://arxiv.org/abs/2510.04618). Applies multiplicative scoring using rules stored in the `preferences/` namespace.
+Playbook-driven reranking and anti-pattern detection, based on [ACE (arXiv:2510.04618)](https://arxiv.org/abs/2510.04618) (Stanford, UC Berkeley, and SambaNova, ICLR 2026). Applies multiplicative scoring using rules stored in the `preferences/` namespace.
 
 ```bash
 pip install fava-trails # included in base install
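The "multiplicative scoring" the ACE entry describes can be sketched as follows. This is a toy illustration only: the function names, rule shapes, and factor values are invented for this example and are not the FAVA Trails API; real rules would be loaded from the `preferences/` namespace.

```python
# Toy sketch of playbook-driven multiplicative reranking (ACE style).
# Each rule that matches a recall candidate multiplies its base score;
# anti-patterns use factors < 1, preferred patterns factors > 1.
def rerank(candidates, rules):
    # candidates: list of (text, base_score); rules: list of (predicate, factor)
    scored = []
    for text, score in candidates:
        for predicate, factor in rules:
            if predicate(text):
                score *= factor
        scored.append((text, score))
    # Highest final score first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical playbook rules, hard-coded here for illustration.
rules = [
    (lambda t: "TODO" in t, 0.5),       # anti-pattern: downweight
    (lambda t: "verified" in t, 1.5),   # preferred: boost
]
```

Because the factors multiply, several matching rules compound rather than average, so one strong anti-pattern can sink an otherwise good candidate.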

docs/fava_trails_faq.md

Lines changed: 2 additions & 2 deletions
@@ -37,11 +37,11 @@ This is not post-hoc rollback — it is **containment at the source**. The hallu
 
 ### How does FAVA Trails relate to Context Engineering protocols like SECOM or ACE?
 
-Academic protocols like Microsoft's SECOM (Segmentation and Compression, ICLR 2025) or Stanford's ACE (Agentic Context Engine) define *what* to do with context — compress memories, curate retrieval, manage information density — but leave the production substrate unspecified. Where do compressed memories live? How do you version them? What happens when compression fails mid-operation?
+Academic protocols like SECOM (Segmentation and Compression; Tsinghua University and Microsoft, ICLR 2025) or ACE (Agentic Context Engineering; Stanford, UC Berkeley, and SambaNova, ICLR 2026) define *what* to do with context — compress memories, curate retrieval, manage information density — but leave the production substrate unspecified. Where do compressed memories live? How do you version them? What happens when compression fails mid-operation?
 
 FAVA Trails provides the versioned substrate and the **Event-Action Pipeline** (lifecycle hooks) to run these protocols safely. Hooks fire at key lifecycle points (`before_propose`, `before_save`, `on_recall`, etc.) and return typed actions (`Mutate`, `Advise`, `RecallSelect`) that the pipeline executes atomically.
 
-For example, the built-in [SECOM protocol](../src/fava_trails/protocols/secom/README.md) uses a Write-Once, Read-Many (WORM) optimization: the `before_propose` hook compresses content inline via LLMLingua-2 (extractive token-level compression, zero hallucination) before the thought is committed to its permanent namespace. This avoids read-path latency entirely, amortizes compression cost from O(recalls × thoughts) to O(promotes), and preserves the original verbose draft in the Jujutsu commit history. If compression fails, `fail_mode: open` lets the thought through unchanged — the operation never blocks.
+For example, the built-in [SECOM protocol](../src/fava_trails/protocols/secom/README.md) uses a Write-Once, Read-Many (WORM) optimization: the `before_propose` hook compresses content inline via LLMLingua-2 (extractive token-level compression — only original tokens survive) before the thought is committed to its permanent namespace. This avoids read-path latency entirely, amortizes compression cost from O(recalls × thoughts) to O(promotes), and preserves the original verbose draft in the Jujutsu commit history. If compression fails, `fail_mode: open` lets the thought through unchanged — the operation never blocks.
 
 Install with `pip install fava-trails[secom]` and add a `hooks:` entry to your data repo's `config.yaml`. See the [Protocols section](../README.md#protocols) for quick start.
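The fail-open hook pattern described in the FAQ above can be sketched in miniature. Every name here (`Mutate`, `ThoughtPatch`, `before_propose`, the stand-in `compress`) is illustrative and hypothetical, not the real FAVA Trails or LLMLingua-2 API; the point is the control flow: mutate on success, pass through unchanged on failure when `fail_mode` is open.

```python
# Toy sketch of a fail-open before_propose lifecycle hook.
# Real SECOM would call LLMLingua-2; here compress() is a crude stand-in
# that keeps the first `rate` fraction of whitespace tokens.
from dataclasses import dataclass

@dataclass
class ThoughtPatch:
    content: str

@dataclass
class Mutate:
    patch: ThoughtPatch

def compress(text: str, rate: float = 0.6) -> str:
    # Stand-in for extractive compression: keep only original tokens.
    tokens = text.split()
    return " ".join(tokens[: max(1, int(len(tokens) * rate))])

def before_propose(content: str, fail_mode: str = "open"):
    try:
        return Mutate(ThoughtPatch(compress(content)))
    except Exception:
        if fail_mode == "open":
            return None  # let the thought through unchanged; never block
        raise
```

Returning `None` on failure models the "operation never blocks" guarantee: the pipeline simply commits the uncompressed draft.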

src/fava_trails/protocols/__init__.py

Lines changed: 4 additions & 3 deletions
@@ -3,15 +3,16 @@
 Each protocol is a standalone hook module implementing a specific
 context engineering pattern from the research literature:
 
-- ace: Agentic Context Engine (Curator Pattern) -- playbook-driven
+- ace: Agentic Context Engineering (Curator Pattern) -- playbook-driven
   recall reranking and quality enforcement
 - secom: SECOM Compression (WORM Pattern) -- extractive compression
   at promote time for information density
 - rlm: RLM MapReduce (Orchestration Pattern) -- parallel mapper
   validation, progress tracking, and deterministic reducer retrieval
 
-Protocols are independent. Users pick ONE via their config.yaml hooks
-section. They are not designed to run simultaneously.
+Protocols are composable. Each ships with a staggered default order
+(ace=10, rlm=15, secom=20) so they execute in a defined sequence when
+combined. Enable any combination via config.yaml hooks section.
 
 Usage::
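The composable-hook wording above implies a `config.yaml` roughly like the following. This is a hedged sketch only: the exact key names and file layout are assumptions, not taken from the FAVA Trails configuration schema; the order values are the staggered defaults the docstring lists.

```yaml
# Hypothetical config.yaml fragment -- keys are illustrative.
# All three protocols enabled; default orders give a defined sequence:
# ace (10) reranks first, rlm (15) orchestrates, secom (20) compresses last.
hooks:
  - module: fava_trails.protocols.ace     # order 10 (default)
  - module: fava_trails.protocols.rlm     # order 15 (default)
  - module: fava_trails.protocols.secom   # order 20 (default)
```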

src/fava_trails/protocols/ace/README.md

Lines changed: 3 additions & 3 deletions
@@ -1,6 +1,6 @@
 # ACE Playbook Hooks (Curator Pattern)
 
-FAVA Trails' reference implementation of the [ACE (Agentic Context Engine)](https://arxiv.org/abs/2510.04618) Curator pattern (Stanford/SambaNova, ICLR 2026). ACE treats agent context as an evolving **playbook** of structured rules that grow and refine through feedback loops.
+FAVA Trails' reference implementation of the [ACE (Agentic Context Engineering)](https://arxiv.org/abs/2510.04618) Curator pattern (Stanford, UC Berkeley, and SambaNova, ICLR 2026). ACE treats agent context as an evolving **playbook** of structured rules that grow and refine through feedback loops.
 
 ## Architecture: FAVA Trails as the Curator
 
@@ -188,6 +188,6 @@ propose_truth(trail_name="my-trail", thought_id="<ULID>")
 
 ## Literature
 
-- Stanford/SambaNova [arXiv:2510.04618](https://arxiv.org/abs/2510.04618) (ICLR 2026)
+- Stanford, UC Berkeley, and SambaNova [arXiv:2510.04618](https://arxiv.org/abs/2510.04618) (ICLR 2026)
 - ACL 2025 Reflective Memory Management
-- Reference implementations: [ace-agent/ace](https://github.com/ace-agent/ace) (Apache-2.0), [kayba-ai/agentic-context-engine](https://github.com/kayba-ai/agentic-context-engine) (MIT)
+- Official implementation: [ace-agent/ace](https://github.com/ace-agent/ace) (Apache-2.0)

src/fava_trails/protocols/ace/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 """ACE Playbook Hooks (Curator Pattern).
 
-Implements FAVA Trails' adaptation of the ACE (Agentic Context Engine) Curator
+Implements FAVA Trails' adaptation of the ACE (Agentic Context Engineering) Curator
 pattern, based on:
-    Stanford/SambaNova arXiv:2510.04618 (ICLR 2026)
+    Stanford, UC Berkeley, and SambaNova arXiv:2510.04618 (ICLR 2026)
     ACL 2025 Reflective Memory Management
 
 Seven lifecycle hooks provide:

src/fava_trails/protocols/secom/README.md

Lines changed: 2 additions & 2 deletions
@@ -2,7 +2,7 @@
 
 Extractive token-level compression at promote time for information density, based on:
 
-> Microsoft ICLR 2025 "On Memory Construction and Retrieval for Personalized Conversational Agents" (arXiv:2502.05589)
+> Tsinghua University and Microsoft, ICLR 2025 "On Memory Construction and Retrieval for Personalized Conversational Agents" (arXiv:2502.05589)
 > Reference implementation: [microsoft/SeCom](https://github.com/microsoft/SeCom)
 
 ## WORM Architecture (Write-Once-Read-Many)
@@ -108,7 +108,7 @@ Unknown engine types fail loudly at configure time. See the [LLMLingua docs](htt
 
 Uses **extractive token-level compression**. For each token, the model predicts keep/discard. Key properties:
 
-- **Zero hallucination**: Only original tokens survive. No paraphrasing, no rewriting.
+- **Purely extractive**: Only original tokens survive. No paraphrasing, no rewriting, no new tokens generated.
 - **Preserves named entities and identifiers**: Token-level decisions maintain factual anchors.
 - **Optimal rate**: tau = 0.5-0.7 (retain 50-70% of tokens). Below 0.5, critical information loss.
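The per-token keep/discard decision described above can be illustrated with a toy extractor. The scores here are made up (LLMLingua-2 learns them with a token classifier), and the function is not the library API; it only shows the mechanics of a retention budget tau and why only original tokens can appear in the output.

```python
# Toy illustration of extractive token-level compression.
# Each token has a keep-probability; we keep the top tau fraction,
# preserving original order, so no new tokens are ever generated.
def extract(tokens, scores, tau=0.6):
    n_keep = max(1, round(tau * len(tokens)))
    # Indices of the highest-scoring tokens.
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:n_keep]
    keep = set(top)
    return [tok for i, tok in enumerate(tokens) if i in keep]
```

With tau = 0.5 on a six-token reminder, the high-scoring factual anchors (a name, a time, a place) survive while filler words are discarded, which is the "preserves named entities and identifiers" property in miniature.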

src/fava_trails/protocols/secom/__init__.py

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 """SECOM Compression Hooks (WORM Pattern).
 
 Implements the SECOM (SEgmentation + COMpression) pattern from:
-    Microsoft ICLR 2025 "On Memory Construction and Retrieval for
-    Personalized Conversational Agents" (arXiv:2502.05589)
+    Tsinghua University and Microsoft, ICLR 2025 "On Memory Construction and
+    Retrieval for Personalized Conversational Agents" (arXiv:2502.05589)
 
 Three lifecycle hooks:
 - before_propose: Inline extractive compression via Mutate(ThoughtPatch)

uv.lock

Lines changed: 1 addition & 1 deletion
Generated file; diff not rendered.
