This project demonstrates an AI-powered, asynchronous workflow that transforms spoken ideas into precise, reviewable document edits.
```mermaid
graph TD
    subgraph " "
        IntegA["✨ Integrator Agent"]
        OV["📚 Obsidian Vault"]
        IntegA -- "Updates" --> OV
    end
    subgraph " "
        U["🧑‍💻 User"]
        VN["📝 Voice Note"]
        IA["🤖 Instruct Agent"]
        IIF["📃 Instructions File"]
        U --> VN
        VN --> IA
        IA --> IIF
    end
    IIF --> IntegA
    IntegA --> IIF
    style U fill:#D6EAF8,stroke:#333,stroke-width:2px
    style VN fill:#FCF3CF,stroke:#333,stroke-width:2px
    style IIF fill:#FCF3CF,stroke:#333,stroke-width:2px
    style OV fill:#E8DAEF,stroke:#333,stroke-width:2px
    style IA fill:#ABEBC6,stroke:#333,stroke-width:2px
    style IntegA fill:#ABEBC6,stroke:#333,stroke-width:2px
```
- Clone the repository:

  ```shell
  git clone https://github.com/azhutov/vibe-editing.git
  cd vibe-editing
  ```

- Install Python dependencies:

  ```shell
  pip install llm rich
  ```

- Install an `llm` model plugin: the `instruct_agent.py` script uses `gemini-2.5-flash-preview-04-17`. Install `llm-gemini` and configure your API key (https://aistudio.google.com/apikey):

  ```shell
  llm install llm-gemini
  llm keys set gemini
  ```

- Create data directories:

  ```shell
  mkdir -p data/voice-notes-demo data/transcripts data/integration_instructions
  ```
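After setup, you can sanity-check the layout with a short script (a minimal sketch; the directory names come from the `mkdir` command above, and `check_layout` is an illustrative helper, not part of the repository):

```python
from pathlib import Path

# Directories the workflow expects (from the mkdir step above).
EXPECTED_DIRS = [
    "data/voice-notes-demo",
    "data/transcripts",
    "data/integration_instructions",
]

def check_layout(root: str = ".") -> list[str]:
    """Return the expected data directories that are missing under root."""
    base = Path(root)
    return [d for d in EXPECTED_DIRS if not (base / d).is_dir()]

if __name__ == "__main__":
    missing = check_layout()
    if missing:
        print("Missing directories:", ", ".join(missing))
    else:
        print("Data directories look good.")
```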
The workflow involves two AI agents: the Instruct Agent, which processes incoming voice notes into an instructions file, and the Integrator Agent, which waits for instructions and applies them to the Obsidian vault.
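The hand-off between the two agents can be sketched as a simple polling pass (purely illustrative — the real `instruct_agent.py` is not reproduced here, and `transcribe_and_instruct` is a hypothetical stand-in for the Gemini call):

```python
from pathlib import Path

def transcribe_and_instruct(note: Path) -> str:
    # Hypothetical stand-in: the real agent would send the recording to
    # the model and receive edit instructions back.
    return f"Edit instructions derived from {note.name}\n"

def poll_once(voice_dir: Path, instructions_dir: Path, seen: set[Path]) -> list[Path]:
    """One polling pass of the Instruct Agent: turn each unseen voice note
    into an instructions file for the Integrator Agent to pick up."""
    instructions_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for note in sorted(voice_dir.glob("*")):
        if note in seen or not note.is_file():
            continue
        out = instructions_dir / f"{note.stem}.md"
        out.write_text(transcribe_and_instruct(note))
        seen.add(note)
        written.append(out)
    return written
```

Because already-seen notes are skipped, repeated passes are idempotent, which is what lets the two agents run asynchronously against the same directories.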
- Open Cursor in the `vibe-editing` project root.
- Run the Integrator Agent in Cursor chat with the `claude-3.7-sonnet` model:

  ```
  Run @integrator_workflow
  ```

This will start both agents and guide you through the process.
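The instructions file format is not shown in this excerpt; purely as a hypothetical illustration (every name and path below is invented), an entry the Instruct Agent writes for the Integrator Agent might look like:

```markdown
## Note: idea-recording.m4a
- Target: Projects/Roadmap.md
- Action: Append a bullet about response caching under "Performance".
```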