# The Edge Agent


Small LLMs hallucinate. TEA fixes that with Prolog.

One binary. No cloud. Neurosymbolic AI that actually reasons.

TEA combines LLMs with symbolic reasoning (Prolog) to create AI agents that can prove their conclusions, not just generate plausible-sounding text. It is a natural fit for small local models (Llama, Mistral, Phi, or anything served via Ollama), where symbolic reasoning compensates for limited model capacity.

## 30-Second Example

```yaml
# LLM extracts facts, Prolog derives relationships via temporal reasoning
name: hero-family-reasoning

nodes:
  - name: extract
    uses: llm.call              # LLM extracts: mother(alice, bob). affair(alice, dave, 1980, 1990).
    with:
      model: "gemma3n:e4b"
      messages:
        - role: user
          content: "Extract family relationships as Prolog facts from: {{ state.text }}"
    output: llm_response

  - name: reason
    language: prolog            # Prolog derives: child_of_affair, half_sibling
    run: |
      child_of_affair(Child, Partner) :-
          mother(Mother, Child), birth_year(Child, Year),
          affair(Mother, Partner, Start, End), Year >= Start, Year =< End.

      half_sibling(X, Y) :-
          mother(M, X), mother(M, Y), X \= Y,
          \+ (father(F, X), father(F, Y)).

      state(facts, Facts), tea_load_code(Facts),
      findall(H, half_sibling(bob, H), Results),
      return(half_siblings, Results).
```

Run it:

```bash
tea run examples/prolog/neurosymbolic/hero-family-reasoning.yaml \
  --input '{"text": "Alice had two children: Bob and Carol. Alice had an affair with Dave from 1980 to 1990. Bob was born in 1985. Carol was born in 1975.", "person": "bob"}'
# Output: {"answer": "bob's half-siblings: Carol"}
```

What happens: the LLM extracts facts → Prolog proves Bob is Carol's half-sibling (Bob was born during the affair, so the two have different fathers).

Full runnable example: `examples/prolog/neurosymbolic/hero-family-reasoning.yaml`

## Why TEA?

| Challenge | TEA Solution |
| --- | --- |
| Small LLMs make reasoning errors | Prolog handles logic, math, and constraints while the LLM handles language |
| LLMs hallucinate facts | Knowledge graphs with verifiable inference chains |
| Complex agent frameworks | Simple YAML syntax, learn in minutes |
| Need for external services | Single binary, zero dependencies, runs offline |
| Cloud vendor lock-in | Portable agents run on any platform |
| Building everything from scratch | 20+ built-in actions for LLM, RAG, memory, and storage |
| No visibility into agent behavior | Built-in observability with distributed tracing |

## Quick Install

**Python (Recommended)** - Full features, 20+ built-in actions:

```bash
# Option 1: pip install (requires Python 3.10+)
pip install the-edge-agent
tea --version

# Option 2: AppImage (self-contained, includes Prolog)
VERSION=$(curl -s https://api.github.com/repos/fabceolin/the_edge_agent/releases/latest | grep -Po '"tag_name": "v\K[^"]+')
curl -L "https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/tea-python-${VERSION}-x86_64.AppImage" -o tea && chmod +x tea
```

**Rust** - Minimal footprint, embedded/offline:

```bash
VERSION=$(curl -s https://api.github.com/repos/fabceolin/the_edge_agent/releases/latest | grep -Po '"tag_name": "v\K[^"]+')
curl -L "https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/tea-${VERSION}-x86_64.AppImage" -o tea && chmod +x tea
```

See Installation Guide for all platforms, Docker images, and AppImage variants.

## Offline LLM Support

Run LLM workflows without internet using bundled GGUF models:

```bash
# Download Python LLM-bundled AppImage (~2GB with Gemma 3 1B)
VERSION=$(curl -s https://api.github.com/repos/fabceolin/the_edge_agent/releases/latest | grep -Po '"tag_name": "v\K[^"]+')
curl -L "https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/tea-python-llm-gemma3-1b-${VERSION}-x86_64.AppImage" -o tea-llm.AppImage
chmod +x tea-llm.AppImage

# Run offline chat
./tea-llm.AppImage run examples/llm/local-chat.yaml \
  --input '{"question": "What is the meaning of life?"}'
```

See LLM-Bundled Distributions for all model variants and configurations.

## vs Alternatives

| Feature | TEA | LangGraph | AutoGen |
| --- | --- | --- | --- |
| Symbolic reasoning (Prolog) | Yes | No | No |
| Single binary | Yes | No (Python) | No (Python) |
| Offline operation | Yes | Limited | No |
| YAML-first | Yes | Code-first | Code-first |
| Neurosymbolic | Yes | No | No |
| Local LLM optimized | Yes | Partial | No |
| Edge/embedded ready | Yes | No | No |

## TEA Does More

TEA includes 20+ built-in actions. Full documentation lives in `docs/capabilities/`:

| Capability | Description | Key Actions |
| --- | --- | --- |
| Neurosymbolic | LLM + Prolog hybrid reasoning | `language: prolog` |
| LLM Integration | OpenAI, Azure, Ollama, 100+ providers | `llm.call`, `llm.structured` |
| RAG | Vector search and document retrieval | `rag.search`, `rag.embed` |
| Memory | Short-term, long-term, cloud-synced | `memory.store`, `memory.recall` |
| Web | Scraping with Firecrawl, ScrapeGraphAI | `web.scrape`, `web.crawl` |
| Observability | Distributed tracing and debugging | `trace.span`, `trace.log` |
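
These actions compose as workflow nodes. As a rough sketch (the `with:` parameter names for `rag.search` are assumptions here, not the documented schema; see `docs/capabilities/` and the YAML Reference for the exact one), a retrieval-augmented agent might chain `rag.search` into `llm.call`:

```yaml
# Sketch: retrieve documents, then answer with the retrieved context.
# Node structure follows the 30-second example; the rag.search
# parameter names are illustrative assumptions.
name: rag-answer

nodes:
  - name: retrieve
    uses: rag.search
    with:
      query: "{{ state.question }}"   # assumed parameter name
    output: docs

  - name: answer
    uses: llm.call
    with:
      model: "gemma3n:e4b"
      messages:
        - role: user
          content: "Answer using this context: {{ state.docs }}\n\nQuestion: {{ state.question }}"
    output: answer
```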

## Documentation

| Topic | Link |
| --- | --- |
| YAML Reference | `docs/shared/YAML_REFERENCE.md` |
| CLI Reference | `docs/shared/cli-reference.md` |
| Python Guide | `docs/python/getting-started.md` |
| Rust Guide | `docs/rust/getting-started.md` |
| Human-in-the-Loop | `docs/guides/human-in-the-loop.md` |

## Implementations

| Implementation | Status | Best For |
| --- | --- | --- |
| Python | Production-ready (Recommended) | Full features, 20+ built-in actions, neurosymbolic AI |
| Rust | Active development | Embedded, offline, resource-constrained environments |
| WASM | Prototype | Browser-based, client-side LLM, zero-server |

Python is the reference implementation with the complete feature set. Rust provides a lighter-weight alternative for embedded scenarios with a subset of actions. All implementations share the same YAML syntax.
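
Because the syntax is shared, a workflow file is portable across runtimes: the same YAML runs under either the Python or the Rust `tea` binary, as long as the runtime implements the actions it uses. A minimal sketch (assuming `llm.call` is among the actions in the Rust subset):

```yaml
# Portable workflow sketch: runs unchanged on any TEA runtime that
# implements llm.call. Node shape follows the 30-second example above.
name: hello

nodes:
  - name: greet
    uses: llm.call
    with:
      model: "gemma3n:e4b"
      messages:
        - role: user
          content: "Say hello to {{ state.name }}."
    output: greeting
```

Run it with `tea run hello.yaml --input '{"name": "Ada"}'` on either binary.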

## Browser LLM (WASM)

Run TEA workflows with local LLM inference entirely in the browser:

```bash
# Install from npm
npm install tea-wasm-llm

# Or download from releases
VERSION=$(curl -s https://api.github.com/repos/fabceolin/the_edge_agent/releases/latest | grep -Po '"tag_name": "v\K[^"]+')
curl -L "https://github.com/fabceolin/the_edge_agent/releases/download/v${VERSION}/tea-wasm-llm-${VERSION}.tar.gz" -o tea-wasm-llm.tar.gz
```

```javascript
import { initTeaLlm, loadModel, executeLlmYaml } from 'tea-wasm-llm';

await initTeaLlm();
await loadModel('./models/phi-4-mini.gguf');

const result = await executeLlmYaml(yamlWorkflow, { question: "What is 2+2?" });
```

Features:

- Offline operation after the initial model download
- IndexedDB model caching for fast subsequent loads
- Multi-threaded inference via SharedArrayBuffer (browsers require cross-origin isolation headers to enable this; see the deployment guide below)
- Opik observability integration

Try it live: Interactive WASM Demo - run LLM inference directly in your browser (no server required).

See WASM LLM Deployment Guide for server configuration and examples.

## Examples

See `examples/` for ready-to-run agents:

- `examples/prolog/neurosymbolic/` - LLM + Prolog reasoning
- `examples/llm/` - Pure LLM workflows
- `examples/llamaindex/` - RAG and document retrieval
- `examples/web/` - Web scraping agents
- `examples/academic/` - Academic writing workflows

### Academic Writing Example

Generate research papers with human-in-the-loop review:

```bash
cd python
python -m the_edge_agent.cli run \
  ../examples/academic/kiroku-document-writer.yaml \
  --input @../examples/academic/sample-paper-spec.yaml
```

This workflow includes:

- AI-assisted title suggestion and research
- Topic sentence planning with manual review
- Draft generation with revision cycles
- Abstract and citation generation

See Migration Guide for migrating LangGraph workflows to TEA.

## Contributing

We welcome contributions! Please open an issue or pull request on GitHub.

## License

MIT License. See LICENSE for details.

## Acknowledgements

TEA is inspired by LangGraph. We thank the LangGraph team for their innovative work in language model workflows.


If TEA helps your project, consider starring the repo!
