
Commit 9d2fa03: Add differences with DSPy (1 parent: 7a38d84)

1 file changed: docs/Differences with DSPy.md (+66, -0 lines)
# Differences with DSPy

## A Complete Guide

This document highlights the key differences between **DSPy** and **Synalinks**. While both frameworks enable in-context learning and neuro-symbolic programming, they differ significantly in design philosophy, reliability, and production readiness. A basic understanding of in-context learning frameworks is assumed.
---

## Fundamental Differences
### DSPy: PyTorch-Inspired

- **Purpose**: Build intelligent applications combining LMs with symbolic reasoning.
- **Memory**: Natively supports vector-only databases.
- **Reliability**: DSPy relies on brittle parsing logic in its `Adapter` classes. While optimization reduces errors, exceptions caused by LM output format failures remain common in production.
- **Async**: Offers both sync and async code paths, which can lead to inconsistent practices in production environments.
- **String Variables**: Like TextGrad and other in-context learning frameworks, DSPy represents variables as strings. This limits its ability to handle complex structured variables (e.g., graphs, plans), making it less suitable for advanced neuro-symbolic systems that need to learn to plan or to structure working memory.
### Synalinks: Keras-Inspired

- **Purpose**: Build intelligent applications combining LMs with symbolic reasoning.
- **Memory**: Natively supports hybrid graph + vector databases, enabling richer data relationships and more flexible memory structures.
- **Reliability**: Uses constrained structured output by default, eliminating brittle parsing and ensuring robust, predictable behavior.
- **Async**: Async by default, enforcing production-ready practices and consistent performance.
- **Strict Module Typing**: Modules are strictly typed using JSON schemas (defined in `compute_output_spec()`). This lets the system compute output contracts end-to-end before any computation runs, ensuring type safety and clarity.
- **JSON Variables**: Variables are JSON objects with associated schemas. The optimizer uses constrained structured output to guarantee a correct variable structure every time, enabling complex, nested, and graph-like data handling.
- **Arithmetic & Logical Operators**: Implements JSON-level concatenation and logical operators (OR, AND, XOR) via Python operators, allowing rapid architecture changes without writing additional classes.
- **Robust Branching and Merging**: Creates schemas dynamically for branching and handles merging via JSON operators, enabling complex workflows without custom classes.
- **Observable by Default**: Every LM call within a module can be returned, so rewards can be computed from internal processes rather than outputs alone, enabling finer-grained optimization and debugging.
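As an illustration of the operator idea (a minimal toy sketch with made-up class and field names, not Synalinks' actual API), JSON-level concatenation and a logical OR can be expressed by overloading Python operators on schema-carrying objects:

```python
class DataModel:
    """A toy JSON data model: `+` concatenates fields, `|` keeps the
    first non-empty operand (logical OR fallback)."""

    def __init__(self, **fields):
        self.fields = fields

    def __add__(self, other: "DataModel") -> "DataModel":
        # Concatenation: merge both JSON objects into one.
        return DataModel(**{**self.fields, **other.fields})

    def __or__(self, other: "DataModel") -> "DataModel":
        # Logical OR: keep self if it has content, else fall back to other.
        return self if self.fields else other


query = DataModel(question="What is 2 + 2?")
answer = DataModel(answer="4")

merged = query + answer          # fields: {'question': ..., 'answer': '4'}
fallback = DataModel() | answer  # empty model falls back to `answer`
```

The point of the pattern is that wiring two modules' outputs together becomes a one-line expression instead of a custom merge class.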
---

## Key Concept Mapping
| **DSPy Concept** | **Synalinks Equivalent** | **Key Difference** |
|--------------------------|----------------------------------|-------------------------------------------------------------------------------------|
| `Adapter` | - | No brittle parsing; uses JSON schemas for robust I/O |
| `GEPA` | `OMEGA` | Uses a state-of-the-art (2025) evolutionary algorithm instead of a decade-old method |
| String-based variables | JSON-based variables | Supports complex structures (graphs, plans) and strict validation |
| Sync/async choice | Async by default | Enforces production best practices |
| Vector-only memory | Hybrid graph + vector memory | Enables richer data relationships and more flexible memory structures |
| Custom branching logic | JSON operators for branching | Dynamic schema creation and merging; no need for custom classes |
| Limited observability | Observable by default | Full visibility into LM calls for reward computation and debugging |
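To sketch why a schema-first design needs no `Adapter` (hypothetical names and a deliberately minimal validator, not Synalinks' real implementation), a module can publish its output schema up front and check the constrained LM output structurally instead of parsing strings:

```python
import json

# Output contract published before any LM call is made.
ANSWER_SCHEMA = {
    "type": "object",
    "required": ["answer", "confidence"],
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
}

TYPE_MAP = {"object": dict, "string": str, "number": (int, float)}


def validate(payload: dict, schema: dict) -> bool:
    """Minimal JSON-schema check: required keys present, types match."""
    if not isinstance(payload, TYPE_MAP[schema["type"]]):
        return False
    for key in schema.get("required", []):
        if key not in payload:
            return False
    for key, sub in schema.get("properties", {}).items():
        if key in payload and not isinstance(payload[key], TYPE_MAP[sub["type"]]):
            return False
    return True


# With constrained decoding the LM emits JSON matching the schema,
# so "parsing" is a plain json.loads followed by a structural check.
lm_output = '{"answer": "4", "confidence": 0.97}'
assert validate(json.loads(lm_output), ANSWER_SCHEMA)
```

A real schema validator covers far more (nesting, enums, arrays), but the contract-before-computation idea is the same.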
---

## When to Use Each Framework
### Use DSPy when:

- You are in a research environment and need rapid prototyping.
- Your use case is simple and does not require complex structured variables.
- You prefer the flexibility of choosing between sync and async code.
- You are comfortable managing parsing logic and potential LM output format issues (and can tolerate failures in production).
### Use Synalinks when:

- You need a production-ready, reliable system with robust error handling.
- Your application requires complex structured variables (e.g., graphs, plans).
- You want strict typing and end-to-end contract validation.
- You need hybrid memory (graph + vector) for richer data relationships.
- You want to observe and optimize internal LM processes, not just outputs.
- You need to rapidly change architectures using built-in JSON operators.
---

## Summary
While **DSPy** is a powerful research tool inspired by PyTorch's flexibility, **Synalinks** is designed for production use, inspired by Keras's user-friendliness and reliability. Synalinks' use of JSON schemas, strict typing, async-by-default design, and robust branching/merging makes it ideal for building complex, reliable neuro-symbolic systems that can learn, plan, and reason with structured data.
