Commit e2ae80f (1 parent: 3c30a39)

bump to 0.1.0 release with improvement in model selection

19 files changed: 797 additions, 205 deletions

CHANGELOG.md (92 additions, 1 deletion)

@@ -8,8 +8,99 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 ## [Unreleased]
 
 ### Added
+- Planned:
+  - **Plan / Code modes** in interactive CLI (explicit “planning” vs “coding” flows for complex tasks).
+  - First-class support for **open-source models via third-party providers** (e.g. OpenRouter, Groq and similar gateways), alongside existing Ollama + cloud integrations.
+
+### Changed
+- Intent routing to further reduce/eliminate **duplicate code generation**, especially with large open-source models and remote providers.
+
+### Fixed
 - TBC
 
+---
+
+## [0.1.0] - 2025-11-26
+
+### Overview
+- **First public release** of DSPy Code: an AI-powered, interactive development and optimization assistant for DSPy (think "Claude Code for DSPy").
+
+### Added
+- **Interactive CLI & Workflows**
+  - Rich TUI with animated thinking indicators, status panels, and history-aware prompts.
+  - Fully conversational flow: describe what you want in natural language, get DSPy code, ask follow-ups.
+  - Two core workflows:
+    - **Development**: `/init` → describe task → generate → `/validate` → `/run` → iterate.
+    - **Optimization**: `/data` → `/optimize` → `/eval` → `/export`.
+- **Model Connection & Providers**
+  - Support for local **Ollama** models.
+  - Cloud providers: **OpenAI**, **Anthropic (Claude)**, **Google Gemini**.
+  - New interactive `/model` command:
+    - Auto-detect and list Ollama models, pick by number.
+    - Cloud flow to pick provider, then type model name (e.g. `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`).
+  - Direct `/connect <provider> <model>` command for advanced users.
+  - `/models`, `/status`, `/disconnect` for model management.
+- **LLM Integration & SDK Support**
+  - OpenAI integration compatible with `openai>=2.x` chat completions API.
+  - Anthropic integration via the current `anthropic` Python SDK.
+  - Gemini integration via the official `google-genai` SDK, with fallback to `google-generativeai` when present.
+  - Local Ollama integration with configurable HTTP timeouts for large models (`OLLAMA_HTTP_TIMEOUT`, `OLLAMA_TEST_TIMEOUT`).
+  - Optional extras in `pyproject.toml`:
+    - `dspy-code[openai]`, `dspy-code[anthropic]`, `dspy-code[gemini]`, `dspy-code[llm-all]`.
+- **DSPy-Aware Code Generation**
+  - Natural language → DSPy **Signatures**, **Modules**, and full **Programs**.
+  - Support for major DSPy patterns: predictors, ChainOfThought, ReAct, RAG, etc.
+  - Templates and examples for:
+    - RAG systems
+    - Question answering
+    - Classification (e.g. sentiment analyzer)
+    - Optimization/evaluation workflows
+- **Validation & Execution**
+  - `/validate` to check generated code against DSPy best practices and structure.
+  - `/run` and `/test` to execute and test generated programs within a sandboxed engine.
+  - Validation support for signatures, modules, predictors, adapters, metrics, and anti-patterns.
+- **GEPA Optimization**
+  - End-to-end optimization workflows:
+    - `/optimize` for one-shot optimization scripts.
+    - `/optimize-start`, `/optimize-status`, `/optimize-resume`, `/optimize-cancel` for long-running GEPA jobs.
+  - Integration with evaluation metrics (Accuracy, F1, ROUGE, BLEU, etc.).
+  - Documentation and warnings about cloud costs and recommended hardware (32 GB RAM for heavy local optimization).
+- **MCP (Model Context Protocol) Integration**
+  - Built-in MCP client with commands:
+    - `/mcp-connect`, `/mcp-disconnect`, `/mcp-servers`, `/mcp-tools`, `/mcp-call`, `/mcp-resources`, `/mcp-read`, `/mcp-prompts`, `/mcp-prompt`.
+  - Enables connecting DSPy Code to external tools and data sources.
+- **Project & Session Management**
+  - `/init` and `/project` for initializing and inspecting DSPy projects.
+  - Codebase indexing and RAG support for answering questions about your own code.
+  - Session management commands: `/sessions`, `/session`, `/history`, `/clear`, `/save-data`, export/import.
+  - Export/import of sessions and packages for deployment via `/export` and `/import`.
+- **Documentation & Examples**
+  - Full docs site (MkDocs Material) with:
+    - Getting Started (installation, quick start, first program, understanding DSPy Code).
+    - Guides (model connection, interactive mode, natural language commands, optimization, project management, validation, slash commands).
+    - Tutorials (RAG system, question answering, sentiment analyzer, GEPA optimization).
+    - Reference (commands, configuration, templates, troubleshooting, FAQ, security).
+  - Homepage and README aligned around DSPy Code as:
+    - **Development assistant** (build DSPy apps quickly).
+    - **Optimization engine** (real GEPA).
+    - **Learning environment** for DSPy concepts.
 
 ### Changed
-- First release
+- Default Ollama generation timeout increased to 120 seconds to better support large models.
+- Examples across README and docs updated to use modern models (e.g. `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`, `gpt-oss:120b`) and to recommend `/model` as the primary way to connect.
+- Quick Start and model-connection docs now make model connection mandatory and show clear virtual-env + provider-SDK installation flows using `dspy-code[...]` extras and `uv`/`pip`.
+- Interactive UI improved with modern Rich versions and a `DSPY_CODE_SIMPLE_UI` mode for environments with limited emoji/spinner support.
+- Natural language intent routing in interactive mode refined to:
+  - Prefer natural-language answers for questions.
+  - Avoid double code generation and incorrect `/explain` follow-ups.
+- MkDocs navigation configuration tuned (tabs, sections) to keep the left nav stable and highlight the active page correctly.
+
+### Fixed
+- OpenAI deprecation issues (`APIRemovedInV1`) by migrating from `ChatCompletion` to the new client API, and removing unsupported `max_tokens`/`temperature` parameters for models like `gpt-5-nano`.
+- Interactive mode errors:
+  - `name 'explanations' is not defined` during `/explain`.
+  - Syntax errors in `nl_command_router` debug logging.
+- Ollama timeout handling for large models, with clearer error messages on connection/generation failures.
+- Documentation glitches:
+  - Stray `\n` in callouts.
+  - Navigation behavior that caused pages to disappear or not highlight correctly.
README.md (17 additions, 8 deletions)

@@ -312,8 +312,8 @@ dspy-code
 # Initialize your project (creates config and scans your environment)
 /init
 
-# Connect to a model (example with Ollama)
-/connect ollama llama3.1:8b
+# Connect to a model (interactive selector)
+/model
 
 # Generate your first program using natural language
 Create a sentiment analyzer that takes text and outputs positive or negative

@@ -339,7 +339,8 @@ DSPy Code is **interactive-only** - all commands are slash commands. Here are th
 - `/exit` - Exit the interactive session
 
 ### 🤖 Model Connection
-- `/connect <provider> <model>` - Connect to LLM (ollama, openai, anthropic, gemini)
+- `/model` - Interactive model selection (local via Ollama or cloud providers)
+- `/connect <provider> <model>` - Directly connect to LLM when you know the model name
 - `/disconnect` - Disconnect current model
 - `/models` - List available models
 - `/status` - Show current connection status

@@ -411,7 +412,7 @@ DSPy Code is **interactive-only** - all commands are slash commands. Here are th
 ```bash
 dspy-code
 /init
-/connect ollama llama3.1:8b
+/model
 Create a RAG system for document Q&A
 /save rag_system.py
 /validate

@@ -481,20 +482,25 @@ dspy-code
 Connect to any LLM provider:
 
 ```bash
+# Recommended: interactive model selector
+/model
+
+# Or connect directly if you know the model name:
+
 # Ollama (local, free)
-/connect ollama llama3.1:8b
+/connect ollama gpt-oss:120b
 
 # OpenAI (example small model)
 /connect openai gpt-5-nano
 
 # Anthropic (paid key required)
-/connect anthropic claude-3-5-sonnet-20241022
+/connect anthropic claude-sonnet-4.5
 
 # Google Gemini (example model)
 /connect gemini gemini-2.5-flash
 ```
 
-> 💡 **Tip:** These are just starting points. Check your provider docs for the **latest models** (for example gpt-4o / gpt-5 family, Gemini 2.5, latest Claude Sonnet/Opus) and plug them into `/connect`.
+> 💡 **Tip:** These are just starting points. Check your provider docs for the **latest models** (for example gpt-4o / gpt-5 family, Gemini 2.5, latest Claude Sonnet/Opus) and either pick them via `/model` or plug them into `/connect`.
 
 ## 🧬 GEPA Optimization
 

@@ -555,8 +561,11 @@ pip install -e .
 ### With uv (Faster)
 
 ```bash
-# Always get the latest version
+# Always get the latest version into your current environment
 uv pip install --upgrade dspy-code
+
+# Or add it to your project's pyproject.toml in one step
+uv add dspy-code
 ```
 
 ## 🏗️ Architecture

docs/getting-started/installation.md (3 additions, 0 deletions)

@@ -67,6 +67,9 @@ source .venv/bin/activate.fish
 ```bash
 # If you use uv, you can install dspy-code like this
 uv pip install --upgrade dspy-code
+
+# Or add it to your project dependencies (pyproject.toml) in one step
+uv add dspy-code
 ```
 
 That's it! DSPy Code is now installed in your project.
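For context on the `uv add` variant above: unlike `uv pip install`, it records the dependency in the project's `pyproject.toml`. The resulting entry looks roughly like this (the version constraint is illustrative; `uv` pins whatever version it resolves):

```toml
[project]
dependencies = [
    "dspy-code>=0.1.0",
]
```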

docs/getting-started/quick-start.md (14 additions, 1 deletion)

@@ -92,9 +92,22 @@ You'll see a beautiful welcome screen with the DSPy version and helpful tips.
 
 Before you do anything else in the CLI, you **must connect to a model**. DSPy Code relies on an LLM for code generation and understanding.
 
+**Easiest (recommended): use the interactive selector**
+
+```bash
+/model
+```
+
+This lets you:
+
+- Choose **Ollama** local models from a numbered list
+- Choose a **cloud provider** (OpenAI, Anthropic, Gemini) and then type a model name (for example `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`)
+
+**Direct connect (advanced users):**
+
 ```bash
 # Ollama (local, free)
-/connect ollama llama3.1:8b
+/connect ollama gpt-oss:120b
 
 # Or OpenAI (example small model)
 /connect openai gpt-5-nano

docs/getting-started/understanding.md (1 addition, 1 deletion)

@@ -43,7 +43,7 @@ All commands are slash commands in interactive mode:
 ```
 dspy-code
 → /init
-→ /connect ollama llama3.1:8b
+→ /model
 → Create a sentiment analyzer
 → /save sentiment.py
 → /validate

docs/guide/interactive-mode.md (3 additions, 2 deletions)

@@ -93,7 +93,7 @@ Use `/` prefix for specific commands:
 ```
 /help
 /init
-/connect ollama llama3.1:8b
+/model
 /save my_module.py
 /validate
 /run

@@ -223,7 +223,8 @@ Clear conversation history:
 **2. Connect model:**
 
 ```
-→ /connect ollama llama3.1:8b
+→ /model
+[Pick Ollama vs cloud, then choose a model]
 ✓ Connected!
 ```
docs/guide/model-connection.md (15 additions, 3 deletions)

@@ -17,10 +17,22 @@ DSPy Code supports both **local** and **cloud** LLMs:
 
 ## Quick Connect
 
-### Ollama (Local - Recommended for Beginners)
+### Easiest: Interactive Model Selector
 
+```bash
+/model
 ```
-/connect ollama llama3.1:8b
+
+This walks you through:
+
+- Picking **Ollama** (local) vs **cloud** providers
+- For Ollama: selecting from detected models (for example `gpt-oss:120b`, `llama3.2`) by number
+- For cloud: picking **OpenAI**, **Anthropic**, or **Gemini** and then typing a model name (for example `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`)
+
+### Ollama (Local - Recommended for Beginners)
+
+```bash
+/connect ollama gpt-oss:120b
 ```
 
 **Advantages:**

@@ -31,7 +43,7 @@ DSPy Code supports both **local** and **cloud** LLMs:
 
 **Requirements:**
 - Ollama installed
-- Model downloaded: `ollama pull llama3.1:8b`
+- Model downloaded: `ollama pull gpt-oss:120b`
 
 ### OpenAI (Cloud)
 
docs/guide/natural-language-commands.md (21 additions, 9 deletions)

@@ -4,27 +4,39 @@ DSPy Code supports **natural language for all commands**! You don't need to reme
 
 ## 🎯 How It Works
 
-Instead of typing `/connect ollama llama3.1:8b`, you can simply say:
+Instead of remembering exact slash commands, you can either:
+
+- Use the **interactive selector**: `/model`
+- Or just describe what you want in natural language.
+
+For example, instead of typing `/connect ollama gpt-oss:120b`, you can say:
 
 ```
-connect to ollama llama3.1:8b
+connect to ollama gpt-oss:120b
 ```
 
-DSPy Code automatically understands your intent and routes it to the appropriate command.
+DSPy Code automatically understands your intent and routes it to the appropriate command (for example, `/model` or `/connect`).
 
 ---
 
 ## 📋 Supported Natural Language Commands
 
 ### Connection Commands
 
-**Instead of:** `/connect ollama llama3.1:8b`
+**Instead of:** `/model` (interactive selector)
+**You can say:**
+- "help me pick a model"
+- "connect a model for me"
+- "set up a model"
+- "select a model to use"
+
+**Instead of:** `/connect ollama gpt-oss:120b`
 **You can say:**
-- "connect to ollama llama3.1:8b"
-- "use model ollama llama3.1:8b"
-- "switch to ollama llama3.1:8b"
-- "set up model ollama llama3.1:8b"
-- "configure model ollama llama3.1:8b"
+- "connect to ollama gpt-oss:120b"
+- "use model ollama gpt-oss:120b"
+- "switch to ollama gpt-oss:120b"
+- "set up model ollama gpt-oss:120b"
+- "configure model ollama gpt-oss:120b"
 
 **Instead of:** `/disconnect`
 **You can say:**

docs/guide/slash-commands.md (26 additions, 16 deletions)

@@ -189,43 +189,53 @@ Browse DSPy templates and patterns.
 
 ## Model Connection Commands
 
-### /connect
+### /model
 
-Connect to a language model.
+Interactively select and connect to a model.
 
-**Ollama (Local):**
+**Basic usage:**
 
+```bash
+/model
 ```
-/connect ollama llama3.1:8b
-```
 
+You will be prompted to choose:
+
+- Local models via **Ollama** (auto-detected from your Ollama installation)
+- Cloud providers: **OpenAI**, **Anthropic**, **Gemini** (you then type the model name, e.g. `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`)
+
+**Shortcuts:**
+
+```bash
+/model ollama   # Only show local Ollama models and pick one by number
+/model cloud    # Pick a cloud provider, then type the model name
 ```
+
+### /connect
+
+Directly connect to a specific model if you already know the name.
+
+**Ollama (Local):**
+
+```bash
 /connect ollama gpt-oss:120b
 ```
 
 **OpenAI:**
 
-```
-/connect openai gpt-4
-```
-
-```
+```bash
 /connect openai gpt-5-nano
 ```
 
 **Anthropic Claude:**
 
-```
+```bash
 /connect anthropic claude-sonnet-4.5
 ```
 
-```
-/connect anthropic claude-opus-4.5
-```
-
 **Google Gemini:**
 
-```
+```bash
 /connect gemini gemini-2.5-flash
 ```