**CHANGELOG.md** (92 additions, 1 deletion)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added

- Planned:
  - **Plan / Code modes** in the interactive CLI (explicit “planning” vs “coding” flows for complex tasks).
  - First-class support for **open-source models via third-party providers** (e.g. OpenRouter, Groq, and similar gateways), alongside the existing Ollama + cloud integrations.

### Changed

- Intent routing refined to further reduce/eliminate **duplicate code generation**, especially with large open-source models and remote providers.

### Fixed

- TBC
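The duplicate-generation issue mentioned under *Changed* comes down to recognizing when two candidate snippets are effectively the same code. A hypothetical sketch of such a guard (the function name and normalization rule are illustrative, not DSPy Code internals):

```python
import hashlib


def dedupe_generations(candidates: list[str]) -> list[str]:
    """Drop candidates whose normalized source was already emitted."""
    seen: set[str] = set()
    unique: list[str] = []
    for code in candidates:
        # Normalize whitespace so trivially reformatted duplicates match.
        normalized = "\n".join(
            line.strip() for line in code.splitlines() if line.strip()
        )
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(code)
    return unique
```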
---

## [0.1.0] - 2025-11-26

### Overview

- **First public release** of DSPy Code: an AI-powered, interactive development and optimization assistant for DSPy (think "Claude Code for DSPy").

### Added

- **Interactive CLI & Workflows**
  - Rich TUI with animated thinking indicators, status panels, and history-aware prompts.
  - Fully conversational flow: describe what you want in natural language, get DSPy code, ask follow-ups.
- Default Ollama generation timeout increased to 120 seconds to better support large models.
- Examples across the README and docs updated to use modern models (e.g. `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`, `gpt-oss:120b`) and to recommend `/model` as the primary way to connect.
- Quick Start and model-connection docs now make model connection mandatory and show clear virtual-env + provider-SDK installation flows using `dspy-code[...]` extras and `uv`/`pip`.
- Interactive UI improved with modern Rich versions and a `DSPY_CODE_SIMPLE_UI` mode for environments with limited emoji/spinner support.
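Enabling the simple-UI fallback is an environment-variable toggle. The variable name comes from the changelog entry above; the accepted value (`1`) is an assumption, so check the project docs:

```shell
# Fall back to a plain-text UI in terminals with poor emoji/spinner support.
# Variable name is from the changelog; the value "1" is assumed.
export DSPY_CODE_SIMPLE_UI=1
# then launch as usual: dspy-code
```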
- Natural-language intent routing in interactive mode refined to:
  - Prefer natural-language answers for questions.
  - Avoid double code generation and incorrect `/explain` follow-ups.
- MkDocs navigation configuration tuned (tabs, sections) to keep the left nav stable and highlight the active page correctly.

### Fixed

- OpenAI deprecation issues (`APIRemovedInV1`) fixed by migrating from `ChatCompletion` to the new client API, and by removing unsupported `max_tokens`/`temperature` parameters for models like `gpt-5-nano`.
- Interactive mode errors:
  - `name 'explanations' is not defined` during `/explain`.
  - Syntax errors in `nl_command_router` debug logging.
- Ollama timeout handling for large models, with clearer error messages on connection/generation failures.
- Documentation glitches:
  - Stray `\n` in callouts.
  - Navigation behavior that caused pages to disappear or not highlight correctly.
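The `max_tokens`/`temperature` fix above amounts to filtering request parameters per model before calling the client. A hedged sketch of that pattern (the helper and the unsupported-model set are illustrative, not the actual DSPy Code source):

```python
def build_request(model: str, messages: list[dict], **opts) -> dict:
    """Assemble kwargs for the new-style OpenAI client, dropping
    sampling parameters that some models reject."""
    # Illustrative: per the changelog, gpt-5-nano rejects these params.
    rejects_sampling_params = {"gpt-5-nano"}
    if model in rejects_sampling_params:
        opts.pop("max_tokens", None)
        opts.pop("temperature", None)
    return {"model": model, "messages": messages, **opts}

# The resulting dict would be passed to client.chat.completions.create(**request).
```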
- `/model` - Interactive model selection (local via Ollama or cloud providers)
- `/connect <provider> <model>` - Directly connect to an LLM when you know the model name
- `/disconnect` - Disconnect the current model
- `/models` - List available models
- `/status` - Show current connection status
DSPy Code is **interactive-only** - all commands are slash commands.

```bash
dspy-code
/init
/model
Create a RAG system for document Q&A
/save rag_system.py
/validate
```
Connect to any LLM provider:

```bash
# Recommended: interactive model selector
/model

# Or connect directly if you know the model name:

# Ollama (local, free)
/connect ollama gpt-oss:120b

# OpenAI (example small model)
/connect openai gpt-5-nano

# Anthropic (paid key required)
/connect anthropic claude-sonnet-4.5

# Google Gemini (example model)
/connect gemini gemini-2.5-flash
```

> 💡 **Tip:** These are just starting points. Check your provider docs for the **latest models** (for example the gpt-4o / gpt-5 family, Gemini 2.5, the latest Claude Sonnet/Opus) and either pick them via `/model` or plug them into `/connect`.
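Cloud providers need an API key in the environment before connecting. The variable names below are the conventional ones read by each provider's SDK, assumed here rather than taken from the DSPy Code docs, so verify them against your provider's documentation:

```shell
# Conventional environment variables for the provider SDKs (placeholders shown)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."   # used by the Gemini SDK
```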
## 🧬 GEPA Optimization
```bash
pip install -e .
```

### With uv (Faster)

```bash
# Always get the latest version into your current environment
uv pip install --upgrade dspy-code

# Or add it to your project's pyproject.toml in one step
uv add dspy-code
```
**docs/getting-started/quick-start.md** (14 additions, 1 deletion)
You'll see a beautiful welcome screen with the DSPy version and helpful tips.

Before you do anything else in the CLI, you **must connect to a model**. DSPy Code relies on an LLM for code generation and understanding.

**Easiest (recommended): use the interactive selector**

```bash
/model
```

This lets you:

- Choose **Ollama** local models from a numbered list
- Choose a **cloud provider** (OpenAI, Anthropic, Gemini) and then type a model name (for example `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`)
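The numbered-list selection described above boils down to resolving a typed index (or an exact name) to a model. A toy sketch of that lookup, not the actual DSPy Code implementation:

```python
def pick_model(models: list[str], choice: str) -> str:
    """Resolve a 1-based numeric choice, or an exact model name, to a model."""
    if choice.isdigit() and 1 <= int(choice) <= len(models):
        return models[int(choice) - 1]
    if choice in models:
        return choice
    raise ValueError(f"Unknown selection: {choice!r}")
```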
**docs/guide/model-connection.md** (15 additions, 3 deletions)
DSPy Code supports both **local** and **cloud** LLMs:

## Quick Connect

### Easiest: Interactive Model Selector

```bash
/model
```

This walks you through:

- Picking **Ollama** (local) vs **cloud** providers
- For Ollama: selecting from detected models (for example `gpt-oss:120b`, `llama3.2`) by number
- For cloud: picking **OpenAI**, **Anthropic**, or **Gemini** and then typing a model name (for example `gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`)

### Ollama (Local - Recommended for Beginners)

```bash
/connect ollama gpt-oss:120b
```

**Advantages:**