Commit 278e9ba

Merge update at 1.30 (PR:lfnovo#75)
2 parents: 8eb1a1e + c1416da


67 files changed: +15,417 / −1,265 lines

CHANGELOG.md

Lines changed: 85 additions & 0 deletions
@@ -5,6 +5,91 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Changed

- **Proxy Configuration** - Simplified proxy handling by delegating entirely to httpx
  - Esperanto now uses standard environment variables: `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY`
  - **BREAKING**: Removed `ESPERANTO_PROXY` environment variable support
  - **BREAKING**: Removed `config={"proxy": "..."}` parameter support
  - Migration: Replace `ESPERANTO_PROXY` with `HTTP_PROXY` and `HTTPS_PROXY`
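The migration step can be sketched as a small shim for code that still sets the removed variable. The variable names come from this changelog; the shim itself is illustrative, not part of Esperanto:

```python
import os


def migrate_proxy_env(env: dict) -> None:
    """Copy a legacy ESPERANTO_PROXY value to the standard variables httpx reads.

    Existing HTTP_PROXY/HTTPS_PROXY values are left untouched (setdefault).
    """
    legacy = env.pop("ESPERANTO_PROXY", None)
    if legacy:
        env.setdefault("HTTP_PROXY", legacy)
        env.setdefault("HTTPS_PROXY", legacy)


# Example: a process environment that still uses the removed variable
env = {"ESPERANTO_PROXY": "http://proxy.example.com:8080"}
migrate_proxy_env(env)
```

In a real process you would pass `os.environ` instead of a plain dict, before the first `AIFactory` call.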
## [2.17.2] - 2026-01-24

### Fixed

- **LangChain Connection Error from Garbage Collection** - Fixed "Connection error" when using `to_langchain()` (#73)
  - When an Esperanto model was garbage collected, its shared httpx clients were closed, breaking LangChain
  - Now creates fresh httpx clients for LangChain with the same configuration (timeout, SSL, proxy)
  - Affected providers: OpenAI, Groq, Perplexity, OpenAI-compatible, Azure
  - Fixes: lfnovo/open-notebook#460
## [2.17.1] - 2026-01-24

### Fixed

- **Config Dict API Key Not Unpacked** - Fixed providers ignoring `api_key` passed via config dict (#68)
  - Affected providers: OpenRouter, DeepSeek, xAI (LLM), Groq (STT)
  - These providers inherit from OpenAI-compatible parent classes and checked for `api_key` before the config dict was unpacked
  - Now correctly extracts `api_key` and `base_url` from the config dict before setting provider defaults
  - Example that now works:

    ```python
    model = AIFactory.create_language(
        "openrouter",
        "anthropic/claude-3.5-sonnet",
        config={"api_key": "sk-or-v1-xxxxx"}
    )
    ```
## [2.17.0] - 2026-01-23

### Added

- **Unified Tool Calling** - Added tool/function calling support across all LLM providers (#67)
  - Define tools once using the `Tool` and `ToolFunction` types, then use them with any provider
  - Consistent interface: `chat_complete(messages, tools=tools)`
  - Support for the `tool_choice` parameter: `"auto"`, `"required"`, `"none"`, or a specific tool
  - Support for the `parallel_tool_calls` parameter
  - Multi-turn conversations with tool results (`role="tool"` messages)
  - Tool call validation via the `validate_tool_calls=True` parameter
  - New types: `Tool`, `ToolFunction`, `ToolCall`, `FunctionCall`, `ToolCallValidationError`
  - Validation utilities: `validate_tool_call()`, `validate_tool_calls()`, `find_tool_by_name()`
  - Tested providers: OpenAI, Anthropic, Google, Groq, Mistral, DeepSeek, xAI, OpenRouter, Azure, Ollama
  - Full documentation at `docs/features/tool-calling.md`
  - Examples at `examples/tool_calling/`

- **Real Integration Tests for Tool Calling** - Added tests that call actual APIs (#71)
  - Validates that tool calling works correctly across 10 providers
  - Tests both basic tool calls and multi-turn conversations
  - Perplexity is skipped (it doesn't support tool calling)
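The multi-turn flow with `role="tool"` messages can be sketched with plain message dicts. The field names below (`tool_call_id`, the nested `function` shape) follow the common OpenAI-style convention and are an assumption here; see `docs/features/tool-calling.md` for the canonical types:

```python
import json

# A tool call as it might come back from chat_complete (illustrative values)
tool_call = {
    "id": "call_1",
    "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'},
}

# Execute the tool locally, then feed the result back as a role="tool" message
args = json.loads(tool_call["function"]["arguments"])
result = f"Sunny in {args['city']}"

followup_messages = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {"role": "assistant", "tool_calls": [tool_call]},
    {"role": "tool", "tool_call_id": tool_call["id"], "content": result},
]
# followup_messages would then be passed to chat_complete(..., tools=tools) again
```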
### Fixed

- **Streaming Validation Warning** - Added a warning when `validate_tool_calls=True` is used with streaming (#71)
  - Tool call validation requires the complete response
  - Now emits a `UserWarning` instead of silently ignoring the parameter
  - Affects all providers consistently
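A caller can surface this warning programmatically with the stdlib `warnings` module. This is a sketch: the streaming call is commented out because it needs a live provider, and the `warnings.warn` line stands in for the warning the provider would emit (the exact message text is an assumption):

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # for chunk in model.chat_complete(messages, tools=tools, stream=True,
    #                                  validate_tool_calls=True):
    #     ...
    warnings.warn("validate_tool_calls is ignored when streaming", UserWarning)

# Inspect what was emitted instead of letting it print to stderr
streaming_warnings = [w for w in caught if issubclass(w.category, UserWarning)]
```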
### Changed

- Moved mocked tool calling tests from `tests/integration/` to `tests/unit/`
## [2.16.0] - 2026-01-21

### Added

- **Ollama Context Window Configuration** - Added `num_ctx` support for the Ollama provider
  - Default context window increased to 128,000 tokens (Ollama's default of 2,048 was causing context truncation)
  - Configurable via `config={"num_ctx": 32768}`
  - Passed to LangChain's ChatOllama via `to_langchain()`

- **Ollama Keep Alive Configuration** - Added `keep_alive` support for the Ollama provider
  - Controls how long models stay loaded in memory
  - No default is set (so as not to force memory usage on users)
  - Examples: `"5m"` (5 minutes), `"0"` (unload immediately), `"-1"` (keep loaded indefinitely)
  - Configurable via `config={"keep_alive": "10m"}`
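The two Ollama options can be combined in one config dict. The `AIFactory.create_language` call is commented out since it requires a running Ollama server:

```python
# Both Ollama options from this release in a single config dict
config = {
    "num_ctx": 32768,     # context window in tokens (provider default raised to 128,000)
    "keep_alive": "10m",  # keep the model loaded in memory for 10 minutes after use
}

# from esperanto import AIFactory
# model = AIFactory.create_language("ollama", "llama3", config=config)
```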
## [2.15.0] - 2026-01-16

### Added

CLAUDE.md

Lines changed: 33 additions & 0 deletions
@@ -148,11 +148,44 @@ Factory imports providers dynamically via `_import_provider_class()`:

All providers convert API responses to Esperanto's common types:

- Language: `ChatCompletion` / `ChatCompletionChunk`
- Language (tools): `Tool`, `ToolFunction`, `ToolCall`, `FunctionCall`
- Embedding: `List[List[float]]`
- Reranker: `RerankResponse`
- STT: `TranscriptionResponse`
- TTS: `AudioResponse`

### Tool Calling

Esperanto provides unified tool/function calling across all LLM providers:

```python
from esperanto import AIFactory
from esperanto.common_types import Tool, ToolFunction

# Define tools once - works with any provider
tools = [
    Tool(
        type="function",
        function=ToolFunction(
            name="get_weather",
            description="Get weather for a city",
            parameters={"type": "object", "properties": {"city": {"type": "string"}}, "required": ["city"]}
        )
    )
]

# Use with any provider - identical code
model = AIFactory.create_language("openai", "gpt-4o")  # or "anthropic", "google", etc.
response = model.chat_complete(messages, tools=tools)

# Tool calls in response
if response.choices[0].message.tool_calls:
    for tc in response.choices[0].message.tool_calls:
        print(f"{tc.function.name}: {tc.function.arguments}")
```

See [docs/features/tool-calling.md](docs/features/tool-calling.md) for full documentation.

### Providers ↔ Utils

All providers use utility mixins:

docs/capabilities/llm.md

Lines changed: 39 additions & 0 deletions
@@ -189,6 +189,7 @@ response = model.chat_complete(messages)

## Advanced Topics

- **Tool/Function Calling**: [docs/features/tool-calling.md](../features/tool-calling.md) - Let models call functions
- **Timeout Configuration**: [docs/advanced/timeout-configuration.md](../advanced/timeout-configuration.md)
- **LangChain Integration**: [docs/advanced/langchain-integration.md](../advanced/langchain-integration.md)
- **Model Discovery**: [docs/advanced/model-discovery.md](../advanced/model-discovery.md)

@@ -305,9 +306,47 @@ if msg.thinking:

### Tool Calling

```python
from esperanto import AIFactory
from esperanto.common_types import Tool, ToolFunction

# Define a tool
tools = [
    Tool(
        type="function",
        function=ToolFunction(
            name="get_weather",
            description="Get weather for a location",
            parameters={
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"]
            }
        )
    )
]

# Use tools with any provider
model = AIFactory.create_language("openai", "gpt-4o")
response = model.chat_complete(
    [{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)

# Check for tool calls
if response.choices[0].message.tool_calls:
    for tc in response.choices[0].message.tool_calls:
        print(f"Tool: {tc.function.name}, Args: {tc.function.arguments}")
```

See the [Tool Calling Guide](../features/tool-calling.md) for complete documentation.

## See Also

- [Provider Setup Guides](../providers/README.md)
- [Tool Calling](../features/tool-calling.md)
- [Embedding Models](./embedding.md)
- [Speech-to-Text](./speech-to-text.md)
- [Text-to-Speech](./text-to-speech.md)

docs/configuration.md

Lines changed: 32 additions & 31 deletions
````diff
@@ -418,51 +418,48 @@ All provider types that use HTTP clients:

 ## Proxy Configuration

-Configure HTTP proxy for all provider connections. Useful for corporate networks, VPNs, or routing traffic through specific endpoints.
+Esperanto uses the standard HTTP proxy environment variables supported by most tools and libraries. Proxy configuration is handled automatically by the underlying httpx library.

-### Setting a Proxy
-
-**Via environment variable (recommended):**
+### Environment Variables

 ```bash
-ESPERANTO_PROXY=http://proxy.example.com:8080
-```
+# HTTP proxy (for http:// requests)
+HTTP_PROXY=http://proxy.example.com:8080
+http_proxy=http://proxy.example.com:8080

-**Via config parameter:**
+# HTTPS proxy (for https:// requests)
+HTTPS_PROXY=http://proxy.example.com:8080
+https_proxy=http://proxy.example.com:8080

-```python
-model = AIFactory.create_language(
-    "openai", "gpt-4",
-    config={"proxy": "http://proxy.example.com:8080"}
-)
+# Hosts to bypass proxy (comma-separated)
+NO_PROXY=localhost,127.0.0.1,.internal.com
+no_proxy=localhost,127.0.0.1,.internal.com
 ```

+Both uppercase and lowercase versions are supported.
+
 ### Proxy URL Formats

 ```bash
 # HTTP proxy
-ESPERANTO_PROXY=http://proxy.example.com:8080
+HTTP_PROXY=http://proxy.example.com:8080

-# HTTPS proxy
-ESPERANTO_PROXY=https://secure-proxy.example.com:443
+# HTTPS proxy (note: proxy URL is usually http://, not https://)
+HTTPS_PROXY=http://proxy.example.com:8080

 # Proxy with authentication
-ESPERANTO_PROXY=http://username:password@proxy.example.com:8080
+HTTP_PROXY=http://username:password@proxy.example.com:8080
 ```

-### Priority Order
-
-1. **Config parameter** `proxy` (highest priority)
-2. **Environment variable** `ESPERANTO_PROXY`
-3. **Default** `None` (no proxy)
-
 ### Common Use Cases

 **Corporate network with proxy:**

 ```bash
 # In .env
-ESPERANTO_PROXY=http://corporate-proxy.internal:3128
+HTTP_PROXY=http://corporate-proxy.internal:3128
+HTTPS_PROXY=http://corporate-proxy.internal:3128
+NO_PROXY=localhost,127.0.0.1,.internal.com
 ```

@@ -471,17 +468,21 @@ model = AIFactory.create_language("openai", "gpt-4")
 embedder = AIFactory.create_embedding("openai", "text-embedding-3-small")
 ```

-**Different proxy per instance:**
+**Bypass proxy for local services:**
+
+```bash
+# In .env
+HTTP_PROXY=http://proxy.example.com:8080
+HTTPS_PROXY=http://proxy.example.com:8080
+NO_PROXY=localhost,127.0.0.1,ollama.local
+```

 ```python
-# Use specific proxy for this instance
-model = AIFactory.create_language(
-    "openai", "gpt-4",
-    config={"proxy": "http://special-proxy.example.com:8080"}
-)
+# External APIs go through proxy
+model = AIFactory.create_language("openai", "gpt-4")

-# Another instance without proxy (if ESPERANTO_PROXY is not set)
-model_no_proxy = AIFactory.create_language("ollama", "llama3")
+# Local Ollama bypasses proxy (if in NO_PROXY)
+local_model = AIFactory.create_language("ollama", "llama3")
 ```

 ### Proxy Configuration Applies To
````
