```cpp
// Method 1: Using Model enum (recommended - type-safe)
auto response = client.sendRequest(OpenAI::Model::GPT_4o_Mini, "Hello! How are you?");
```
---
## Anthropic Claude
**llmcpp** now includes full support for [Anthropic's Claude models](https://docs.anthropic.com/en/docs/about-claude/models/overview) via the Messages API.

**Claude 3.5 series:**

- CLAUDE_HAIKU_3_5 (claude-3-5-haiku-20241022) - Fastest model
**Claude 3 series (Legacy):**
- CLAUDE_OPUS_3 (claude-3-opus-20240229) - Legacy opus
- CLAUDE_HAIKU_3 (claude-3-haiku-20240307) - Fast and compact legacy model
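The enum-to-model-ID mapping above can be sketched as a small lookup. This is a minimal illustration, assuming a hypothetical `AnthropicModel` enum and `toModelId` helper that mirror the library's `OpenAI::Model` pattern — consult the actual llmcpp headers for the real names:

```cpp
#include <string>

// Hypothetical enum mirroring the Claude model list above;
// the real llmcpp enum and namespace names may differ.
enum class AnthropicModel {
    CLAUDE_HAIKU_3_5,
    CLAUDE_OPUS_3,
    CLAUDE_HAIKU_3
};

// Map each enum value to its dated Anthropic API model ID.
inline std::string toModelId(AnthropicModel m) {
    switch (m) {
        case AnthropicModel::CLAUDE_HAIKU_3_5: return "claude-3-5-haiku-20241022";
        case AnthropicModel::CLAUDE_OPUS_3:    return "claude-3-opus-20240229";
        case AnthropicModel::CLAUDE_HAIKU_3:   return "claude-3-haiku-20240307";
    }
    return "";
}
```

Keeping the dated ID behind an enum is what makes the enum approach type-safe: a typo in a model string fails at compile time rather than as an API error.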
### Using ClientFactory with Anthropic
```cpp
// Create client via factory
auto client = llmcpp::ClientFactory::createClient("anthropic", "your-api-key");

// Use common LLMRequest interface
LLMRequestConfig config;
config.model = "claude-3-5-haiku-20241022";
config.maxTokens = 100;

LLMRequest request(config, "Hello, Claude!");
auto response = client->sendRequest(request);
```
> **Note:** For the latest Claude model recommendations and capabilities, consult the [Anthropic documentation](https://docs.anthropic.com/en/docs/about-claude/models/overview).
---
## 🚀 Performance Benchmarks
The `llmcpp` library includes comprehensive benchmarks comparing OpenAI and Anthropic models across different tasks. Run benchmarks with:
```bash
# Set environment variables
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export LLMCPP_RUN_BENCHMARKS=1

# Run unified benchmarks
./tests/llmcpp_tests "[unified][benchmark]"
```
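At its core, a latency benchmark like this times each call and averages across runs. A self-contained sketch of that measurement loop (the API call is replaced by an arbitrary callable here, since real timings require network access and keys):

```cpp
#include <chrono>
#include <functional>

// Average wall-clock latency (ms) of `call` over `runs` executions,
// the same averaging the benchmark suite applies across test runs.
inline double averageLatencyMs(const std::function<void()>& call, int runs) {
    double totalMs = 0.0;
    for (int i = 0; i < runs; ++i) {
        auto start = std::chrono::steady_clock::now();
        call();  // in the real benchmark this is an actual API request
        auto end = std::chrono::steady_clock::now();
        totalMs += std::chrono::duration<double, std::milli>(end - start).count();
    }
    return totalMs / runs;
}
```

`steady_clock` is the right choice here: unlike `system_clock` it is monotonic, so intervals are not distorted by clock adjustments mid-benchmark.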
### 🏆 Performance Leaders
Based on real API testing with consistent Responses API usage:

- **Consistent API Usage:** All tests use OpenAI Responses API for standardization
- **Real-World Conditions:** Actual API calls with network latency
- **Multiple Runs:** Results averaged across multiple test executions
- **Task Variety:** Simple text, structured output, and reasoning scenarios
- **Cost Analysis:** Based on current provider pricing (as of 2025)
### ⚡ Quick Performance Tips

1. **For Speed:** Use `gpt-4o-mini` for fastest responses
2. **For Cost:** Choose `claude-3-5-haiku` for budget-friendly options
3. **For Quality:** Select `claude-opus-4-1` when quality matters most
4. **For JSON:** Use OpenAI models with strict schema validation
5. **For Reasoning:** Enable `reasoning: {"effort": "low"}` for reasoning models
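On the wire, tip 5 corresponds to the `reasoning` object in the request body of OpenAI's Responses API, which applies to reasoning models. A minimal sketch of such a payload (model name and prompt are illustrative):

```json
{
  "model": "o4-mini",
  "input": "Explain the plan in three steps.",
  "reasoning": { "effort": "low" }
}
```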
> **Note:** Benchmark results may vary based on network conditions, API load, and specific use cases. Run your own benchmarks for mission-critical applications.