- CMake option `LLMCPP_USE_OPENSSL` to optionally use OpenSSL for HTTPS (defaults to OFF)
- Native SSL support via platform-specific implementations:
  - macOS: SecureTransport (via the Security framework)
  - Windows: WinSSL/Schannel (via crypt32)
  - Linux: OpenSSL can be enabled with `-DLLMCPP_USE_OPENSSL=ON`
- Default SSL/TLS implementation changed from OpenSSL to native platform SSL
- OpenSSL is now an optional dependency instead of required
- Faster builds and smaller binaries due to removed OpenSSL dependency
- Simplified dependency management by using native SSL implementations
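The SSL changes above can be exercised at configure time. A minimal sketch, assuming a standard out-of-source CMake layout (the `build` directory name is arbitrary):

```sh
# Default: native platform SSL (SecureTransport / Schannel), no OpenSSL needed
cmake -S . -B build

# Linux, opting back into OpenSSL explicitly
cmake -S . -B build -DLLMCPP_USE_OPENSSL=ON
cmake --build build
```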
- MCP (Model Context Protocol) tools integration with OpenAI Responses API
- `OpenAI::McpTool` struct for configuring MCP servers
- `OpenAIMcpUtils` namespace with helper functions:
  - `extractMcpCalls()` - Extract all MCP tool calls from a response
  - `wasToolCalled()` - Check if a specific tool was called
  - `getToolOutput()` - Get output from a specific tool
  - `getAllToolOutputs()` - Get all tool outputs
  - `wereMcpToolsListed()` - Check if tools were listed
  - `getAvailableMcpTools()` - Get the list of available tools
  - `wereAllToolsCalled()` - Check if all expected tools were called
  - `getMcpUsageStats()` - Get usage statistics
- Helper functions: `setTools()`, `hasTools()`, `getToolsJson()`
- Integration tests for MCP functionality using the public DeepWiki MCP server
- Generic `extensions` field in `LLMRequestConfig` for provider-specific data
- `JsonUtils.h` with safe JSON parsing utilities
- Refactored core LLM types to be provider-agnostic
- Updated to support gpt-5-mini model
- Merge pull request #60 from lucaromagnoli/feature/mcp-tools-integration
- Merge pull request #57 from lucaromagnoli/release/v1.2.0
- Include ResponseParser.h in main llmcpp.h header
- Merge pull request #58 from lucaromagnoli/fix/expose-response-parser
- Add ResponseParser for provider-agnostic structured response parsing
- implement comprehensive LLM response parsing
- enhance ResponseParser to handle direct function tags
- Merge pull request #55 from lucaromagnoli/release/v1.1.1
- security: remove anthropic_real_response.json
- Fix compilation issue in ResponseParser.cpp
- Implement parseDirectFunctionTags with simplified string-based approach
- Fix parseDirectFunctionTags to be generic and handle unclosed tags
- Fix parseDirectFunctionTags to throw exception for missing functionName
- Fix isAnthropicResponse to detect any XML-like tags
- Fix formatting after pre-commit hooks
- SECURITY: Remove dangerous test files that leaked aideas implementation details
- Fix formatting after pre-commit hooks
- Remove unnecessary text wrapping in LLM response conversion
- Fix Anthropic MessagesResponse parsing to handle tool_use content
- Update default model in AnthropicConfig to CLAUDE_SONNET_3_7
- Update default Anthropic model to Claude 4 Sonnet
- Update AnthropicConfig to use the latest API version
- Add tool_choice parameter to Anthropic MessagesRequest
- Replace literal model names with enum usage in getAvailableModels()
- Update formatting and anthropic version
- Fix anthropic-version to valid 2023-06-01
- Fix tool_choice format - should be object not array
- Fix toLLMResponse with smart parsing parameter
- Merge pull request #56 from lucaromagnoli/feature/response-parser
- prevent creation of branch-named tags in release workflow
- Merge pull request #51 from lucaromagnoli/release/v1.1.0
- Merge pull request #52 from lucaromagnoli/fix/release-workflow-tag-bug
- Merge branch 'main' into release/v1.1.0
- Merge pull request #53 from lucaromagnoli/release/v1.1.0
- Style: apply pre-commit fixes (clang-format, EOF)
- Merge pull request #54 from lucaromagnoli/fix/windows-ssl-guards
- Complete Anthropic Claude API integration
- Add unified OpenAI vs Anthropic benchmark tests
- Anthropic API integration bug fixes
- resolve macOS CI build issues
- Add comprehensive benchmark analysis to README
- fix Anthropic unit tests to match corrected message ordering
- trigger CI workflow
- remove paths-ignore rule that skips CI on documentation changes
- refine paths-ignore to only skip docs/ directory
- skip CI on documentation branches (docs/**)
- bump version to 1.1.0 and update changelog
- Merge pull request #50 from lucaromagnoli/feat/anthropic-integration
- Revert "fix: resolve macOS CI build issues"
- add GPT-5 mini/nano; usage note for reasoning effort; enums updated
- add CodeQL, gitleaks, dependency review, and secret-file guard
- switch to manual gitleaks invocation; set GITHUB_TOKEN; ensure SARIF path exists
- trigger only on push/schedule; drop PR triggers; make gitleaks non-blocking off main
- v1.0.23
- Merge pull request #41 from lucaromagnoli/release/v1.0.22
- tests(bench): add model-comparison integration benchmark and internal microbenchmarks
- Merge pull request #45 from lucaromagnoli/ci/security-workflow
- Merge pull request #46 from lucaromagnoli/docs/readme-gpt5
- tests(bench): add test_benchmarks.cpp to the test suite and remove obsolete integration benchmark file
- bench: exclude unsupported models; boost caps for reasoning models; add run_model_benchmarks.sh; CSV output
- bench: move run_model_benchmarks.sh under tests/bench
- bench: mark run_model_benchmarks.sh executable
- bench: avoid max_output_tokens for GPT-5 family; keep caps for others
- bench: include token usage (input/output/total) in CSV output
- models: add gpt-5-mini and gpt-5-nano to RESPONSES_MODELS list
- Merge pull request #47 from lucaromagnoli/feat/model-benchmarks
- add GPT-5 model support (enum, mappings, responses list); tests cover model and integration stub
- remove branch restrictions from CI workflow - tests should run everywhere
- remove pull_request trigger to avoid duplicate expensive builds
- release workflow should only trigger on merge to main
- Merge pull request #39 from lucaromagnoli/release/v1.0.21
- openai: GPT-5 integration hardening: client auto-polls incomplete responses; remove explicit maxTokens in integration tests; drop manual polling in tests; minor debug output tweaks
- Merge pull request #40 from lucaromagnoli/feat/gpt5
- Windows compilation error with std::unique_ptr
- remove pull_request triggers to avoid duplicate CI jobs
- replace std::unique_ptr with proper httplib::Client
- add Windows dependencies and configure httplib properly
- Merge pull request #38 from lucaromagnoli/fix/windows-unique-ptr
- Fixed Windows MSVC compilation error with std::unique_ptr
- Replaced std::unique_ptr with raw pointer for SSL placeholder
- skip release build-and-test if CI has already passed for the commit
- simplify CI condition to prevent tag builds
- Merge pull request #30 from lucaromagnoli/feat/add-models
- Remove o3 model integration test; only test o3-mini (no temperature or maxTokens)
- Commit remaining changes: core, openai, and unit test updates for optional params and o3-mini logic
- Fix maxTokens test: now optional instead of defaulting to 200
- Bump version to 1.0.18 for patch release
- add explicit condition to skip CI on tag pushes
- add checkout step to check-open-prs job in release workflow
- do not run CI workflow on version tags (only release workflow runs on tags)
- optimize workflows - merge code quality into CI and skip release on open PRs
- correct GPT-4.5 model status - it's a current preview model, not deprecated
- update tests to use correct GPT-4.5 model string (gpt-4.5-preview)
- implement parameter filtering for reasoning models (O-series)
- update README to clarify model enum usage and remove recommendation references
- remove duplicate release-workflow.sh script
- Merge pull request #29 from lucaromagnoli/feat/release-test
- Remove getRecommendedModel and related tests, clean up OpenAI model enum logic
- remove macOS from release builds for cost optimization
- use Python script for robust release notes extraction
- resolve awk syntax error in release notes generation
- improve release notes generation and changelog handling
- bump version to 1.0.10 and update changelog
- Merge pull request #27 from lucaromagnoli/feat/release_process
- Merge pull request #28 from lucaromagnoli/fix/pipeline-merge
- add interactive release workflow with changelog editing
- Merge pull request #26 from lucaromagnoli/feat/cost-opt
- optimize GitHub Actions costs by removing macOS from regular CI
- streamline release process to eliminate double PR workflow
- Merge pull request #23 from lucaromagnoli/improve-async
- Merge branch 'main' into release/v1.0.7
- Merge pull request #24 from lucaromagnoli/release/v1.0.7
- Merge pull request #25 from lucaromagnoli/cost-optimise-pipeline
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- add type-safe Model enum support with convenience methods
- modernize CMake for proper install and export
- Merge pull request #17 from lucaromagnoli/fix/llmrequest