
Commit 6c82ede

Deprecate Langchain::LLM::LlamaCpp class (#961)
1 parent 57689b7 commit 6c82ede

6 files changed: +3 additions, −30 deletions

.env.example

Lines changed: 0 additions & 3 deletions
@@ -12,9 +12,6 @@ GOOGLE_VERTEX_AI_PROJECT_ID=
 GOOGLE_CLOUD_CREDENTIALS=
 GOOGLE_GEMINI_API_KEY=
 HUGGING_FACE_API_KEY=
-LLAMACPP_MODEL_PATH=
-LLAMACPP_N_THREADS=
-LLAMACPP_N_GPU_LAYERS=
 MILVUS_URL=http://localhost:19530
 MISTRAL_AI_API_KEY=
 NEWS_API_KEY=

CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -14,6 +14,7 @@
 - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/939] Fix Langchain::Vectorsearch::Milvus initializer by passing :api_key
 - [BUGFIX] [https://github.com/patterns-ai-core/langchainrb/pull/953] Handle nil response in OpenAI LLM streaming
 - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/956] Deprecate `Langchain::Vectorsearch::Epsilla` class
+- [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/961] Deprecate `Langchain::LLM::LlamaCpp` class

 ## [0.19.4] - 2025-02-17
 - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/894] Tools can now output image_urls, and all tool output must be wrapped by a tool_response() method

README.md

Lines changed: 0 additions & 1 deletion
@@ -65,7 +65,6 @@ The `Langchain::LLM` module provides a unified interface for interacting with va
 - Google Gemini
 - Google Vertex AI
 - HuggingFace
-- LlamaCpp
 - Mistral AI
 - Ollama
 - OpenAI

examples/llama_cpp.rb

Lines changed: 0 additions & 25 deletions
This file was deleted.
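The deleted example is not shown in this view. Below is a minimal sketch of the usage pattern it covered, pieced together from the LlamaCpp constructor signature in the lib/langchain/llm/llama_cpp.rb diff further down and the LLAMACPP_* variables removed from .env.example; the prompt and surrounding details are illustrative, not the deleted file's exact contents:

require "langchain"

# The now-deprecated local llama.cpp adapter, configured from the env vars
# that this commit removes from .env.example.
llm = Langchain::LLM::LlamaCpp.new(
  model_path: ENV["LLAMACPP_MODEL_PATH"],
  n_gpu_layers: ENV["LLAMACPP_N_GPU_LAYERS"].to_i,
  n_threads: ENV["LLAMACPP_N_THREADS"].to_i
)

# Generate a completion against the locally loaded model.
response = llm.complete(prompt: "What is the meaning of life?")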

lib/langchain/llm/base.rb

Lines changed: 0 additions & 1 deletion
@@ -14,7 +14,6 @@ class ApiError < StandardError; end
 # - {Langchain::LLM::GoogleGemini}
 # - {Langchain::LLM::GoogleVertexAI}
 # - {Langchain::LLM::HuggingFace}
-# - {Langchain::LLM::LlamaCpp}
 # - {Langchain::LLM::OpenAI}
 # - {Langchain::LLM::Replicate}
 #

lib/langchain/llm/llama_cpp.rb

Lines changed: 2 additions & 0 deletions
@@ -23,6 +23,8 @@ class LlamaCpp < Base
   # @param n_threads [Integer] The CPU number of threads to use
   # @param seed [Integer] The seed to use
   def initialize(model_path:, n_gpu_layers: 1, n_ctx: 2048, n_threads: 1, seed: 0)
+    Langchain.logger.warn "DEPRECATED: `Langchain::LLM::LlamaCpp` is deprecated, and will be removed in the next major version. Please use `Langchain::LLM::Ollama` for self-hosted LLM inference."
+
     depends_on "llama_cpp"

     @model_path = model_path
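Since the new warning points users at Langchain::LLM::Ollama for self-hosted inference, here is a minimal migration sketch. It assumes an Ollama server running locally on the default port with a model already pulled; the model name, prompt, and default_options keys follow common langchainrb Ollama usage and are illustrative rather than taken from this commit:

require "langchain"

# Ollama talks to a local HTTP server instead of loading a GGUF file
# in-process the way LlamaCpp did.
llm = Langchain::LLM::Ollama.new(
  url: "http://localhost:11434",
  default_options: {completion_model: "llama3.1", chat_model: "llama3.1"}
)

# Same high-level interface as the other Langchain::LLM adapters.
response = llm.complete(prompt: "What is the meaning of life?")
puts response.completion

chat_response = llm.chat(messages: [{role: "user", content: "Say hello in Spanish"}])
puts chat_response.chat_completion

Existing LLAMACPP_* environment variables can simply be dropped; the Ollama adapter only needs a reachable server URL.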

0 commit comments
