diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md index d0000b5cd..69a703bff 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -25,7 +25,7 @@ If applicable, add screenshots to help explain your problem. - OS: [e.g. OS X, Linux, Ubuntu, Windows] - Ruby version [e.g. 3.1, 3.2, 3.3] -- Langchain.rb version [e.g. 0.13.0] +- LangChain.rb version [e.g. 0.13.0] **Additional context** -Add any other context about the problem here. \ No newline at end of file +Add any other context about the problem here. diff --git a/.standard.yml b/.standard.yml index 64cdfb364..35cec7507 100644 --- a/.standard.yml +++ b/.standard.yml @@ -2,5 +2,5 @@ ignore: - "**/*": - Style/ArgumentsForwarding -# Specify the minimum supported Ruby version supported by Langchain.rb. +# Specify the minimum supported Ruby version supported by LangChain.rb. ruby_version: 3.1 diff --git a/CHANGELOG.md b/CHANGELOG.md index 3bc613688..aaff358c9 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -10,6 +10,7 @@ - [SECURITY]: A change which fixes a security vulnerability. ## [Unreleased] +- [COMPAT] [https://github.com/patterns-ai-core/langchainrb/pull/1027] Rename `Langchain` to `LangChain`. - [COMPAT] [https://github.com/patterns-ai-core/langchainrb/pull/980] Suppress a Ruby 3.4 warning for URI parser. - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/997] Remove `Langchain::Vectorsearch::Epsilla` class - [BREAKING] [https://github.com/patterns-ai-core/langchainrb/pull/1003] Response classes are now namespaced under `Langchain::LLM::Response`, converted to Rails engine diff --git a/README.md b/README.md index e1475a872..7d961e58e 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -💎🔗 Langchain.rb +💎🔗 LangChain.rb --- ⚡ Building LLM-powered applications in Ruby ⚡ @@ -53,7 +53,7 @@ require "langchain" # Unified Interface for LLMs -The `Langchain::LLM` module provides a unified interface for interacting with various Large Language Model (LLM) providers. This abstraction allows you to easily switch between different LLM backends without changing your application code. +The `LangChain::LLM` module provides a unified interface for interacting with various Large Language Model (LLM) providers. This abstraction allows you to easily switch between different LLM backends without changing your application code. ## Supported LLM Providers @@ -71,7 +71,7 @@ The `Langchain::LLM` module provides a unified interface for interacting with va ## Usage -All LLM classes inherit from `Langchain::LLM::Base` and provide a consistent interface for common operations: +All LLM classes inherit from `LangChain::LLM::Base` and provide a consistent interface for common operations: 1. Generating embeddings 2. 
Generating prompt completions @@ -82,7 +82,7 @@ All LLM classes inherit from `Langchain::LLM::Base` and provide a consistent int Most LLM classes can be initialized with an API key and optional default options: ```ruby -llm = Langchain::LLM::OpenAI.new( +llm = LangChain::LLM::OpenAI.new( api_key: ENV["OPENAI_API_KEY"], default_options: { temperature: 0.7, chat_model: "gpt-4o" } ) @@ -159,13 +159,13 @@ Thanks to the unified interface, you can easily switch between different LLM pro ```ruby # Using Anthropic -anthropic_llm = Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) +anthropic_llm = LangChain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) # Using Google Gemini -gemini_llm = Langchain::LLM::GoogleGemini.new(api_key: ENV["GOOGLE_GEMINI_API_KEY"]) +gemini_llm = LangChain::LLM::GoogleGemini.new(api_key: ENV["GOOGLE_GEMINI_API_KEY"]) # Using OpenAI -openai_llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +openai_llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ``` ## Response Objects @@ -190,14 +190,14 @@ Each LLM method returns a response object that provides a consistent interface f Create a prompt with input variables: ```ruby -prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) +prompt = LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens." ``` Creating a PromptTemplate using just a prompt and no input_variables: ```ruby -prompt = Langchain::Prompt::PromptTemplate.from_template("Tell me a funny joke about chickens.") +prompt = LangChain::Prompt::PromptTemplate.from_template("Tell me a funny joke about chickens.") prompt.input_variables # [] prompt.format # "Tell me a funny joke about chickens." ``` @@ -211,7 +211,7 @@ prompt.save(file_path: "spec/fixtures/prompt/prompt_template.json") Loading a new prompt template using a JSON file: ```ruby -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") prompt.input_variables # ["adjective", "content"] ``` @@ -220,10 +220,10 @@ prompt.input_variables # ["adjective", "content"] Create a prompt with a few shot examples: ```ruby -prompt = Langchain::Prompt::FewShotPromptTemplate.new( +prompt = LangChain::Prompt::FewShotPromptTemplate.new( prefix: "Write antonyms for the following words.", suffix: "Input: {adjective}\nOutput:", - example_prompt: Langchain::Prompt::PromptTemplate.new( + example_prompt: LangChain::Prompt::PromptTemplate.new( input_variables: ["input", "output"], template: "Input: {input}\nOutput: {output}" ), @@ -257,14 +257,14 @@ prompt.save(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") Loading a new prompt template using a JSON file: ```ruby -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") prompt.prefix # "Write antonyms for the following words." 
``` Loading a new prompt template using a YAML file: ```ruby -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") prompt.input_variables #=> ["adjective", "content"] ``` @@ -314,8 +314,8 @@ json_schema = { required: ["name", "age", "interests"], additionalProperties: false } -parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) -prompt = Langchain::Prompt::PromptTemplate.new(template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"]) +parser = LangChain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) +prompt = LangChain::Prompt::PromptTemplate.new(template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"]) prompt_text = prompt.format(description: "Korean chemistry student", format_instructions: parser.get_format_instructions) # Generate details of a fictional character. # You must format your output as a JSON value that adheres to a given "JSON Schema" instance. @@ -325,7 +325,7 @@ prompt_text = prompt.format(description: "Korean chemistry student", format_inst Then parse the llm response: ```ruby -llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) llm_response = llm.chat(messages: [{role: "user", content: prompt_text}]).completion parser.parse(llm_response) # { @@ -346,8 +346,8 @@ If the parser fails to parse the LLM response, you can use the `OutputFixingPars ```ruby begin parser.parse(llm_response) -rescue Langchain::OutputParsers::OutputParserException => e - fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm( +rescue LangChain::OutputParsers::OutputParserException => e + fix_parser = LangChain::OutputParsers::OutputFixingParser.from_llm( llm: llm, parser: parser ) @@ -359,8 +359,8 @@ Alternatively, if you don't need to handle the `OutputParserException`, you can ```ruby # we already have the `OutputFixingParser`: -# parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) -fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm( +# parser = LangChain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) +fix_parser = LangChain::OutputParsers::OutputFixingParser.from_llm( llm: llm, parser: parser ) @@ -378,7 +378,7 @@ A typical RAG workflow follows the 3 steps below: Most common use-case for a RAG system is powering Q&A systems where users pose natural language questions and receive answers in natural language. ### Vector search databases -Langchain.rb provides a convenient unified interface on top of supported vectorsearch databases that make it easy to configure your index, add data, query and retrieve from it. +LangChain.rb provides a convenient unified interface on top of supported vectorsearch databases that make it easy to configure your index, add data, query and retrieve from it. 
#### Supported vector search databases and features: @@ -402,11 +402,11 @@ gem "weaviate-ruby", "~> 0.8.9" Choose and instantiate the LLM provider you'll be using to generate embeddings ```ruby -llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ``` ```ruby -client = Langchain::Vectorsearch::Weaviate.new( +client = LangChain::Vectorsearch::Weaviate.new( url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Documents", @@ -416,13 +416,13 @@ client = Langchain::Vectorsearch::Weaviate.new( You can instantiate any other supported vector search database: ```ruby -client = Langchain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"` -client = Langchain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"` -client = Langchain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.3"` -client = Langchain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 1.0"` -client = Langchain::Vectorsearch::Pgvector.new(...) # `gem "pgvector", "~> 0.2"` -client = Langchain::Vectorsearch::Qdrant.new(...) # `gem "qdrant-ruby", "~> 0.9.3"` -client = Langchain::Vectorsearch::Elasticsearch.new(...) # `gem "elasticsearch", "~> 8.2.0"` +client = LangChain::Vectorsearch::Chroma.new(...) # `gem "chroma-db", "~> 0.6.0"` +client = LangChain::Vectorsearch::Hnswlib.new(...) # `gem "hnswlib", "~> 0.8.1"` +client = LangChain::Vectorsearch::Milvus.new(...) # `gem "milvus", "~> 0.9.3"` +client = LangChain::Vectorsearch::Pinecone.new(...) # `gem "pinecone", "~> 1.0"` +client = LangChain::Vectorsearch::Pgvector.new(...) # `gem "pgvector", "~> 0.2"` +client = LangChain::Vectorsearch::Qdrant.new(...) # `gem "qdrant-ruby", "~> 0.9.3"` +client = LangChain::Vectorsearch::Elasticsearch.new(...) # `gem "elasticsearch", "~> 8.2.0"` ``` Create the default schema: @@ -442,9 +442,9 @@ client.add_texts( Or use the file parsers to load, parse and index data into your database: ```ruby -my_pdf = Langchain.root.join("path/to/my.pdf") -my_text = Langchain.root.join("path/to/my.txt") -my_docx = Langchain.root.join("path/to/my.docx") +my_pdf = LangChain.root.join("path/to/my.pdf") +my_text = LangChain.root.join("path/to/my.txt") +my_docx = LangChain.root.join("path/to/my.docx") client.add_data(paths: [my_pdf, my_text, my_docx]) ``` @@ -477,7 +477,7 @@ client.ask(question: "...") ``` ## Assistants -`Langchain::Assistant` is a powerful and flexible class that combines Large Language Models (LLMs), tools, and conversation management to create intelligent, interactive assistants. It's designed to handle complex conversations, execute tools, and provide coherent responses based on the context of the interaction. +`LangChain::Assistant` is a powerful and flexible class that combines Large Language Models (LLMs), tools, and conversation management to create intelligent, interactive assistants. It's designed to handle complex conversations, execute tools, and provide coherent responses based on the context of the interaction. 
### Features * Supports multiple LLM providers (OpenAI, Google Gemini, Anthropic, Mistral AI and open-source models via Ollama) @@ -488,11 +488,11 @@ client.ask(question: "...") ### Usage ```ruby -llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) -assistant = Langchain::Assistant.new( +llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +assistant = LangChain::Assistant.new( llm: llm, instructions: "You're a helpful AI assistant", - tools: [Langchain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] + tools: [LangChain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] ) # Add a user message and run the assistant @@ -511,10 +511,10 @@ messages = assistant.messages assistant.run(auto_tool_execution: true) # If you want to stream the response, you can add a response handler -assistant = Langchain::Assistant.new( +assistant = LangChain::Assistant.new( llm: llm, instructions: "You're a helpful AI assistant", - tools: [Langchain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] + tools: [LangChain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] ) do |response_chunk| # ...handle the response stream # print(response_chunk.inspect) @@ -548,22 +548,22 @@ assistant.tool_execution_callback = -> (tool_call_id, tool_name, method_name, to * `messages`: Returns a list of ongoing messages ### Built-in Tools 🛠️ -* `Langchain::Tool::Calculator`: Useful for evaluating math expressions. Requires `gem "eqn"`. -* `Langchain::Tool::Database`: Connect your SQL database. Requires `gem "sequel"`. -* `Langchain::Tool::FileSystem`: Interact with the file system (read & write). -* `Langchain::Tool::GoogleSearch`: Wrapper around SerpApi's Google Search API. Requires `gem "google_search_results"`. -* `Langchain::Tool::NewsRetriever`: A wrapper around [NewsApi.org](https://newsapi.org) to fetch news articles. -* `Langchain::Tool::RubyCodeInterpreter`: Useful for evaluating generated Ruby code. Requires `gem "safe_ruby"` (In need of a better solution). -* `Langchain::Tool::Tavily`: A wrapper around [Tavily AI](https://tavily.com). -* `Langchain::Tool::Vectorsearch`: A wrapper for vector search classes. -* `Langchain::Tool::Weather`: Calls [Open Weather API](https://home.openweathermap.org) to retrieve the current weather. -* `Langchain::Tool::Wikipedia`: Calls Wikipedia API. Requires `gem "wikipedia-client"`. +* `LangChain::Tool::Calculator`: Useful for evaluating math expressions. Requires `gem "eqn"`. +* `LangChain::Tool::Database`: Connect your SQL database. Requires `gem "sequel"`. +* `LangChain::Tool::FileSystem`: Interact with the file system (read & write). +* `LangChain::Tool::GoogleSearch`: Wrapper around SerpApi's Google Search API. Requires `gem "google_search_results"`. +* `LangChain::Tool::NewsRetriever`: A wrapper around [NewsApi.org](https://newsapi.org) to fetch news articles. +* `LangChain::Tool::RubyCodeInterpreter`: Useful for evaluating generated Ruby code. Requires `gem "safe_ruby"` (In need of a better solution). +* `LangChain::Tool::Tavily`: A wrapper around [Tavily AI](https://tavily.com). +* `LangChain::Tool::Vectorsearch`: A wrapper for vector search classes. +* `LangChain::Tool::Weather`: Calls [Open Weather API](https://home.openweathermap.org) to retrieve the current weather. +* `LangChain::Tool::Wikipedia`: Calls Wikipedia API. Requires `gem "wikipedia-client"`. 
### Creating custom Tools -The Langchain::Assistant can be easily extended with custom tools by creating classes that `extend Langchain::ToolDefinition` module and implement required methods. +The LangChain::Assistant can be easily extended with custom tools by creating classes that `extend LangChain::ToolDefinition` module and implement required methods. ```ruby class MovieInfoTool - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :search_movie, description: "MovieInfoTool: Search for a movie by title" do property :query, type: "string", description: "The movie title to search for", required: true @@ -591,7 +591,7 @@ end ```ruby movie_tool = MovieInfoTool.new(api_key: "...") -assistant = Langchain::Assistant.new( +assistant = LangChain::Assistant.new( llm: llm, instructions: "You're a helpful AI assistant that can provide movie information", tools: [movie_tool] @@ -607,8 +607,8 @@ The assistant includes error handling for invalid inputs, unsupported LLM types, ### Demos 1. [Building an AI Assistant that operates a simulated E-commerce Store](https://www.loom.com/share/83aa4fd8dccb492aad4ca95da40ed0b2) -2. [New Langchain.rb Assistants interface](https://www.loom.com/share/e883a4a49b8746c1b0acf9d58cf6da36) -3. [Langchain.rb Assistant demo with NewsRetriever and function calling on Gemini](https://youtu.be/-ieyahrpDpM&t=1477s) - [code](https://github.com/palladius/gemini-news-crawler) +2. [New LangChain.rb Assistants interface](https://www.loom.com/share/e883a4a49b8746c1b0acf9d58cf6da36) +3. [LangChain.rb Assistant demo with NewsRetriever and function calling on Gemini](https://youtu.be/-ieyahrpDpM&t=1477s) - [code](https://github.com/palladius/gemini-news-crawler) ## Evaluations (Evals) The Evaluations module is a collection of tools that can be used to evaluate and track the performance of the output products by LLM and your RAG (Retrieval Augmented Generation) pipelines. @@ -620,8 +620,8 @@ Ragas helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. Th * Answer Relevance - the generated answer addresses the actual question that was provided. ```ruby -# We recommend using Langchain::LLM::OpenAI as your llm for Ragas -ragas = Langchain::Evals::Ragas::Main.new(llm: llm) +# We recommend using LangChain::LLM::OpenAI as your llm for Ragas +ragas = LangChain::Evals::Ragas::Main.new(llm: llm) # The answer that the LLM generated # The question (or the original prompt) that was asked @@ -641,18 +641,18 @@ Additional examples available: [/examples](https://github.com/patterns-ai-core/l ## Logging -Langchain.rb uses the standard Ruby [Logger](https://ruby-doc.org/stdlib-2.4.0/libdoc/logger/rdoc/Logger.html) mechanism and defaults to same `level` value (currently `Logger::DEBUG`). +LangChain.rb uses the standard Ruby [Logger](https://ruby-doc.org/stdlib-2.4.0/libdoc/logger/rdoc/Logger.html) mechanism and defaults to same `level` value (currently `Logger::DEBUG`). To show all log messages: ```ruby -Langchain.logger.level = Logger::DEBUG +LangChain.logger.level = Logger::DEBUG ``` The logger logs to `STDOUT` by default. In order to configure the log destination (ie. log to a file) do: ```ruby -Langchain.logger = Logger.new("path/to/file", **Langchain::LOGGER_OPTIONS) +LangChain.logger = Logger.new("path/to/file", **LangChain::LOGGER_OPTIONS) ``` ## Problems @@ -670,7 +670,7 @@ gem install unicode -- --with-cflags="-Wno-incompatible-function-pointer-types" 5. 
Optionally, install lefthook git hooks for pre-commit to auto lint: `gem install lefthook && lefthook install -f` ## Discord -Join us in the [Langchain.rb](https://discord.gg/WDARp7J2n8) Discord server. +Join us in the [LangChain.rb](https://discord.gg/WDARp7J2n8) Discord server. ## Star History diff --git a/config/routes.rb b/config/routes.rb index cfe336ce9..0735cb5cf 100644 --- a/config/routes.rb +++ b/config/routes.rb @@ -1,2 +1,2 @@ -Langchain::Engine.routes.draw do +LangChain::Engine.routes.draw do end diff --git a/examples/assistant_chat.rb b/examples/assistant_chat.rb index 681c687fd..e2d6804d4 100644 --- a/examples/assistant_chat.rb +++ b/examples/assistant_chat.rb @@ -6,12 +6,12 @@ # gem install reline # or add `gem "reline"` to your Gemfile -openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) -assistant = Langchain::Assistant.new( +openai = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +assistant = LangChain::Assistant.new( llm: openai, instructions: "You are a Meteorologist Assistant that is able to pull the weather for any location", tools: [ - Langchain::Tool::Weather.new(api_key: ENV["OPEN_WEATHER_API_KEY"]) + LangChain::Tool::Weather.new(api_key: ENV["OPEN_WEATHER_API_KEY"]) ] ) diff --git a/examples/create_and_manage_few_shot_prompt_templates.rb b/examples/create_and_manage_few_shot_prompt_templates.rb index 0dfe675f0..c8264ed37 100644 --- a/examples/create_and_manage_few_shot_prompt_templates.rb +++ b/examples/create_and_manage_few_shot_prompt_templates.rb @@ -1,10 +1,10 @@ require "langchain" # Create a prompt with a few shot examples -prompt = Langchain::Prompt::FewShotPromptTemplate.new( +prompt = LangChain::Prompt::FewShotPromptTemplate.new( prefix: "Write antonyms for the following words.", suffix: "Input: {adjective}\nOutput:", - example_prompt: Langchain::Prompt::PromptTemplate.new( + example_prompt: LangChain::Prompt::PromptTemplate.new( input_variables: ["input", "output"], template: "Input: {input}\nOutput: {output}" ), @@ -32,5 +32,5 @@ prompt.save(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") # Loading a new prompt template using a JSON file -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") prompt.prefix # "Write antonyms for the following words." diff --git a/examples/create_and_manage_prompt_templates.rb b/examples/create_and_manage_prompt_templates.rb index d564113e7..4b9428dc2 100644 --- a/examples/create_and_manage_prompt_templates.rb +++ b/examples/create_and_manage_prompt_templates.rb @@ -1,15 +1,15 @@ require "langchain" # Create a prompt with one input variable -prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) +prompt = LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) prompt.format(adjective: "funny") # "Tell me a funny joke." # Create a prompt with multiple input variables -prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) +prompt = LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens." 
# Creating a PromptTemplate using just a prompt and no input_variables -prompt = Langchain::Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.") +prompt = LangChain::Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.") prompt.input_variables # ["adjective", "content"] prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens." @@ -17,9 +17,9 @@ prompt.save(file_path: "spec/fixtures/prompt/prompt_template.json") # Loading a new prompt template using a JSON file -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") prompt.input_variables # ["adjective", "content"] # Loading a new prompt template using a YAML file -prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") +prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") prompt.input_variables # ["adjective", "content"] diff --git a/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb b/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb index 9a3f81d49..20dc9c333 100644 --- a/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb +++ b/examples/create_and_manage_prompt_templates_using_structured_output_parser.rb @@ -37,8 +37,8 @@ required: ["name", "age", "interests"], additionalProperties: false } -parser = Langchain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) -prompt = Langchain::Prompt::PromptTemplate.new(template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"]) +parser = LangChain::OutputParsers::StructuredOutputParser.from_json_schema(json_schema) +prompt = LangChain::Prompt::PromptTemplate.new(template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"]) prompt.format(description: "Korean chemistry student", format_instructions: parser.get_format_instructions) # Generate details of a fictional character. # You must format your output as a JSON value that adheres to a given "JSON Schema" instance. 
@@ -58,7 +58,7 @@ # Character description: Korean chemistry student -llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) llm_response = llm.chat( messages: [{ role: "user", @@ -91,7 +91,7 @@ # ``` # RESPONSE -fix_parser = Langchain::OutputParsers::OutputFixingParser.from_llm( +fix_parser = LangChain::OutputParsers::OutputFixingParser.from_llm( llm: llm, parser: parser ) diff --git a/examples/ollama_inquire_about_image.rb b/examples/ollama_inquire_about_image.rb index c11d440f8..cf81cc4b1 100644 --- a/examples/ollama_inquire_about_image.rb +++ b/examples/ollama_inquire_about_image.rb @@ -1,9 +1,9 @@ require_relative "../lib/langchain" require "faraday" -llm = Langchain::LLM::Ollama.new(default_options: {chat_model: "llava"}) +llm = LangChain::LLM::Ollama.new(default_options: {chat_model: "llava"}) -assistant = Langchain::Assistant.new(llm: llm) +assistant = LangChain::Assistant.new(llm: llm) response = assistant.add_message_and_run( image_url: "https://gist.githubusercontent.com/andreibondarev/b6f444194d0ee7ab7302a4d83184e53e/raw/099e10af2d84638211e25866f71afa7308226365/sf-cable-car.jpg", diff --git a/examples/openai_qdrant_function_calls.rb b/examples/openai_qdrant_function_calls.rb index 684603313..18a606e15 100644 --- a/examples/openai_qdrant_function_calls.rb +++ b/examples/openai_qdrant_function_calls.rb @@ -18,14 +18,14 @@ } ] -openai = Langchain::LLM::OpenAI.new( +openai = LangChain::LLM::OpenAI.new( api_key: ENV["OPENAI_API_KEY"], default_options: { chat_model: "gpt-3.5-turbo-16k" } ) -client = Langchain::Vectorsearch::Qdrant.new( +client = LangChain::Vectorsearch::Qdrant.new( url: ENV["QDRANT_URL"], api_key: ENV["QDRANT_API_KEY"], index_name: ENV["QDRANT_INDEX"], diff --git a/examples/pdf_store_and_query_with_chroma.rb b/examples/pdf_store_and_query_with_chroma.rb index 02571c5a4..c3eef9650 100644 --- a/examples/pdf_store_and_query_with_chroma.rb +++ b/examples/pdf_store_and_query_with_chroma.rb @@ -5,10 +5,10 @@ # or add `gem "chroma-db", "~> 0.6.0"` to your Gemfile # Instantiate the Chroma client -chroma = Langchain::Vectorsearch::Chroma.new( +chroma = LangChain::Vectorsearch::Chroma.new( url: ENV["CHROMA_URL"], index_name: "documents", - llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) + llm: LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ) # Create the default schema. @@ -20,9 +20,9 @@ # Set up an array of PDF and TXT documents docs = [ - Langchain.root.join("/docs/document.pdf"), - Langchain.root.join("/docs/document.txt"), - Langchain.root.join("/docs/document.docx") + LangChain.root.join("/docs/document.pdf"), + LangChain.root.join("/docs/document.txt"), + LangChain.root.join("/docs/document.docx") ] # Add data to the index. Weaviate will use OpenAI to generate embeddings behind the scene. 
diff --git a/examples/store_and_query_with_milvus.rb b/examples/store_and_query_with_milvus.rb index 5e7c7d2f0..b03b3a779 100644 --- a/examples/store_and_query_with_milvus.rb +++ b/examples/store_and_query_with_milvus.rb @@ -4,10 +4,10 @@ # or add `gem "milvus"` to your Gemfile # Instantiate the OpenAI client -openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +openai = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) # Instantiate the Milvus client -milvus = Langchain::Vectorsearch::Milvus.new( +milvus = LangChain::Vectorsearch::Milvus.new( url: ENV["MILVUS_URL"], index_name: "recipes", llm: openai diff --git a/examples/store_and_query_with_pgvector_using_metadata.rb b/examples/store_and_query_with_pgvector_using_metadata.rb index 0f526b9ef..66c0a2638 100644 --- a/examples/store_and_query_with_pgvector_using_metadata.rb +++ b/examples/store_and_query_with_pgvector_using_metadata.rb @@ -5,10 +5,10 @@ require "ruby/openai" # Initialize the Pgvector client -pgvector = Langchain::Vectorsearch::Pgvector.new( +pgvector = LangChain::Vectorsearch::Pgvector.new( url: ENV["POSTGRES_URL"], index_name: "documents", - llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) + llm: LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ) # Create the default schema if it doesn't exist diff --git a/examples/store_and_query_with_pinecone.rb b/examples/store_and_query_with_pinecone.rb index 78fb1b489..30bd4e8d7 100644 --- a/examples/store_and_query_with_pinecone.rb +++ b/examples/store_and_query_with_pinecone.rb @@ -5,11 +5,11 @@ # or add `gem "pinecone"` to your Gemfile # Instantiate the Pinecone client -pinecone = Langchain::Vectorsearch::Pinecone.new( +pinecone = LangChain::Vectorsearch::Pinecone.new( environment: ENV["PINECONE_ENVIRONMENT"], api_key: ENV["PINECONE_API_KEY"], index_name: "recipes", - llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) + llm: LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ) # Create the default schema. @@ -39,7 +39,7 @@ ) # Generate an embedding and search by it -openai = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) +openai = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) embedding = openai.embed(text: "veggie").embedding pinecone.similarity_search_by_vector( diff --git a/examples/store_and_query_with_qdrant.rb b/examples/store_and_query_with_qdrant.rb index 928aef4f6..6cc74d506 100644 --- a/examples/store_and_query_with_qdrant.rb +++ b/examples/store_and_query_with_qdrant.rb @@ -5,11 +5,11 @@ # or add `gem "qdrant-ruby"` to your Gemfile # Instantiate the Qdrant client -qdrant = Langchain::Vectorsearch::Qdrant.new( +qdrant = LangChain::Vectorsearch::Qdrant.new( url: ENV["QDRANT_URL"], api_key: ENV["QDRANT_API_KEY"], index_name: "recipes", - llm: Langchain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"]) + llm: LangChain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"]) ) # Create the default schema. 
diff --git a/examples/store_and_query_with_weaviate.rb b/examples/store_and_query_with_weaviate.rb index 49e77beca..a598c00e2 100644 --- a/examples/store_and_query_with_weaviate.rb +++ b/examples/store_and_query_with_weaviate.rb @@ -5,11 +5,11 @@ # or add `gem "weaviate-ruby"` to your Gemfile # Instantiate the Weaviate client -weaviate = Langchain::Vectorsearch::Weaviate.new( +weaviate = LangChain::Vectorsearch::Weaviate.new( url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Recipes", - llm: Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) + llm: LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) ) # Create the default schema. A text field `content` will be used. diff --git a/langchain.gemspec b/langchain.gemspec index 0309b8dbe..2763d544b 100644 --- a/langchain.gemspec +++ b/langchain.gemspec @@ -4,12 +4,12 @@ require_relative "lib/langchain/version" Gem::Specification.new do |spec| spec.name = "langchainrb" - spec.version = Langchain::VERSION + spec.version = LangChain::VERSION spec.authors = ["Andrei Bondarev"] spec.email = ["andrei.bondarev13@gmail.com"] - spec.summary = "Build LLM-backed Ruby applications with Ruby's Langchain.rb" - spec.description = "Build LLM-backed Ruby applications with Ruby's Langchain.rb" + spec.summary = "Build LLM-backed Ruby applications with Ruby's LangChain.rb" + spec.description = "Build LLM-backed Ruby applications with Ruby's LangChain.rb" spec.homepage = "https://rubygems.org/gems/langchainrb" spec.license = "MIT" spec.required_ruby_version = ">= 3.1.0" diff --git a/lib/langchain.rb b/lib/langchain.rb index d03d8388e..0168efe55 100644 --- a/lib/langchain.rb +++ b/lib/langchain.rb @@ -2,7 +2,7 @@ require "langchain/version" require "langchain/engine" -module Langchain +module LangChain class << self # @return [Logger] attr_accessor :logger @@ -43,7 +43,7 @@ def colorize_logger_msg(msg, severity) end LOGGER_OPTIONS = { - progname: "Langchain.rb", + progname: "LangChain.rb", formatter: ->(severity, time, progname, msg) do Logger::Formatter.new.call( @@ -58,3 +58,12 @@ def colorize_logger_msg(msg, severity) self.logger ||= ::Logger.new($stdout, **LOGGER_OPTIONS) @root = Pathname.new(__dir__) end + +module Langchain + def self.const_missing(name) + message = LangChain::Colorizer.yellow("`Langchain` is deprecated. Use `LangChain` instead.") + warn(message, uplevel: 1) + + LangChain.const_get(name) + end +end diff --git a/lib/langchain/application_record.rb b/lib/langchain/application_record.rb index bbe69d186..3b941d748 100644 --- a/lib/langchain/application_record.rb +++ b/lib/langchain/application_record.rb @@ -1,4 +1,4 @@ -module Langchain +module LangChain class ApplicationRecord < ActiveRecord::Base self.abstract_class = true end diff --git a/lib/langchain/assistant.rb b/lib/langchain/assistant.rb index 92ec49139..8bfa01f65 100644 --- a/lib/langchain/assistant.rb +++ b/lib/langchain/assistant.rb @@ -1,15 +1,15 @@ # frozen_string_literal: true -module Langchain +module LangChain # Assistants are Agent-like objects that leverage helpful instructions, LLMs, tools and knowledge to respond to user queries. # Assistants can be configured with an LLM of your choice, any vector search database and easily extended with additional tools. 
# # Usage: - # llm = Langchain::LLM::GoogleGemini.new(api_key: ENV["GOOGLE_GEMINI_API_KEY"]) - # assistant = Langchain::Assistant.new( + # llm = LangChain::LLM::GoogleGemini.new(api_key: ENV["GOOGLE_GEMINI_API_KEY"]) + # assistant = LangChain::Assistant.new( # llm: llm, # instructions: "You're a News Reporter AI", - # tools: [Langchain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] + # tools: [LangChain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"])] # ) class Assistant attr_reader :llm, @@ -29,12 +29,12 @@ class Assistant # Create a new assistant # - # @param llm [Langchain::LLM::Base] LLM instance that the assistant will use - # @param tools [Array] Tools that the assistant has access to + # @param llm [LangChain::LLM::Base] LLM instance that the assistant will use + # @param tools [Array] Tools that the assistant has access to # @param instructions [String] The system instructions # @param tool_choice [String] Specify how tools should be selected. Options: "auto", "any", "none", or # @param parallel_tool_calls [Boolean] Whether or not to run tools in parallel - # @param messages [Array] The messages + # @param messages [Array] The messages # @param add_message_callback [Proc] A callback function (Proc or lambda) that is called when any message is added to the conversation # @param tool_execution_callback [Proc] A callback function (Proc or lambda) that is called right before a tool function is executed def initialize( @@ -49,8 +49,8 @@ def initialize( tool_execution_callback: nil, &block ) - unless tools.is_a?(Array) && tools.all? { |tool| tool.class.singleton_class.included_modules.include?(Langchain::ToolDefinition) } - raise ArgumentError, "Tools must be an array of objects extending Langchain::ToolDefinition" + unless tools.is_a?(Array) && tools.all? { |tool| tool.class.singleton_class.included_modules.include?(LangChain::ToolDefinition) } + raise ArgumentError, "Tools must be an array of objects extending LangChain::ToolDefinition" end @llm = llm @@ -79,7 +79,7 @@ def initialize( # @param image_url [String] The URL of the image to include in the message # @param tool_calls [Array] The tool calls to include in the message # @param tool_call_id [String] The ID of the tool call to include in the message - # @return [Array] The messages + # @return [Array] The messages def add_message(role: "user", content: nil, image_url: nil, tool_calls: [], tool_call_id: nil) message = build_message(role: role, content: content, image_url: image_url, tool_calls: tool_calls, tool_call_id: tool_call_id) @@ -110,10 +110,10 @@ def prompt_of_concatenated_messages # Set multiple messages # - # @param messages [Array] The messages to set - # @return [Array] The messages + # @param messages [Array] The messages to set + # @return [Array] The messages def messages=(messages) - raise ArgumentError, "messages array must only contain Langchain::Message instance(s)" unless messages.is_a?(Array) && messages.all? { |m| m.is_a?(Messages::Base) } + raise ArgumentError, "messages array must only contain LangChain::Message instance(s)" unless messages.is_a?(Array) && messages.all? 
{ |m| m.is_a?(Messages::Base) } @messages = messages end @@ -121,7 +121,7 @@ def messages=(messages) # Add multiple messages # # @param messages [Array] The messages to add - # @return [Array] The messages + # @return [Array] The messages def add_messages(messages:) messages.each do |message_hash| add_message(**message_hash.slice(:content, :role, :tool_calls, :tool_call_id)) @@ -131,10 +131,10 @@ def add_messages(messages:) # Run the assistant # # @param auto_tool_execution [Boolean] Whether or not to automatically run tools - # @return [Array] The messages + # @return [Array] The messages def run(auto_tool_execution: false) if messages.empty? - Langchain.logger.warn("#{self.class} - No messages to process") + LangChain.logger.warn("#{self.class} - No messages to process") @state = :completed return end @@ -147,7 +147,7 @@ def run(auto_tool_execution: false) # Run the assistant with automatic tool execution # - # @return [Array] The messages + # @return [Array] The messages def run! run(auto_tool_execution: true) end @@ -156,7 +156,7 @@ def run! # # @param content [String] The content of the message # @param auto_tool_execution [Boolean] Whether or not to automatically run tools - # @return [Array] The messages + # @return [Array] The messages def add_message_and_run(content: nil, image_url: nil, auto_tool_execution: false) add_message(content: content, image_url: image_url, role: "user") run(auto_tool_execution: auto_tool_execution) @@ -165,7 +165,7 @@ def add_message_and_run(content: nil, image_url: nil, auto_tool_execution: false # Add a user message and run the assistant with automatic tool execution # # @param content [String] The content of the message - # @return [Array] The messages + # @return [Array] The messages def add_message_and_run!(content: nil, image_url: nil) add_message_and_run(content: content, image_url: image_url, auto_tool_execution: true) end @@ -174,7 +174,7 @@ def add_message_and_run!(content: nil, image_url: nil) # # @param tool_call_id [String] The ID of the tool call to submit output for # @param output [String] The output of the tool - # @return [Array] The messages + # @return [Array] The messages def submit_tool_output(tool_call_id:, output:) # TODO: Validate that `tool_call_id` is valid by scanning messages and checking if this tool call ID was invoked add_message(role: @llm_adapter.tool_role, content: output, tool_call_id: tool_call_id) @@ -191,7 +191,7 @@ def clear_messages! # Set new instructions # # @param new_instructions [String] New instructions that will be set as a system message - # @return [Array] The messages + # @return [Array] The messages def instructions=(new_instructions) @instructions = new_instructions @@ -215,7 +215,7 @@ def tool_choice=(new_tool_choice) # Replace old system message with new one # # @param content [String] New system message content - # @return [Array] The messages + # @return [Array] The messages def replace_system_message!(content:) messages.delete_if(&:system?) return if content.nil? 
@@ -277,7 +277,7 @@ def process_latest_message # # @return [Symbol] The completed state def handle_system_message - Langchain.logger.warn("#{self.class} - At least one user message is required after a system message") + LangChain.logger.warn("#{self.class} - At least one user message is required after a system message") :completed end @@ -292,7 +292,7 @@ def handle_llm_message # # @return [Symbol] The failed state def handle_unexpected_message - Langchain.logger.error("#{self.class} - Unexpected message role encountered: #{messages.last.standard_role}") + LangChain.logger.error("#{self.class} - Unexpected message role encountered: #{messages.last.standard_role}") :failed end @@ -316,7 +316,7 @@ def set_state_for(response:) elsif response.completion # Currently only used by Ollama :completed else - Langchain.logger.error("#{self.class} - LLM response does not contain tool calls, chat or completion response") + LangChain.logger.error("#{self.class} - LLM response does not contain tool calls, chat or completion response") :failed end end @@ -328,15 +328,15 @@ def execute_tools run_tools(messages.last.tool_calls) :in_progress rescue => e - Langchain.logger.error("#{self.class} - Error running tools: #{e.message}; #{e.backtrace.join('\n')}") + LangChain.logger.error("#{self.class} - Error running tools: #{e.message}; #{e.backtrace.join('\n')}") :failed end # Call to the LLM#chat() method # - # @return [Langchain::LLM::BaseResponse] The LLM response object + # @return [LangChain::LLM::BaseResponse] The LLM response object def chat_with_llm - Langchain.logger.debug("#{self.class} - Sending a call to #{llm.class}") + LangChain.logger.debug("#{self.class} - Sending a call to #{llm.class}") params = @llm_adapter.build_chat_params( instructions: @instructions, @@ -389,7 +389,7 @@ def run_tool(tool_call) # @param image_url [String] The URL of the image to include in the message # @param tool_calls [Array] The tool calls to include in the message # @param tool_call_id [String] The ID of the tool call to include in the message - # @return [Langchain::Message] The Message object + # @return [LangChain::Message] The Message object def build_message(role:, content: nil, image_url: nil, tool_calls: [], tool_call_id: nil) @llm_adapter.build_message(role: role, content: content, image_url: image_url, tool_calls: tool_calls, tool_call_id: tool_call_id) end diff --git a/lib/langchain/assistant/llm/adapter.rb b/lib/langchain/assistant/llm/adapter.rb index 6a2e969f9..bc2da98f3 100644 --- a/lib/langchain/assistant/llm/adapter.rb +++ b/lib/langchain/assistant/llm/adapter.rb @@ -1,22 +1,22 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM # TODO: Fix the message truncation when context window is exceeded class Adapter def self.build(llm) - if llm.is_a?(Langchain::LLM::Anthropic) + if llm.is_a?(LangChain::LLM::Anthropic) LLM::Adapters::Anthropic.new - elsif llm.is_a?(Langchain::LLM::AwsBedrock) && llm.defaults[:chat_model].include?("anthropic") + elsif llm.is_a?(LangChain::LLM::AwsBedrock) && llm.defaults[:chat_model].include?("anthropic") LLM::Adapters::AwsBedrockAnthropic.new - elsif llm.is_a?(Langchain::LLM::GoogleGemini) || llm.is_a?(Langchain::LLM::GoogleVertexAI) + elsif llm.is_a?(LangChain::LLM::GoogleGemini) || llm.is_a?(LangChain::LLM::GoogleVertexAI) LLM::Adapters::GoogleGemini.new - elsif llm.is_a?(Langchain::LLM::MistralAI) + elsif llm.is_a?(LangChain::LLM::MistralAI) LLM::Adapters::MistralAI.new - elsif llm.is_a?(Langchain::LLM::Ollama) + elsif 
llm.is_a?(LangChain::LLM::Ollama) LLM::Adapters::Ollama.new - elsif llm.is_a?(Langchain::LLM::OpenAI) + elsif llm.is_a?(LangChain::LLM::OpenAI) LLM::Adapters::OpenAI.new else raise ArgumentError, "Unsupported LLM type: #{llm.class}" diff --git a/lib/langchain/assistant/llm/adapters/anthropic.rb b/lib/langchain/assistant/llm/adapters/anthropic.rb index 9ecf10004..9584b623a 100644 --- a/lib/langchain/assistant/llm/adapters/anthropic.rb +++ b/lib/langchain/assistant/llm/adapters/anthropic.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters diff --git a/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic.rb b/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic.rb index aab32378d..22cb6b495 100644 --- a/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic.rb +++ b/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters @@ -12,7 +12,7 @@ class AwsBedrockAnthropic < Anthropic # @return [Hash] def build_tool_choice(choice, parallel_tool_calls) # Aws Bedrock hosted Anthropic does not support parallel tool calls - Langchain.logger.warn "WARNING: parallel_tool_calls is not supported by AWS Bedrock Anthropic currently" if parallel_tool_calls + LangChain.logger.warn "WARNING: parallel_tool_calls is not supported by AWS Bedrock Anthropic currently" if parallel_tool_calls tool_choice_object = {} diff --git a/lib/langchain/assistant/llm/adapters/base.rb b/lib/langchain/assistant/llm/adapters/base.rb index d74fb1d7b..e884af1c6 100644 --- a/lib/langchain/assistant/llm/adapters/base.rb +++ b/lib/langchain/assistant/llm/adapters/base.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters diff --git a/lib/langchain/assistant/llm/adapters/google_gemini.rb b/lib/langchain/assistant/llm/adapters/google_gemini.rb index de0ee6f93..76ac18850 100644 --- a/lib/langchain/assistant/llm/adapters/google_gemini.rb +++ b/lib/langchain/assistant/llm/adapters/google_gemini.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters @@ -20,7 +20,7 @@ def build_chat_params( tool_choice:, parallel_tool_calls: ) - Langchain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Google Gemini currently" if parallel_tool_calls + LangChain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Google Gemini currently" if parallel_tool_calls params = {messages: messages} if tools.any? 
@@ -40,7 +40,7 @@ def build_chat_params( # @param tool_call_id [String] The tool call ID # @return [Messages::GoogleGeminiMessage] The Google Gemini message def build_message(role:, content: nil, image_url: nil, tool_calls: [], tool_call_id: nil) - Langchain.logger.warn "Image URL is not supported by Google Gemini" if image_url + LangChain.logger.warn "Image URL is not supported by Google Gemini" if image_url Messages::GoogleGeminiMessage.new(role: role, content: content, tool_calls: tool_calls, tool_call_id: tool_call_id) end @@ -59,7 +59,7 @@ def extract_tool_call_args(tool_call:) # Build the tools for the Google Gemini LLM # - # @param tools [Array] The tools + # @param tools [Array] The tools # @return [Array] The tools in Google Gemini format def build_tools(tools) tools.map { |tool| tool.class.function_schemas.to_google_gemini_format }.flatten diff --git a/lib/langchain/assistant/llm/adapters/mistralai.rb b/lib/langchain/assistant/llm/adapters/mistralai.rb index 675824ae9..6b7589f6d 100644 --- a/lib/langchain/assistant/llm/adapters/mistralai.rb +++ b/lib/langchain/assistant/llm/adapters/mistralai.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters @@ -20,7 +20,7 @@ def build_chat_params( tool_choice:, parallel_tool_calls: ) - Langchain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Mistral AI currently" if parallel_tool_calls + LangChain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Mistral AI currently" if parallel_tool_calls params = {messages: messages} if tools.any? @@ -54,7 +54,7 @@ def extract_tool_call_args(tool_call:) tool_arguments = tool_call.dig("function", "arguments") tool_arguments = if tool_arguments.is_a?(Hash) - Langchain::Utils::HashTransformer.symbolize_keys(tool_arguments) + LangChain::Utils::HashTransformer.symbolize_keys(tool_arguments) else JSON.parse(tool_arguments, symbolize_names: true) end diff --git a/lib/langchain/assistant/llm/adapters/ollama.rb b/lib/langchain/assistant/llm/adapters/ollama.rb index 8e1b3a812..2ed206ae3 100644 --- a/lib/langchain/assistant/llm/adapters/ollama.rb +++ b/lib/langchain/assistant/llm/adapters/ollama.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters @@ -20,8 +20,8 @@ def build_chat_params( tool_choice:, parallel_tool_calls: ) - Langchain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Ollama currently" if parallel_tool_calls - Langchain.logger.warn "WARNING: `tool_choice:` is not supported by Ollama currently" if tool_choice + LangChain.logger.warn "WARNING: `parallel_tool_calls:` is not supported by Ollama currently" if parallel_tool_calls + LangChain.logger.warn "WARNING: `tool_choice:` is not supported by Ollama currently" if tool_choice params = {messages: messages} if tools.any? 
@@ -54,7 +54,7 @@ def extract_tool_call_args(tool_call:) tool_arguments = tool_call.dig("function", "arguments") tool_arguments = if tool_arguments.is_a?(Hash) - Langchain::Utils::HashTransformer.symbolize_keys(tool_arguments) + LangChain::Utils::HashTransformer.symbolize_keys(tool_arguments) else JSON.parse(tool_arguments, symbolize_names: true) end diff --git a/lib/langchain/assistant/llm/adapters/openai.rb b/lib/langchain/assistant/llm/adapters/openai.rb index 67b734bbb..4979e9d24 100644 --- a/lib/langchain/assistant/llm/adapters/openai.rb +++ b/lib/langchain/assistant/llm/adapters/openai.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module LLM module Adapters @@ -55,7 +55,7 @@ def extract_tool_call_args(tool_call:) tool_arguments = tool_call.dig("function", "arguments") tool_arguments = if tool_arguments.is_a?(Hash) - Langchain::Utils::HashTransformer.symbolize_keys(tool_arguments) + LangChain::Utils::HashTransformer.symbolize_keys(tool_arguments) else JSON.parse(tool_arguments, symbolize_names: true) end diff --git a/lib/langchain/assistant/messages/anthropic_message.rb b/lib/langchain/assistant/messages/anthropic_message.rb index 70f38209e..954337ef2 100644 --- a/lib/langchain/assistant/messages/anthropic_message.rb +++ b/lib/langchain/assistant/messages/anthropic_message.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class AnthropicMessage < Base diff --git a/lib/langchain/assistant/messages/base.rb b/lib/langchain/assistant/messages/base.rb index da28248f7..cc287c56b 100644 --- a/lib/langchain/assistant/messages/base.rb +++ b/lib/langchain/assistant/messages/base.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class Base diff --git a/lib/langchain/assistant/messages/google_gemini_message.rb b/lib/langchain/assistant/messages/google_gemini_message.rb index 29b09363a..ab8913f54 100644 --- a/lib/langchain/assistant/messages/google_gemini_message.rb +++ b/lib/langchain/assistant/messages/google_gemini_message.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class GoogleGeminiMessage < Base diff --git a/lib/langchain/assistant/messages/mistralai_message.rb b/lib/langchain/assistant/messages/mistralai_message.rb index 2c081d948..031662610 100644 --- a/lib/langchain/assistant/messages/mistralai_message.rb +++ b/lib/langchain/assistant/messages/mistralai_message.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class MistralAIMessage < Base diff --git a/lib/langchain/assistant/messages/ollama_message.rb b/lib/langchain/assistant/messages/ollama_message.rb index d2067dc31..2d24a26c0 100644 --- a/lib/langchain/assistant/messages/ollama_message.rb +++ b/lib/langchain/assistant/messages/ollama_message.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class OllamaMessage < Base diff --git a/lib/langchain/assistant/messages/openai_message.rb b/lib/langchain/assistant/messages/openai_message.rb index 44e57a349..2d519f00c 100644 --- a/lib/langchain/assistant/messages/openai_message.rb +++ b/lib/langchain/assistant/messages/openai_message.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain class Assistant module Messages class OpenAIMessage < Base diff --git 
a/lib/langchain/chunk.rb b/lib/langchain/chunk.rb index 5e9663e63..441e08385 100644 --- a/lib/langchain/chunk.rb +++ b/lib/langchain/chunk.rb @@ -1,14 +1,14 @@ # frozen_string_literal: true -module Langchain +module LangChain class Chunk - # The chunking process is the process of splitting a document into smaller chunks and creating instances of Langchain::Chunk + # The chunking process is the process of splitting a document into smaller chunks and creating instances of LangChain::Chunk attr_reader :text # Initialize a new chunk # @param [String] text - # @return [Langchain::Chunk] + # @return [LangChain::Chunk] def initialize(text:) @text = text end diff --git a/lib/langchain/chunker/base.rb b/lib/langchain/chunker/base.rb index 01e207012..8072db0a2 100644 --- a/lib/langchain/chunker/base.rb +++ b/lib/langchain/chunker/base.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Chunker # = Chunkers # Chunkers are used to split documents into smaller chunks before indexing into vector search databases. @@ -8,12 +8,12 @@ module Chunker # # == Available chunkers # - # - {Langchain::Chunker::RecursiveText} - # - {Langchain::Chunker::Text} - # - {Langchain::Chunker::Semantic} - # - {Langchain::Chunker::Sentence} + # - {LangChain::Chunker::RecursiveText} + # - {LangChain::Chunker::Text} + # - {LangChain::Chunker::Semantic} + # - {LangChain::Chunker::Sentence} class Base - # @return [Array] + # @return [Array] def chunks raise NotImplementedError end diff --git a/lib/langchain/chunker/markdown.rb b/lib/langchain/chunker/markdown.rb index 64b479c99..75bde6729 100644 --- a/lib/langchain/chunker/markdown.rb +++ b/lib/langchain/chunker/markdown.rb @@ -2,12 +2,12 @@ require "baran" -module Langchain +module LangChain module Chunker # Simple text chunker # # Usage: - # Langchain::Chunker::Markdown.new(text).chunks + # LangChain::Chunker::Markdown.new(text).chunks class Markdown < Base attr_reader :text, :chunk_size, :chunk_overlap @@ -21,7 +21,7 @@ def initialize(text, chunk_size: 1000, chunk_overlap: 200) @chunk_overlap = chunk_overlap end - # @return [Array] + # @return [Array] def chunks splitter = Baran::MarkdownSplitter.new( chunk_size: chunk_size, @@ -29,7 +29,7 @@ def chunks ) splitter.chunks(text).map do |chunk| - Langchain::Chunk.new(text: chunk[:text]) + LangChain::Chunk.new(text: chunk[:text]) end end end diff --git a/lib/langchain/chunker/recursive_text.rb b/lib/langchain/chunker/recursive_text.rb index 52a1b6a62..438248ccc 100644 --- a/lib/langchain/chunker/recursive_text.rb +++ b/lib/langchain/chunker/recursive_text.rb @@ -2,12 +2,12 @@ require "baran" -module Langchain +module LangChain module Chunker # Recursive text chunker. Preferentially splits on separators. 
# # Usage: - # Langchain::Chunker::RecursiveText.new(text).chunks + # LangChain::Chunker::RecursiveText.new(text).chunks class RecursiveText < Base attr_reader :text, :chunk_size, :chunk_overlap, :separators @@ -22,7 +22,7 @@ def initialize(text, chunk_size: 1000, chunk_overlap: 200, separators: ["\n\n"]) @separators = separators end - # @return [Array] + # @return [Array] def chunks splitter = Baran::RecursiveCharacterTextSplitter.new( chunk_size: chunk_size, @@ -31,7 +31,7 @@ def chunks ) splitter.chunks(text).map do |chunk| - Langchain::Chunk.new(text: chunk[:text]) + LangChain::Chunk.new(text: chunk[:text]) end end end diff --git a/lib/langchain/chunker/semantic.rb b/lib/langchain/chunker/semantic.rb index 202a6b225..8407d2fcd 100644 --- a/lib/langchain/chunker/semantic.rb +++ b/lib/langchain/chunker/semantic.rb @@ -1,27 +1,27 @@ # frozen_string_literal: true -module Langchain +module LangChain module Chunker # LLM-powered semantic chunker. # Semantic chunking is a technique of splitting texts by their semantic meaning, e.g.: themes, topics, and ideas. # We use an LLM to accomplish this. The Anthropic LLM is highly recommended for this task as it has the longest context window (100k tokens). # # Usage: - # Langchain::Chunker::Semantic.new( + # LangChain::Chunker::Semantic.new( # text, - # llm: Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) + # llm: LangChain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) # ).chunks class Semantic < Base attr_reader :text, :llm, :prompt_template - # @param [Langchain::LLM::Base] Langchain::LLM::* instance - # @param [Langchain::Prompt::PromptTemplate] Optional custom prompt template + # @param [LangChain::LLM::Base] LangChain::LLM::* instance + # @param [LangChain::Prompt::PromptTemplate] Optional custom prompt template def initialize(text, llm:, prompt_template: nil) @text = text @llm = llm @prompt_template = prompt_template || default_prompt_template end - # @return [Array] + # @return [Array] def chunks prompt = prompt_template.format(text: text) @@ -33,16 +33,16 @@ def chunks .map(&:strip) .reject(&:empty?) .map do |chunk| - Langchain::Chunk.new(text: chunk) + LangChain::Chunk.new(text: chunk) end end private - # @return [Langchain::Prompt::PromptTemplate] Default prompt template for semantic chunking + # @return [LangChain::Prompt::PromptTemplate] Default prompt template for semantic chunking def default_prompt_template - Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/chunker/prompts/semantic_prompt_template.yml") + LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/chunker/prompts/semantic_prompt_template.yml") ) end end diff --git a/lib/langchain/chunker/sentence.rb b/lib/langchain/chunker/sentence.rb index 4e9cb99c7..5f9c77378 100644 --- a/lib/langchain/chunker/sentence.rb +++ b/lib/langchain/chunker/sentence.rb @@ -2,26 +2,26 @@ require "pragmatic_segmenter" -module Langchain +module LangChain module Chunker # This chunker splits text by sentences. 
# # Usage: - # Langchain::Chunker::Sentence.new(text).chunks + # LangChain::Chunker::Sentence.new(text).chunks class Sentence < Base attr_reader :text # @param text [String] - # @return [Langchain::Chunker::Sentence] + # @return [LangChain::Chunker::Sentence] def initialize(text) @text = text end - # @return [Array] + # @return [Array] def chunks ps = PragmaticSegmenter::Segmenter.new(text: text) ps.segment.map do |chunk| - Langchain::Chunk.new(text: chunk) + LangChain::Chunk.new(text: chunk) end end end diff --git a/lib/langchain/chunker/text.rb b/lib/langchain/chunker/text.rb index f25c902f7..c072312dd 100644 --- a/lib/langchain/chunker/text.rb +++ b/lib/langchain/chunker/text.rb @@ -2,12 +2,12 @@ require "baran" -module Langchain +module LangChain module Chunker # Simple text chunker # # Usage: - # Langchain::Chunker::Text.new(text).chunks + # LangChain::Chunker::Text.new(text).chunks class Text < Base attr_reader :text, :chunk_size, :chunk_overlap, :separator @@ -22,7 +22,7 @@ def initialize(text, chunk_size: 1000, chunk_overlap: 200, separator: "\n\n") @separator = separator end - # @return [Array] + # @return [Array] def chunks splitter = Baran::CharacterTextSplitter.new( chunk_size: chunk_size, @@ -31,7 +31,7 @@ def chunks ) splitter.chunks(text).map do |chunk| - Langchain::Chunk.new(text: chunk[:text]) + LangChain::Chunk.new(text: chunk[:text]) end end end diff --git a/lib/langchain/data.rb b/lib/langchain/data.rb index 721f9c847..c38ece21f 100644 --- a/lib/langchain/data.rb +++ b/lib/langchain/data.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -module Langchain - # Abstraction for data loaded by a {Langchain::Loader} +module LangChain + # Abstraction for data loaded by a {LangChain::Loader} class Data # URL or Path of the data source # @return [String] @@ -9,7 +9,7 @@ class Data # @param data [String] data that was loaded # @option options [String] :source URL or Path of the data source - def initialize(data, source: nil, chunker: Langchain::Chunker::Text) + def initialize(data, source: nil, chunker: LangChain::Chunker::Text) @source = source @data = data @chunker_klass = chunker diff --git a/lib/langchain/dependency_helper.rb b/lib/langchain/dependency_helper.rb index 23814f131..adce3b53f 100644 --- a/lib/langchain/dependency_helper.rb +++ b/lib/langchain/dependency_helper.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module DependencyHelper class LoadError < ::LoadError; end @@ -17,7 +17,7 @@ class VersionError < ScriptError; end def depends_on(gem_name, req: true) gem(gem_name) # require the gem - return(true) unless defined?(Bundler) # If we're in a non-bundler environment, we're no longer able to determine if we'll meet requirements + return true unless defined?(Bundler) # If we're in a non-bundler environment, we're no longer able to determine if we'll meet requirements gem_version = Gem.loaded_specs[gem_name].version gem_requirement = Bundler.load.dependencies.find { |g| g.name == gem_name }&.requirement diff --git a/lib/langchain/engine.rb b/lib/langchain/engine.rb index 5255b5fb6..e0ae60d95 100644 --- a/lib/langchain/engine.rb +++ b/lib/langchain/engine.rb @@ -1,8 +1,14 @@ -module Langchain +module LangChain class Engine < ::Rails::Engine - isolate_namespace Langchain + isolate_namespace LangChain config.autoload_paths << root.join("lib") config.eager_load_paths << root.join("lib") + + initializer "LangChain.inflector" do + Rails.autoloaders.once.inflector.inflect( + "langchain" => "LangChain" + ) + end end end diff --git 
a/lib/langchain/evals/ragas/answer_relevance.rb b/lib/langchain/evals/ragas/answer_relevance.rb index 02b26d295..f1fc48525 100644 --- a/lib/langchain/evals/ragas/answer_relevance.rb +++ b/lib/langchain/evals/ragas/answer_relevance.rb @@ -2,7 +2,7 @@ require "matrix" -module Langchain +module LangChain module Evals module Ragas # Answer Relevance refers to the idea that the generated answer should address the actual question that was provided. @@ -10,7 +10,7 @@ module Ragas class AnswerRelevance attr_reader :llm, :batch_size - # @param llm [Langchain::LLM::*] Langchain::LLM::* object + # @param llm [LangChain::LLM::*] LangChain::LLM::* object # @param batch_size [Integer] Batch size, i.e., number of generated questions to compare to the original question def initialize(llm:, batch_size: 3) @llm = llm @@ -61,8 +61,8 @@ def generate_embedding(text) # @return [PromptTemplate] PromptTemplate instance def answer_relevance_prompt_template - @template ||= Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/evals/ragas/prompts/answer_relevance.yml") + @template ||= LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/evals/ragas/prompts/answer_relevance.yml") ) end end diff --git a/lib/langchain/evals/ragas/context_relevance.rb b/lib/langchain/evals/ragas/context_relevance.rb index 87afeaab9..40d56dca7 100644 --- a/lib/langchain/evals/ragas/context_relevance.rb +++ b/lib/langchain/evals/ragas/context_relevance.rb @@ -2,14 +2,14 @@ require "pragmatic_segmenter" -module Langchain +module LangChain module Evals module Ragas # Context Relevance refers to the idea that the retrieved context should be focused, containing as little irrelevant information as possible. class ContextRelevance attr_reader :llm - # @param llm [Langchain::LLM::*] Langchain::LLM::* object + # @param llm [LangChain::LLM::*] LangChain::LLM::* object def initialize(llm:) @llm = llm end @@ -36,8 +36,8 @@ def sentence_count(context) # @return [PromptTemplate] PromptTemplate instance def context_relevance_prompt_template - @template ||= Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/evals/ragas/prompts/context_relevance.yml") + @template ||= LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/evals/ragas/prompts/context_relevance.yml") ) end end diff --git a/lib/langchain/evals/ragas/faithfulness.rb b/lib/langchain/evals/ragas/faithfulness.rb index 8c9ee67b6..70476af04 100644 --- a/lib/langchain/evals/ragas/faithfulness.rb +++ b/lib/langchain/evals/ragas/faithfulness.rb @@ -1,6 +1,6 @@ # freeze_string_literal: true -module Langchain +module LangChain module Evals module Ragas # Faithfulness refers to the idea that the answer should be grounded in the given context, @@ -17,7 +17,7 @@ module Ragas class Faithfulness attr_reader :llm - # @param llm [Langchain::LLM::*] Langchain::LLM::* object + # @param llm [LangChain::LLM::*] LangChain::LLM::* object def initialize(llm:) @llm = llm end @@ -68,20 +68,20 @@ def statements_extraction(question:, answer:) # @return [PromptTemplate] PromptTemplate instance def statements_verification_prompt_template - @template_two ||= Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/evals/ragas/prompts/faithfulness_statements_verification.yml") + @template_two ||= LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/evals/ragas/prompts/faithfulness_statements_verification.yml") ) end # @return [PromptTemplate] PromptTemplate instance def 
statements_extraction_prompt_template - @template_one ||= Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/evals/ragas/prompts/faithfulness_statements_extraction.yml") + @template_one ||= LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/evals/ragas/prompts/faithfulness_statements_extraction.yml") ) end def to_boolean(value) - Langchain::Utils::ToBoolean.new.to_bool(value) + LangChain::Utils::ToBoolean.new.to_bool(value) end end end diff --git a/lib/langchain/evals/ragas/main.rb b/lib/langchain/evals/ragas/main.rb index 64117af4d..38f80aed6 100644 --- a/lib/langchain/evals/ragas/main.rb +++ b/lib/langchain/evals/ragas/main.rb @@ -1,6 +1,6 @@ # freeze_string_literal: true -module Langchain +module LangChain module Evals # The RAGAS (Retrieval Augmented Generative Assessment) is a framework for evaluating RAG (Retrieval Augmented Generation) pipelines. # Based on the following research: https://arxiv.org/pdf/2309.15217.pdf @@ -50,19 +50,19 @@ def ragas_score(answer_relevance_score, context_relevance_score, faithfulness_sc (3 / reciprocal_sum) end - # @return [Langchain::Evals::Ragas::AnswerRelevance] Class instance + # @return [LangChain::Evals::Ragas::AnswerRelevance] Class instance def answer_relevance - @answer_relevance ||= Langchain::Evals::Ragas::AnswerRelevance.new(llm: llm) + @answer_relevance ||= LangChain::Evals::Ragas::AnswerRelevance.new(llm: llm) end - # @return [Langchain::Evals::Ragas::ContextRelevance] Class instance + # @return [LangChain::Evals::Ragas::ContextRelevance] Class instance def context_relevance - @context_relevance ||= Langchain::Evals::Ragas::ContextRelevance.new(llm: llm) + @context_relevance ||= LangChain::Evals::Ragas::ContextRelevance.new(llm: llm) end - # @return [Langchain::Evals::Ragas::Faithfulness] Class instance + # @return [LangChain::Evals::Ragas::Faithfulness] Class instance def faithfulness - @faithfulness ||= Langchain::Evals::Ragas::Faithfulness.new(llm: llm) + @faithfulness ||= LangChain::Evals::Ragas::Faithfulness.new(llm: llm) end end end diff --git a/lib/langchain/llm/ai21.rb b/lib/langchain/llm/ai21.rb index 8cbe629f7..4ee9beae2 100644 --- a/lib/langchain/llm/ai21.rb +++ b/lib/langchain/llm/ai21.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around AI21 Studio APIs. # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "ai21", "~> 0.2.1" # # Usage: - # llm = Langchain::LLM::AI21.new(api_key: ENV["AI21_API_KEY"]) + # llm = LangChain::LLM::AI21.new(api_key: ENV["AI21_API_KEY"]) # # @deprecated Use another LLM provider. class AI21 < Base @@ -18,7 +18,7 @@ class AI21 < Base }.freeze def initialize(api_key:, default_options: {}) - Langchain.logger.warn "DEPRECATED: `Langchain::LLM::AI21` is deprecated, and will be removed in the next major version. Please use another LLM provider." + LangChain.logger.warn "DEPRECATED: `LangChain::LLM::AI21` is deprecated, and will be removed in the next major version. Please use another LLM provider." 
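The `ragas_score` helper above returns `3 / reciprocal_sum`; assuming `reciprocal_sum` is the sum of the reciprocals of the three sub-scores (as the method name suggests), the composite score is simply their harmonic mean. A quick worked sketch with hypothetical sub-scores:

```ruby
# Hypothetical sub-scores for one question/answer/context triple
answer_relevance  = 0.9
context_relevance = 0.8
faithfulness      = 0.95

reciprocal_sum = (1.0 / answer_relevance) + (1.0 / context_relevance) + (1.0 / faithfulness)
ragas_score = 3 / reciprocal_sum # => ~0.879, the harmonic mean of the three scores
```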
depends_on "ai21" @@ -31,13 +31,13 @@ def initialize(api_key:, default_options: {}) # # @param prompt [String] The prompt to generate a completion for # @param params [Hash] The parameters to pass to the API - # @return [Langchain::LLM::AI21Response] The completion + # @return [LangChain::LLM::AI21Response] The completion # def complete(prompt:, **params) parameters = complete_parameters params response = client.complete(prompt, parameters) - Langchain::LLM::Response::AI21Response.new response, model: parameters[:model] + LangChain::LLM::Response::AI21Response.new response, model: parameters[:model] end # @@ -50,7 +50,7 @@ def complete(prompt:, **params) def summarize(text:, **params) response = client.summarize(text, "TEXT", params) response.dig(:summary) - # Should we update this to also return a Langchain::LLM::AI21Response? + # Should we update this to also return a LangChain::LLM::AI21Response? end private diff --git a/lib/langchain/llm/anthropic.rb b/lib/langchain/llm/anthropic.rb index 59acb8931..7949d11ff 100644 --- a/lib/langchain/llm/anthropic.rb +++ b/lib/langchain/llm/anthropic.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around Anthropic APIs. # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "anthropic", "~> 0.3.2" # # Usage: - # llm = Langchain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) + # llm = LangChain::LLM::Anthropic.new(api_key: ENV["ANTHROPIC_API_KEY"]) # class Anthropic < Base DEFAULTS = { @@ -23,11 +23,11 @@ class Anthropic < Base # @param api_key [String] The API key to use # @param llm_options [Hash] Options to pass to the Anthropic client # @param default_options [Hash] Default options to use on every call to LLM, e.g.: { temperature:, completion_model:, chat_model:, max_tokens:, thinking: } - # @return [Langchain::LLM::Anthropic] Langchain::LLM::Anthropic instance + # @return [LangChain::LLM::Anthropic] LangChain::LLM::Anthropic instance def initialize(api_key:, llm_options: {}, default_options: {}) begin depends_on "ruby-anthropic", req: "anthropic" - rescue Langchain::DependencyHelper::LoadError + rescue LangChain::DependencyHelper::LoadError # Falls back to the older `anthropic` gem if `ruby-anthropic` gem cannot be loaded. depends_on "anthropic" end @@ -57,7 +57,7 @@ def initialize(api_key:, llm_options: {}, default_options: {}) # @param top_k [Integer] The top k value to use # @param metadata [Hash] The metadata to use # @param stream [Boolean] Whether to stream the response - # @return [Langchain::LLM::Response::AnthropicResponse] The completion + # @return [LangChain::LLM::Response::AnthropicResponse] The completion def complete( prompt:, model: @defaults[:completion_model], @@ -88,12 +88,12 @@ def complete( client.complete(parameters: parameters) end - Langchain::LLM::Response::AnthropicResponse.new(response) + LangChain::LLM::Response::AnthropicResponse.new(response) end # Generate a chat completion for given messages # - # @param [Hash] params unified chat parmeters from [Langchain::LLM::Parameters::Chat::SCHEMA] + # @param [Hash] params unified chat parmeters from [LangChain::LLM::Parameters::Chat::SCHEMA] # @option params [Array] :messages Input messages # @option params [String] :model The model that will complete your prompt # @option params [Integer] :max_tokens Maximum number of tokens to generate before stopping @@ -106,7 +106,7 @@ def complete( # @option params [Hash] :thinking Enable extended thinking mode, e.g. 
{ type: "enabled", budget_tokens: 4000 } # @option params [Integer] :top_k Only sample from the top K options for each subsequent token # @option params [Float] :top_p Use nucleus sampling. - # @return [Langchain::LLM::Response::AnthropicResponse] The chat completion + # @return [LangChain::LLM::Response::AnthropicResponse] The chat completion def chat(params = {}, &block) set_extra_headers! if params[:tools] @@ -129,14 +129,14 @@ def chat(params = {}, &block) response = response_from_chunks if block reset_response_chunks - Langchain::LLM::Response::AnthropicResponse.new(response) + LangChain::LLM::Response::AnthropicResponse.new(response) end def with_api_error_handling response = yield return if response.empty? - raise Langchain::LLM::ApiError.new "Anthropic API error: #{response.dig("error", "message")}" if response&.dig("error") + raise LangChain::LLM::ApiError.new "Anthropic API error: #{response.dig("error", "message")}" if response&.dig("error") response end diff --git a/lib/langchain/llm/aws_bedrock.rb b/lib/langchain/llm/aws_bedrock.rb index 6f433a632..9635253a4 100644 --- a/lib/langchain/llm/aws_bedrock.rb +++ b/lib/langchain/llm/aws_bedrock.rb @@ -1,13 +1,13 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # LLM interface for Aws Bedrock APIs: https://docs.aws.amazon.com/bedrock/ # # Gem requirements: # gem 'aws-sdk-bedrockruntime', '~> 1.1' # # Usage: - # llm = Langchain::LLM::AwsBedrock.new(default_options: {}) + # llm = LangChain::LLM::AwsBedrock.new(default_options: {}) # class AwsBedrock < Base DEFAULTS = { @@ -64,7 +64,7 @@ def initialize(aws_client_options: {}, default_options: {}) # # @param text [String] The text to generate an embedding for # @param params extra parameters passed to Aws::BedrockRuntime::Client#invoke_model - # @return [Langchain::LLM::AwsTitanResponse] Response object + # @return [LangChain::LLM::AwsTitanResponse] Response object # def embed(text:, **params) raise "Completion provider #{embedding_provider} is not supported." unless SUPPORTED_EMBEDDING_PROVIDERS.include?(embedding_provider) @@ -86,7 +86,7 @@ def embed(text:, **params) # # @param prompt [String] The prompt to generate a completion for # @param params extra parameters passed to Aws::BedrockRuntime::Client#invoke_model - # @return [Langchain::LLM::Response::AnthropicResponse], [Langchain::LLM::Response::CohereResponse] or [Langchain::LLM::AI21Response] Response object + # @return [LangChain::LLM::Response::AnthropicResponse], [LangChain::LLM::Response::CohereResponse] or [LangChain::LLM::AI21Response] Response object # def complete( prompt:, @@ -113,7 +113,7 @@ def complete( # Currently only configured to work with the Anthropic provider and # the claude-3 model family # - # @param [Hash] params unified chat parmeters from [Langchain::LLM::Parameters::Chat::SCHEMA] + # @param [Hash] params unified chat parmeters from [LangChain::LLM::Parameters::Chat::SCHEMA] # @option params [Array] :messages The messages to generate a completion for # @option params [String] :system The system prompt to provide instructions # @option params [String] :model The model to use for completion defaults to @defaults[:chat_model] @@ -124,7 +124,7 @@ def complete( # @option params [Float] :top_p Use nucleus sampling. 
# @option params [Integer] :top_k Only sample from the top K options for each subsequent token # @yield [Hash] Provides chunks of the response as they are received - # @return [Langchain::LLM::Response::AnthropicResponse] Response object + # @return [LangChain::LLM::Response::AnthropicResponse] Response object def chat(params = {}, &block) parameters = chat_parameters.to_params(params) parameters = compose_parameters(parameters, parameters[:model]) @@ -229,15 +229,15 @@ def compose_embedding_parameters(params) def parse_response(response, model_id) if provider_name(model_id) == :anthropic - Langchain::LLM::Response::AnthropicResponse.new(JSON.parse(response.body.string)) + LangChain::LLM::Response::AnthropicResponse.new(JSON.parse(response.body.string)) elsif provider_name(model_id) == :cohere - Langchain::LLM::Response::CohereResponse.new(JSON.parse(response.body.string)) + LangChain::LLM::Response::CohereResponse.new(JSON.parse(response.body.string)) elsif provider_name(model_id) == :ai21 - Langchain::LLM::Response::AI21Response.new(JSON.parse(response.body.string, symbolize_names: true)) + LangChain::LLM::Response::AI21Response.new(JSON.parse(response.body.string, symbolize_names: true)) elsif provider_name(model_id) == :meta - Langchain::LLM::Response::AwsBedrockMetaResponse.new(JSON.parse(response.body.string)) + LangChain::LLM::Response::AwsBedrockMetaResponse.new(JSON.parse(response.body.string)) elsif provider_name(model_id) == :mistral - Langchain::LLM::Response::MistralAIResponse.new(JSON.parse(response.body.string)) + LangChain::LLM::Response::MistralAIResponse.new(JSON.parse(response.body.string)) end end @@ -245,9 +245,9 @@ def parse_embedding_response(response) json_response = JSON.parse(response.body.string) if embedding_provider == :amazon - Langchain::LLM::Response::AwsTitanResponse.new(json_response) + LangChain::LLM::Response::AwsTitanResponse.new(json_response) elsif embedding_provider == :cohere - Langchain::LLM::Response::CohereResponse.new(json_response) + LangChain::LLM::Response::CohereResponse.new(json_response) end end @@ -317,7 +317,7 @@ def response_from_chunks(chunks) end end - Langchain::LLM::Response::AnthropicResponse.new(raw_response) + LangChain::LLM::Response::AnthropicResponse.new(raw_response) end end end diff --git a/lib/langchain/llm/azure.rb b/lib/langchain/llm/azure.rb index d92d77919..8a5a7f0f5 100644 --- a/lib/langchain/llm/azure.rb +++ b/lib/langchain/llm/azure.rb @@ -1,13 +1,13 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # LLM interface for Azure OpenAI Service APIs: https://learn.microsoft.com/en-us/azure/ai-services/openai/ # # Gem requirements: # gem "ruby-openai", "~> 6.3.0" # # Usage: - # llm = Langchain::LLM::Azure.new(api_key:, llm_options: {}, embedding_deployment_url: chat_deployment_url:) + # llm = LangChain::LLM::Azure.new(api_key:, llm_options: {}, embedding_deployment_url: chat_deployment_url:) # class Azure < OpenAI attr_reader :embed_client diff --git a/lib/langchain/llm/base.rb b/lib/langchain/llm/base.rb index e6b1f4d5d..11b2efab6 100644 --- a/lib/langchain/llm/base.rb +++ b/lib/langchain/llm/base.rb @@ -1,29 +1,29 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM class ApiError < StandardError; end # A LLM is a language model consisting of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabeled text using self-supervised learning or semi-supervised learning. 
# - # Langchain.rb provides a common interface to interact with all supported LLMs: + # LangChain.rb provides a common interface to interact with all supported LLMs: # - # - {Langchain::LLM::Anthropic} - # - {Langchain::LLM::Azure} - # - {Langchain::LLM::Cohere} - # - {Langchain::LLM::GoogleGemini} - # - {Langchain::LLM::GoogleVertexAI} - # - {Langchain::LLM::HuggingFace} - # - {Langchain::LLM::OpenAI} - # - {Langchain::LLM::Replicate} + # - {LangChain::LLM::Anthropic} + # - {LangChain::LLM::Azure} + # - {LangChain::LLM::Cohere} + # - {LangChain::LLM::GoogleGemini} + # - {LangChain::LLM::GoogleVertexAI} + # - {LangChain::LLM::HuggingFace} + # - {LangChain::LLM::OpenAI} + # - {LangChain::LLM::Replicate} # # @abstract class Base - include Langchain::DependencyHelper + include LangChain::DependencyHelper # A client for communicating with the LLM attr_accessor :client - # Default LLM options. Can be overridden by passing `default_options: {}` to the Langchain::LLM::* constructors. + # Default LLM options. Can be overridden by passing `default_options: {}` to the LangChain::LLM::* constructors. attr_reader :defaults # Ensuring backward compatibility for {default_dimensions}. @@ -31,7 +31,7 @@ class Base # @deprecated Use {default_dimensions} instead. # @see https://github.com/patterns-ai-core/langchainrb/pull/586 def default_dimension - Langchain.logger.warn "DEPRECATED: `default_dimension` is deprecated, and will be removed in the next major version. Please use `default_dimensions` instead." + LangChain.logger.warn "DEPRECATED: `default_dimension` is deprecated, and will be removed in the next major version. Please use `default_dimensions` instead." default_dimensions end @@ -78,10 +78,10 @@ def summarize(...) end # - # Returns an instance of Langchain::LLM::Parameters::Chat + # Returns an instance of LangChain::LLM::Parameters::Chat # def chat_parameters(params = {}) - @chat_parameters ||= Langchain::LLM::Parameters::Chat.new( + @chat_parameters ||= LangChain::LLM::Parameters::Chat.new( parameters: params ) end diff --git a/lib/langchain/llm/cohere.rb b/lib/langchain/llm/cohere.rb index b0bb46c44..619b1a07e 100644 --- a/lib/langchain/llm/cohere.rb +++ b/lib/langchain/llm/cohere.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around the Cohere API. 
# @@ -8,7 +8,7 @@ module Langchain::LLM # gem "cohere-ruby", "~> 0.9.6" # # Usage: - # llm = Langchain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"]) + # llm = LangChain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"]) # class Cohere < Base DEFAULTS = { @@ -43,7 +43,7 @@ def initialize(api_key:, default_options: {}) # Generate an embedding for a given text # # @param text [String] The text to generate an embedding for - # @return [Langchain::LLM::Response::CohereResponse] Response object + # @return [LangChain::LLM::Response::CohereResponse] Response object # def embed(text:) response = client.embed( @@ -51,7 +51,7 @@ def embed(text:) model: @defaults[:embedding_model] ) - Langchain::LLM::Response::CohereResponse.new response, model: @defaults[:embedding_model] + LangChain::LLM::Response::CohereResponse.new response, model: @defaults[:embedding_model] end # @@ -59,7 +59,7 @@ def embed(text:) # # @param prompt [String] The prompt to generate a completion for # @param params[:stop_sequences] - # @return [Langchain::LLM::Response::CohereResponse] Response object + # @return [LangChain::LLM::Response::CohereResponse] Response object # def complete(prompt:, **params) default_params = { @@ -76,12 +76,12 @@ def complete(prompt:, **params) default_params.merge!(params) response = client.generate(**default_params) - Langchain::LLM::Response::CohereResponse.new response, model: @defaults[:completion_model] + LangChain::LLM::Response::CohereResponse.new response, model: @defaults[:completion_model] end # Generate a chat completion for given messages # - # @param [Hash] params unified chat parmeters from [Langchain::LLM::Parameters::Chat::SCHEMA] + # @param [Hash] params unified chat parmeters from [LangChain::LLM::Parameters::Chat::SCHEMA] # @option params [Array] :messages Input messages # @option params [String] :model The model that will complete your prompt # @option params [Integer] :max_tokens Maximum number of tokens to generate before stopping @@ -92,7 +92,7 @@ def complete(prompt:, **params) # @option params [Array] :tools Definitions of tools that the model may use # @option params [Integer] :top_k Only sample from the top K options for each subsequent token # @option params [Float] :top_p Use nucleus sampling. - # @return [Langchain::LLM::Response::CohereResponse] The chat completion + # @return [LangChain::LLM::Response::CohereResponse] The chat completion def chat(params = {}) raise ArgumentError.new("messages argument is required") if Array(params[:messages]).empty? 
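A minimal sketch of calling the Cohere wrapper's `chat` method, which (per the guard shown below) requires a non-empty `:messages` array. The role/content hash shape and the `chat_completion` reader follow the gem's unified chat interface and are assumptions here, not something this diff spells out:

```ruby
require "langchain"

llm = LangChain::LLM::Cohere.new(api_key: ENV["COHERE_API_KEY"])

# :messages is required; an empty array raises ArgumentError.
# The role/content hash format is assumed from the unified chat parameters.
response = llm.chat(messages: [{role: "user", content: "Summarize RAG in one sentence."}])

response.chat_completion # assumed accessor on LangChain::LLM::Response::CohereResponse
```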
@@ -104,7 +104,7 @@ def chat(params = {}) response = client.chat(**parameters) - Langchain::LLM::Response::CohereResponse.new(response) + LangChain::LLM::Response::CohereResponse.new(response) end # Generate a summary in English for a given text diff --git a/lib/langchain/llm/google_gemini.rb b/lib/langchain/llm/google_gemini.rb index 5fc204e80..2610d71ab 100644 --- a/lib/langchain/llm/google_gemini.rb +++ b/lib/langchain/llm/google_gemini.rb @@ -1,8 +1,8 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # Usage: - # llm = Langchain::LLM::GoogleGemini.new(api_key: ENV['GOOGLE_GEMINI_API_KEY']) + # llm = LangChain::LLM::GoogleGemini.new(api_key: ENV['GOOGLE_GEMINI_API_KEY']) class GoogleGemini < Base DEFAULTS = { chat_model: "gemini-1.5-pro-latest", @@ -61,7 +61,7 @@ def chat(params = {}) parsed_response = http_post(uri, parameters) - wrapped_response = Langchain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: parameters[:model]) + wrapped_response = LangChain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: parameters[:model]) if wrapped_response.chat_completion || Array(wrapped_response.tool_calls).any? wrapped_response @@ -88,7 +88,7 @@ def embed( parsed_response = http_post(uri, params) - Langchain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: model) + LangChain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: model) end private @@ -96,7 +96,7 @@ def embed( def http_post(url, params) http = Net::HTTP.new(url.hostname, url.port) http.use_ssl = url.scheme == "https" - http.set_debug_output(Langchain.logger) if Langchain.logger.debug? + http.set_debug_output(LangChain.logger) if LangChain.logger.debug? request = Net::HTTP::Post.new(url) request.content_type = "application/json" diff --git a/lib/langchain/llm/google_vertexai.rb b/lib/langchain/llm/google_vertexai.rb index a59ba95dd..879f2f8d5 100644 --- a/lib/langchain/llm/google_vertexai.rb +++ b/lib/langchain/llm/google_vertexai.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around the Google Vertex AI APIs: https://cloud.google.com/vertex-ai # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "googleauth" # # Usage: - # llm = Langchain::LLM::GoogleVertexAI.new(project_id: ENV["GOOGLE_VERTEX_AI_PROJECT_ID"], region: "us-central1") + # llm = LangChain::LLM::GoogleVertexAI.new(project_id: ENV["GOOGLE_VERTEX_AI_PROJECT_ID"], region: "us-central1") # class GoogleVertexAI < Base DEFAULTS = { @@ -54,7 +54,7 @@ def initialize(project_id:, region:, default_options: {}) # # @param text [String] The text to generate an embedding for # @param model [String] ID of the model to use - # @return [Langchain::LLM::Response::GoogleGeminiResponse] Response object + # @return [LangChain::LLM::Response::GoogleGeminiResponse] Response object # def embed( text:, @@ -66,7 +66,7 @@ def embed( parsed_response = http_post(uri, params) - Langchain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: model) + LangChain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: model) end # Generate a chat completion for given messages @@ -76,7 +76,7 @@ def embed( # @param tools [Array] The tools to use # @param tool_choice [String] The tool choice to use # @param system [String] The system instruction to use - # @return [Langchain::LLM::Response::GoogleGeminiResponse] Response object + # @return [LangChain::LLM::Response::GoogleGeminiResponse] Response object def chat(params = {}) params[:system] = 
{parts: [{text: params[:system]}]} if params[:system] params[:tools] = {function_declarations: params[:tools]} if params[:tools] @@ -90,7 +90,7 @@ def chat(params = {}) parsed_response = http_post(uri, parameters) - wrapped_response = Langchain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: parameters[:model]) + wrapped_response = LangChain::LLM::Response::GoogleGeminiResponse.new(parsed_response, model: parameters[:model]) if wrapped_response.chat_completion || Array(wrapped_response.tool_calls).any? wrapped_response @@ -104,7 +104,7 @@ def chat(params = {}) def http_post(url, params) http = Net::HTTP.new(url.hostname, url.port) http.use_ssl = url.scheme == "https" - http.set_debug_output(Langchain.logger) if Langchain.logger.debug? + http.set_debug_output(LangChain.logger) if LangChain.logger.debug? request = Net::HTTP::Post.new(url) request.content_type = "application/json" diff --git a/lib/langchain/llm/hugging_face.rb b/lib/langchain/llm/hugging_face.rb index 6f0986d3c..a2c748c05 100644 --- a/lib/langchain/llm/hugging_face.rb +++ b/lib/langchain/llm/hugging_face.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around the HuggingFace Inference API: https://huggingface.co/inference-api # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "hugging-face", "~> 0.3.4" # # Usage: - # llm = Langchain::LLM::HuggingFace.new(api_key: ENV["HUGGING_FACE_API_KEY"]) + # llm = LangChain::LLM::HuggingFace.new(api_key: ENV["HUGGING_FACE_API_KEY"]) # class HuggingFace < Base DEFAULTS = { @@ -45,14 +45,14 @@ def default_dimensions # Generate an embedding for a given text # # @param text [String] The text to embed - # @return [Langchain::LLM::Response::HuggingFaceResponse] Response object + # @return [LangChain::LLM::Response::HuggingFaceResponse] Response object # def embed(text:) response = client.embedding( input: text, model: @defaults[:embedding_model] ) - Langchain::LLM::Response::HuggingFaceResponse.new(response, model: @defaults[:embedding_model]) + LangChain::LLM::Response::HuggingFaceResponse.new(response, model: @defaults[:embedding_model]) end end end diff --git a/lib/langchain/llm/llama_cpp.rb b/lib/langchain/llm/llama_cpp.rb index b3c855814..e9ddc7a93 100644 --- a/lib/langchain/llm/llama_cpp.rb +++ b/lib/langchain/llm/llama_cpp.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # A wrapper around the LlamaCpp.rb library # # Gem requirements: # gem "llama_cpp" # # Usage: - # llama = Langchain::LLM::LlamaCpp.new( + # llama = LangChain::LLM::LlamaCpp.new( # model_path: ENV["LLAMACPP_MODEL_PATH"], # n_gpu_layers: Integer(ENV["LLAMACPP_N_GPU_LAYERS"]), # n_threads: Integer(ENV["LLAMACPP_N_THREADS"]) # ) # - # @deprecated Use {Langchain::LLM::Ollama} for self-hosted LLM inference. + # @deprecated Use {LangChain::LLM::Ollama} for self-hosted LLM inference. class LlamaCpp < Base attr_accessor :model_path, :n_gpu_layers, :n_ctx, :seed attr_writer :n_threads @@ -24,7 +24,7 @@ class LlamaCpp < Base # @param n_threads [Integer] The CPU number of threads to use # @param seed [Integer] The seed to use def initialize(model_path:, n_gpu_layers: 1, n_ctx: 2048, n_threads: 1, seed: 0) - Langchain.logger.warn "DEPRECATED: `Langchain::LLM::LlamaCpp` is deprecated, and will be removed in the next major version. Please use `Langchain::LLM::Ollama` for self-hosted LLM inference." 
+ LangChain.logger.warn "DEPRECATED: `LangChain::LLM::LlamaCpp` is deprecated, and will be removed in the next major version. Please use `LangChain::LLM::Ollama` for self-hosted LLM inference." depends_on "llama_cpp" @@ -45,7 +45,7 @@ def embed(text:) return unless embedding_input.size.positive? context.eval(tokens: embedding_input, n_past: 0) - Langchain::LLM::Response::LlamaCppResponse.new(context, model: context.model.desc) + LangChain::LLM::Response::LlamaCppResponse.new(context, model: context.model.desc) end # @param prompt [String] The prompt to complete diff --git a/lib/langchain/llm/mistralai.rb b/lib/langchain/llm/mistralai.rb index d173e217d..deab5e6c5 100644 --- a/lib/langchain/llm/mistralai.rb +++ b/lib/langchain/llm/mistralai.rb @@ -1,11 +1,11 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # Gem requirements: # gem "mistral-ai" # # Usage: - # llm = Langchain::LLM::MistralAI.new(api_key: ENV["MISTRAL_AI_API_KEY"]) + # llm = LangChain::LLM::MistralAI.new(api_key: ENV["MISTRAL_AI_API_KEY"]) class MistralAI < Base DEFAULTS = { chat_model: "mistral-large-latest", @@ -39,7 +39,7 @@ def chat(params = {}) response = client.chat_completions(parameters) - Langchain::LLM::Response::MistralAIResponse.new(response.to_h) + LangChain::LLM::Response::MistralAIResponse.new(response.to_h) end def embed( @@ -55,7 +55,7 @@ def embed( response = client.embeddings(params) - Langchain::LLM::Response::MistralAIResponse.new(response.to_h) + LangChain::LLM::Response::MistralAIResponse.new(response.to_h) end end end diff --git a/lib/langchain/llm/ollama.rb b/lib/langchain/llm/ollama.rb index a0d902cb5..f3609c29f 100644 --- a/lib/langchain/llm/ollama.rb +++ b/lib/langchain/llm/ollama.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # Interface to Ollama API. # Available models: https://ollama.ai/library # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "faraday" # # Usage: - # llm = Langchain::LLM::Ollama.new(url: ENV["OLLAMA_URL"], default_options: {}) + # llm = LangChain::LLM::Ollama.new(url: ENV["OLLAMA_URL"], default_options: {}) # class Ollama < Base attr_reader :url, :defaults @@ -74,7 +74,7 @@ def default_dimensions # For a list of valid parameters and values, see: # https://github.com/jmorganca/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values # @option block [Proc] Receive the intermediate responses as a stream of +OllamaResponse+ objects. - # @return [Langchain::LLM::Response::OllamaResponse] Response object + # @return [LangChain::LLM::Response::OllamaResponse] Response object # # Example: # @@ -151,7 +151,7 @@ def complete( req.options.on_data = json_responses_chunk_handler do |parsed_chunk| responses_stream << parsed_chunk - block&.call(Langchain::LLM::Response::OllamaResponse.new(parsed_chunk, model: parameters[:model])) + block&.call(LangChain::LLM::Response::OllamaResponse.new(parsed_chunk, model: parameters[:model])) end end @@ -162,14 +162,14 @@ def complete( # # @param messages [Array] The chat messages # @param model [String] The model to use - # @param params [Hash] Unified chat parmeters from [Langchain::LLM::Parameters::Chat::SCHEMA] + # @param params [Hash] Unified chat parmeters from [LangChain::LLM::Parameters::Chat::SCHEMA] # @option params [Array] :messages Array of messages # @option params [String] :model Model name # @option params [String] :format Format to return a response in. 
Currently the only accepted value is `json` # @option params [Float] :temperature The temperature to use # @option params [String] :template The prompt template to use (overrides what is defined in the `Modelfile`) # @option block [Proc] Receive the intermediate responses as a stream of +OllamaResponse+ objects. - # @return [Langchain::LLM::Response::OllamaResponse] Response object + # @return [LangChain::LLM::Response::OllamaResponse] Response object # # Example: # @@ -188,7 +188,7 @@ def chat(messages:, model: nil, **params, &block) req.options.on_data = json_responses_chunk_handler do |parsed_chunk| responses_stream << parsed_chunk - block&.call(Langchain::LLM::Response::OllamaResponse.new(parsed_chunk, model: parameters[:model])) + block&.call(LangChain::LLM::Response::OllamaResponse.new(parsed_chunk, model: parameters[:model])) end end @@ -201,7 +201,7 @@ def chat(messages:, model: nil, **params, &block) # @param text [String] The text to generate an embedding for # @param model [String] The model to use # @param options [Hash] The options to use - # @return [Langchain::LLM::Response::OllamaResponse] Response object + # @return [LangChain::LLM::Response::OllamaResponse] Response object # def embed( text:, @@ -253,7 +253,7 @@ def embed( req.body = parameters end - Langchain::LLM::Response::OllamaResponse.new(response.body, model: parameters[:model]) + LangChain::LLM::Response::OllamaResponse.new(response.body, model: parameters[:model]) end # Generate a summary for a given text @@ -261,8 +261,8 @@ def embed( # @param text [String] The text to generate a summary for # @return [String] The summary def summarize(text:) - prompt_template = Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/llm/prompts/ollama/summarize_template.yaml") + prompt_template = LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/llm/prompts/ollama/summarize_template.yaml") ) prompt = prompt_template.format(text: text) @@ -276,7 +276,7 @@ def client conn.request :json conn.response :json conn.response :raise_error - conn.response :logger, Langchain.logger, {headers: true, bodies: true, errors: true} + conn.response :logger, LangChain.logger, {headers: true, bodies: true, errors: true} end end @@ -300,7 +300,7 @@ def generate_final_completion_response(responses_stream, model) "response" => responses_stream.map { |resp| resp["response"] }.join ) - Langchain::LLM::Response::OllamaResponse.new(final_response, model: model) + LangChain::LLM::Response::OllamaResponse.new(final_response, model: model) end # BUG: If streamed, this method does not currently return the tool_calls response. 
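The Ollama `chat` method above accepts an optional block that receives intermediate `OllamaResponse` chunks as they stream in (note the BUG comment: streamed responses do not currently carry `tool_calls`). A hedged sketch; the model name, the `chat_model` option key, and the `chat_completion` reader are illustrative assumptions rather than values taken from this diff:

```ruby
require "langchain"

llm = LangChain::LLM::Ollama.new(url: ENV["OLLAMA_URL"], default_options: {chat_model: "llama3"})

# With a block, intermediate chunks are yielded as LangChain::LLM::Response::OllamaResponse objects
streamed = []
response = llm.chat(messages: [{role: "user", content: "Why is the sky blue?"}]) do |chunk|
  streamed << chunk
end

response.chat_completion # assumed accessor for the final, concatenated completion
```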
@@ -308,7 +308,7 @@ def generate_final_chat_completion_response(responses_stream, model) final_response = responses_stream.last final_response["message"]["content"] = responses_stream.map { |resp| resp.dig("message", "content") }.join - Langchain::LLM::Response::OllamaResponse.new(final_response, model: model) + LangChain::LLM::Response::OllamaResponse.new(final_response, model: model) end end end diff --git a/lib/langchain/llm/openai.rb b/lib/langchain/llm/openai.rb index add1eacbc..ad42928bb 100644 --- a/lib/langchain/llm/openai.rb +++ b/lib/langchain/llm/openai.rb @@ -2,14 +2,14 @@ require "openai" -module Langchain::LLM +module LangChain::LLM # LLM interface for OpenAI APIs: https://platform.openai.com/overview # # Gem requirements: # gem "ruby-openai", "~> 6.3.0" # # Usage: - # llm = Langchain::LLM::OpenAI.new( + # llm = LangChain::LLM::OpenAI.new( # api_key: ENV["OPENAI_API_KEY"], # llm_options: {}, # Available options: https://github.com/alexrudall/ruby-openai/blob/main/lib/openai/client.rb#L5-L13 # default_options: {} @@ -34,10 +34,10 @@ class OpenAI < Base def initialize(api_key:, llm_options: {}, default_options: {}) depends_on "ruby-openai", req: "openai" - llm_options[:log_errors] = Langchain.logger.debug? unless llm_options.key?(:log_errors) + llm_options[:log_errors] = LangChain.logger.debug? unless llm_options.key?(:log_errors) @client = ::OpenAI::Client.new(access_token: api_key, **llm_options) do |f| - f.response :logger, Langchain.logger, {headers: true, bodies: true, errors: true} + f.response :logger, LangChain.logger, {headers: true, bodies: true, errors: true} end @defaults = DEFAULTS.merge(default_options) @@ -59,7 +59,7 @@ def initialize(api_key:, llm_options: {}, default_options: {}) # @param model [String] ID of the model to use # @param encoding_format [String] The format to return the embeddings in. Can be either float or base64. # @param user [String] A unique identifier representing your end-user - # @return [Langchain::LLM::Response::OpenAIResponse] Response object + # @return [LangChain::LLM::Response::OpenAIResponse] Response object def embed( text:, model: defaults[:embedding_model], @@ -91,7 +91,7 @@ def embed( client.embeddings(parameters: parameters) end - Langchain::LLM::Response::OpenAIResponse.new(response) + LangChain::LLM::Response::OpenAIResponse.new(response) end # rubocop:disable Style/ArgumentsForwarding @@ -99,10 +99,10 @@ def embed( # # @param prompt [String] The prompt to generate a completion for # @param params [Hash] The parameters to pass to the `chat()` method - # @return [Langchain::LLM::Response::OpenAIResponse] Response object + # @return [LangChain::LLM::Response::OpenAIResponse] Response object # @deprecated Use {chat} instead. def complete(prompt:, **params) - Langchain.logger.warn "DEPRECATED: `Langchain::LLM::OpenAI#complete` is deprecated, and will be removed in the next major version. Use `Langchain::LLM::OpenAI#chat` instead." + LangChain.logger.warn "DEPRECATED: `LangChain::LLM::OpenAI#complete` is deprecated, and will be removed in the next major version. Use `LangChain::LLM::OpenAI#chat` instead." if params[:stop_sequences] params[:stop] = params.delete(:stop_sequences) @@ -116,7 +116,7 @@ def complete(prompt:, **params) # Generate a chat completion for given messages. 
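The deprecation warning above steers callers from `complete(prompt:)` to `chat`; a minimal sketch of the replacement call, reusing the constructor shown in this diff. The message shape and `chat_completion` reader are assumed from the unified interface:

```ruby
require "langchain"

llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"])

# `complete(prompt:)` now logs a deprecation warning; the equivalent chat call is:
response = llm.chat(messages: [{role: "user", content: "Write a haiku about Ruby."}])
response.chat_completion # LangChain::LLM::Response::OpenAIResponse (reader assumed)
```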
# - # @param [Hash] params unified chat parmeters from [Langchain::LLM::Parameters::Chat::SCHEMA] + # @param [Hash] params unified chat parmeters from [LangChain::LLM::Parameters::Chat::SCHEMA] # @option params [Array] :messages List of messages comprising the conversation so far # @option params [String] :model ID of the model to use def chat(params = {}, &block) @@ -145,7 +145,7 @@ def chat(params = {}, &block) response = response_from_chunks if block reset_response_chunks - Langchain::LLM::Response::OpenAIResponse.new(response) + LangChain::LLM::Response::OpenAIResponse.new(response) end # Generate a summary for a given text @@ -153,8 +153,8 @@ def chat(params = {}, &block) # @param text [String] The text to generate a summary for # @return [String] The summary def summarize(text:) - prompt_template = Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/llm/prompts/summarize_template.yaml") + prompt_template = LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/llm/prompts/summarize_template.yaml") ) prompt = prompt_template.format(text: text) @@ -177,7 +177,7 @@ def with_api_error_handling response = yield return if response.nil? || response.empty? - raise Langchain::LLM::ApiError.new "OpenAI API error: #{response.dig("error", "message")}" if response&.dig("error") + raise LangChain::LLM::ApiError.new "OpenAI API error: #{response.dig("error", "message")}" if response&.dig("error") response end diff --git a/lib/langchain/llm/parameters/chat.rb b/lib/langchain/llm/parameters/chat.rb index 07457bffc..894906416 100644 --- a/lib/langchain/llm/parameters/chat.rb +++ b/lib/langchain/llm/parameters/chat.rb @@ -2,7 +2,7 @@ require "delegate" -module Langchain::LLM::Parameters +module LangChain::LLM::Parameters class Chat < SimpleDelegator # TODO: At the moment, the UnifiedParamters only considers keys. 
In the # future, we may consider ActiveModel-style validations and further typed @@ -46,7 +46,7 @@ class Chat < SimpleDelegator def initialize(parameters: {}) super( - ::Langchain::LLM::UnifiedParameters.new( + ::LangChain::LLM::UnifiedParameters.new( schema: SCHEMA.dup, parameters: parameters ) diff --git a/lib/langchain/llm/replicate.rb b/lib/langchain/llm/replicate.rb index 370a54e6f..8aef3a0cb 100644 --- a/lib/langchain/llm/replicate.rb +++ b/lib/langchain/llm/replicate.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM # # Wrapper around Replicate.com LLM provider # @@ -8,7 +8,7 @@ module Langchain::LLM # gem "replicate-ruby", "~> 0.2.2" # # Usage: - # llm = Langchain::LLM::Replicate.new(api_key: ENV["REPLICATE_API_KEY"]) + # llm = LangChain::LLM::Replicate.new(api_key: ENV["REPLICATE_API_KEY"]) class Replicate < Base DEFAULTS = { # TODO: Figure out how to send the temperature to the API @@ -39,7 +39,7 @@ def initialize(api_key:, default_options: {}) # Generate an embedding for a given text # # @param text [String] The text to generate an embedding for - # @return [Langchain::LLM::Response::ReplicateResponse] Response object + # @return [LangChain::LLM::Response::ReplicateResponse] Response object # def embed(text:) response = embeddings_model.predict(input: text) @@ -49,14 +49,14 @@ def embed(text:) sleep(0.1) end - Langchain::LLM::Response::ReplicateResponse.new(response, model: @defaults[:embedding_model]) + LangChain::LLM::Response::ReplicateResponse.new(response, model: @defaults[:embedding_model]) end # # Generate a completion for a given prompt # # @param prompt [String] The prompt to generate a completion for - # @return [Langchain::LLM::ReplicateResponse] Response object + # @return [LangChain::LLM::ReplicateResponse] Response object # def complete(prompt:, **params) response = completion_model.predict(prompt: prompt) @@ -66,7 +66,7 @@ def complete(prompt:, **params) sleep(0.1) end - Langchain::LLM::Response::ReplicateResponse.new(response, model: @defaults[:completion_model]) + LangChain::LLM::Response::ReplicateResponse.new(response, model: @defaults[:completion_model]) end # @@ -76,8 +76,8 @@ def complete(prompt:, **params) # @return [String] The summary # def summarize(text:) - prompt_template = Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/llm/prompts/summarize_template.yaml") + prompt_template = LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/llm/prompts/summarize_template.yaml") ) prompt = prompt_template.format(text: text) diff --git a/lib/langchain/llm/response/ai21_response.rb b/lib/langchain/llm/response/ai21_response.rb index 9b511ccaa..7a8260702 100644 --- a/lib/langchain/llm/response/ai21_response.rb +++ b/lib/langchain/llm/response/ai21_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class AI21Response < BaseResponse def completions raw_response.dig(:completions) diff --git a/lib/langchain/llm/response/anthropic_response.rb b/lib/langchain/llm/response/anthropic_response.rb index 71c9b79de..b09b2285c 100644 --- a/lib/langchain/llm/response/anthropic_response.rb +++ b/lib/langchain/llm/response/anthropic_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class AnthropicResponse < BaseResponse def model raw_response.dig("model") diff --git a/lib/langchain/llm/response/aws_bedrock_meta_response.rb 
b/lib/langchain/llm/response/aws_bedrock_meta_response.rb index 8dad4b14f..9d2bc4388 100644 --- a/lib/langchain/llm/response/aws_bedrock_meta_response.rb +++ b/lib/langchain/llm/response/aws_bedrock_meta_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class AwsBedrockMetaResponse < BaseResponse def completion completions.first diff --git a/lib/langchain/llm/response/aws_titan_response.rb b/lib/langchain/llm/response/aws_titan_response.rb index e86269689..08feb3b16 100644 --- a/lib/langchain/llm/response/aws_titan_response.rb +++ b/lib/langchain/llm/response/aws_titan_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class AwsTitanResponse < BaseResponse def embedding embeddings&.first diff --git a/lib/langchain/llm/response/base_response.rb b/lib/langchain/llm/response/base_response.rb index 4223977ef..dbd690052 100644 --- a/lib/langchain/llm/response/base_response.rb +++ b/lib/langchain/llm/response/base_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module LLM::Response class BaseResponse attr_reader :raw_response, :model diff --git a/lib/langchain/llm/response/cohere_response.rb b/lib/langchain/llm/response/cohere_response.rb index 8edec569f..90caf19cf 100644 --- a/lib/langchain/llm/response/cohere_response.rb +++ b/lib/langchain/llm/response/cohere_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class CohereResponse < BaseResponse def embedding embeddings.first diff --git a/lib/langchain/llm/response/google_gemini_response.rb b/lib/langchain/llm/response/google_gemini_response.rb index 3a974d663..5245e7212 100644 --- a/lib/langchain/llm/response/google_gemini_response.rb +++ b/lib/langchain/llm/response/google_gemini_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class GoogleGeminiResponse < BaseResponse def initialize(raw_response, model: nil) super diff --git a/lib/langchain/llm/response/hugging_face_response.rb b/lib/langchain/llm/response/hugging_face_response.rb index 878acabb8..4fc36e80e 100644 --- a/lib/langchain/llm/response/hugging_face_response.rb +++ b/lib/langchain/llm/response/hugging_face_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class HuggingFaceResponse < BaseResponse def embeddings [raw_response] diff --git a/lib/langchain/llm/response/llama_cpp_response.rb b/lib/langchain/llm/response/llama_cpp_response.rb index 853988450..f29701b01 100644 --- a/lib/langchain/llm/response/llama_cpp_response.rb +++ b/lib/langchain/llm/response/llama_cpp_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class LlamaCppResponse < BaseResponse def embedding embeddings diff --git a/lib/langchain/llm/response/mistralai_response.rb b/lib/langchain/llm/response/mistralai_response.rb index fb0696a1e..0efb67edd 100644 --- a/lib/langchain/llm/response/mistralai_response.rb +++ b/lib/langchain/llm/response/mistralai_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class MistralAIResponse < BaseResponse def model raw_response["model"] diff --git a/lib/langchain/llm/response/ollama_response.rb 
b/lib/langchain/llm/response/ollama_response.rb index 661371de5..89cd9ecfa 100644 --- a/lib/langchain/llm/response/ollama_response.rb +++ b/lib/langchain/llm/response/ollama_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class OllamaResponse < BaseResponse def initialize(raw_response, model: nil, prompt_tokens: nil) @prompt_tokens = prompt_tokens diff --git a/lib/langchain/llm/response/openai_response.rb b/lib/langchain/llm/response/openai_response.rb index d47936d2a..d9d69e5b6 100644 --- a/lib/langchain/llm/response/openai_response.rb +++ b/lib/langchain/llm/response/openai_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class OpenAIResponse < BaseResponse def model raw_response["model"] diff --git a/lib/langchain/llm/response/replicate_response.rb b/lib/langchain/llm/response/replicate_response.rb index e9d6ec81a..38b00003d 100644 --- a/lib/langchain/llm/response/replicate_response.rb +++ b/lib/langchain/llm/response/replicate_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM::Response +module LangChain::LLM::Response class ReplicateResponse < BaseResponse def completions # Response comes back as an array of strings, e.g.: ["Hi", "how ", "are ", "you?"] diff --git a/lib/langchain/llm/unified_parameters.rb b/lib/langchain/llm/unified_parameters.rb index da8f17afe..8d5a09419 100644 --- a/lib/langchain/llm/unified_parameters.rb +++ b/lib/langchain/llm/unified_parameters.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::LLM +module LangChain::LLM class UnifiedParameters include Enumerable diff --git a/lib/langchain/loader.rb b/lib/langchain/loader.rb index 632f6d16e..c975549f6 100644 --- a/lib/langchain/loader.rb +++ b/lib/langchain/loader.rb @@ -2,7 +2,7 @@ require "open-uri" -module Langchain +module LangChain class Loader class FileNotFound < StandardError; end @@ -10,18 +10,18 @@ class UnknownFormatError < StandardError; end URI_REGEX = %r{\A[A-Za-z][A-Za-z0-9+\-.]*://} - # Load data from a file or URL. Shorthand for `Langchain::Loader.new(path).load` + # Load data from a file or URL. Shorthand for `LangChain::Loader.new(path).load` # # == Examples # # # load a URL - # data = Langchain::Loader.load("https://example.com/docs/README.md") + # data = LangChain::Loader.load("https://example.com/docs/README.md") # # # load a file - # data = Langchain::Loader.load("README.md") + # data = LangChain::Loader.load("README.md") # # # Load data using a custom processor - # data = Langchain::Loader.load("README.md") do |raw_data, options| + # data = LangChain::Loader.load("README.md") do |raw_data, options| # # your processing code goes here # # return data at the end here # end @@ -35,11 +35,11 @@ def self.load(path, options = {}, &block) end # rubocop:enable Style/ArgumentsForwarding - # Initialize Langchain::Loader + # Initialize LangChain::Loader # @param path [String | Pathname] path to file or URL # @param options [Hash] options passed to the processor class used to process the data - # @return [Langchain::Loader] loader instance - def initialize(path, options = {}, chunker: Langchain::Chunker::Text) + # @return [LangChain::Loader] loader instance + def initialize(path, options = {}, chunker: LangChain::Chunker::Text) @options = options @path = path @chunker = chunker @@ -63,7 +63,7 @@ def directory? 
# Load data from a file or URL # - # loader = Langchain::Loader.new("README.md") + # loader = LangChain::Loader.new("README.md") # # Load data using default processor for the file # loader.load # @@ -109,7 +109,7 @@ def load_from_path def load_from_directory(&block) Dir.glob(File.join(@path, "**/*")).map do |file| # Only load and add to result files with supported extensions - Langchain::Loader.new(file, @options).load(&block) + LangChain::Loader.new(file, @options).load(&block) rescue UnknownFormatError.new("Unknown format: #{source_type}") end.flatten.compact @@ -125,13 +125,13 @@ def process_data(data, &block) processor_klass.new(@options).parse(@raw_data) end - Langchain::Data.new(result, source: @options[:source], chunker: @chunker) + LangChain::Data.new(result, source: @options[:source], chunker: @chunker) end def processor_klass raise UnknownFormatError.new("Unknown format: #{source_type}") unless (kind = find_processor) - Langchain::Processors.const_get(kind) + LangChain::Processors.const_get(kind) end def find_processor @@ -139,11 +139,11 @@ def find_processor end def processor_matches?(constant, value) - Langchain::Processors.const_get(constant).include?(value) + LangChain::Processors.const_get(constant).include?(value) end def processors - Langchain::Processors.constants + LangChain::Processors.constants end def source_type diff --git a/lib/langchain/output_parsers/base.rb b/lib/langchain/output_parsers/base.rb index 7c929b1f4..7f036b05d 100644 --- a/lib/langchain/output_parsers/base.rb +++ b/lib/langchain/output_parsers/base.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::OutputParsers +module LangChain::OutputParsers # Structured output parsers from the LLM. # # @abstract diff --git a/lib/langchain/output_parsers/output_fixing_parser.rb b/lib/langchain/output_parsers/output_fixing_parser.rb index eaf92487f..10167f04b 100644 --- a/lib/langchain/output_parsers/output_fixing_parser.rb +++ b/lib/langchain/output_parsers/output_fixing_parser.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::OutputParsers +module LangChain::OutputParsers # = Output Fixing Parser # class OutputFixingParser < Base @@ -8,13 +8,13 @@ class OutputFixingParser < Base # Initializes a new instance of the class. 
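Pulling the `LangChain::Loader` usage comments above together into one runnable sketch (URLs, file names, and the block body are illustrative; the `chunker:` keyword and the block signature are taken from this diff):

```ruby
require "langchain"

# Load a local file with the default processor for its extension
data = LangChain::Loader.load("README.md")

# Load a URL and override the chunker used downstream
data = LangChain::Loader.new(
  "https://example.com/docs/README.md",
  chunker: LangChain::Chunker::RecursiveText
).load

# Custom processing: the block receives the raw data and the loader options,
# and whatever it returns becomes the processed payload
data = LangChain::Loader.load("README.md") do |raw_data, options|
  raw_data.lines.first(10).join
end

data # => a LangChain::Data instance wrapping the processed content
```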
# - # @param llm [Langchain::LLM] The LLM used in the fixing process - # @param parser [Langchain::OutputParsers] The parser originally used which resulted in parsing error - # @param prompt [Langchain::Prompt::PromptTemplate] + # @param llm [LangChain::LLM] The LLM used in the fixing process + # @param parser [LangChain::OutputParsers] The parser originally used which resulted in parsing error + # @param prompt [LangChain::Prompt::PromptTemplate] def initialize(llm:, parser:, prompt:) - raise ArgumentError.new("llm must be an instance of Langchain::LLM got: #{llm.class}") unless llm.is_a?(Langchain::LLM::Base) - raise ArgumentError.new("parser must be an instance of Langchain::OutputParsers got #{parser.class}") unless parser.is_a?(Langchain::OutputParsers::Base) - raise ArgumentError.new("prompt must be an instance of Langchain::Prompt::PromptTemplate got #{prompt.class}") unless prompt.is_a?(Langchain::Prompt::PromptTemplate) + raise ArgumentError.new("llm must be an instance of LangChain::LLM got: #{llm.class}") unless llm.is_a?(LangChain::LLM::Base) + raise ArgumentError.new("parser must be an instance of LangChain::OutputParsers got #{parser.class}") unless parser.is_a?(LangChain::OutputParsers::Base) + raise ArgumentError.new("prompt must be an instance of LangChain::Prompt::PromptTemplate got #{prompt.class}") unless prompt.is_a?(LangChain::Prompt::PromptTemplate) @llm = llm @parser = parser @prompt = prompt @@ -59,9 +59,9 @@ def parse(completion) # Creates a new instance of the class using the given JSON::Schema. # - # @param llm [Langchain::LLM] The LLM used in the fixing process - # @param parser [Langchain::OutputParsers] The parser originally used which resulted in parsing error - # @param prompt [Langchain::Prompt::PromptTemplate] + # @param llm [LangChain::LLM] The LLM used in the fixing process + # @param parser [LangChain::OutputParsers] The parser originally used which resulted in parsing error + # @param prompt [LangChain::Prompt::PromptTemplate] # # @return [Object] A new instance of the class def self.from_llm(llm:, parser:, prompt: nil) @@ -71,8 +71,8 @@ def self.from_llm(llm:, parser:, prompt: nil) private private_class_method def self.naive_fix_prompt - Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/output_parsers/prompts/naive_fix_prompt.yaml") + LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/output_parsers/prompts/naive_fix_prompt.yaml") ) end end diff --git a/lib/langchain/output_parsers/output_parser_exception.rb b/lib/langchain/output_parsers/output_parser_exception.rb index 7221b3cb6..523dd864a 100644 --- a/lib/langchain/output_parsers/output_parser_exception.rb +++ b/lib/langchain/output_parsers/output_parser_exception.rb @@ -1,4 +1,4 @@ -class Langchain::OutputParsers::OutputParserException < StandardError +class LangChain::OutputParsers::OutputParserException < StandardError def initialize(message, text) @message = message @text = text diff --git a/lib/langchain/output_parsers/structured_output_parser.rb b/lib/langchain/output_parsers/structured_output_parser.rb index fad81ca04..60715b062 100644 --- a/lib/langchain/output_parsers/structured_output_parser.rb +++ b/lib/langchain/output_parsers/structured_output_parser.rb @@ -2,7 +2,7 @@ require "json-schema" -module Langchain::OutputParsers +module LangChain::OutputParsers # = Structured Output Parser class StructuredOutputParser < Base attr_reader :schema diff --git a/lib/langchain/processors/base.rb b/lib/langchain/processors/base.rb index 
77724f0c7..25114a757 100644 --- a/lib/langchain/processors/base.rb +++ b/lib/langchain/processors/base.rb @@ -1,10 +1,10 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors # Processors load and parse/process various data types such as CSVs, PDFs, Word documents, HTML pages, and others. class Base - include Langchain::DependencyHelper + include LangChain::DependencyHelper EXTENSIONS = [] CONTENT_TYPES = [] diff --git a/lib/langchain/processors/csv.rb b/lib/langchain/processors/csv.rb index 56994adc7..a6d5529af 100644 --- a/lib/langchain/processors/csv.rb +++ b/lib/langchain/processors/csv.rb @@ -2,7 +2,7 @@ require "csv" -module Langchain +module LangChain module Processors class CSV < Base class InvalidChunkMode < StandardError; end diff --git a/lib/langchain/processors/docx.rb b/lib/langchain/processors/docx.rb index a7ad0589b..850c22c67 100644 --- a/lib/langchain/processors/docx.rb +++ b/lib/langchain/processors/docx.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Docx < Base EXTENSIONS = [".docx"] diff --git a/lib/langchain/processors/eml.rb b/lib/langchain/processors/eml.rb index eb47f7bec..fe37f5f92 100644 --- a/lib/langchain/processors/eml.rb +++ b/lib/langchain/processors/eml.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Eml < Base EXTENSIONS = [".eml"] diff --git a/lib/langchain/processors/html.rb b/lib/langchain/processors/html.rb index bc65fef7c..4717ef4d9 100644 --- a/lib/langchain/processors/html.rb +++ b/lib/langchain/processors/html.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class HTML < Base EXTENSIONS = [".html", ".htm"] diff --git a/lib/langchain/processors/json.rb b/lib/langchain/processors/json.rb index 5d6198d85..66214ef3e 100644 --- a/lib/langchain/processors/json.rb +++ b/lib/langchain/processors/json.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class JSON < Base EXTENSIONS = [".json"] diff --git a/lib/langchain/processors/jsonl.rb b/lib/langchain/processors/jsonl.rb index f552bbfce..fe39d5109 100644 --- a/lib/langchain/processors/jsonl.rb +++ b/lib/langchain/processors/jsonl.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class JSONL < Base EXTENSIONS = [".jsonl"] diff --git a/lib/langchain/processors/markdown.rb b/lib/langchain/processors/markdown.rb index 50901c424..1457e85f8 100644 --- a/lib/langchain/processors/markdown.rb +++ b/lib/langchain/processors/markdown.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Markdown < Base EXTENSIONS = [".markdown", ".md"] diff --git a/lib/langchain/processors/pdf.rb b/lib/langchain/processors/pdf.rb index 67b1f2ce1..35978b5c8 100644 --- a/lib/langchain/processors/pdf.rb +++ b/lib/langchain/processors/pdf.rb @@ -2,7 +2,7 @@ require "pdf-reader" -module Langchain +module LangChain module Processors class PDF < Base EXTENSIONS = [".pdf"] diff --git a/lib/langchain/processors/pptx.rb b/lib/langchain/processors/pptx.rb index 977ed1176..973387474 100644 --- a/lib/langchain/processors/pptx.rb +++ b/lib/langchain/processors/pptx.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Pptx < Base EXTENSIONS = [".pptx"] diff --git a/lib/langchain/processors/text.rb 
b/lib/langchain/processors/text.rb index 0be7e1f5d..164427833 100644 --- a/lib/langchain/processors/text.rb +++ b/lib/langchain/processors/text.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Text < Base EXTENSIONS = [".txt"] diff --git a/lib/langchain/processors/xls.rb b/lib/langchain/processors/xls.rb index 12dbb8bf3..2c1cd6a4c 100644 --- a/lib/langchain/processors/xls.rb +++ b/lib/langchain/processors/xls.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Xls < Base EXTENSIONS = [".xls"].freeze diff --git a/lib/langchain/processors/xlsx.rb b/lib/langchain/processors/xlsx.rb index 00a56a39c..5da2ccf16 100644 --- a/lib/langchain/processors/xlsx.rb +++ b/lib/langchain/processors/xlsx.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Processors class Xlsx < Base EXTENSIONS = [".xlsx", ".xlsm"].freeze diff --git a/lib/langchain/prompt.rb b/lib/langchain/prompt.rb index 9700027eb..fac2767f0 100644 --- a/lib/langchain/prompt.rb +++ b/lib/langchain/prompt.rb @@ -1,4 +1,4 @@ -module Langchain +module LangChain module Prompt include Loading end diff --git a/lib/langchain/prompt/base.rb b/lib/langchain/prompt/base.rb index 6f82e0ade..f88dc41aa 100644 --- a/lib/langchain/prompt/base.rb +++ b/lib/langchain/prompt/base.rb @@ -3,7 +3,7 @@ require "strscan" require "yaml" -module Langchain::Prompt +module LangChain::Prompt # Prompts are structured inputs to the LLMs. Prompts provide instructions, context and other user input that LLMs use to generate responses. # # @abstract @@ -34,7 +34,7 @@ def to_h # def validate(template:, input_variables:) input_variables_set = input_variables.uniq - variables_from_template = Langchain::Prompt::Base.extract_variables_from_template(template) + variables_from_template = LangChain::Prompt::Base.extract_variables_from_template(template) missing_variables = variables_from_template - input_variables_set extra_variables = input_variables_set - variables_from_template diff --git a/lib/langchain/prompt/few_shot_prompt_template.rb b/lib/langchain/prompt/few_shot_prompt_template.rb index 8b88f0888..d016223be 100644 --- a/lib/langchain/prompt/few_shot_prompt_template.rb +++ b/lib/langchain/prompt/few_shot_prompt_template.rb @@ -1,14 +1,14 @@ # frozen_string_literal: true -module Langchain::Prompt +module LangChain::Prompt # = Few Shot Prompt Templates # # Create a prompt with a few shot examples: # - # prompt = Langchain::Prompt::FewShotPromptTemplate.new( + # prompt = LangChain::Prompt::FewShotPromptTemplate.new( # prefix: "Write antonyms for the following words.", # suffix: "Input: {adjective}\nOutput:", - # example_prompt: Langchain::Prompt::PromptTemplate.new( + # example_prompt: LangChain::Prompt::PromptTemplate.new( # input_variables: ["input", "output"], # template: "Input: {input}\nOutput: {output}" # ), @@ -38,12 +38,12 @@ module Langchain::Prompt # # Loading a new prompt template using a JSON file: # - # prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") + # prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") # prompt.prefix # "Write antonyms for the following words." 
# # Loading a new prompt template using a YAML file: # - # prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") + # prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") # prompt.input_variables #=> ["adjective", "content"] # class FewShotPromptTemplate < Base diff --git a/lib/langchain/prompt/loading.rb b/lib/langchain/prompt/loading.rb index ba2a89c7f..8158eafe6 100644 --- a/lib/langchain/prompt/loading.rb +++ b/lib/langchain/prompt/loading.rb @@ -4,7 +4,7 @@ require "pathname" require "yaml" -module Langchain::Prompt +module LangChain::Prompt TYPE_TO_LOADER = { "prompt" => ->(config) { load_prompt(config) }, "few_shot" => ->(config) { load_few_shot_prompt(config) } @@ -79,7 +79,7 @@ def load_few_shot_prompt(config) def load_from_config(config) # If `_type` key is not present in the configuration hash, add it with a default value of `prompt` unless config.key?("_type") - Langchain.logger.warn("#{self.class} - No `_type` key found, defaulting to `prompt`") + LangChain.logger.warn("#{self.class} - No `_type` key found, defaulting to `prompt`") config["_type"] = "prompt" end diff --git a/lib/langchain/prompt/prompt_template.rb b/lib/langchain/prompt/prompt_template.rb index 857c074b1..74798eea6 100644 --- a/lib/langchain/prompt/prompt_template.rb +++ b/lib/langchain/prompt/prompt_template.rb @@ -1,21 +1,21 @@ # frozen_string_literal: true -module Langchain::Prompt +module LangChain::Prompt # = Prompt Templates # # Create a prompt with one input variable: # - # prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) + # prompt = LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) # prompt.format(adjective: "funny") # "Tell me a funny joke." # # Create a prompt with multiple input variables: # - # prompt = Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) + # prompt = LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke about {content}.", input_variables: ["adjective", "content"]) # prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens." # # Creating a PromptTemplate using just a prompt and no input_variables: # - # prompt = Langchain::Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.") + # prompt = LangChain::Prompt::PromptTemplate.from_template("Tell me a {adjective} joke about {content}.") # prompt.input_variables # ["adjective", "content"] # prompt.format(adjective: "funny", content: "chickens") # "Tell me a funny joke about chickens." 
# @@ -25,11 +25,11 @@ module Langchain::Prompt # # Loading a new prompt template using a JSON file: # - # prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") + # prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") # prompt.input_variables # ["adjective", "content"] # # Loading a new prompt template using a YAML file: - # prompt = Langchain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") + # prompt = LangChain::Prompt.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") # prompt.input_variables #=> ["adjective", "content"] # class PromptTemplate < Base diff --git a/lib/langchain/tool/calculator.rb b/lib/langchain/tool/calculator.rb index 53c7b46ad..c175c26c0 100644 --- a/lib/langchain/tool/calculator.rb +++ b/lib/langchain/tool/calculator.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A calculator tool # @@ -8,11 +8,11 @@ module Langchain::Tool # gem "eqn", "~> 1.6.5" # # Usage: - # calculator = Langchain::Tool::Calculator.new + # calculator = LangChain::Tool::Calculator.new # class Calculator - extend Langchain::ToolDefinition - include Langchain::DependencyHelper + extend LangChain::ToolDefinition + include LangChain::DependencyHelper define_function :execute, description: "Evaluates a pure math expression" do property :input, type: "string", description: "Math expression", required: true @@ -25,9 +25,9 @@ def initialize # Evaluates a pure math expression # # @param input [String] math expression - # @return [Langchain::Tool::Response] Answer + # @return [LangChain::Tool::Response] Answer def execute(input:) - Langchain.logger.debug("#{self.class} - Executing \"#{input}\"") + LangChain.logger.debug("#{self.class} - Executing \"#{input}\"") result = Eqn::Calculator.calc(input) tool_response(content: result) diff --git a/lib/langchain/tool/database.rb b/lib/langchain/tool/database.rb index 0580d2ed2..8f44d8635 100644 --- a/lib/langchain/tool/database.rb +++ b/lib/langchain/tool/database.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # Connects to a SQL database, executes SQL queries, and outputs DB schema for Agents to use # @@ -8,11 +8,11 @@ module Langchain::Tool # gem "sequel", "~> 5.87.0" # # Usage: - # database = Langchain::Tool::Database.new(connection_string: "postgres://user:password@localhost:5432/db_name") + # database = LangChain::Tool::Database.new(connection_string: "postgres://user:password@localhost:5432/db_name") # class Database - extend Langchain::ToolDefinition - include Langchain::DependencyHelper + extend LangChain::ToolDefinition + include LangChain::DependencyHelper define_function :list_tables, description: "Database Tool: Returns a list of tables in the database" @@ -49,7 +49,7 @@ def initialize(connection_string:, tables: [], exclude_tables: []) # Database Tool: Returns a list of tables in the database # - # @return [Langchain::Tool::Response] List of tables in the database + # @return [LangChain::Tool::Response] List of tables in the database def list_tables tool_response(content: db.tables) end @@ -57,11 +57,11 @@ def list_tables # Database Tool: Returns the schema for a list of tables # # @param tables [Array] The tables to describe. 
- # @return [Langchain::Tool::Response] The schema for the tables + # @return [LangChain::Tool::Response] The schema for the tables def describe_tables(tables: []) return "No tables specified" if tables.empty? - Langchain.logger.debug("#{self.class} - Describing tables: #{tables}") + LangChain.logger.debug("#{self.class} - Describing tables: #{tables}") result = tables .map do |table| @@ -74,9 +74,9 @@ def describe_tables(tables: []) # Database Tool: Returns the database schema # - # @return [Langchain::Tool::Response] Database schema + # @return [LangChain::Tool::Response] Database schema def dump_schema - Langchain.logger.debug("#{self.class} - Dumping schema tables and keys") + LangChain.logger.debug("#{self.class} - Dumping schema tables and keys") schemas = db.tables.map do |table| describe_table(table) @@ -88,13 +88,13 @@ def dump_schema # Database Tool: Executes a SQL query and returns the results # # @param input [String] SQL query to be executed - # @return [Langchain::Tool::Response] Results from the SQL query + # @return [LangChain::Tool::Response] Results from the SQL query def execute(input:) - Langchain.logger.debug("#{self.class} - Executing \"#{input}\"") + LangChain.logger.debug("#{self.class} - Executing \"#{input}\"") tool_response(content: db[input].to_a) rescue Sequel::DatabaseError => e - Langchain.logger.error("#{self.class} - #{e.message}") + LangChain.logger.error("#{self.class} - #{e.message}") tool_response(content: e.message) end @@ -103,7 +103,7 @@ def execute(input:) # Describes a table and its schema # # @param table [String] The table to describe - # @return [Langchain::Tool::Response] The schema for the table + # @return [LangChain::Tool::Response] The schema for the table def describe_table(table) # TODO: There's probably a clear way to do all of this below diff --git a/lib/langchain/tool/file_system.rb b/lib/langchain/tool/file_system.rb index 472a730fd..b935ff37f 100644 --- a/lib/langchain/tool/file_system.rb +++ b/lib/langchain/tool/file_system.rb @@ -1,14 +1,14 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A tool that wraps the Ruby file system classes. 
# # Usage: - # file_system = Langchain::Tool::FileSystem.new + # file_system = LangChain::Tool::FileSystem.new # class FileSystem - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :list_directory, description: "File System Tool: Lists out the content of a specified directory" do property :directory_path, type: "string", description: "Directory path to list", required: true diff --git a/lib/langchain/tool/google_search.rb b/lib/langchain/tool/google_search.rb index d3d78fd7f..3e4a90478 100644 --- a/lib/langchain/tool/google_search.rb +++ b/lib/langchain/tool/google_search.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # Wrapper around SerpApi's Google Search API # @@ -8,12 +8,12 @@ module Langchain::Tool # gem "google_search_results", "~> 2.0.0" # # Usage: - # search = Langchain::Tool::GoogleSearch.new(api_key: "YOUR_API_KEY") + # search = LangChain::Tool::GoogleSearch.new(api_key: "YOUR_API_KEY") # search.execute(input: "What is the capital of France?") # class GoogleSearch - extend Langchain::ToolDefinition - include Langchain::DependencyHelper + extend LangChain::ToolDefinition + include LangChain::DependencyHelper define_function :execute, description: "Executes Google Search and returns the result" do property :input, type: "string", description: "Search query", required: true @@ -25,7 +25,7 @@ class GoogleSearch # Initializes the Google Search tool # # @param api_key [String] Search API key - # @return [Langchain::Tool::GoogleSearch] Google search tool + # @return [LangChain::Tool::GoogleSearch] Google search tool # def initialize(api_key:) depends_on "google_search_results" @@ -36,9 +36,9 @@ def initialize(api_key:) # Executes Google Search and returns the result # # @param input [String] search query - # @return [Langchain::Tool::Response] Answer + # @return [LangChain::Tool::Response] Answer def execute(input:) - Langchain.logger.debug("#{self.class} - Executing \"#{input}\"") + LangChain.logger.debug("#{self.class} - Executing \"#{input}\"") results = execute_search(input: input) diff --git a/lib/langchain/tool/news_retriever.rb b/lib/langchain/tool/news_retriever.rb index 3a7993b5f..827fd4401 100644 --- a/lib/langchain/tool/news_retriever.rb +++ b/lib/langchain/tool/news_retriever.rb @@ -1,15 +1,15 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A tool that retrieves latest news from various sources via https://newsapi.org/. # An API key needs to be obtained from https://newsapi.org/ to use this tool. # # Usage: - # news_retriever = Langchain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"]) + # news_retriever = LangChain::Tool::NewsRetriever.new(api_key: ENV["NEWS_API_KEY"]) # class NewsRetriever - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :get_everything, description: "News Retriever: Search through millions of articles from over 150,000 large and small news sources and blogs" do property :q, type: "string", description: 'Keywords or phrases to search for in the article title and body. Surround phrases with quotes (") for exact match. Alternatively you can use the AND / OR / NOT keywords, and optionally group these with parenthesis. Must be URL-encoded' @@ -57,7 +57,7 @@ def initialize(api_key: ENV["NEWS_API_KEY"]) # @param page_size [Integer] The number of results to return per page. 20 is the API's default, 100 is the maximum. Our default is 5. # @param page [Integer] Use this to page through the results. 
# - # @return [Langchain::Tool::Response] JSON response + # @return [LangChain::Tool::Response] JSON response def get_everything( q: nil, search_in: nil, @@ -71,7 +71,7 @@ def get_everything( page_size: 5, # The API default is 20 but that's too many. page: nil ) - Langchain.logger.debug("#{self.class} - Retrieving all news") + LangChain.logger.debug("#{self.class} - Retrieving all news") params = {apiKey: @api_key} params[:q] = q if q @@ -99,7 +99,7 @@ def get_everything( # @param page_size [Integer] The number of results to return per page. 20 is the API's default, 100 is the maximum. Our default is 5. # @param page [Integer] Use this to page through the results. # - # @return [Langchain::Tool::Response] JSON response + # @return [LangChain::Tool::Response] JSON response def get_top_headlines( country: nil, category: nil, @@ -108,7 +108,7 @@ def get_top_headlines( page_size: 5, page: nil ) - Langchain.logger.debug("#{self.class} - Retrieving top news headlines") + LangChain.logger.debug("#{self.class} - Retrieving top news headlines") params = {apiKey: @api_key} params[:country] = country if country @@ -128,13 +128,13 @@ # @param language [String] The 2-letter ISO-639-1 code of the language you want to get headlines for. Possible options: ar, de, en, es, fr, he, it, nl, no, pt, ru, se, ud, zh. # @param country [String] The 2-letter ISO 3166-1 code of the country you want to get headlines for. Possible options: ae, ar, at, au, be, bg, br, ca, ch, cn, co, cu, cz, de, eg, fr, gb, gr, hk, hu, id, ie, il, in, it, jp, kr, lt, lv, ma, mx, my, ng, nl, no, nz, ph, pl, pt, ro, rs, ru, sa, se, sg, si, sk, th, tr, tw, ua, us, ve, za. # - # @return [Langchain::Tool::Response] JSON response + # @return [LangChain::Tool::Response] JSON response def get_sources( category: nil, language: nil, country: nil ) - Langchain.logger.debug("#{self.class} - Retrieving news sources") + LangChain.logger.debug("#{self.class} - Retrieving news sources") params = {apiKey: @api_key} params[:country] = country if country diff --git a/lib/langchain/tool/ruby_code_interpreter.rb b/lib/langchain/tool/ruby_code_interpreter.rb index 4531d5f06..5d95d41b1 100644 --- a/lib/langchain/tool/ruby_code_interpreter.rb +++ b/lib/langchain/tool/ruby_code_interpreter.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A tool that executes Ruby code in a sandboxed environment. # @@ -8,11 +8,11 @@ module Langchain::Tool # gem "safe_ruby", "~> 1.0.5" # # Usage: - # interpreter = Langchain::Tool::RubyCodeInterpreter.new + # interpreter = LangChain::Tool::RubyCodeInterpreter.new # class RubyCodeInterpreter - extend Langchain::ToolDefinition - include Langchain::DependencyHelper + extend LangChain::ToolDefinition + include LangChain::DependencyHelper define_function :execute, description: "Executes Ruby code in a sandboxed environment" do property :input, type: "string", description: "Ruby code expression", required: true @@ -27,9 +27,9 @@ def initialize(timeout: 30) # Executes Ruby code in a sandboxed environment.
# # @param input [String] ruby code expression - # @return [Langchain::Tool::Response] Answer + # @return [LangChain::Tool::Response] Answer def execute(input:) - Langchain.logger.debug("#{self.class} - Executing \"#{input}\"") + LangChain.logger.debug("#{self.class} - Executing \"#{input}\"") tool_response(content: safe_eval(input)) end diff --git a/lib/langchain/tool/tavily.rb b/lib/langchain/tool/tavily.rb index 38ff5c1cb..36d994461 100644 --- a/lib/langchain/tool/tavily.rb +++ b/lib/langchain/tool/tavily.rb @@ -1,15 +1,15 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # Tavily Search is a robust search API tailored specifically for LLM Agents. # It seamlessly integrates with diverse data sources to ensure a superior, relevant search experience. # # Usage: - # tavily = Langchain::Tool::Tavily.new(api_key: ENV["TAVILY_API_KEY"]) + # tavily = LangChain::Tool::Tavily.new(api_key: ENV["TAVILY_API_KEY"]) # class Tavily - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :search, description: "Tavily Tool: Robust search API" do property :query, type: "string", description: "The search query string", required: true @@ -41,7 +41,7 @@ def initialize(api_key:) # @param include_domains [Array] A list of domains to specifically include in the search results. Default is None, which includes all domains. # @param exclude_domains [Array] A list of domains to specifically exclude from the search results. Default is None, which doesn't exclude any domains. # - # @return [Langchain::Tool::Response] The search results in JSON format. + # @return [LangChain::Tool::Response] The search results in JSON format. def search( query:, search_depth: "basic", diff --git a/lib/langchain/tool/vectorsearch.rb b/lib/langchain/tool/vectorsearch.rb index 347526e01..e9228f37d 100644 --- a/lib/langchain/tool/vectorsearch.rb +++ b/lib/langchain/tool/vectorsearch.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A tool wraps vectorsearch classes # # Usage: # # Initialize the LLM that will be used to generate embeddings - # ollama = Langchain::LLM::Ollama.new(url: ENV["OLLAMA_URL"] - # chroma = Langchain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "my_index", llm: ollama) + # ollama = LangChain::LLM::Ollama.new(url: ENV["OLLAMA_URL"] + # chroma = LangChain::Vectorsearch::Chroma.new(url: ENV["CHROMA_URL"], index_name: "my_index", llm: ollama) # # # This tool can now be used by the Assistant - # vectorsearch_tool = Langchain::Tool::Vectorsearch.new(vectorsearch: chroma) + # vectorsearch_tool = LangChain::Tool::Vectorsearch.new(vectorsearch: chroma) # class Vectorsearch - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :similarity_search, description: "Vectorsearch: Retrieves relevant document for the query" do property :query, type: "string", description: "Query to find similar documents for", required: true @@ -24,7 +24,7 @@ class Vectorsearch # Initializes the Vectorsearch tool # - # @param vectorsearch [Langchain::Vectorsearch::Base] Vectorsearch instance to use + # @param vectorsearch [LangChain::Vectorsearch::Base] Vectorsearch instance to use def initialize(vectorsearch:) @vectorsearch = vectorsearch end @@ -33,7 +33,7 @@ def initialize(vectorsearch:) # # @param query [String] The query to search for # @param k [Integer] The number of results to return - # @return [Langchain::Tool::Response] The response from the server + # @return 
[LangChain::Tool::Response] The response from the server def similarity_search(query:, k: 4) result = vectorsearch.similarity_search(query:, k: 4) tool_response(content: result) diff --git a/lib/langchain/tool/weather.rb b/lib/langchain/tool/weather.rb index ad9d8dfc9..ad97d1fef 100644 --- a/lib/langchain/tool/weather.rb +++ b/lib/langchain/tool/weather.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # A weather tool that gets current weather data # @@ -8,14 +8,14 @@ module Langchain::Tool # Forecast and historical data require registration with credit card, so not supported yet. # # Usage: - # weather = Langchain::Tool::Weather.new(api_key: ENV["OPEN_WEATHER_API_KEY"]) - # assistant = Langchain::Assistant.new( + # weather = LangChain::Tool::Weather.new(api_key: ENV["OPEN_WEATHER_API_KEY"]) + # assistant = LangChain::Assistant.new( # llm: llm, # tools: [weather] # ) # class Weather - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition define_function :get_current_weather, description: "Returns current weather for a city" do property :city, @@ -44,7 +44,7 @@ def initialize(api_key:) def get_current_weather(city:, state_code:, country_code: nil, units: "imperial") validate_input(city: city, state_code: state_code, country_code: country_code, units: units) - Langchain.logger.debug("#{self.class} - get_current_weather #{{city:, state_code:, country_code:, units:}}") + LangChain.logger.debug("#{self.class} - get_current_weather #{{city:, state_code:, country_code:, units:}}") fetch_current_weather(city: city, state_code: state_code, country_code: country_code, units: units) end @@ -74,9 +74,9 @@ def send_request(path:, params:) request = Net::HTTP::Get.new(uri.request_uri) request["Content-Type"] = "application/json" - Langchain.logger.debug("#{self.class} - Sending request to OpenWeatherMap API #{{path: path, params: params.except(:appid)}}") + LangChain.logger.debug("#{self.class} - Sending request to OpenWeatherMap API #{{path: path, params: params.except(:appid)}}") response = http.request(request) - Langchain.logger.debug("#{self.class} - Received response from OpenWeatherMap API #{{status: response.code}}") + LangChain.logger.debug("#{self.class} - Received response from OpenWeatherMap API #{{status: response.code}}") if response.code == "200" JSON.parse(response.body) diff --git a/lib/langchain/tool/wikipedia.rb b/lib/langchain/tool/wikipedia.rb index d521d3676..f67f957d1 100644 --- a/lib/langchain/tool/wikipedia.rb +++ b/lib/langchain/tool/wikipedia.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Tool +module LangChain::Tool # # Tool that adds the capability to search using the Wikipedia API # @@ -8,12 +8,12 @@ module Langchain::Tool # gem "wikipedia-client", "~> 1.17.0" # # Usage: - # wikipedia = Langchain::Tool::Wikipedia.new + # wikipedia = LangChain::Tool::Wikipedia.new # wikipedia.execute(input: "The Roman Empire") # class Wikipedia - extend Langchain::ToolDefinition - include Langchain::DependencyHelper + extend LangChain::ToolDefinition + include LangChain::DependencyHelper define_function :execute, description: "Executes Wikipedia API search and returns the answer" do property :input, type: "string", description: "Search query", required: true @@ -27,9 +27,9 @@ def initialize # Executes Wikipedia API search and returns the answer # # @param input [String] search query - # @return [Langchain::Tool::Response] Answer + # @return [LangChain::Tool::Response] Answer def execute(input:) - 
Langchain.logger.debug("#{self.class} - Executing \"#{input}\"") + LangChain.logger.debug("#{self.class} - Executing \"#{input}\"") page = ::Wikipedia.find(input) # It would be nice to figure out a way to provide page.content but the LLM token limit is an issue diff --git a/lib/langchain/tool_definition.rb b/lib/langchain/tool_definition.rb index 000da594c..aa88da85d 100644 --- a/lib/langchain/tool_definition.rb +++ b/lib/langchain/tool_definition.rb @@ -6,7 +6,7 @@ # # == Usage # -# 1. Extend your class with {Langchain::ToolDefinition} +# 1. Extend your class with {LangChain::ToolDefinition} # 2. Use {#define_function} to define each function of the tool # # == Key Concepts @@ -34,7 +34,7 @@ # end # end # -module Langchain::ToolDefinition +module LangChain::ToolDefinition # Defines a function for the tool # # @param method_name [Symbol] Name of the method to define @@ -57,6 +57,7 @@ def function_schemas def tool_name @tool_name ||= name .gsub("::", "_") + .gsub("LangChain", "Langchain") .gsub(/(?<=[A-Z])(?=[A-Z][a-z])|(?<=[a-z\d])(?=[A-Z])/, "_") .downcase end @@ -69,9 +70,9 @@ module InstanceMethods # Create a tool response # @param content [String, nil] The content of the tool response # @param image_url [String, nil] The URL of an image - # @return [Langchain::ToolResponse] The tool response + # @return [LangChain::ToolResponse] The tool response def tool_response(content: nil, image_url: nil) - Langchain::ToolResponse.new(content: content, image_url: image_url) + LangChain::ToolResponse.new(content: content, image_url: image_url) end end diff --git a/lib/langchain/tool_response.rb b/lib/langchain/tool_response.rb index f0a8c62d4..69d8c9e49 100644 --- a/lib/langchain/tool_response.rb +++ b/lib/langchain/tool_response.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain # ToolResponse represents the standardized output of a tool. # It can contain either text content or an image URL. 
class ToolResponse diff --git a/lib/langchain/utils/cosine_similarity.rb b/lib/langchain/utils/cosine_similarity.rb index 2c9cbd4c4..c3cdb02a7 100644 --- a/lib/langchain/utils/cosine_similarity.rb +++ b/lib/langchain/utils/cosine_similarity.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Utils class CosineSimilarity attr_reader :vector_a, :vector_b diff --git a/lib/langchain/utils/hash_transformer.rb b/lib/langchain/utils/hash_transformer.rb index b70d0deaa..c0443c847 100644 --- a/lib/langchain/utils/hash_transformer.rb +++ b/lib/langchain/utils/hash_transformer.rb @@ -1,4 +1,4 @@ -module Langchain +module LangChain module Utils class HashTransformer def self.symbolize_keys(hash) diff --git a/lib/langchain/utils/image_wrapper.rb b/lib/langchain/utils/image_wrapper.rb index 347167b6a..eb89bad7a 100644 --- a/lib/langchain/utils/image_wrapper.rb +++ b/lib/langchain/utils/image_wrapper.rb @@ -3,7 +3,7 @@ require "open-uri" require "base64" -module Langchain +module LangChain module Utils class ImageWrapper attr_reader :image_url diff --git a/lib/langchain/utils/to_boolean.rb b/lib/langchain/utils/to_boolean.rb index bfe3beeeb..1e451e017 100644 --- a/lib/langchain/utils/to_boolean.rb +++ b/lib/langchain/utils/to_boolean.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain module Utils class ToBoolean TRUTHABLE_STRINGS = %w[1 true t yes y] diff --git a/lib/langchain/vectorsearch/base.rb b/lib/langchain/vectorsearch/base.rb index f9b6c9706..feab2b612 100644 --- a/lib/langchain/vectorsearch/base.rb +++ b/lib/langchain/vectorsearch/base.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # = Vector Databases # A vector database a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes. Each vector has a certain number of dimensions, which can range from tens to thousands, depending on the complexity and granularity of the data. # # == Available vector databases # - # - {Langchain::Vectorsearch::Chroma} - # - {Langchain::Vectorsearch::Elasticsearch} - # - {Langchain::Vectorsearch::Hnswlib} - # - {Langchain::Vectorsearch::Milvus} - # - {Langchain::Vectorsearch::Pgvector} - # - {Langchain::Vectorsearch::Pinecone} - # - {Langchain::Vectorsearch::Qdrant} - # - {Langchain::Vectorsearch::Weaviate} + # - {LangChain::Vectorsearch::Chroma} + # - {LangChain::Vectorsearch::Elasticsearch} + # - {LangChain::Vectorsearch::Hnswlib} + # - {LangChain::Vectorsearch::Milvus} + # - {LangChain::Vectorsearch::Pgvector} + # - {LangChain::Vectorsearch::Pinecone} + # - {LangChain::Vectorsearch::Qdrant} + # - {LangChain::Vectorsearch::Weaviate} # # == Usage # @@ -21,19 +21,19 @@ module Langchain::Vectorsearch # 2. Review its documentation to install the required gems, and create an account, get an API key, etc # 3. Instantiate the vector database class: # - # weaviate = Langchain::Vectorsearch::Weaviate.new( + # weaviate = LangChain::Vectorsearch::Weaviate.new( # url: ENV["WEAVIATE_URL"], # api_key: ENV["WEAVIATE_API_KEY"], # index_name: "Documents", - # llm: Langchain::LLM::OpenAI.new(api_key:) + # llm: LangChain::LLM::OpenAI.new(api_key:) # ) # # # You can instantiate other supported vector databases the same way: - # milvus = Langchain::Vectorsearch::Milvus.new(...) - # qdrant = Langchain::Vectorsearch::Qdrant.new(...) - # pinecone = Langchain::Vectorsearch::Pinecone.new(...) 
- # chroma = Langchain::Vectorsearch::Chroma.new(...) - # pgvector = Langchain::Vectorsearch::Pgvector.new(...) + # milvus = LangChain::Vectorsearch::Milvus.new(...) + # qdrant = LangChain::Vectorsearch::Qdrant.new(...) + # pinecone = LangChain::Vectorsearch::Pinecone.new(...) + # chroma = LangChain::Vectorsearch::Chroma.new(...) + # pgvector = LangChain::Vectorsearch::Pgvector.new(...) # # == Schema Creation # @@ -48,10 +48,10 @@ module Langchain::Vectorsearch # You can add data with: # 1. `add_data(path:, paths:)` to add any kind of data type # - # my_pdf = Langchain.root.join("path/to/my.pdf") - # my_text = Langchain.root.join("path/to/my.txt") - # my_docx = Langchain.root.join("path/to/my.docx") - # my_csv = Langchain.root.join("path/to/my.csv") + # my_pdf = LangChain.root.join("path/to/my.pdf") + # my_text = LangChain.root.join("path/to/my.txt") + # my_docx = LangChain.root.join("path/to/my.docx") + # my_csv = LangChain.root.join("path/to/my.csv") # # search.add_data(paths: [my_pdf, my_text, my_docx, my_csv]) # @@ -85,7 +85,7 @@ module Langchain::Vectorsearch # search.ask(question: "What is lorem ipsum?") # class Base - include Langchain::DependencyHelper + include LangChain::DependencyHelper attr_reader :client, :index_name, :llm @@ -158,9 +158,9 @@ def ask(...) # @param [String] User's question # @return [String] Prompt def generate_hyde_prompt(question:) - prompt_template = Langchain::Prompt.load_from_path( + prompt_template = LangChain::Prompt.load_from_path( # Zero-shot prompt to generate a hypothetical document based on a given question - file_path: Langchain.root.join("langchain/vectorsearch/prompts/hyde.yaml") + file_path: LangChain.root.join("langchain/vectorsearch/prompts/hyde.yaml") ) prompt_template.format(question: question) end @@ -171,19 +171,19 @@ def generate_hyde_prompt(question:) # @param context [String] The context to synthesize the answer from # @return [String] Prompt def generate_rag_prompt(question:, context:) - prompt_template = Langchain::Prompt.load_from_path( - file_path: Langchain.root.join("langchain/vectorsearch/prompts/rag.yaml") + prompt_template = LangChain::Prompt.load_from_path( + file_path: LangChain.root.join("langchain/vectorsearch/prompts/rag.yaml") ) prompt_template.format(question: question, context: context) end - def add_data(paths:, options: {}, chunker: Langchain::Chunker::Text) + def add_data(paths:, options: {}, chunker: LangChain::Chunker::Text) raise ArgumentError, "Paths must be provided" if Array(paths).empty? 
texts = Array(paths) .flatten .map do |path| - data = Langchain::Loader.new(path, options, chunker: chunker)&.load&.chunks + data = LangChain::Loader.new(path, options, chunker: chunker)&.load&.chunks data.map { |chunk| chunk.text } end diff --git a/lib/langchain/vectorsearch/chroma.rb b/lib/langchain/vectorsearch/chroma.rb index d7604a602..c0b190e36 100644 --- a/lib/langchain/vectorsearch/chroma.rb +++ b/lib/langchain/vectorsearch/chroma.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Chroma DB # @@ -8,7 +8,7 @@ module Langchain::Vectorsearch # gem "chroma-db", "~> 0.6.0" # # Usage: - # chroma = Langchain::Vectorsearch::Chroma.new(url:, index_name:, llm:, api_key: nil) + # chroma = LangChain::Vectorsearch::Chroma.new(url:, index_name:, llm:, api_key: nil) # class Chroma < Base # Initialize the Chroma client @@ -19,8 +19,8 @@ def initialize(url:, index_name:, llm:) depends_on "chroma-db" ::Chroma.connect_host = url - ::Chroma.logger = Langchain.logger - ::Chroma.log_level = Langchain.logger.level + ::Chroma.logger = LangChain.logger + ::Chroma.log_level = LangChain.logger.level @index_name = index_name diff --git a/lib/langchain/vectorsearch/elasticsearch.rb b/lib/langchain/vectorsearch/elasticsearch.rb index a7111ba05..9075f59cc 100644 --- a/lib/langchain/vectorsearch/elasticsearch.rb +++ b/lib/langchain/vectorsearch/elasticsearch.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Elasticsearch vector search capabilities. # @@ -13,8 +13,8 @@ module Langchain::Vectorsearch # gem "elasticsearch", "~> 8.0.0" # # Usage: - # llm = Langchain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) - # es = Langchain::Vectorsearch::Elasticsearch.new( + # llm = LangChain::LLM::OpenAI.new(api_key: ENV["OPENAI_API_KEY"]) + # es = LangChain::Vectorsearch::Elasticsearch.new( # url: ENV["ELASTICSEARCH_URL"], # index_name: "docs", # llm: llm, @@ -37,7 +37,7 @@ def initialize(url:, index_name:, llm:, api_key: nil, es_options: {}) @options = { url: url, request_timeout: 20, - logger: Langchain.logger + logger: LangChain.logger }.merge(es_options) @es_client = ::Elasticsearch::Client.new(**options) diff --git a/lib/langchain/vectorsearch/hnswlib.rb b/lib/langchain/vectorsearch/hnswlib.rb index 6befb8e2d..9cb8bdec5 100644 --- a/lib/langchain/vectorsearch/hnswlib.rb +++ b/lib/langchain/vectorsearch/hnswlib.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around HNSW (Hierarchical Navigable Small World) library. # HNSWLib is an in-memory vectorstore that can be saved to a file on disk. 
@@ -9,7 +9,7 @@ module Langchain::Vectorsearch # gem "hnswlib", "~> 0.8.1" # # Usage: - # hnsw = Langchain::Vectorsearch::Hnswlib.new(llm:, path_to_index:) + # hnsw = LangChain::Vectorsearch::Hnswlib.new(llm:, path_to_index:) # class Hnswlib < Base attr_reader :client, :path_to_index @@ -19,7 +19,7 @@ class Hnswlib < Base # # @param llm [Object] The LLM client to use # @param path_to_index [String] The local path to the index file, e.g.: "/storage/index.ann" - # @return [Langchain::Vectorsearch::Hnswlib] Class instance + # @return [LangChain::Vectorsearch::Hnswlib] Class instance # def initialize(llm:, path_to_index:) depends_on "hnswlib" @@ -114,12 +114,12 @@ def initialize_index if File.exist?(path_to_index) client.load_index(path_to_index) - Langchain.logger.debug("#{self.class} - Successfully loaded the index at \"#{path_to_index}\"") + LangChain.logger.debug("#{self.class} - Successfully loaded the index at \"#{path_to_index}\"") else # Default max_elements: 100, but we constantly resize the index as new data is written to it client.init_index(max_elements: 100) - Langchain.logger.debug("#{self.class} - Creating a new index at \"#{path_to_index}\"") + LangChain.logger.debug("#{self.class} - Creating a new index at \"#{path_to_index}\"") end end end diff --git a/lib/langchain/vectorsearch/milvus.rb b/lib/langchain/vectorsearch/milvus.rb index c59d74240..0d1479dae 100644 --- a/lib/langchain/vectorsearch/milvus.rb +++ b/lib/langchain/vectorsearch/milvus.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Milvus REST APIs. # @@ -8,7 +8,7 @@ module Langchain::Vectorsearch # gem "milvus", "~> 0.10.3" # # Usage: - # milvus = Langchain::Vectorsearch::Milvus.new(url:, index_name:, llm:, api_key:) + # milvus = LangChain::Vectorsearch::Milvus.new(url:, index_name:, llm:, api_key:) # class Milvus < Base def initialize(url:, index_name:, llm:, api_key: nil) @@ -17,7 +17,7 @@ def initialize(url:, index_name:, llm:, api_key: nil) @client = ::Milvus::Client.new( url: url, api_key: api_key, - logger: Langchain.logger + logger: LangChain.logger ) @index_name = index_name diff --git a/lib/langchain/vectorsearch/pgvector.rb b/lib/langchain/vectorsearch/pgvector.rb index c88977833..1e638faa3 100644 --- a/lib/langchain/vectorsearch/pgvector.rb +++ b/lib/langchain/vectorsearch/pgvector.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # The PostgreSQL vector search adapter # @@ -9,7 +9,7 @@ module Langchain::Vectorsearch # gem "pgvector", "~> 0.2" # # Usage: - # pgvector = Langchain::Vectorsearch::Pgvector.new(url:, index_name:, llm:, namespace: nil) + # pgvector = LangChain::Vectorsearch::Pgvector.new(url:, index_name:, llm:, namespace: nil) # class Pgvector < Base # The operators supported by the PostgreSQL vector search adapter diff --git a/lib/langchain/vectorsearch/pinecone.rb b/lib/langchain/vectorsearch/pinecone.rb index 476298cc2..aff32ba0b 100644 --- a/lib/langchain/vectorsearch/pinecone.rb +++ b/lib/langchain/vectorsearch/pinecone.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Pinecone API. 
# @@ -8,7 +8,7 @@ module Langchain::Vectorsearch # gem "pinecone", "~> 0.1" # # Usage: - # pinecone = Langchain::Vectorsearch::Pinecone.new(environment:, api_key:, index_name:, llm:) + # pinecone = LangChain::Vectorsearch::Pinecone.new(environment:, api_key:, index_name:, llm:) # class Pinecone < Base # Initialize the Pinecone client @@ -64,13 +64,13 @@ def add_texts(texts:, ids: [], namespace: "", metadata: nil) index.upsert(vectors: vectors, namespace: namespace) end - def add_data(paths:, namespace: "", options: {}, chunker: Langchain::Chunker::Text) + def add_data(paths:, namespace: "", options: {}, chunker: LangChain::Chunker::Text) raise ArgumentError, "Paths must be provided" if Array(paths).empty? texts = Array(paths) .flatten .map do |path| - data = Langchain::Loader.new(path, options, chunker: chunker)&.load&.chunks + data = LangChain::Loader.new(path, options, chunker: chunker)&.load&.chunks data.map { |chunk| chunk.text } end diff --git a/lib/langchain/vectorsearch/qdrant.rb b/lib/langchain/vectorsearch/qdrant.rb index 517ce0ee4..4bc66c5a4 100644 --- a/lib/langchain/vectorsearch/qdrant.rb +++ b/lib/langchain/vectorsearch/qdrant.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Qdrant # @@ -8,7 +8,7 @@ module Langchain::Vectorsearch # gem "qdrant-ruby", "~> 0.9.8" # # Usage: - # qdrant = Langchain::Vectorsearch::Qdrant.new(url:, api_key:, index_name:, llm:) + # qdrant = LangChain::Vectorsearch::Qdrant.new(url:, api_key:, index_name:, llm:) # class Qdrant < Base # Initialize the Qdrant client @@ -22,7 +22,7 @@ def initialize(url:, api_key:, index_name:, llm:) @client = ::Qdrant::Client.new( url: url, api_key: api_key, - logger: Langchain.logger + logger: LangChain.logger ) @index_name = index_name diff --git a/lib/langchain/vectorsearch/weaviate.rb b/lib/langchain/vectorsearch/weaviate.rb index 5d810bf00..f2acd8a4d 100644 --- a/lib/langchain/vectorsearch/weaviate.rb +++ b/lib/langchain/vectorsearch/weaviate.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain::Vectorsearch +module LangChain::Vectorsearch # # Wrapper around Weaviate # @@ -8,7 +8,7 @@ module Langchain::Vectorsearch # gem "weaviate-ruby", "~> 0.9.2" # # Usage: - # weaviate = Langchain::Vectorsearch::Weaviate.new(url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Docs", llm: llm) + # weaviate = LangChain::Vectorsearch::Weaviate.new(url: ENV["WEAVIATE_URL"], api_key: ENV["WEAVIATE_API_KEY"], index_name: "Docs", llm: llm) # class Weaviate < Base # Initialize the Weaviate adapter @@ -22,7 +22,7 @@ def initialize(url:, index_name:, llm:, api_key: nil) @client = ::Weaviate::Client.new( url: url, api_key: api_key, - logger: Langchain.logger + logger: LangChain.logger ) # Weaviate requires the class name to be Capitalized: https://weaviate.io/developers/weaviate/configuration/schema-configuration#create-a-class diff --git a/lib/langchain/version.rb b/lib/langchain/version.rb index debbf1ab5..e2c380aff 100644 --- a/lib/langchain/version.rb +++ b/lib/langchain/version.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -module Langchain +module LangChain VERSION = "0.19.5" Version = VERSION end diff --git a/spec/dependency_helper_spec.rb b/spec/dependency_helper_spec.rb index 6bcc754f9..75f5327b2 100644 --- a/spec/dependency_helper_spec.rb +++ b/spec/dependency_helper_spec.rb @@ -3,7 +3,7 @@ RSpec.describe "depends_on" do subject do object = Object.new - object.extend(Langchain::DependencyHelper) + 
object.extend(LangChain::DependencyHelper) object end @@ -12,20 +12,20 @@ end it "raises an error if the gem isn't included" do - expect { subject.depends_on("random-gem") }.to raise_error(Langchain::DependencyHelper::LoadError, /Could not load random-gem/) + expect { subject.depends_on("random-gem") }.to raise_error(LangChain::DependencyHelper::LoadError, /Could not load random-gem/) end it "raises an error when it doesn't have it as a bundler dependency" do bundler_load = double(:load, dependencies: []) allow(Bundler).to receive(:load).and_return(bundler_load) - expect { subject.depends_on("rspec") }.to raise_error(Langchain::DependencyHelper::LoadError, /Could not load rspec/) + expect { subject.depends_on("rspec") }.to raise_error(LangChain::DependencyHelper::LoadError, /Could not load rspec/) end it "raises an error when it doesn't match gem version requirement" do gem_loaded_spec = double(:specs, "[]": double(:version, version: Gem::Version.new("0.1"))) allow(Gem).to receive(:loaded_specs).and_return(gem_loaded_spec) - expect { subject.depends_on("rspec") }.to raise_error(Langchain::DependencyHelper::VersionError, /The rspec gem is installed.*You have 0.1/) + expect { subject.depends_on("rspec") }.to raise_error(LangChain::DependencyHelper::VersionError, /The rspec gem is installed.*You have 0.1/) end end diff --git a/spec/langchain_spec.rb b/spec/langchain_spec.rb index 9d57916b0..7c483c705 100644 --- a/spec/langchain_spec.rb +++ b/spec/langchain_spec.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -RSpec.describe Langchain do +RSpec.describe LangChain do it "has a version number" do - expect(Langchain::VERSION).not_to be nil + expect(LangChain::VERSION).not_to be nil end end diff --git a/spec/lib/langchain/assistant/assistant_spec.rb b/spec/lib/langchain/assistant/assistant_spec.rb index c7173cbc1..3bc476613 100644 --- a/spec/lib/langchain/assistant/assistant_spec.rb +++ b/spec/lib/langchain/assistant/assistant_spec.rb @@ -3,12 +3,12 @@ require "spec_helper" require "googleauth" -RSpec.describe Langchain::Assistant do +RSpec.describe LangChain::Assistant do context "initialization" do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } - it "raises an error if tools array contains non-Langchain::Tool instance(s)" do - expect { described_class.new(tools: [Langchain::Tool::Calculator.new, "foo"]) }.to raise_error(ArgumentError) + it "raises an error if tools array contains non-LangChain::Tool instance(s)" do + expect { described_class.new(tools: [LangChain::Tool::Calculator.new, "foo"]) }.to raise_error(ArgumentError) end describe "#add_message_callback" do @@ -32,12 +32,12 @@ end it "raises an error if LLM class does not implement `chat()` method" do - llm = Langchain::LLM::Replicate.new(api_key: "123") + llm = LangChain::LLM::Replicate.new(api_key: "123") expect { described_class.new(llm: llm) }.to raise_error(ArgumentError) end - it "raises an error if messages array contains non-Langchain::Message instance(s)" do - expect { described_class.new(llm: llm, messages: [Langchain::Assistant::Messages::OpenAIMessage.new, "foo"]) }.to raise_error(ArgumentError) + it "raises an error if messages array contains non-LangChain::Message instance(s)" do + expect { described_class.new(llm: llm, messages: [LangChain::Assistant::Messages::OpenAIMessage.new, "foo"]) }.to raise_error(ArgumentError) end it "parallel_tool_calls defaults to true" do @@ -46,7 +46,7 @@ end context "methods" do - let(:llm) { 
Langchain::LLM::OpenAI.new(api_key: "123") } + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } describe "#clear_messages!" do it "clears the thread" do @@ -68,8 +68,8 @@ describe "#array_of_message_hashes" do let(:messages) { [ - Langchain::Assistant::Messages::OpenAIMessage.new(role: "user", content: "hello"), - Langchain::Assistant::Messages::OpenAIMessage.new(role: "assistant", content: "hi") + LangChain::Assistant::Messages::OpenAIMessage.new(role: "user", content: "hello"), + LangChain::Assistant::Messages::OpenAIMessage.new(role: "assistant", content: "hi") ] } @@ -91,13 +91,13 @@ describe "#add_message" do let(:message) { {role: "user", content: "hello"} } - it "adds a Langchain::Message instance to the messages array" do + it "adds a LangChain::Message instance to the messages array" do subject = described_class.new(llm: llm, messages: []) expect { subject.add_message(**message) }.to change { subject.messages.count }.from(0).to(1) - expect(subject.messages.first).to be_a(Langchain::Assistant::Messages::OpenAIMessage) + expect(subject.messages.first).to be_a(LangChain::Assistant::Messages::OpenAIMessage) expect(subject.messages.first.role).to eq("user") expect(subject.messages.first.content).to eq("hello") end @@ -109,7 +109,7 @@ expect { subject.add_message(**message_with_image) }.to change { subject.messages.count }.from(0).to(1) - expect(subject.messages.first).to be_a(Langchain::Assistant::Messages::OpenAIMessage) + expect(subject.messages.first).to be_a(LangChain::Assistant::Messages::OpenAIMessage) expect(subject.messages.first.role).to eq("user") expect(subject.messages.first.content).to eq("hello") expect(subject.messages.first.image_url).to eq("https://example.com/image.jpg") @@ -119,7 +119,7 @@ callback = double("callback", call: true) subject = described_class.new(llm: llm, messages: [], add_message_callback: callback) - expect(callback).to receive(:call).with(instance_of(Langchain::Assistant::Messages::OpenAIMessage)) + expect(callback).to receive(:call).with(instance_of(LangChain::Assistant::Messages::OpenAIMessage)) subject.add_message(**message) end @@ -127,8 +127,8 @@ end context "when llm is OpenAI" do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:instructions) { "You are an expert assistant" } subject { @@ -190,7 +190,7 @@ callback = double("callback", call: true) thread = described_class.new(llm: llm, instructions: instructions, add_message_callback: callback) - expect(callback).to receive(:call).with(instance_of(Langchain::Assistant::Messages::OpenAIMessage)) + expect(callback).to receive(:call).with(instance_of(LangChain::Assistant::Messages::OpenAIMessage)) thread.add_message(role: "user", content: "foo") end @@ -246,7 +246,7 @@ tool_choice: "auto", parallel_tool_calls: true ) - .and_return(Langchain::LLM::Response::OpenAIResponse.new(raw_openai_response)) + .and_return(LangChain::LLM::Response::OpenAIResponse.new(raw_openai_response)) subject.add_message(role: "user", content: "Please calculate 2+2") end @@ -299,7 +299,7 @@ tool_choice: "auto", parallel_tool_calls: true ) - .and_return(Langchain::LLM::Response::OpenAIResponse.new(raw_openai_response2)) + .and_return(LangChain::LLM::Response::OpenAIResponse.new(raw_openai_response2)) allow(subject.tools[0]).to receive(:execute).with( input: "2+2" @@ -333,15 +333,15 @@ it "logs a warning" do expect(subject.messages).to 
be_empty - expect(Langchain.logger).to receive(:warn).with("#{described_class} - No messages to process") + expect(LangChain.logger).to receive(:warn).with("#{described_class} - No messages to process") subject.run end end end describe "#handle_tool_call" do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:assistant) { described_class.new(llm: llm, tools: [calculator]) } context "when tool returns a ToolResponse" do @@ -355,10 +355,10 @@ } } end - let(:tool_response) { Langchain::ToolResponse.new(content: "4", image_url: "http://example.com/image.jpg") } + let(:tool_response) { LangChain::ToolResponse.new(content: "4", image_url: "http://example.com/image.jpg") } before do - allow_any_instance_of(Langchain::Tool::Calculator).to receive(:execute).and_return(tool_response) + allow_any_instance_of(LangChain::Tool::Calculator).to receive(:execute).and_return(tool_response) end it "adds a message with the ToolResponse content and image_url" do @@ -386,7 +386,7 @@ end before do - allow_any_instance_of(Langchain::Tool::Calculator).to receive(:execute).and_return("4") + allow_any_instance_of(LangChain::Tool::Calculator).to receive(:execute).and_return("4") end it "adds a message with the simple value as content" do @@ -405,7 +405,7 @@ let(:tool_call) { {"id" => "call_9TewGANaaIjzY31UCpAAGLeV", "type" => "function", "function" => {"name" => "langchain_tool_calculator__execute", "arguments" => "{\"input\":\"2+2\"}"}} } it "returns correct data" do - expect(Langchain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["call_9TewGANaaIjzY31UCpAAGLeV", "langchain_tool_calculator", "execute", {input: "2+2"}]) + expect(LangChain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["call_9TewGANaaIjzY31UCpAAGLeV", "langchain_tool_calculator", "execute", {input: "2+2"}]) end end @@ -566,8 +566,8 @@ end context "when llm is MistralAI" do - let(:llm) { Langchain::LLM::MistralAI.new(api_key: "123") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::MistralAI.new(api_key: "123") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:instructions) { "You are an expert assistant" } subject { @@ -606,7 +606,7 @@ callback = double("callback", call: true) thread = described_class.new(llm: llm, instructions: instructions, add_message_callback: callback) - expect(callback).to receive(:call).with(instance_of(Langchain::Assistant::Messages::MistralAIMessage)) + expect(callback).to receive(:call).with(instance_of(LangChain::Assistant::Messages::MistralAIMessage)) thread.add_message(role: "user", content: "foo") end @@ -618,7 +618,7 @@ expect { subject.add_message(**message_with_image) }.to change { subject.messages.count }.from(0).to(1) - expect(subject.messages.first).to be_a(Langchain::Assistant::Messages::MistralAIMessage) + expect(subject.messages.first).to be_a(LangChain::Assistant::Messages::MistralAIMessage) expect(subject.messages.first.role).to eq("user") expect(subject.messages.first.content).to eq("hello") expect(subject.messages.first.image_url).to eq("https://example.com/image.jpg") @@ -674,7 +674,7 @@ tools: calculator.class.function_schemas.to_openai_format, tool_choice: "auto" ) - .and_return(Langchain::LLM::Response::MistralAIResponse.new(raw_mistralai_response)) + 
.and_return(LangChain::LLM::Response::MistralAIResponse.new(raw_mistralai_response)) subject.add_message(role: "user", content: "Please calculate 2+2") end @@ -726,7 +726,7 @@ tools: calculator.class.function_schemas.to_openai_format, tool_choice: "auto" ) - .and_return(Langchain::LLM::Response::MistralAIResponse.new(raw_mistralai_response2)) + .and_return(LangChain::LLM::Response::MistralAIResponse.new(raw_mistralai_response2)) allow(subject.tools[0]).to receive(:execute).with( input: "2+2" @@ -760,7 +760,7 @@ it "logs a warning" do expect(subject.messages).to be_empty - expect(Langchain.logger).to receive(:warn).with("#{described_class} - No messages to process") + expect(LangChain.logger).to receive(:warn).with("#{described_class} - No messages to process") subject.run end end @@ -770,7 +770,7 @@ let(:tool_call) { {"id" => "call_9TewGANaaIjzY31UCpAAGLeV", "type" => "function", "function" => {"name" => "langchain_tool_calculator__execute", "arguments" => "{\"input\":\"2+2\"}"}} } it "returns correct data" do - expect(Langchain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["call_9TewGANaaIjzY31UCpAAGLeV", "langchain_tool_calculator", "execute", {input: "2+2"}]) + expect(LangChain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["call_9TewGANaaIjzY31UCpAAGLeV", "langchain_tool_calculator", "execute", {input: "2+2"}]) end end @@ -924,8 +924,8 @@ end context "when llm is GoogleVertexAI" do - let(:llm) { Langchain::LLM::GoogleVertexAI.new(project_id: "123", region: "us-central1") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::GoogleVertexAI.new(project_id: "123", region: "us-central1") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:instructions) { "You are an expert assistant" } before do @@ -950,8 +950,8 @@ end context "when llm is GoogleGemini" do - let(:llm) { Langchain::LLM::GoogleGemini.new(api_key: "123") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::GoogleGemini.new(api_key: "123") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:instructions) { "You are an expert assistant" } subject { @@ -973,7 +973,7 @@ callback = double("callback", call: true) thread = described_class.new(llm: llm, instructions: instructions, add_message_callback: callback) - expect(callback).to receive(:call).with(instance_of(Langchain::Assistant::Messages::GoogleGeminiMessage)) + expect(callback).to receive(:call).with(instance_of(LangChain::Assistant::Messages::GoogleGeminiMessage)) thread.add_message(role: "user", content: "foo") end @@ -1020,7 +1020,7 @@ tool_choice: {function_calling_config: {mode: "auto"}}, system: instructions ) - .and_return(Langchain::LLM::Response::GoogleGeminiResponse.new(raw_google_gemini_response)) + .and_return(LangChain::LLM::Response::GoogleGeminiResponse.new(raw_google_gemini_response)) end it "runs the assistant" do @@ -1061,7 +1061,7 @@ tool_choice: {function_calling_config: {mode: "auto"}}, system: instructions ) - .and_return(Langchain::LLM::Response::GoogleGeminiResponse.new(raw_google_gemini_response2)) + .and_return(LangChain::LLM::Response::GoogleGeminiResponse.new(raw_google_gemini_response2)) end it "runs the assistant and automatically executes tool calls" do @@ -1087,7 +1087,7 @@ it "logs a warning" do expect(subject.messages).to be_empty - expect(Langchain.logger).to receive(:warn).with("#{described_class} - No messages to process") + expect(LangChain.logger).to 
receive(:warn).with("#{described_class} - No messages to process") subject.run end end @@ -1097,7 +1097,7 @@ let(:tool_call) { {"functionCall" => {"name" => "langchain_tool_calculator__execute", "args" => {"input" => "2+2"}}} } it "returns correct data" do - expect(Langchain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["langchain_tool_calculator__execute", "langchain_tool_calculator", "execute", {input: "2+2"}]) + expect(LangChain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["langchain_tool_calculator__execute", "langchain_tool_calculator", "execute", {input: "2+2"}]) end end @@ -1132,8 +1132,8 @@ end context "when llm is Anthropic" do - let(:llm) { Langchain::LLM::Anthropic.new(api_key: "123") } - let(:calculator) { Langchain::Tool::Calculator.new } + let(:llm) { LangChain::LLM::Anthropic.new(api_key: "123") } + let(:calculator) { LangChain::Tool::Calculator.new } let(:instructions) { "You are an expert assistant" } subject { @@ -1155,7 +1155,7 @@ callback = double("callback", call: true) thread = described_class.new(llm: llm, instructions: instructions, add_message_callback: callback) - expect(callback).to receive(:call).with(instance_of(Langchain::Assistant::Messages::AnthropicMessage)) + expect(callback).to receive(:call).with(instance_of(LangChain::Assistant::Messages::AnthropicMessage)) thread.add_message(role: "user", content: "foo") end @@ -1213,7 +1213,7 @@ hash_including( system: instructions ) - ).and_return(Langchain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) + ).and_return(LangChain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) subject.add_message content: "Please calculate 2+2" subject.run end @@ -1228,7 +1228,7 @@ tool_choice: {disable_parallel_tool_use: false, type: "auto"}, system: instructions ) - .and_return(Langchain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) + .and_return(LangChain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) end it "runs the assistant" do @@ -1245,7 +1245,7 @@ hash_including( system: instructions ) - ).and_return(Langchain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) + ).and_return(LangChain::LLM::Response::AnthropicResponse.new(raw_anthropic_response)) subject.add_message content: "Please calculate 2+2" subject.run end @@ -1287,7 +1287,7 @@ tool_choice: {disable_parallel_tool_use: false, type: "auto"}, system: instructions ) - .and_return(Langchain::LLM::Response::AnthropicResponse.new(raw_anthropic_response2)) + .and_return(LangChain::LLM::Response::AnthropicResponse.new(raw_anthropic_response2)) end it "runs the assistant and automatically executes tool calls" do @@ -1317,7 +1317,7 @@ it "logs a warning" do expect(subject.messages).to be_empty - expect(Langchain.logger).to receive(:warn).with("#{described_class} - No messages to process") + expect(LangChain.logger).to receive(:warn).with("#{described_class} - No messages to process") subject.run end end @@ -1337,7 +1337,7 @@ } it "returns correct data" do - expect(Langchain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["toolu_01TjusbFApEbwKPRWTRwzadR", "langchain_tool_news_retriever", "get_top_headlines", {country: "us", page_size: 10}]) + expect(LangChain::Assistant::LLM::Adapter.build(llm).extract_tool_call_args(tool_call: tool_call)).to eq(["toolu_01TjusbFApEbwKPRWTRwzadR", "langchain_tool_news_retriever", "get_top_headlines", {country: "us", page_size: 10}]) end end diff 
--git a/spec/lib/langchain/assistant/llm/adapter_spec.rb b/spec/lib/langchain/assistant/llm/adapter_spec.rb index 1e83fcb99..fff9a5bb7 100644 --- a/spec/lib/langchain/assistant/llm/adapter_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapter_spec.rb @@ -1,9 +1,9 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapter do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } +RSpec.describe LangChain::Assistant::LLM::Adapter do + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } it "initialize a new OpenAI adapter" do - expect(described_class.build(llm)).to be_a(Langchain::Assistant::LLM::Adapters::OpenAI) + expect(described_class.build(llm)).to be_a(LangChain::Assistant::LLM::Adapters::OpenAI) end end diff --git a/spec/lib/langchain/assistant/llm/adapters/anthropic_spec.rb b/spec/lib/langchain/assistant/llm/adapters/anthropic_spec.rb index bcb7b603e..56020e853 100644 --- a/spec/lib/langchain/assistant/llm/adapters/anthropic_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapters/anthropic_spec.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::Anthropic do +RSpec.describe LangChain::Assistant::LLM::Adapters::Anthropic do describe "#build_chat_params" do it "returns the chat parameters" do expect( subject.build_chat_params( messages: [{role: "user", content: "Hello"}], instructions: "Instructions", - tools: [Langchain::Tool::Calculator.new], + tools: [LangChain::Tool::Calculator.new], tool_choice: "langchain_tool_calculator__execute", parallel_tool_calls: false ) ).to eq({ messages: [{role: "user", content: "Hello"}], - tools: Langchain::Tool::Calculator.function_schemas.to_anthropic_format, + tools: LangChain::Tool::Calculator.function_schemas.to_anthropic_format, tool_choice: {disable_parallel_tool_use: true, name: "langchain_tool_calculator__execute", type: "tool"}, system: "Instructions" }) diff --git a/spec/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic_spec.rb b/spec/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic_spec.rb index c8dd1e13e..537b9f8ab 100644 --- a/spec/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapters/aws_bedrock_anthropic_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::AwsBedrockAnthropic do +RSpec.describe LangChain::Assistant::LLM::Adapters::AwsBedrockAnthropic do describe "#build_tool_choice" do it "returns the tool choice object with 'auto'" do expect(subject.send(:build_tool_choice, "auto", false)).to eq({type: "auto"}) diff --git a/spec/lib/langchain/assistant/llm/adapters/base_spec.rb b/spec/lib/langchain/assistant/llm/adapters/base_spec.rb index 5f436586c..65bf0e424 100644 --- a/spec/lib/langchain/assistant/llm/adapters/base_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapters/base_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::Base do +RSpec.describe LangChain::Assistant::LLM::Adapters::Base do describe "#build_chat_params" do it "raises NotImplementedError" do expect { subject.build_chat_params(tools: [], instructions: "", messages: [], tool_choice: "", parallel_tool_calls: false) }.to raise_error(NotImplementedError) diff --git a/spec/lib/langchain/assistant/llm/adapters/google_gemini_spec.rb b/spec/lib/langchain/assistant/llm/adapters/google_gemini_spec.rb index 09b0d97e8..831051328 100644 --- a/spec/lib/langchain/assistant/llm/adapters/google_gemini_spec.rb 
+++ b/spec/lib/langchain/assistant/llm/adapters/google_gemini_spec.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::GoogleGemini do +RSpec.describe LangChain::Assistant::LLM::Adapters::GoogleGemini do describe "#build_chat_params" do it "returns the chat parameters" do expect( subject.build_chat_params( messages: [{role: "user", content: "Hello"}], instructions: "Instructions", - tools: [Langchain::Tool::Calculator.new], + tools: [LangChain::Tool::Calculator.new], tool_choice: "langchain_tool_calculator__execute", parallel_tool_calls: false ) ).to eq({ messages: [{role: "user", content: "Hello"}], - tools: Langchain::Tool::Calculator.function_schemas.to_google_gemini_format, + tools: LangChain::Tool::Calculator.function_schemas.to_google_gemini_format, tool_choice: {function_calling_config: {allowed_function_names: ["langchain_tool_calculator__execute"], mode: "any"}}, system: "Instructions" }) diff --git a/spec/lib/langchain/assistant/llm/adapters/mistral_ai_spec.rb b/spec/lib/langchain/assistant/llm/adapters/mistral_ai_spec.rb index 31ae258d7..ebfdf53dd 100644 --- a/spec/lib/langchain/assistant/llm/adapters/mistral_ai_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapters/mistral_ai_spec.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::MistralAI do +RSpec.describe LangChain::Assistant::LLM::Adapters::MistralAI do describe "#build_chat_params" do it "returns the chat parameters" do expect( subject.build_chat_params( messages: [{role: "user", content: "Hello"}], instructions: "Instructions", - tools: [Langchain::Tool::Calculator.new], + tools: [LangChain::Tool::Calculator.new], tool_choice: "langchain_tool_calculator__execute", parallel_tool_calls: false ) ).to eq({ messages: [{role: "user", content: "Hello"}], - tools: Langchain::Tool::Calculator.function_schemas.to_openai_format, + tools: LangChain::Tool::Calculator.function_schemas.to_openai_format, tool_choice: {"function" => {"name" => "langchain_tool_calculator__execute"}, "type" => "function"} }) end diff --git a/spec/lib/langchain/assistant/llm/adapters/ollama_spec.rb b/spec/lib/langchain/assistant/llm/adapters/ollama_spec.rb index b9bbd1ab7..3e13b4926 100644 --- a/spec/lib/langchain/assistant/llm/adapters/ollama_spec.rb +++ b/spec/lib/langchain/assistant/llm/adapters/ollama_spec.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::Ollama do +RSpec.describe LangChain::Assistant::LLM::Adapters::Ollama do describe "#build_chat_params" do it "returns the chat parameters" do expect( subject.build_chat_params( messages: [{role: "user", content: "Hello"}], instructions: "Instructions", - tools: [Langchain::Tool::Calculator.new], + tools: [LangChain::Tool::Calculator.new], tool_choice: nil, parallel_tool_calls: false ) ).to eq({ messages: [{role: "user", content: "Hello"}], - tools: Langchain::Tool::Calculator.function_schemas.to_openai_format + tools: LangChain::Tool::Calculator.function_schemas.to_openai_format }) end end @@ -38,7 +38,7 @@ content: "Hello", image_url: "https://example.com/image.jpg" ) - ).to be_a(Langchain::Assistant::Messages::OllamaMessage) + ).to be_a(LangChain::Assistant::Messages::OllamaMessage) end end end diff --git a/spec/lib/langchain/assistant/llm/adapters/openai_spec.rb b/spec/lib/langchain/assistant/llm/adapters/openai_spec.rb index c56c14d57..8129f6a95 100644 --- a/spec/lib/langchain/assistant/llm/adapters/openai_spec.rb +++ 
b/spec/lib/langchain/assistant/llm/adapters/openai_spec.rb @@ -1,19 +1,19 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::LLM::Adapters::OpenAI do +RSpec.describe LangChain::Assistant::LLM::Adapters::OpenAI do describe "#build_chat_params" do it "returns the chat parameters" do expect( subject.build_chat_params( messages: [{role: "user", content: "Hello"}], instructions: "Instructions", - tools: [Langchain::Tool::Calculator.new], + tools: [LangChain::Tool::Calculator.new], tool_choice: "langchain_tool_calculator__execute", parallel_tool_calls: false ) ).to eq({ messages: [{role: "user", content: "Hello"}], - tools: Langchain::Tool::Calculator.function_schemas.to_openai_format, + tools: LangChain::Tool::Calculator.function_schemas.to_openai_format, tool_choice: {"function" => {"name" => "langchain_tool_calculator__execute"}, "type" => "function"}, parallel_tool_calls: false }) diff --git a/spec/lib/langchain/assistant/messages/anthropic_message_spec.rb b/spec/lib/langchain/assistant/messages/anthropic_message_spec.rb index a1f04924e..492fc1989 100644 --- a/spec/lib/langchain/assistant/messages/anthropic_message_spec.rb +++ b/spec/lib/langchain/assistant/messages/anthropic_message_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::Messages::AnthropicMessage do +RSpec.describe LangChain::Assistant::Messages::AnthropicMessage do it "raises an error if role is not one of allowed" do expect { described_class.new(role: "foo") }.to raise_error(ArgumentError) end diff --git a/spec/lib/langchain/assistant/messages/base_spec.rb b/spec/lib/langchain/assistant/messages/base_spec.rb index 9a6befc33..2b8d5d5ec 100644 --- a/spec/lib/langchain/assistant/messages/base_spec.rb +++ b/spec/lib/langchain/assistant/messages/base_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::Messages::Base do +RSpec.describe LangChain::Assistant::Messages::Base do describe "tool?" do it "raises an error" do expect { described_class.new.tool? 
}.to raise_error(NotImplementedError) diff --git a/spec/lib/langchain/assistant/messages/google_gemini_message_spec.rb b/spec/lib/langchain/assistant/messages/google_gemini_message_spec.rb index 45ab8a406..e3da69f23 100644 --- a/spec/lib/langchain/assistant/messages/google_gemini_message_spec.rb +++ b/spec/lib/langchain/assistant/messages/google_gemini_message_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::Messages::GoogleGeminiMessage do +RSpec.describe LangChain::Assistant::Messages::GoogleGeminiMessage do it "raises an error if role is not one of allowed" do expect { described_class.new(role: "foo") }.to raise_error(ArgumentError) end diff --git a/spec/lib/langchain/assistant/messages/mistral_ai_message_spec.rb b/spec/lib/langchain/assistant/messages/mistral_ai_message_spec.rb index 9f4af8333..a75f2c3c7 100644 --- a/spec/lib/langchain/assistant/messages/mistral_ai_message_spec.rb +++ b/spec/lib/langchain/assistant/messages/mistral_ai_message_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::Messages::MistralAIMessage do +RSpec.describe LangChain::Assistant::Messages::MistralAIMessage do it "raises an error if role is not one of allowed" do expect { described_class.new(role: "foo") }.to raise_error(ArgumentError) end diff --git a/spec/lib/langchain/assistant/messages/ollama_message_spec.rb b/spec/lib/langchain/assistant/messages/ollama_message_spec.rb index 747964e46..0d66fd24e 100644 --- a/spec/lib/langchain/assistant/messages/ollama_message_spec.rb +++ b/spec/lib/langchain/assistant/messages/ollama_message_spec.rb @@ -2,13 +2,13 @@ require "spec_helper" -RSpec.describe Langchain::Assistant::Messages::OllamaMessage do +RSpec.describe LangChain::Assistant::Messages::OllamaMessage do let(:valid_roles) { ["system", "assistant", "user", "tool"] } let(:role) { "assistant" } let(:content) { "This is a message" } let(:image_url) { "https://example.com/image.jpg" } let(:raw_response) { JSON.parse(File.read("spec/fixtures/llm/ollama/chat_with_tool_calls.json")) } - let(:response) { Langchain::LLM::Response::OllamaResponse.new(raw_response) } + let(:response) { LangChain::LLM::Response::OllamaResponse.new(raw_response) } let(:tool_calls) { response.tool_calls } let(:tool_call_id) { "12345" } diff --git a/spec/lib/langchain/assistant/messages/openai_message_spec.rb b/spec/lib/langchain/assistant/messages/openai_message_spec.rb index 6e74bf682..281f33314 100644 --- a/spec/lib/langchain/assistant/messages/openai_message_spec.rb +++ b/spec/lib/langchain/assistant/messages/openai_message_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Assistant::Messages::OpenAIMessage do +RSpec.describe LangChain::Assistant::Messages::OpenAIMessage do it "raises an error if role is not one of allowed" do expect { described_class.new(role: "foo") }.to raise_error(ArgumentError) end diff --git a/spec/lib/langchain/chunk_spec.rb b/spec/lib/langchain/chunk_spec.rb index 6f4effb9f..b2c950900 100644 --- a/spec/lib/langchain/chunk_spec.rb +++ b/spec/lib/langchain/chunk_spec.rb @@ -2,7 +2,7 @@ require "rails_helper" -RSpec.describe Langchain::Chunk do +RSpec.describe LangChain::Chunk do subject { described_class.new(text: "Hello World") } it "has a text" do diff --git a/spec/lib/langchain/chunker/base_spec.rb b/spec/lib/langchain/chunker/base_spec.rb index cab5aaded..c39d5d882 100644 --- a/spec/lib/langchain/chunker/base_spec.rb +++ b/spec/lib/langchain/chunker/base_spec.rb @@ -1,6 +1,6 @@ # 
frozen_string_literal: true -RSpec.describe Langchain::Chunker::Base do +RSpec.describe LangChain::Chunker::Base do describe "#chunks" do it "raises NotImplementedError" do expect { described_class.new.chunks }.to raise_error(NotImplementedError) diff --git a/spec/lib/langchain/chunker/markdown_spec.rb b/spec/lib/langchain/chunker/markdown_spec.rb index 86f50ecda..766f97310 100644 --- a/spec/lib/langchain/chunker/markdown_spec.rb +++ b/spec/lib/langchain/chunker/markdown_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Chunker::Markdown do +RSpec.describe LangChain::Chunker::Markdown do let(:source) { "spec/fixtures/loaders/example.md" } let(:markdown) { File.read(source) } diff --git a/spec/lib/langchain/chunker/recursive_text_spec.rb b/spec/lib/langchain/chunker/recursive_text_spec.rb index 71cc47a98..9e8cfcfee 100644 --- a/spec/lib/langchain/chunker/recursive_text_spec.rb +++ b/spec/lib/langchain/chunker/recursive_text_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Chunker::RecursiveText do +RSpec.describe LangChain::Chunker::RecursiveText do let(:source) { "spec/fixtures/loaders/random_facts.txt" } let(:text) { File.read(source) } @@ -24,7 +24,7 @@ .and_call_original chunks = subject.chunks - expect(chunks).to all(be_a(Langchain::Chunk)) + expect(chunks).to all(be_a(LangChain::Chunk)) expect(chunks[1].text).to include(chunks[0].text[-199..]) end end diff --git a/spec/lib/langchain/chunker/semantic_spec.rb b/spec/lib/langchain/chunker/semantic_spec.rb index 51cf9bbb7..b3d9d4c6c 100644 --- a/spec/lib/langchain/chunker/semantic_spec.rb +++ b/spec/lib/langchain/chunker/semantic_spec.rb @@ -1,9 +1,9 @@ # frozen_string_literal: true -RSpec.describe Langchain::Chunker::Semantic do +RSpec.describe LangChain::Chunker::Semantic do let(:source) { "spec/fixtures/loaders/random_facts.txt" } let(:text) { File.read(source) } - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new(text, llm: llm) } diff --git a/spec/lib/langchain/chunker/sentence_spec.rb b/spec/lib/langchain/chunker/sentence_spec.rb index f8121f770..91bcc2319 100644 --- a/spec/lib/langchain/chunker/sentence_spec.rb +++ b/spec/lib/langchain/chunker/sentence_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Chunker::Sentence do +RSpec.describe LangChain::Chunker::Sentence do let(:source) { "spec/fixtures/loaders/the_alchemist.txt" } let(:text) { File.read(source) } diff --git a/spec/lib/langchain/chunker/text_spec.rb b/spec/lib/langchain/chunker/text_spec.rb index b5874223c..d87fe85c0 100644 --- a/spec/lib/langchain/chunker/text_spec.rb +++ b/spec/lib/langchain/chunker/text_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Chunker::Text do +RSpec.describe LangChain::Chunker::Text do let(:source) { "spec/fixtures/loaders/example.txt" } let(:text) { File.read(source) } diff --git a/spec/lib/langchain/data_spec.rb b/spec/lib/langchain/data_spec.rb index cdc539b67..01076bf93 100644 --- a/spec/lib/langchain/data_spec.rb +++ b/spec/lib/langchain/data_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Data do +RSpec.describe LangChain::Data do let(:source) { "spec/fixtures/loaders/example.txt" } let(:data) { File.read(source) } @@ -11,18 +11,18 @@ chunks = subject.chunks split_data = data.split("\n\n") - expect(chunks).to all(be_a(Langchain::Chunk)) + expect(chunks).to all(be_a(LangChain::Chunk)) 
expect(chunks[0].text).to eq(split_data[0]) expect(chunks[1].text).to eq(split_data[1]) expect(chunks[2].text).to eq(split_data[2]) end context "with an optional chunker class" do - subject { described_class.new(data, source: source, chunker: Langchain::Chunker::RecursiveText) } - let(:chunker) { instance_double(Langchain::Chunker::RecursiveText) } + subject { described_class.new(data, source: source, chunker: LangChain::Chunker::RecursiveText) } + let(:chunker) { instance_double(LangChain::Chunker::RecursiveText) } before do - expect(Langchain::Chunker::RecursiveText).to receive(:new).and_return(chunker) + expect(LangChain::Chunker::RecursiveText).to receive(:new).and_return(chunker) end it "uses an optional chunker class" do diff --git a/spec/lib/langchain/evals/ragas/answer_relevance_spec.rb b/spec/lib/langchain/evals/ragas/answer_relevance_spec.rb index d5973fa4b..000ae0599 100644 --- a/spec/lib/langchain/evals/ragas/answer_relevance_spec.rb +++ b/spec/lib/langchain/evals/ragas/answer_relevance_spec.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -RSpec.describe Langchain::Evals::Ragas::AnswerRelevance do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } +RSpec.describe LangChain::Evals::Ragas::AnswerRelevance do + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new(llm: llm, batch_size: 1) } let(:question) { "When is the scheduled launch date and time for the PSLV-C56 mission, and where will it be launched from?" } @@ -12,7 +12,7 @@ let(:generated_question) { "What is the purpose of the PSLV-C56 mission?" } before do - allow(subject.llm).to receive(:complete).and_return(double("Langchain::LLM::Response::OpenAIResponse", completion: generated_question)) + allow(subject.llm).to receive(:complete).and_return(double("LangChain::LLM::Response::OpenAIResponse", completion: generated_question)) allow(subject).to receive(:calculate_similarity) .with(original_question: question, generated_question: generated_question) .and_return(score) diff --git a/spec/lib/langchain/evals/ragas/context_relevance_spec.rb b/spec/lib/langchain/evals/ragas/context_relevance_spec.rb index 24a1fe316..8012b425a 100644 --- a/spec/lib/langchain/evals/ragas/context_relevance_spec.rb +++ b/spec/lib/langchain/evals/ragas/context_relevance_spec.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -RSpec.describe Langchain::Evals::Ragas::ContextRelevance do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } +RSpec.describe LangChain::Evals::Ragas::ContextRelevance do + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new(llm: llm) } let(:question) { "When was the Chimnabai Clock Tower completed, and who was it named after?" } @@ -12,7 +12,7 @@ let(:sentences) { "It was completed in 1896 and named in memory of Chimnabai I (1864–1885), a queen and the first wife of Sayajirao Gaekwad III of Baroda State." 
} before do - allow(subject.llm).to receive(:complete).and_return(double("Langchain::LLM::Response::OpenAIResponse", completion: sentences)) + allow(subject.llm).to receive(:complete).and_return(double("LangChain::LLM::Response::OpenAIResponse", completion: sentences)) end it "generates the context_relevance score" do diff --git a/spec/lib/langchain/evals/ragas/faithfulness_spec.rb b/spec/lib/langchain/evals/ragas/faithfulness_spec.rb index c9eeb2187..d11fa3cca 100644 --- a/spec/lib/langchain/evals/ragas/faithfulness_spec.rb +++ b/spec/lib/langchain/evals/ragas/faithfulness_spec.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -RSpec.describe Langchain::Evals::Ragas::Faithfulness do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } +RSpec.describe LangChain::Evals::Ragas::Faithfulness do + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new(llm: llm) } let(:question) { "Who directed the film Oppenheimer and who stars as J. Robert Oppenheimer in the film?" } diff --git a/spec/lib/langchain/evals/ragas/main_spec.rb b/spec/lib/langchain/evals/ragas/main_spec.rb index 83f46b14c..9f2f5f3ce 100644 --- a/spec/lib/langchain/evals/ragas/main_spec.rb +++ b/spec/lib/langchain/evals/ragas/main_spec.rb @@ -1,7 +1,7 @@ # frozen_string_literal: true -RSpec.describe Langchain::Evals::Ragas::Main do - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } +RSpec.describe LangChain::Evals::Ragas::Main do + let(:llm) { LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new(llm: llm) } let(:question) { "Who directed the film Oppenheimer and who stars as J. Robert Oppenheimer in the film?" } @@ -10,9 +10,9 @@ describe "#score" do before do - allow_any_instance_of(Langchain::Evals::Ragas::AnswerRelevance).to receive(:score).and_return(0.9573145866787608) - allow_any_instance_of(Langchain::Evals::Ragas::ContextRelevance).to receive(:score).and_return(0.6666666666666666) - allow_any_instance_of(Langchain::Evals::Ragas::Faithfulness).to receive(:score).and_return(0.5) + allow_any_instance_of(LangChain::Evals::Ragas::AnswerRelevance).to receive(:score).and_return(0.9573145866787608) + allow_any_instance_of(LangChain::Evals::Ragas::ContextRelevance).to receive(:score).and_return(0.6666666666666666) + allow_any_instance_of(LangChain::Evals::Ragas::Faithfulness).to receive(:score).and_return(0.5) end it "generates the scores" do diff --git a/spec/lib/langchain/llm/ai21_spec.rb b/spec/lib/langchain/llm/ai21_spec.rb index bd9ea40e8..7c7851084 100644 --- a/spec/lib/langchain/llm/ai21_spec.rb +++ b/spec/lib/langchain/llm/ai21_spec.rb @@ -2,7 +2,7 @@ require "ai21" -RSpec.describe Langchain::LLM::AI21 do +RSpec.describe LangChain::LLM::AI21 do let(:subject) { described_class.new(api_key: "123") } describe "#complete" do diff --git a/spec/lib/langchain/llm/anthropic_spec.rb b/spec/lib/langchain/llm/anthropic_spec.rb index c472ff356..f346461c9 100644 --- a/spec/lib/langchain/llm/anthropic_spec.rb +++ b/spec/lib/langchain/llm/anthropic_spec.rb @@ -2,7 +2,7 @@ require "anthropic" -RSpec.describe Langchain::LLM::Anthropic do +RSpec.describe LangChain::LLM::Anthropic do let(:subject) { described_class.new(api_key: "123") } describe "#initialize" do @@ -70,7 +70,7 @@ end it "raises an error" do - expect { subject.complete(prompt: completion) }.to raise_error(Langchain::LLM::ApiError, "Anthropic API error: The request is invalid. 
Please check the request and try again.") + expect { subject.complete(prompt: completion) }.to raise_error(LangChain::LLM::ApiError, "Anthropic API error: The request is invalid. Please check the request and try again.") end end end @@ -145,7 +145,7 @@ it "handles streaming responses correctly" do rsp = subject.chat(messages: messages, &stream_handler) - expect(rsp).to be_a(Langchain::LLM::Response::AnthropicResponse) + expect(rsp).to be_a(LangChain::LLM::Response::AnthropicResponse) expect(rsp.completion_tokens).to eq(10) expect(rsp.total_tokens).to eq(10) expect(rsp.chat_completion).to eq("Life is pretty good") @@ -167,7 +167,7 @@ it "handles streaming responses correctly" do rsp = subject.chat(messages: messages, &stream_handler) - expect(rsp).to be_a(Langchain::LLM::Response::AnthropicResponse) + expect(rsp).to be_a(LangChain::LLM::Response::AnthropicResponse) expect(rsp.completion_tokens).to eq(89) expect(rsp.total_tokens).to eq(89) expect(rsp.chat_completion).to eq("Okay, let's check the weather for San Francisco, CA:") @@ -184,7 +184,7 @@ rsp = subject.chat(messages: [{role: "user", content: "What's the weather?"}], &stream_handler) # Verify the response - expect(rsp).to be_a(Langchain::LLM::Response::AnthropicResponse) + expect(rsp).to be_a(LangChain::LLM::Response::AnthropicResponse) expect(rsp.chat_completion).to eq("I'll check the weather for you:") # Verify the tool call with empty input is handled correctly diff --git a/spec/lib/langchain/llm/aws_bedrock_spec.rb b/spec/lib/langchain/llm/aws_bedrock_spec.rb index 9d5634a1c..f3995cda0 100644 --- a/spec/lib/langchain/llm/aws_bedrock_spec.rb +++ b/spec/lib/langchain/llm/aws_bedrock_spec.rb @@ -2,7 +2,7 @@ require "aws-sdk-bedrockruntime" -RSpec.describe Langchain::LLM::AwsBedrock do +RSpec.describe LangChain::LLM::AwsBedrock do let(:subject) { described_class.new } before do @@ -114,7 +114,7 @@ i += 1 end - expect(response).to be_a(Langchain::LLM::Response::AnthropicResponse) + expect(response).to be_a(LangChain::LLM::Response::AnthropicResponse) expect(response.chat_completion).to eq("The capital of France is Paris.") end end @@ -473,7 +473,7 @@ it "returns an AnthropicResponse" do response = subject.send(:response_from_chunks, chunks) - expect(response).to be_a(Langchain::LLM::Response::AnthropicResponse) + expect(response).to be_a(LangChain::LLM::Response::AnthropicResponse) expect(response.chat_completion).to eq("The capital of France is Paris.") end diff --git a/spec/lib/langchain/llm/azure_spec.rb b/spec/lib/langchain/llm/azure_spec.rb index ca7274931..9f07f15c9 100644 --- a/spec/lib/langchain/llm/azure_spec.rb +++ b/spec/lib/langchain/llm/azure_spec.rb @@ -2,7 +2,7 @@ require "openai" -RSpec.describe Langchain::LLM::Azure do +RSpec.describe LangChain::LLM::Azure do let(:subject) do described_class.new( api_key: "123", diff --git a/spec/lib/langchain/llm/base_spec.rb b/spec/lib/langchain/llm/base_spec.rb index df3f8c96f..f5b18c71b 100644 --- a/spec/lib/langchain/llm/base_spec.rb +++ b/spec/lib/langchain/llm/base_spec.rb @@ -1,15 +1,15 @@ # frozen_string_literal: true -class TestLLM < Langchain::LLM::Base +class TestLLM < LangChain::LLM::Base end -class CustomTestLLM < Langchain::LLM::Base +class CustomTestLLM < LangChain::LLM::Base def initialize chat_parameters.update(version: {default: 1}) end end -RSpec.describe Langchain::LLM::Base do +RSpec.describe LangChain::LLM::Base do let(:subject) { described_class.new } describe "#chat" do @@ -53,12 +53,12 @@ def initialize it "returns an instance of ChatParameters" do 
chat_params = subject.chat_parameters - expect(chat_params).to be_instance_of(Langchain::LLM::Parameters::Chat) + expect(chat_params).to be_instance_of(LangChain::LLM::Parameters::Chat) end it "proxies the provided params to the UnifiedParameters" do chat_params = subject.chat_parameters({stream: true}) - expect(chat_params).to be_instance_of(Langchain::LLM::Parameters::Chat) + expect(chat_params).to be_instance_of(LangChain::LLM::Parameters::Chat) expect(chat_params[:stream]).to be_truthy end diff --git a/spec/lib/langchain/llm/cohere_spec.rb b/spec/lib/langchain/llm/cohere_spec.rb index 1bc2eafb9..25a512a7f 100644 --- a/spec/lib/langchain/llm/cohere_spec.rb +++ b/spec/lib/langchain/llm/cohere_spec.rb @@ -2,7 +2,7 @@ require "cohere" -RSpec.describe Langchain::LLM::Cohere do +RSpec.describe LangChain::LLM::Cohere do let(:subject) { described_class.new(api_key: "123") } describe "#initialize" do @@ -127,7 +127,7 @@ system: "You are a cheerful happy chatbot!", messages: [{role: "user", message: "How are you?"}] ) - ).to be_a(Langchain::LLM::Response::CohereResponse) + ).to be_a(LangChain::LLM::Response::CohereResponse) end end diff --git a/spec/lib/langchain/llm/google_gemini_spec.rb b/spec/lib/langchain/llm/google_gemini_spec.rb index 8647c01db..44b6b9f37 100644 --- a/spec/lib/langchain/llm/google_gemini_spec.rb +++ b/spec/lib/langchain/llm/google_gemini_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::GoogleGemini do +RSpec.describe LangChain::LLM::GoogleGemini do subject { described_class.new(api_key: "123") } describe "#initialize" do @@ -40,7 +40,7 @@ it "returns valid llm response object" do response = subject.embed(text: "Hello world") - expect(response).to be_a(Langchain::LLM::Response::GoogleGeminiResponse) + expect(response).to be_a(LangChain::LLM::Response::GoogleGeminiResponse) expect(response.model).to eq("text-embedding-004") expect(response.embedding).to eq(embedding) end @@ -88,7 +88,7 @@ it "returns valid llm response object" do response = subject.chat(messages: messages) - expect(response).to be_a(Langchain::LLM::Response::GoogleGeminiResponse) + expect(response).to be_a(LangChain::LLM::Response::GoogleGeminiResponse) expect(response.model).to eq("gemini-1.5-pro-latest") expect(response.chat_completion).to eq("The answer is 4.0") end diff --git a/spec/lib/langchain/llm/google_vertex_ai_spec.rb b/spec/lib/langchain/llm/google_vertex_ai_spec.rb index b8dc2c421..cd6d60144 100644 --- a/spec/lib/langchain/llm/google_vertex_ai_spec.rb +++ b/spec/lib/langchain/llm/google_vertex_ai_spec.rb @@ -2,7 +2,7 @@ require "googleauth" -RSpec.describe Langchain::LLM::GoogleVertexAI do +RSpec.describe LangChain::LLM::GoogleVertexAI do subject { described_class.new(project_id: "123", region: "us-central1") } before do @@ -47,7 +47,7 @@ it "returns valid llm response object" do response = subject.embed(text: "Hello world") - expect(response).to be_a(Langchain::LLM::Response::GoogleGeminiResponse) + expect(response).to be_a(LangChain::LLM::Response::GoogleGeminiResponse) expect(response.model).to eq("textembedding-gecko") expect(response.embedding).to eq(embedding) end @@ -64,7 +64,7 @@ it "returns valid llm response object" do response = subject.chat(messages: messages) - expect(response).to be_a(Langchain::LLM::Response::GoogleGeminiResponse) + expect(response).to be_a(LangChain::LLM::Response::GoogleGeminiResponse) expect(response.model).to eq("gemini-1.0-pro") expect(response.chat_completion).to eq("The sky is not a physical object with a defined 
height.") end diff --git a/spec/lib/langchain/llm/hugging_face_spec.rb b/spec/lib/langchain/llm/hugging_face_spec.rb index 0037b7ca7..7fdb87c4b 100644 --- a/spec/lib/langchain/llm/hugging_face_spec.rb +++ b/spec/lib/langchain/llm/hugging_face_spec.rb @@ -2,7 +2,7 @@ require "hugging_face" -RSpec.describe Langchain::LLM::HuggingFace do +RSpec.describe LangChain::LLM::HuggingFace do let(:subject) { described_class.new(api_key: "123") } describe "#embed" do diff --git a/spec/lib/langchain/llm/mistral_ai_spec.rb b/spec/lib/langchain/llm/mistral_ai_spec.rb index 943f66282..bf2d03cc5 100644 --- a/spec/lib/langchain/llm/mistral_ai_spec.rb +++ b/spec/lib/langchain/llm/mistral_ai_spec.rb @@ -2,7 +2,7 @@ require "mistral-ai" -RSpec.describe Langchain::LLM::MistralAI do +RSpec.describe LangChain::LLM::MistralAI do let(:subject) { described_class.new(api_key: "123") } let(:mock_client) { instance_double(Mistral::Controllers::Client) } diff --git a/spec/lib/langchain/llm/ollama_spec.rb b/spec/lib/langchain/llm/ollama_spec.rb index b9945f891..af1c70afd 100644 --- a/spec/lib/langchain/llm/ollama_spec.rb +++ b/spec/lib/langchain/llm/ollama_spec.rb @@ -2,7 +2,7 @@ require "faraday" -RSpec.describe Langchain::LLM::Ollama do +RSpec.describe LangChain::LLM::Ollama do let(:default_url) { "http://localhost:11434" } let(:subject) { described_class.new(url: default_url, default_options: {completion_model: "llama3.2", embedding_model: "llama3.2"}) } let(:client) { subject.send(:client) } @@ -53,7 +53,7 @@ end it "returns an embedding" do - expect(subject.embed(text: "Hello, world!")).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(subject.embed(text: "Hello, world!")).to be_a(LangChain::LLM::Response::OllamaResponse) expect(subject.embed(text: "Hello, world!").embedding.count).to eq(3) end @@ -83,11 +83,11 @@ let(:response) { subject.complete(prompt: prompt) } it "returns a completion", :vcr do - expect(response).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(response).to be_a(LangChain::LLM::Response::OllamaResponse) expect(response.completion).to eq("Complicated.") end - it "does not use streamed responses", vcr: {cassette_name: "Langchain_LLM_Ollama_complete_returns_a_completion"} do + it "does not use streamed responses", vcr: {cassette_name: "LangChain_LLM_Ollama_complete_returns_a_completion"} do expect(client).to receive(:post).with("api/generate", hash_including(stream: false)).and_call_original response end @@ -97,20 +97,20 @@ let(:streamed_responses) { [] } it "returns a completion", :vcr do - expect(response).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(response).to be_a(LangChain::LLM::Response::OllamaResponse) expect(response.completion).to eq("Complicated.") expect(response.total_tokens).to eq(36) end - it "uses streamed responses", vcr: {cassette_name: "Langchain_LLM_Ollama_complete_when_passing_a_block_returns_a_completion"} do + it "uses streamed responses", vcr: {cassette_name: "LangChain_LLM_Ollama_complete_when_passing_a_block_returns_a_completion"} do expect(client).to receive(:post).with("api/generate", hash_including(stream: true)).and_call_original response end - it "yields the intermediate responses to the block", vcr: {cassette_name: "Langchain_LLM_Ollama_complete_when_passing_a_block_returns_a_completion"} do + it "yields the intermediate responses to the block", vcr: {cassette_name: "LangChain_LLM_Ollama_complete_when_passing_a_block_returns_a_completion"} do response expect(streamed_responses.length).to eq 4 - expect(streamed_responses).to be_all { 
|resp| resp.is_a?(Langchain::LLM::Response::OllamaResponse) } + expect(streamed_responses).to be_all { |resp| resp.is_a?(LangChain::LLM::Response::OllamaResponse) } expect(streamed_responses.map(&:completion).join).to eq("Complicated.") end end @@ -121,11 +121,11 @@ let(:response) { subject.chat(messages: messages) } it "returns a chat completion", :vcr do - expect(response).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(response).to be_a(LangChain::LLM::Response::OllamaResponse) expect(response.chat_completion).to include("I'm just a language model") end - it "does not use streamed responses", vcr: {cassette_name: "Langchain_LLM_Ollama_chat_returns_a_chat_completion"} do + it "does not use streamed responses", vcr: {cassette_name: "LangChain_LLM_Ollama_chat_returns_a_chat_completion"} do expect(client).to receive(:post).with("api/chat", hash_including(stream: false)).and_call_original response end @@ -135,19 +135,19 @@ let(:streamed_responses) { [] } it "returns a chat completion", :vcr do - expect(response).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(response).to be_a(LangChain::LLM::Response::OllamaResponse) expect(response.chat_completion).to include("I'm just a language model") end - it "uses streamed responses", vcr: {cassette_name: "Langchain_LLM_Ollama_chat_when_passing_a_block_returns_a_chat_completion"} do + it "uses streamed responses", vcr: {cassette_name: "LangChain_LLM_Ollama_chat_when_passing_a_block_returns_a_chat_completion"} do expect(client).to receive(:post).with("api/chat", hash_including(stream: true)).and_call_original response end - it "yields the intermediate responses to the block", vcr: {cassette_name: "Langchain_LLM_Ollama_chat_when_passing_a_block_returns_a_chat_completion"} do + it "yields the intermediate responses to the block", vcr: {cassette_name: "LangChain_LLM_Ollama_chat_when_passing_a_block_returns_a_chat_completion"} do response expect(streamed_responses.length).to eq 51 - expect(streamed_responses).to be_all { |resp| resp.is_a?(Langchain::LLM::Response::OllamaResponse) } + expect(streamed_responses).to be_all { |resp| resp.is_a?(LangChain::LLM::Response::OllamaResponse) } expect(streamed_responses.map(&:chat_completion).join).to include("I'm just a language model") end end @@ -161,7 +161,7 @@ it "returns a summarization", :vcr do response = subject.summarize(text: mary_had_a_little_lamb_text) - expect(response).to be_a(Langchain::LLM::Response::OllamaResponse) + expect(response).to be_a(LangChain::LLM::Response::OllamaResponse) expect(response.completion).not_to match(/summary/) expect(response.completion).to start_with("A young girl named Mary has a pet lamb") end diff --git a/spec/lib/langchain/llm/openai_spec.rb b/spec/lib/langchain/llm/openai_spec.rb index 352d0ceef..32357f0cf 100644 --- a/spec/lib/langchain/llm/openai_spec.rb +++ b/spec/lib/langchain/llm/openai_spec.rb @@ -2,7 +2,7 @@ require "openai" -RSpec.describe Langchain::LLM::OpenAI do +RSpec.describe LangChain::LLM::OpenAI do let(:subject) { described_class.new(api_key: "123", **options) } let(:options) { {} } @@ -12,19 +12,19 @@ expect { subject }.not_to raise_error end - it "forwards the Langchain logger to the client" do + it "forwards the LangChain logger to the client" do f_mock = double("f_mock", response: nil) allow(OpenAI::Client).to receive(:new) { |**, &block| block&.call(f_mock) } subject - expect(f_mock).to have_received(:response).with(:logger, Langchain.logger, anything) + expect(f_mock).to have_received(:response).with(:logger, LangChain.logger, 
anything) end context "when log level is DEBUG" do before do - Langchain.logger.level = Logger::DEBUG + LangChain.logger.level = Logger::DEBUG end it "configures the client to log the errors" do @@ -46,7 +46,7 @@ context "when log level is not DEBUG" do before do - Langchain.logger.level = Logger::INFO + LangChain.logger.level = Logger::INFO end it "configures the client to NOT log the errors" do @@ -137,7 +137,7 @@ it "returns valid llm response object" do response = subject.embed(text: "Hello World") - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.model).to eq("text-embedding-3-small") expect(response.embedding).to eq([-0.007097351, 0.0035200312, -0.0069700438]) expect(response.prompt_tokens).to eq(2) @@ -149,7 +149,7 @@ it "returns an embedding" do response = subject.embed(text: "Hello World") - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.embedding).to eq(result) end end @@ -162,7 +162,7 @@ it "returns an embedding" do response = subject.embed(text: "Hello World", model: "text-embedding-ada-002", user: "id") - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.embedding).to eq(result) end end @@ -205,7 +205,7 @@ end end - Langchain::LLM::OpenAI::EMBEDDING_SIZES.each do |model_key, dimensions| + LangChain::LLM::OpenAI::EMBEDDING_SIZES.each do |model_key, dimensions| model = model_key.to_s context "when using model #{model}" do @@ -240,7 +240,7 @@ it "generates an embedding using #{model}" do embedding_response = subject.embed(text: text, model: model) - expect(embedding_response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(embedding_response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(embedding_response.model).to eq(model) expect(embedding_response.embedding).to eq(result) expect(embedding_response.prompt_tokens).to eq(2) @@ -324,7 +324,7 @@ it "returns valid llm response object" do response = subject.complete(prompt: "Hello World") - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.model).to eq("gpt-4o-mini-2024-07-18") expect(response.completion).to eq("The meaning of life is subjective and can vary from person to person.") expect(response.prompt_tokens).to eq(7) @@ -335,7 +335,7 @@ it "returns a completion" do response = subject.complete(prompt: "Hello World") - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.model).to eq("gpt-4o-mini-2024-07-18") expect(response.completions).to eq([{"message" => {"role" => "assistant", "content" => "The meaning of life is subjective and can vary from person to person."}, "finish_reason" => "stop", "index" => 0}]) expect(response.completion).to eq("The meaning of life is subjective and can vary from person to person.") @@ -363,7 +363,7 @@ end before do - allow(Langchain).to receive(:logger).and_return(logger) + allow(LangChain).to receive(:logger).and_return(logger) allow(logger).to receive(:warn) end @@ -433,7 +433,7 @@ it "raises an error" do expect { subject.complete(prompt: "Hello World") - }.to raise_error(Langchain::LLM::ApiError, "OpenAI API error: User location is not supported for the API use.") + }.to 
raise_error(LangChain::LLM::ApiError, "OpenAI API error: User location is not supported for the API use.") end end end @@ -503,13 +503,13 @@ beep: :boop ) - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) end it "returns valid llm response object" do response = subject.chat(messages: [{role: "user", content: "What is the meaning of life?"}]) - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.model).to eq("gpt-4o-mini-2024-07-18") expect(response.chat_completion).to eq("As an AI language model, I don't have feelings, but I'm functioning well. How can I assist you today?") expect(response.prompt_tokens).to eq(14) @@ -521,7 +521,7 @@ it "sends prompt within messages" do response = subject.chat(messages: [{role: "user", content: prompt}]) - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.model).to eq("gpt-4o-mini-2024-07-18") expect(response.completions).to eq(choices) expect(response.chat_completion).to eq(answer) @@ -637,7 +637,7 @@ it "does not raise NoMethodError and returns correctly assembled response" do expect { response = subject.chat(messages: messages, &streaming_block) - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.chat_completion).to eq(expected_completion) expect(response.role).to eq("assistant") expect(response.prompt_tokens).to eq(5) @@ -675,7 +675,7 @@ response = subject.chat(messages: [content: prompt, role: "user"]) do |chunk| chunk end - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.prompt_tokens).to eq(10) expect(response.completion_tokens).to eq(11) expect(response.total_tokens).to eq(12) @@ -712,7 +712,7 @@ response = subject.chat(messages: [content: prompt, role: "user"], n: 2) do |chunk| chunk end - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.completions).to eq( [ {"index" => 0, "message" => {"role" => "assistant", "content" => answer}, "finish_reason" => "stop"}, @@ -773,7 +773,7 @@ chunk end - expect(response).to be_a(Langchain::LLM::Response::OpenAIResponse) + expect(response).to be_a(LangChain::LLM::Response::OpenAIResponse) expect(response.raw_response.dig("choices", 0, "message", "tool_calls")).to eq(expected_tool_calls) end end @@ -786,7 +786,7 @@ it "raises an error" do expect { subject.chat(messages: [content: prompt, role: "user"]) - }.to raise_error(Langchain::LLM::ApiError, "OpenAI API error: User location is not supported for the API use.") + }.to raise_error(LangChain::LLM::ApiError, "OpenAI API error: User location is not supported for the API use.") end end diff --git a/spec/lib/langchain/llm/parameters/chat_spec.rb b/spec/lib/langchain/llm/parameters/chat_spec.rb index 02300ea4e..b4ae9bc71 100644 --- a/spec/lib/langchain/llm/parameters/chat_spec.rb +++ b/spec/lib/langchain/llm/parameters/chat_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Parameters::Chat do +RSpec.describe LangChain::LLM::Parameters::Chat do let(:aliases) do {max_tokens_supported: :max_tokens} end diff --git 
a/spec/lib/langchain/llm/replicate_spec.rb b/spec/lib/langchain/llm/replicate_spec.rb index 2b2abe452..472a787f0 100644 --- a/spec/lib/langchain/llm/replicate_spec.rb +++ b/spec/lib/langchain/llm/replicate_spec.rb @@ -2,7 +2,7 @@ require "replicate" -RSpec.describe Langchain::LLM::Replicate do +RSpec.describe LangChain::LLM::Replicate do let(:subject) { described_class.new(api_key: "123") } describe "#completion_model" do diff --git a/spec/lib/langchain/llm/response/anthropic_response_spec.rb b/spec/lib/langchain/llm/response/anthropic_response_spec.rb index 3fe841868..6008563d3 100644 --- a/spec/lib/langchain/llm/response/anthropic_response_spec.rb +++ b/spec/lib/langchain/llm/response/anthropic_response_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Response::AnthropicResponse do +RSpec.describe LangChain::LLM::Response::AnthropicResponse do let(:raw_chat_completions_response) { JSON.parse File.read("spec/fixtures/llm/anthropic/chat.json") } diff --git a/spec/lib/langchain/llm/response/aws_bedrock_meta_response_spec.rb b/spec/lib/langchain/llm/response/aws_bedrock_meta_response_spec.rb index 3500a20b3..318933676 100644 --- a/spec/lib/langchain/llm/response/aws_bedrock_meta_response_spec.rb +++ b/spec/lib/langchain/llm/response/aws_bedrock_meta_response_spec.rb @@ -1,8 +1,8 @@ # frozen_string_literal: true -require_relative "#{Langchain.root}/langchain/llm/response/aws_bedrock_meta_response" +require_relative "#{LangChain.root}/langchain/llm/response/aws_bedrock_meta_response" -RSpec.describe Langchain::LLM::Response::AwsBedrockMetaResponse do +RSpec.describe LangChain::LLM::Response::AwsBedrockMetaResponse do let(:raw_chat_completions_response) { JSON.parse File.read("spec/fixtures/llm/aws_bedrock_meta/complete.json") } diff --git a/spec/lib/langchain/llm/response/cohere_response_spec.rb b/spec/lib/langchain/llm/response/cohere_response_spec.rb index 04e58e438..03a5d88d4 100644 --- a/spec/lib/langchain/llm/response/cohere_response_spec.rb +++ b/spec/lib/langchain/llm/response/cohere_response_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Response::CohereResponse do +RSpec.describe LangChain::LLM::Response::CohereResponse do let(:raw_chat_completions_response) { JSON.parse File.read("spec/fixtures/llm/cohere/chat.json") } diff --git a/spec/lib/langchain/llm/response/google_gemini_response_spec.rb b/spec/lib/langchain/llm/response/google_gemini_response_spec.rb index ec94c927b..947a33603 100644 --- a/spec/lib/langchain/llm/response/google_gemini_response_spec.rb +++ b/spec/lib/langchain/llm/response/google_gemini_response_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Response::GoogleGeminiResponse do +RSpec.describe LangChain::LLM::Response::GoogleGeminiResponse do describe "#chat_completion" do let(:raw_response) { JSON.parse File.read("spec/fixtures/llm/google_gemini/chat.json") diff --git a/spec/lib/langchain/llm/response/mistral_ai_response_spec.rb b/spec/lib/langchain/llm/response/mistral_ai_response_spec.rb index 5f3791ba0..80a20e172 100644 --- a/spec/lib/langchain/llm/response/mistral_ai_response_spec.rb +++ b/spec/lib/langchain/llm/response/mistral_ai_response_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Response::MistralAIResponse do +RSpec.describe LangChain::LLM::Response::MistralAIResponse do let(:raw_chat_completions_response) { JSON.parse File.read("spec/fixtures/llm/mistral_ai/chat.json") } diff --git 
a/spec/lib/langchain/llm/response/ollama_response_spec.rb b/spec/lib/langchain/llm/response/ollama_response_spec.rb index c241eaac4..b65b0a0bd 100644 --- a/spec/lib/langchain/llm/response/ollama_response_spec.rb +++ b/spec/lib/langchain/llm/response/ollama_response_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::Response::OllamaResponse do +RSpec.describe LangChain::LLM::Response::OllamaResponse do subject { described_class.new(raw_response) } describe "chat completions" do diff --git a/spec/lib/langchain/llm/unified_parameters_spec.rb b/spec/lib/langchain/llm/unified_parameters_spec.rb index f1a82c886..01f21a96a 100644 --- a/spec/lib/langchain/llm/unified_parameters_spec.rb +++ b/spec/lib/langchain/llm/unified_parameters_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::LLM::UnifiedParameters do +RSpec.describe LangChain::LLM::UnifiedParameters do # For now, the unifier only maps keys, but in the future it may be beneficial # to introduce an ActiveModel-style validator to restrict inputs to conform to # types required of the LLMs APIs diff --git a/spec/lib/langchain/loader_spec.rb b/spec/lib/langchain/loader_spec.rb index 9054c3baa..c40c8e4c9 100644 --- a/spec/lib/langchain/loader_spec.rb +++ b/spec/lib/langchain/loader_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Loader do +RSpec.describe LangChain::Loader do describe "#load" do let(:status) { ["200", "OK"] } let(:body) { "Lorem Ipsum" } @@ -45,7 +45,7 @@ let(:path) { "spec/fixtures/loaders/example.txt" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("Lorem Ipsum") end end @@ -54,7 +54,7 @@ let(:path) { "http://example.com/example.txt" } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq("Lorem Ipsum") end end @@ -65,7 +65,7 @@ let(:path) { "spec/fixtures/loaders/example.html" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq("Lorem Ipsum\n\nDolor sit amet.") end end @@ -76,7 +76,7 @@ let(:content_type) { "text/html" } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq("Lorem Ipsum\n\nDolor sit amet.") end end @@ -87,7 +87,7 @@ let(:path) { "spec/fixtures/loaders/cairo-unicode.pdf" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("UTF-8 encoded sample plain-text file") expect(subject.value).to include("The ASCII compatible UTF-8 encoding used in this plain-text file") expect(subject.value).to include("The Greek anthem:") @@ -103,7 +103,7 @@ let(:content_type) { "application/pdf" } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("UTF-8 encoded sample plain-text file") expect(subject.value).to include("The ASCII compatible UTF-8 encoding used in this plain-text file") expect(subject.value).to include("The Greek anthem:") @@ -119,7 +119,7 @@ let(:path) { "spec/fixtures/loaders/sample.docx" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("Lorem ipsum dolor sit amet, 
consectetur adipiscing elit. Nunc ac faucibus odio.") end end @@ -130,7 +130,7 @@ let(:content_type) { "application/vnd.openxmlformats-officedocument.wordprocessingml.document" } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc ac faucibus odio.") end end @@ -158,7 +158,7 @@ let(:path) { "spec/fixtures/loaders/example.csv" } it "loads data from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq(result) end end @@ -171,7 +171,7 @@ subject { described_class.new(path, options).load } it "loads data from csv file separated by semicolon" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq(semicolon_result) end end @@ -183,7 +183,7 @@ let(:content_type) { "text/csv" } it "loads data from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq(result) end end @@ -198,7 +198,7 @@ let(:path) { "spec/fixtures/loaders/example.json" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include(result) end end @@ -209,7 +209,7 @@ let(:body) { File.read("spec/fixtures/loaders/example.json") } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include(result) end end @@ -229,7 +229,7 @@ let(:path) { "spec/fixtures/loaders/example.jsonl" } it "loads text from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq(result) end end @@ -240,7 +240,7 @@ let(:body) { File.read("spec/fixtures/loaders/example.jsonl") } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq(result) end end @@ -251,7 +251,7 @@ let(:path) { "spec/fixtures/loaders/example.md" } it "loads markdown from file" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("Lorem Ipsum") end end @@ -260,7 +260,7 @@ let(:path) { "http://example.com/example.md" } it "loads markdown from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq("Lorem Ipsum") end end @@ -275,7 +275,7 @@ let(:path) { "spec/fixtures/loaders/example.txt" } it "returns data processed with custom processor" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to include("muspI meroL") end end @@ -284,7 +284,7 @@ let(:path) { "http://example.com/example.txt" } it "loads text from URL" do - expect(subject).to be_a(Langchain::Data) + expect(subject).to be_a(LangChain::Data) expect(subject.value).to eq("muspI meroL") end end @@ -292,13 +292,13 @@ context "with an optional chunker class" do subject do - described_class.new(path, chunker: Langchain::Chunker::RecursiveText) + described_class.new(path, chunker: LangChain::Chunker::RecursiveText) end let(:path) { "http://example.com/example.txt" } - it "passes an optional chunker class to Langchain::Data" do - expect(Langchain::Data).to receive(:new).with(instance_of(String), chunker: Langchain::Chunker::RecursiveText, source: nil) + it "passes an optional 
chunker class to LangChain::Data" do + expect(LangChain::Data).to receive(:new).with(instance_of(String), chunker: LangChain::Chunker::RecursiveText, source: nil) subject.load end end @@ -308,7 +308,7 @@ let(:path) { "spec/fixtures/loaders/example.swf" } it "raises unknown format" do - expect { subject }.to raise_error Langchain::Loader::UnknownFormatError + expect { subject }.to raise_error LangChain::Loader::UnknownFormatError end end @@ -318,7 +318,7 @@ let(:content_type) { "application/vnd.swf" } it "raises unknown format" do - expect { subject }.to raise_error Langchain::Loader::UnknownFormatError + expect { subject }.to raise_error LangChain::Loader::UnknownFormatError end end end diff --git a/spec/lib/langchain/output_parsers/base_spec.rb b/spec/lib/langchain/output_parsers/base_spec.rb index 94c10ac92..679142750 100644 --- a/spec/lib/langchain/output_parsers/base_spec.rb +++ b/spec/lib/langchain/output_parsers/base_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::OutputParsers::Base do +RSpec.describe LangChain::OutputParsers::Base do describe "#parse" do it "must be implemented by subclasses" do expect { described_class.new.parse(text: "") }.to raise_error(NotImplementedError) diff --git a/spec/lib/langchain/output_parsers/fix_spec.rb b/spec/lib/langchain/output_parsers/fix_spec.rb index 22d951f77..098e9184b 100644 --- a/spec/lib/langchain/output_parsers/fix_spec.rb +++ b/spec/lib/langchain/output_parsers/fix_spec.rb @@ -2,24 +2,24 @@ require_relative "spec_helper" -RSpec.describe Langchain::OutputParsers::OutputFixingParser do +RSpec.describe LangChain::OutputParsers::OutputFixingParser do let!(:llm_example) do - Langchain::LLM::OpenAI.new(api_key: "123") + LangChain::LLM::OpenAI.new(api_key: "123") end let!(:parser_example) do - Langchain::OutputParsers::StructuredOutputParser.from_json_schema(schema_example) + LangChain::OutputParsers::StructuredOutputParser.from_json_schema(schema_example) end let!(:prompt_template_example) do - Langchain::Prompt::PromptTemplate.new( + LangChain::Prompt::PromptTemplate.new( template: "Generate details of a fictional character.\n{format_instructions}\nCharacter description: {description}", input_variables: ["description", "format_instructions"] ) end let!(:fix_prompt_template_example) do - Langchain::Prompt::PromptTemplate.from_template( + LangChain::Prompt::PromptTemplate.from_template( <<~INSTRUCTIONS Custom Instructions: -------------- @@ -59,13 +59,13 @@ describe "#initialize" do it "creates a new instance" do - expect(described_class.new(**kwargs_example)).to be_a(Langchain::OutputParsers::OutputFixingParser) + expect(described_class.new(**kwargs_example)).to be_a(LangChain::OutputParsers::OutputFixingParser) end [ - {named: "llm", expect_class: "Langchain::LLM", llm: {}}, - {named: "parser", expect_class: "Langchain::OutputParsers", parser: {}}, - {named: "prompt", expect_class: "Langchain::Prompt::PromptTemplate", prompt: {}} + {named: "llm", expect_class: "LangChain::LLM", llm: {}}, + {named: "parser", expect_class: "LangChain::OutputParsers", parser: {}}, + {named: "prompt", expect_class: "LangChain::Prompt::PromptTemplate", prompt: {}} ].each do |data| named = data[:named] expect_class = data[:expect_class] @@ -80,13 +80,13 @@ describe ".from_llm" do it "creates a new instance from given llm, parser and prompt" do parser = described_class.from_llm(**kwargs_example) - expect(parser).to be_a(Langchain::OutputParsers::OutputFixingParser) + expect(parser).to 
be_a(LangChain::OutputParsers::OutputFixingParser) expect(parser.prompt.to_h).to eq(kwargs_example[:prompt].to_h) end it "defaults prompt to a naive_fix_prompt" do parser = described_class.from_llm(llm: kwargs_example[:llm], parser: kwargs_example[:parser]) - expect(parser).to be_a(Langchain::OutputParsers::OutputFixingParser) + expect(parser).to be_a(LangChain::OutputParsers::OutputFixingParser) expect(parser.prompt.template).to eq( <<~INSTRUCTIONS.chomp Instructions: @@ -110,8 +110,8 @@ end [ - {named: "llm", expect_class: "Langchain::LLM", llm: nil}, - {named: "parser", expect_class: "Langchain::OutputParsers", parser: nil} + {named: "llm", expect_class: "LangChain::LLM", llm: nil}, + {named: "parser", expect_class: "LangChain::OutputParsers", parser: nil} ].each do |data| named = data[:named] expect_class = data[:expect_class] @@ -173,7 +173,7 @@ .with(prompt: match(fix_prompt_matcher_example)) .with(no_args) .and_return("I still don't understand, I'm only a large language model :)") - expect { parser.parse("Whoops I don't understand") }.to raise_error(Langchain::OutputParsers::OutputParserException) + expect { parser.parse("Whoops I don't understand") }.to raise_error(LangChain::OutputParsers::OutputParserException) expect(parser.llm).to have_received(:chat).once end @@ -183,7 +183,7 @@ .with(prompt: match(fix_prompt_matcher_example)) .with(no_args) .and_return(invalid_schema_json_text_response) - expect { parser.parse("Whoops I don't understand") }.to raise_error(Langchain::OutputParsers::OutputParserException) + expect { parser.parse("Whoops I don't understand") }.to raise_error(LangChain::OutputParsers::OutputParserException) expect(parser.llm).to have_received(:chat).once end end diff --git a/spec/lib/langchain/output_parsers/structured_spec.rb b/spec/lib/langchain/output_parsers/structured_spec.rb index 245967194..1c2023a6b 100644 --- a/spec/lib/langchain/output_parsers/structured_spec.rb +++ b/spec/lib/langchain/output_parsers/structured_spec.rb @@ -2,7 +2,7 @@ require_relative "spec_helper" -RSpec.describe Langchain::OutputParsers::StructuredOutputParser do +RSpec.describe LangChain::OutputParsers::StructuredOutputParser do let!(:json_with_backticks_text_response) do <<~RESPONSE I'm responding with a narrative even though you asked for only json response: @@ -19,7 +19,7 @@ described_class.new( schema: schema_example ) - ).to be_a(Langchain::OutputParsers::StructuredOutputParser) + ).to be_a(LangChain::OutputParsers::StructuredOutputParser) end it "creates a new instance from a Hash schema" do @@ -27,7 +27,7 @@ described_class.new( schema: schema_example ) - ).to be_a(Langchain::OutputParsers::StructuredOutputParser) + ).to be_a(LangChain::OutputParsers::StructuredOutputParser) end it "fails if input is not a valid json schema" do @@ -88,7 +88,7 @@ parser = described_class.from_json_schema(schema_example) expect { parser.parse("Sorry, I'm just a large language model blah blah..") - }.to raise_error(Langchain::OutputParsers::OutputParserException) + }.to raise_error(LangChain::OutputParsers::OutputParserException) end it "fails to parse response text if the json does not conform to the schema" do @@ -96,7 +96,7 @@ expect { parser.parse(invalid_schema_json_text_response) }.to raise_error( - Langchain::OutputParsers::OutputParserException, + LangChain::OutputParsers::OutputParserException, /'#\/interests' did not contain a minimum number of items/ ) end @@ -105,7 +105,7 @@ describe ".from_json_schema" do it "creates a new instance from given JSON::Schema" do parser = 
described_class.from_json_schema(schema_example) - expect(parser).to be_a(Langchain::OutputParsers::StructuredOutputParser) + expect(parser).to be_a(LangChain::OutputParsers::StructuredOutputParser) expect(parser.schema.to_json).to eq(schema_example.to_json) end end diff --git a/spec/lib/langchain/processors/base_spec.rb b/spec/lib/langchain/processors/base_spec.rb index cbcc5c72a..d06e0af99 100644 --- a/spec/lib/langchain/processors/base_spec.rb +++ b/spec/lib/langchain/processors/base_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Base do +RSpec.describe LangChain::Processors::Base do describe "#parse" do it "must be implemented by subclasses" do expect { described_class.new.parse("") }.to raise_error(NotImplementedError) diff --git a/spec/lib/langchain/processors/csv_spec.rb b/spec/lib/langchain/processors/csv_spec.rb index d9e9a86d8..2d039b20a 100644 --- a/spec/lib/langchain/processors/csv_spec.rb +++ b/spec/lib/langchain/processors/csv_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::CSV do +RSpec.describe LangChain::Processors::CSV do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.csv") } diff --git a/spec/lib/langchain/processors/docx_spec.rb b/spec/lib/langchain/processors/docx_spec.rb index be1900b53..472b17f59 100644 --- a/spec/lib/langchain/processors/docx_spec.rb +++ b/spec/lib/langchain/processors/docx_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Docx do +RSpec.describe LangChain::Processors::Docx do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/sample.docx") } let(:text) { "Lorem ipsum dolor sit amet, consectetur adipiscing elit" } diff --git a/spec/lib/langchain/processors/eml_spec.rb b/spec/lib/langchain/processors/eml_spec.rb index 5232a5dd0..4325571f2 100644 --- a/spec/lib/langchain/processors/eml_spec.rb +++ b/spec/lib/langchain/processors/eml_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Eml do +RSpec.describe LangChain::Processors::Eml do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/sample.eml") } let(:text) { "Lorem Ipsum.\nDolor sit amet." } diff --git a/spec/lib/langchain/processors/html_spec.rb b/spec/lib/langchain/processors/html_spec.rb index 3d1c22911..4cbee3849 100644 --- a/spec/lib/langchain/processors/html_spec.rb +++ b/spec/lib/langchain/processors/html_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::HTML do +RSpec.describe LangChain::Processors::HTML do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.html") } let(:text) { "Lorem Ipsum\n\nDolor sit amet." 
} diff --git a/spec/lib/langchain/processors/json_spec.rb b/spec/lib/langchain/processors/json_spec.rb index 04330174c..26c261626 100644 --- a/spec/lib/langchain/processors/json_spec.rb +++ b/spec/lib/langchain/processors/json_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::JSON do +RSpec.describe LangChain::Processors::JSON do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.json") } let(:data) do diff --git a/spec/lib/langchain/processors/jsonl_spec.rb b/spec/lib/langchain/processors/jsonl_spec.rb index b40e2878e..05f47c9d8 100644 --- a/spec/lib/langchain/processors/jsonl_spec.rb +++ b/spec/lib/langchain/processors/jsonl_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::JSONL do +RSpec.describe LangChain::Processors::JSONL do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.jsonl") } let(:data) do diff --git a/spec/lib/langchain/processors/markdown_spec.rb b/spec/lib/langchain/processors/markdown_spec.rb index 589d665aa..81675e0fd 100644 --- a/spec/lib/langchain/processors/markdown_spec.rb +++ b/spec/lib/langchain/processors/markdown_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Markdown do +RSpec.describe LangChain::Processors::Markdown do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.md") } let(:text) { "Lorem ipsum dolor sit amet, consectetur adipiscing elit." } diff --git a/spec/lib/langchain/processors/pdf_spec.rb b/spec/lib/langchain/processors/pdf_spec.rb index 6f83666f5..2bb52b2c1 100644 --- a/spec/lib/langchain/processors/pdf_spec.rb +++ b/spec/lib/langchain/processors/pdf_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::PDF do +RSpec.describe LangChain::Processors::PDF do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/cairo-unicode.pdf") } let(:text) { "UTF-8 encoded sample plain-text file" } diff --git a/spec/lib/langchain/processors/pptx_spec.rb b/spec/lib/langchain/processors/pptx_spec.rb index 572360d6e..56277863c 100644 --- a/spec/lib/langchain/processors/pptx_spec.rb +++ b/spec/lib/langchain/processors/pptx_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Pptx do +RSpec.describe LangChain::Processors::Pptx do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/sample.pptx") } let(:text) { "Lorem ipsum dolor sit amet, consectetur adipiscing elit" } diff --git a/spec/lib/langchain/processors/text_spec.rb b/spec/lib/langchain/processors/text_spec.rb index a769a9b82..8cc1411b9 100644 --- a/spec/lib/langchain/processors/text_spec.rb +++ b/spec/lib/langchain/processors/text_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Text do +RSpec.describe LangChain::Processors::Text do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/example.txt") } let(:text) { "Lorem Ipsum is simply dummy text of the printing and typesetting industry" } diff --git a/spec/lib/langchain/processors/xls_spec.rb b/spec/lib/langchain/processors/xls_spec.rb index acb97d6d5..658ada6ae 100644 --- a/spec/lib/langchain/processors/xls_spec.rb +++ b/spec/lib/langchain/processors/xls_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Xls do +RSpec.describe LangChain::Processors::Xls do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/sample.xls") } let(:data) { 
diff --git a/spec/lib/langchain/processors/xlsx_spec.rb b/spec/lib/langchain/processors/xlsx_spec.rb index 0d3540281..5eb893962 100644 --- a/spec/lib/langchain/processors/xlsx_spec.rb +++ b/spec/lib/langchain/processors/xlsx_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Processors::Xlsx do +RSpec.describe LangChain::Processors::Xlsx do describe "#parse" do let(:file) { File.open("spec/fixtures/loaders/sample.xlsx") } let(:data) { diff --git a/spec/lib/langchain/prompts/base_spec.rb b/spec/lib/langchain/prompts/base_spec.rb index 800952822..12d139176 100644 --- a/spec/lib/langchain/prompts/base_spec.rb +++ b/spec/lib/langchain/prompts/base_spec.rb @@ -2,8 +2,8 @@ require "tempfile" -RSpec.describe Langchain::Prompt::Base do - subject { Langchain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) } +RSpec.describe LangChain::Prompt::Base do + subject { LangChain::Prompt::PromptTemplate.new(template: "Tell me a {adjective} joke.", input_variables: ["adjective"]) } describe "#save" do let(:file_path) { Tempfile.new(["test_file", ".json"]).path } diff --git a/spec/lib/langchain/prompts/few_shot_prompt_template_spec.rb b/spec/lib/langchain/prompts/few_shot_prompt_template_spec.rb index 1622dbfd7..6758437c0 100644 --- a/spec/lib/langchain/prompts/few_shot_prompt_template_spec.rb +++ b/spec/lib/langchain/prompts/few_shot_prompt_template_spec.rb @@ -1,13 +1,13 @@ # frozen_string_literal: true -RSpec.describe Langchain::Prompt::FewShotPromptTemplate do +RSpec.describe LangChain::Prompt::FewShotPromptTemplate do let(:input_variables) { ["adjective"] } let(:validate_template) { true } let(:prompt) do described_class.new( prefix: "Write antonyms for the following words.", suffix: "Input: {adjective}\nOutput:", - example_prompt: Langchain::Prompt::PromptTemplate.new( + example_prompt: LangChain::Prompt::PromptTemplate.new( input_variables: ["input", "output"], template: "Input: {input}\nOutput: {output}" ), @@ -22,7 +22,7 @@ describe "#initialize" do it "creates a new instance" do - expect(prompt).to be_a(Langchain::Prompt::FewShotPromptTemplate) + expect(prompt).to be_a(LangChain::Prompt::FewShotPromptTemplate) expect(prompt.format(adjective: "good")).to eq( <<~PROMPT.chomp Write antonyms for the following words. 
diff --git a/spec/lib/langchain/prompts/loading_spec.rb b/spec/lib/langchain/prompts/loading_spec.rb index d5844fdd1..616b83d21 100644 --- a/spec/lib/langchain/prompts/loading_spec.rb +++ b/spec/lib/langchain/prompts/loading_spec.rb @@ -1,11 +1,11 @@ # frozen_string_literal: true -RSpec.describe Langchain::Prompt do +RSpec.describe LangChain::Prompt do describe "#load_from_path" do context "when json file" do it "loads a new prompt from file" do prompt = described_class.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.json") - expect(prompt).to be_a(Langchain::Prompt::PromptTemplate) + expect(prompt).to be_a(LangChain::Prompt::PromptTemplate) expect(prompt.input_variables).to eq(["adjective", "content"]) end end @@ -13,15 +13,15 @@ context "when yaml file" do it "loads a new prompt from file" do prompt = described_class.load_from_path(file_path: "spec/fixtures/prompt/prompt_template.yaml") - expect(prompt).to be_a(Langchain::Prompt::PromptTemplate) + expect(prompt).to be_a(LangChain::Prompt::PromptTemplate) expect(prompt.input_variables).to eq(["adjective", "content"]) end end it "loads a new few shot prompt from file" do prompt = described_class.load_from_path(file_path: "spec/fixtures/prompt/few_shot_prompt_template.json") - expect(prompt).to be_a(Langchain::Prompt::FewShotPromptTemplate) - expect(prompt.example_prompt).to be_a(Langchain::Prompt::PromptTemplate) + expect(prompt).to be_a(LangChain::Prompt::FewShotPromptTemplate) + expect(prompt.example_prompt).to be_a(LangChain::Prompt::PromptTemplate) expect(prompt.prefix).to eq("Write antonyms for the following words.") end end diff --git a/spec/lib/langchain/prompts/prompt_template_spec.rb b/spec/lib/langchain/prompts/prompt_template_spec.rb index d79035e83..92494e2d9 100644 --- a/spec/lib/langchain/prompts/prompt_template_spec.rb +++ b/spec/lib/langchain/prompts/prompt_template_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Prompt::PromptTemplate do +RSpec.describe LangChain::Prompt::PromptTemplate do let!(:prompt_example) do <<~PROMPT.chomp I want you to act as a naming consultant for new companies. 
@@ -15,7 +15,7 @@ template: prompt_example, input_variables: ["product"] ) - ).to be_a(Langchain::Prompt::PromptTemplate) + ).to be_a(LangChain::Prompt::PromptTemplate) end it "raises an error if the template is invalid" do diff --git a/spec/lib/langchain/tool/calculator_spec.rb b/spec/lib/langchain/tool/calculator_spec.rb index 63a7159c7..9ba3c8f79 100644 --- a/spec/lib/langchain/tool/calculator_spec.rb +++ b/spec/lib/langchain/tool/calculator_spec.rb @@ -2,11 +2,11 @@ require "eqn" -RSpec.describe Langchain::Tool::Calculator do +RSpec.describe LangChain::Tool::Calculator do describe "#execute" do it "calculates the result" do response = subject.execute(input: "2+2") - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq(4) end @@ -14,7 +14,7 @@ allow(Eqn::Calculator).to receive(:calc).and_raise(Eqn::ParseError) response = subject.execute(input: "two plus two") - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("\"two plus two\" is an invalid mathematical expression") end end diff --git a/spec/lib/langchain/tool/database_spec.rb b/spec/lib/langchain/tool/database_spec.rb index de339634f..fb31e13df 100644 --- a/spec/lib/langchain/tool/database_spec.rb +++ b/spec/lib/langchain/tool/database_spec.rb @@ -2,7 +2,7 @@ require "sequel" -RSpec.describe Langchain::Tool::Database do +RSpec.describe LangChain::Tool::Database do subject { described_class.new(connection_string: "mock:///") } describe "#execute" do @@ -24,13 +24,13 @@ it "returns salary and count of users" do response = subject.execute(input: "SELECT max(salary), count(*) FROM users") - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq([{salary: 23500, count: 101}]) end it "returns jobs and counts of users" do response = subject.execute(input: "SELECT job, count(*) FROM users GROUP BY job") - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq([{job: "teacher", count: 5}, {job: "cook", count: 98}]) end end @@ -44,7 +44,7 @@ it "returns the schema" do response = subject.dump_schema - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("CREATE TABLE users(\nid integer PRIMARY KEY,\nname string,\njob string,\nFOREIGN KEY (job) REFERENCES jobs(job));\n") end @@ -52,7 +52,7 @@ allow(subject.db).to receive(:foreign_key_list).with(:users).and_return([{columns: [:job], table: :jobs, key: nil}]) response = subject.dump_schema - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("CREATE TABLE users(\nid integer PRIMARY KEY,\nname string,\njob string,\nFOREIGN KEY (job) REFERENCES jobs());\n") end end diff --git a/spec/lib/langchain/tool/file_system_spec.rb b/spec/lib/langchain/tool/file_system_spec.rb index df2a6f0af..3a9deb3b2 100644 --- a/spec/lib/langchain/tool/file_system_spec.rb +++ b/spec/lib/langchain/tool/file_system_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Tool::FileSystem do +RSpec.describe LangChain::Tool::FileSystem do subject { described_class.new } context "directory operations" do @@ -10,14 +10,14 @@ it "lists a directory" do allow(Dir).to receive(:entries).with(directory_path).and_return(entries) response = 
subject.list_directory(directory_path: directory_path) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq(entries) end it "returns a no such directory error" do allow(Dir).to receive(:entries).with(directory_path).and_raise(Errno::ENOENT) response = subject.list_directory(directory_path: directory_path) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("No such directory: #{directory_path}") end end @@ -30,14 +30,14 @@ it "successfully writes" do allow(File).to receive(:write).with(file_path, content) response = subject.write_to_file(file_path: file_path, content: content) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("File written successfully") end it "returns a permission denied error" do allow(File).to receive(:write).with(file_path, content).and_raise(Errno::EACCES) response = subject.write_to_file(file_path: file_path, content: content) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("Permission denied: #{file_path}") end end @@ -49,14 +49,14 @@ it "successfully reads" do allow(File).to receive(:read).with(file_path).and_return(content) response = subject.read_file(file_path: file_path) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq(content) end it "returns an error" do allow(File).to receive(:read).with(file_path).and_raise(Errno::ENOENT) response = subject.read_file(file_path: file_path) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("No such file: #{file_path}") end end diff --git a/spec/lib/langchain/tool/google_search_spec.rb b/spec/lib/langchain/tool/google_search_spec.rb index bd78036ff..4507e4fa8 100644 --- a/spec/lib/langchain/tool/google_search_spec.rb +++ b/spec/lib/langchain/tool/google_search_spec.rb @@ -2,7 +2,7 @@ require "google_search_results" -RSpec.describe Langchain::Tool::GoogleSearch do +RSpec.describe LangChain::Tool::GoogleSearch do subject { described_class.new(api_key: "123") } @@ -32,7 +32,7 @@ describe "#execute" do it "returns the answer" do response = subject.execute(input: "how tall is empire state building") - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("1,250′, 1,454′ to tip") end end diff --git a/spec/lib/langchain/tool/ruby_code_interpreter_spec.rb b/spec/lib/langchain/tool/ruby_code_interpreter_spec.rb index fa67716e5..42964e953 100644 --- a/spec/lib/langchain/tool/ruby_code_interpreter_spec.rb +++ b/spec/lib/langchain/tool/ruby_code_interpreter_spec.rb @@ -1,10 +1,10 @@ # frozen_string_literal: true -RSpec.describe Langchain::Tool::RubyCodeInterpreter do +RSpec.describe LangChain::Tool::RubyCodeInterpreter do describe "#execute" do it "executes the expression" do response = subject.execute(input: '"hello world".reverse!') - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to be_a(LangChain::ToolResponse) expect(response.content).to eq("dlrow olleh") end @@ -18,7 +18,7 @@ def reverse(string) CODE response = subject.execute(input: code) - expect(response).to be_a(Langchain::ToolResponse) + expect(response).to 
be_a(LangChain::ToolResponse) expect(response.content).to eq("dlrow olleh") end end diff --git a/spec/lib/langchain/tool/tavily_spec.rb b/spec/lib/langchain/tool/tavily_spec.rb index 6130bad3c..219baf5d1 100644 --- a/spec/lib/langchain/tool/tavily_spec.rb +++ b/spec/lib/langchain/tool/tavily_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Tool::Tavily do +RSpec.describe LangChain::Tool::Tavily do subject { described_class.new(api_key: "123") } let(:response) { @@ -16,7 +16,7 @@ max_results: 1, include_answer: true ) - expect(result).to be_a(Langchain::ToolResponse) + expect(result).to be_a(LangChain::ToolResponse) expect(result.content).to eq(response) end end diff --git a/spec/lib/langchain/tool/weather_spec.rb b/spec/lib/langchain/tool/weather_spec.rb index fea14a7c1..b98d81e72 100644 --- a/spec/lib/langchain/tool/weather_spec.rb +++ b/spec/lib/langchain/tool/weather_spec.rb @@ -1,6 +1,6 @@ # spec/langchain/tool/weather_spec.rb -RSpec.describe Langchain::Tool::Weather do +RSpec.describe LangChain::Tool::Weather do let(:api_key) { "dummy_api_key" } let(:weather_tool) { described_class.new(api_key: api_key) } @@ -30,7 +30,7 @@ it "returns the parsed weather data" do result = weather_tool.get_current_weather(city: city, state_code: state_code, country_code: country_code) - expect(result).to be_a(Langchain::ToolResponse) + expect(result).to be_a(LangChain::ToolResponse) expect(result.content).to eq({ temperature: "72 °F", humidity: "50%", @@ -54,7 +54,7 @@ it "returns an error message" do result = weather_tool.get_current_weather(city: city, state_code: state_code) - expect(result).to be_a(Langchain::ToolResponse) + expect(result).to be_a(LangChain::ToolResponse) expect(result.content).to eq("Location not found") end end @@ -66,7 +66,7 @@ it "returns the error message" do result = weather_tool.get_current_weather(city: city, state_code: state_code) - expect(result).to be_a(Langchain::ToolResponse) + expect(result).to be_a(LangChain::ToolResponse) expect(result.content).to eq("API request failed: 404 - Not Found") end end diff --git a/spec/lib/langchain/tool/wikipedia_spec.rb b/spec/lib/langchain/tool/wikipedia_spec.rb index 390f160b7..eee3aeefe 100644 --- a/spec/lib/langchain/tool/wikipedia_spec.rb +++ b/spec/lib/langchain/tool/wikipedia_spec.rb @@ -2,7 +2,7 @@ require "wikipedia" -RSpec.describe Langchain::Tool::Wikipedia do +RSpec.describe LangChain::Tool::Wikipedia do describe "#execute" do before do allow(Wikipedia).to receive(:find) diff --git a/spec/lib/langchain/utils/hash_transformer_spec.rb b/spec/lib/langchain/utils/hash_transformer_spec.rb index 7f6d477de..cd2b153b6 100644 --- a/spec/lib/langchain/utils/hash_transformer_spec.rb +++ b/spec/lib/langchain/utils/hash_transformer_spec.rb @@ -1,4 +1,4 @@ -RSpec.describe Langchain::Utils::HashTransformer do +RSpec.describe LangChain::Utils::HashTransformer do describe ".symbolize_keys" do it "symbolizes string keys at the top level of the hash" do hash = {"name" => "Alice", "age" => 30} diff --git a/spec/lib/langchain/utils/image_wrapper_spec.rb b/spec/lib/langchain/utils/image_wrapper_spec.rb index 348e5808c..e5b32a126 100644 --- a/spec/lib/langchain/utils/image_wrapper_spec.rb +++ b/spec/lib/langchain/utils/image_wrapper_spec.rb @@ -1,6 +1,6 @@ # frozen_string_literal: true -RSpec.describe Langchain::Utils::ImageWrapper do +RSpec.describe LangChain::Utils::ImageWrapper do let(:image_url) { "https://example.com/sf-cable-car.jpeg" } let(:uri_https) { instance_double(URI::HTTPS) } diff --git 
a/spec/lib/langchain/utils/to_boolean_spec.rb b/spec/lib/langchain/utils/to_boolean_spec.rb index dc4b88eee..12f37140d 100644 --- a/spec/lib/langchain/utils/to_boolean_spec.rb +++ b/spec/lib/langchain/utils/to_boolean_spec.rb @@ -2,7 +2,7 @@ require "spec_helper" -RSpec.describe Langchain::Utils::ToBoolean do +RSpec.describe LangChain::Utils::ToBoolean do describe "#to_bool" do subject(:to_bool) { described_class.new.to_bool(value) } diff --git a/spec/lib/langchain/vectorsearch/base_spec.rb b/spec/lib/langchain/vectorsearch/base_spec.rb index 37563a365..fc7f43abc 100644 --- a/spec/lib/langchain/vectorsearch/base_spec.rb +++ b/spec/lib/langchain/vectorsearch/base_spec.rb @@ -1,13 +1,13 @@ # frozen_string_literal: true -RSpec.describe Langchain::Vectorsearch::Base do - subject { described_class.new(llm: Langchain::LLM::OpenAI.new(api_key: "123")) } +RSpec.describe LangChain::Vectorsearch::Base do + subject { described_class.new(llm: LangChain::LLM::OpenAI.new(api_key: "123")) } describe "#initialize" do it "correctly sets llm" do expect( subject.llm - ).to be_a(Langchain::LLM::OpenAI) + ).to be_a(LangChain::LLM::OpenAI) end end @@ -110,9 +110,9 @@ describe "#add_data" do it "allows adding multiple paths" do paths = [ - Langchain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), - Langchain.root.join("../spec/fixtures/loaders/test_doc.pdf"), - Langchain.root.join("../spec/fixtures/loaders/example.txt") + LangChain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), + LangChain.root.join("../spec/fixtures/loaders/test_doc.pdf"), + LangChain.root.join("../spec/fixtures/loaders/example.txt") ] expect(subject).to receive(:add_texts).with(texts: array_with_strings_matcher(size: 14)) @@ -126,15 +126,15 @@ context "with an optional chunker class" do subject do - described_class.new(llm: Langchain::LLM::OpenAI.new(api_key: "123")) + described_class.new(llm: LangChain::LLM::OpenAI.new(api_key: "123")) end - let(:paths) { Langchain.root.join("../spec/fixtures/loaders/example.txt") } + let(:paths) { LangChain.root.join("../spec/fixtures/loaders/example.txt") } - it "passes an optional chunker class to Langchain::Loader", :aggregate_failures do - expect(Langchain::Loader).to receive(:new).with(paths, {}, chunker: Langchain::Chunker::RecursiveText).and_call_original + it "passes an optional chunker class to LangChain::Loader", :aggregate_failures do + expect(LangChain::Loader).to receive(:new).with(paths, {}, chunker: LangChain::Chunker::RecursiveText).and_call_original # #add_data will raise NotImplementedError when it calls #add_texts, this is expected and ignored in this test - expect { subject.add_data(paths: paths, chunker: Langchain::Chunker::RecursiveText) }.to raise_error(NotImplementedError) + expect { subject.add_data(paths: paths, chunker: LangChain::Chunker::RecursiveText) }.to raise_error(NotImplementedError) end end end diff --git a/spec/lib/langchain/vectorsearch/chroma_spec.rb b/spec/lib/langchain/vectorsearch/chroma_spec.rb index 6f6dbe681..7a07220df 100644 --- a/spec/lib/langchain/vectorsearch/chroma_spec.rb +++ b/spec/lib/langchain/vectorsearch/chroma_spec.rb @@ -2,14 +2,14 @@ require "chroma-db" -RSpec.describe Langchain::Vectorsearch::Chroma do +RSpec.describe LangChain::Vectorsearch::Chroma do let(:index_name) { "documents" } subject { described_class.new( url: "http://localhost:8000", index_name: index_name, - llm: Langchain::LLM::OpenAI.new(api_key: "123") + llm: LangChain::LLM::OpenAI.new(api_key: "123") ) } diff --git 
a/spec/lib/langchain/vectorsearch/elasticsearch_spec.rb b/spec/lib/langchain/vectorsearch/elasticsearch_spec.rb index e1d5b1d99..ec4bbf287 100644 --- a/spec/lib/langchain/vectorsearch/elasticsearch_spec.rb +++ b/spec/lib/langchain/vectorsearch/elasticsearch_spec.rb @@ -2,10 +2,10 @@ require "elasticsearch" -RSpec.describe Langchain::Vectorsearch::Elasticsearch do - let!(:llm) { Langchain::LLM::HuggingFace.new(api_key: "123456") } +RSpec.describe LangChain::Vectorsearch::Elasticsearch do + let!(:llm) { LangChain::LLM::HuggingFace.new(api_key: "123456") } subject { - Langchain::Vectorsearch::Elasticsearch.new( + LangChain::Vectorsearch::Elasticsearch.new( url: "http://localhost:9200", index_name: "langchain", llm: llm diff --git a/spec/lib/langchain/vectorsearch/hnswlib_spec.rb b/spec/lib/langchain/vectorsearch/hnswlib_spec.rb index 6649e91ca..84db4b8fd 100644 --- a/spec/lib/langchain/vectorsearch/hnswlib_spec.rb +++ b/spec/lib/langchain/vectorsearch/hnswlib_spec.rb @@ -2,16 +2,16 @@ require "hnswlib" -RSpec.describe Langchain::Vectorsearch::Hnswlib do +RSpec.describe LangChain::Vectorsearch::Hnswlib do before do FileUtils.rm("./test.ann") if File.exist?("./test.ann") end before do - allow_any_instance_of(Langchain::LLM::GoogleGemini).to receive(:default_dimensions).and_return(3) + allow_any_instance_of(LangChain::LLM::GoogleGemini).to receive(:default_dimensions).and_return(3) end - let(:llm) { Langchain::LLM::GoogleGemini.new(api_key: "123") } + let(:llm) { LangChain::LLM::GoogleGemini.new(api_key: "123") } subject { described_class.new(llm: llm, path_to_index: "./test.ann") } describe "#initialize" do diff --git a/spec/lib/langchain/vectorsearch/milvus_spec.rb b/spec/lib/langchain/vectorsearch/milvus_spec.rb index e755db051..f56f47098 100644 --- a/spec/lib/langchain/vectorsearch/milvus_spec.rb +++ b/spec/lib/langchain/vectorsearch/milvus_spec.rb @@ -2,7 +2,7 @@ require "milvus" -RSpec.describe Langchain::Vectorsearch::Milvus do +RSpec.describe LangChain::Vectorsearch::Milvus do let(:index_name) { "documents" } subject { @@ -10,7 +10,7 @@ url: "http://localhost:8000", api_key: "123", index_name: index_name, - llm: Langchain::LLM::OpenAI.new(api_key: "123") + llm: LangChain::LLM::OpenAI.new(api_key: "123") ) } diff --git a/spec/lib/langchain/vectorsearch/pgvector_spec.rb b/spec/lib/langchain/vectorsearch/pgvector_spec.rb index 719150fbe..45a720a91 100644 --- a/spec/lib/langchain/vectorsearch/pgvector_spec.rb +++ b/spec/lib/langchain/vectorsearch/pgvector_spec.rb @@ -5,14 +5,14 @@ if ENV["POSTGRES_URL"] client = ::PG.connect(ENV["POSTGRES_URL"]) - subject = Langchain::Vectorsearch::Pgvector.new( + subject = LangChain::Vectorsearch::Pgvector.new( url: ENV["POSTGRES_URL"], index_name: "products", - llm: Langchain::LLM::OpenAI.new(api_key: "123") + llm: LangChain::LLM::OpenAI.new(api_key: "123") ) subject.create_default_schema - RSpec.describe Langchain::Vectorsearch::Pgvector do + RSpec.describe LangChain::Vectorsearch::Pgvector do let(:client) { client } subject { diff --git a/spec/lib/langchain/vectorsearch/pinecone_spec.rb b/spec/lib/langchain/vectorsearch/pinecone_spec.rb index 4ef45205b..9992c6845 100644 --- a/spec/lib/langchain/vectorsearch/pinecone_spec.rb +++ b/spec/lib/langchain/vectorsearch/pinecone_spec.rb @@ -2,10 +2,10 @@ require "pinecone" -RSpec.describe Langchain::Vectorsearch::Pinecone do +RSpec.describe LangChain::Vectorsearch::Pinecone do let(:index_name) { "documents" } let(:namespace) { "namespaced" } - let(:llm) { Langchain::LLM::OpenAI.new(api_key: "123") } + let(:llm) { 
LangChain::LLM::OpenAI.new(api_key: "123") } subject { described_class.new( @@ -197,9 +197,9 @@ describe "#add_data" do it "allows adding multiple paths" do paths = [ - Langchain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), - Langchain.root.join("../spec/fixtures/loaders/test_doc.pdf"), - Langchain.root.join("../spec/fixtures/loaders/example.txt") + LangChain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), + LangChain.root.join("../spec/fixtures/loaders/test_doc.pdf"), + LangChain.root.join("../spec/fixtures/loaders/example.txt") ] expect(subject).to receive(:add_texts).with(texts: array_with_strings_matcher(size: 14), namespace: "") @@ -213,9 +213,9 @@ it "allows namespaces" do paths = [ - Langchain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), - Langchain.root.join("../spec/fixtures/loaders/test_doc.pdf"), - Langchain.root.join("../spec/fixtures/loaders/example.txt") + LangChain.root.join("../spec/fixtures/loaders/cairo-unicode.pdf"), + LangChain.root.join("../spec/fixtures/loaders/test_doc.pdf"), + LangChain.root.join("../spec/fixtures/loaders/example.txt") ] expect(subject).to receive(:add_texts).with(texts: array_with_strings_matcher(size: 14), namespace: "earthlings") @@ -224,12 +224,12 @@ end context "with an optional chunker class" do - let(:paths) { Langchain.root.join("../spec/fixtures/loaders/example.txt") } + let(:paths) { LangChain.root.join("../spec/fixtures/loaders/example.txt") } - it "passes an optional chunker class to Langchain::Loader", :aggregate_failures do - expect(Langchain::Loader).to receive(:new).with(paths, {}, chunker: Langchain::Chunker::RecursiveText).and_call_original + it "passes an optional chunker class to LangChain::Loader", :aggregate_failures do + expect(LangChain::Loader).to receive(:new).with(paths, {}, chunker: LangChain::Chunker::RecursiveText).and_call_original expect(subject).to receive(:add_texts).and_return(true) - subject.add_data(paths: paths, chunker: Langchain::Chunker::RecursiveText) + subject.add_data(paths: paths, chunker: LangChain::Chunker::RecursiveText) end end end diff --git a/spec/lib/langchain/vectorsearch/qdrant_spec.rb b/spec/lib/langchain/vectorsearch/qdrant_spec.rb index 68e25965d..b4c75546f 100644 --- a/spec/lib/langchain/vectorsearch/qdrant_spec.rb +++ b/spec/lib/langchain/vectorsearch/qdrant_spec.rb @@ -2,7 +2,7 @@ require "qdrant" -RSpec.describe Langchain::Vectorsearch::Qdrant do +RSpec.describe LangChain::Vectorsearch::Qdrant do let(:index_name) { "documents" } subject { @@ -10,7 +10,7 @@ url: "http://localhost:8000", index_name: index_name, api_key: "secret", - llm: Langchain::LLM::OpenAI.new(api_key: "123") + llm: LangChain::LLM::OpenAI.new(api_key: "123") ) } diff --git a/spec/lib/langchain/vectorsearch/weaviate_spec.rb b/spec/lib/langchain/vectorsearch/weaviate_spec.rb index c09a74fb7..184de7b72 100644 --- a/spec/lib/langchain/vectorsearch/weaviate_spec.rb +++ b/spec/lib/langchain/vectorsearch/weaviate_spec.rb @@ -2,13 +2,13 @@ require "weaviate" -RSpec.describe Langchain::Vectorsearch::Weaviate do +RSpec.describe LangChain::Vectorsearch::Weaviate do subject { described_class.new( url: "http://localhost:8080", api_key: "123", index_name: "Products", - llm: Langchain::LLM::OpenAI.new(api_key: "123") + llm: LangChain::LLM::OpenAI.new(api_key: "123") ) } diff --git a/spec/tool_definition_spec.rb b/spec/tool_definition_spec.rb index 2f67091bb..f851c586e 100644 --- a/spec/tool_definition_spec.rb +++ b/spec/tool_definition_spec.rb @@ -2,10 +2,10 @@ require "spec_helper" -RSpec.describe 
Langchain::ToolDefinition do +RSpec.describe LangChain::ToolDefinition do let(:dummy_class) do Class.new do - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition def self.name "DummyTool" @@ -26,9 +26,10 @@ def self.name it "returns the correct snake_case name for complex class names" do complex_class = Class.new do - extend Langchain::ToolDefinition + extend LangChain::ToolDefinition + def self.name - "Langchain::Tool::API1Interface" + "LangChain::Tool::API1Interface" end end expect(complex_class.tool_name).to eq("langchain_tool_api1_interface") @@ -45,7 +46,7 @@ def self.name end end - describe Langchain::ToolDefinition::ParameterBuilder do + describe LangChain::ToolDefinition::ParameterBuilder do let(:builder) { described_class.new(parent_type: "object") } it "aliases item to property" do @@ -173,7 +174,7 @@ def self.name end end - describe Langchain::ToolDefinition::FunctionSchemas do + describe LangChain::ToolDefinition::FunctionSchemas do let(:tool_name) { "test_tool" } subject(:function_schemas) { described_class.new(tool_name) } diff --git a/spec/tool_response_spec.rb b/spec/tool_response_spec.rb index 6aacf7fd9..5ffc435bf 100644 --- a/spec/tool_response_spec.rb +++ b/spec/tool_response_spec.rb @@ -2,7 +2,7 @@ require "spec_helper" -RSpec.describe Langchain::ToolResponse do +RSpec.describe LangChain::ToolResponse do describe "#initialize" do context "with content" do subject(:response) { described_class.new(content: "test content") } diff --git a/test/dummy/app/views/layouts/application.html.erb b/test/dummy/app/views/layouts/application.html.erb index de5ffb404..d68898250 100644 --- a/test/dummy/app/views/layouts/application.html.erb +++ b/test/dummy/app/views/layouts/application.html.erb @@ -1,7 +1,7 @@ - <%= content_for(:title) || "Langchainrb" %> + <%= content_for(:title) || "LangChain.rb" %> diff --git a/test/dummy/app/views/pwa/manifest.json.erb b/test/dummy/app/views/pwa/manifest.json.erb index a59290b14..0a399cc3b 100644 --- a/test/dummy/app/views/pwa/manifest.json.erb +++ b/test/dummy/app/views/pwa/manifest.json.erb @@ -1,5 +1,5 @@ { - "name": "Langchainrb", + "name": "LangChain.rb", "icons": [ { "src": "/icon.png", @@ -16,7 +16,7 @@ "start_url": "/", "display": "standalone", "scope": "/", - "description": "Langchainrb.", + "description": "LangChain.rb.", "theme_color": "red", "background_color": "red" } diff --git a/test/dummy/config/routes.rb b/test/dummy/config/routes.rb index f40b85af9..2ce4ff064 100644 --- a/test/dummy/config/routes.rb +++ b/test/dummy/config/routes.rb @@ -1,3 +1,3 @@ Rails.application.routes.draw do - mount Langchain::Engine => "/langchain" + mount LangChain::Engine => "/langchain" end diff --git a/test/langchain_test.rb b/test/langchain_test.rb index f368a7d98..45de15512 100644 --- a/test/langchain_test.rb +++ b/test/langchain_test.rb @@ -1,7 +1,7 @@ require "test_helper" -class LangchainTest < ActiveSupport::TestCase +class LangChainTest < ActiveSupport::TestCase test "it has a version number" do - assert Langchain::VERSION + assert LangChain::VERSION end end