[![R-CMD-check](https://github.com/Pandora-IsoMemo/llmModule/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/Pandora-IsoMemo/llmModule/actions/workflows/R-CMD-check.yaml)
<!-- badges: end -->

`llmModule` provides a structured and extensible R interface to both remote
(e.g., [OpenAI](https://platform.openai.com), [DeepSeek](https://platform.deepseek.com)) and local
(via [Ollama](https://ollama.com)) Large Language Model (LLM) APIs. It simplifies key workflows
such as model selection, prompt configuration, and request handling through a consistent
object-oriented interface.

The package offers methods and S3 classes for:

- API key management and validation
- Prompt configuration
- Sending chat prompts
- Extracting responses

## 🚀 Features

- Modular, object-oriented interface using S3 classes:
  - `RemoteLlmApi` for remote providers (OpenAI, DeepSeek)
  - `LocalLlmApi` for local Ollama servers
  - `LlmPromptConfig` to configure prompt messages and parameters
  - `LlmResponse` for structured handling of responses
- Comprehensive API validation:
  - Validates API key format, provider match, and key functionality via a test request
  - Clear error reporting with automatic suggestion of likely provider mismatches
- Local model support (via [Ollama](https://ollama.com)):
  - Supports pulling new models
  - Allows exclusion of deprecated or irrelevant models via `exclude_pattern`
- Unified method interface:
  - `get_llm_models()` to fetch available models
  - `send_prompt()` to submit prompts and retrieve responses
- Optional Docker integration for local deployment (see below)

---

## 🧪 Quick Example

```r
# Create an LLM API object (expects a file containing your API key)
api <- new_RemoteLlmApi("~/.secrets/openai.txt", provider = "OpenAI")

# Set up a prompt
prompt <- new_LlmPromptConfig(
  model = "gpt-4.1",
  prompt_content = "What's the capital of Italy?"
)

# Send the prompt
result <- send_prompt(api, prompt)

# Extract the assistant's reply
result$choices[[1]]$message$content
```

---

## 📦 Docker Installation (recommended)

Run this app in your browser with just one command! The Docker setup includes all components — the
`llmModule` Shiny frontend and the `ollama` backend for local LLM model serving — no manual setup
required.

### ✅ 1. Install Docker


These commands will:

1. The first time you run this, it will download the necessary Docker images for
   - `ollama` (for model serving and its REST API) and
   - the `llm-module` (the Shiny web frontend that controls Ollama and can also interact with other
     LLM APIs like OpenAI, DeepSeek).
2. After the images are pulled, a Docker network and a Docker volume will be created, and both containers will start.
3. The `llm-module` container hosts the application, which you can access in your web browser at `http://127.0.0.1:3838/`.
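Once both containers are running, you can optionally check that the Ollama backend is reachable. This assumes the compose setup publishes Ollama's default port `11434` on localhost, which may differ in your configuration:

```shell
# List the models served by the local Ollama REST API
# (11434 is Ollama's default port; the mapping here is an assumption)
curl http://127.0.0.1:11434/api/tags
```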

```shell
OLLAMA_LOCAL_MODELS_PATH=</path/to/your/models> docker compose up --build
```

*Tip:* Use `docker compose down` to stop and clean up the containers when done.