Commit e60cb60

committed: update ReadMe

1 parent 0bde2d3

File tree

4 files changed: +67 additions, −13 deletions


R/00-OllamaModelManager-helpers.R

Lines changed: 0 additions & 1 deletion

```diff
@@ -76,7 +76,6 @@ is_model_available <- function(manager, model_name) {
 # @param manager An OllamaModelManager object
 # @param model_name Character string of the model name
 # @return An OllamaModel object
-# @export
 pull_model_if_needed <- function(manager, model_name) {
 
   available <- is_model_available(manager, model_name)
```
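Removing `# @export` makes `pull_model_if_needed()` internal to the package. The rest of the function body is not shown in this hunk; a minimal sketch of the pull-if-needed pattern it implements might look like the following, where `pull_model()` and `get_model()` are hypothetical helper names assumed for illustration, not taken from the package source:

```r
# Illustrative sketch only, assuming hypothetical helpers pull_model()
# and get_model(); the actual package implementation is not shown here.
pull_model_if_needed <- function(manager, model_name) {

  available <- is_model_available(manager, model_name)
  if (!available) {
    # assumed helper: asks the Ollama server to download the model
    pull_model(manager, model_name)
  }
  # assumed helper: wraps the (now available) model as an OllamaModel object
  get_model(manager, model_name)
}
```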

R/01-LlmPromptConfig-class.R

Lines changed: 4 additions & 3 deletions

```diff
@@ -4,7 +4,8 @@
 #' making requests to Large Language Models (LLMs) such as OpenAI's GPT models and DeepSeek models.
 #'
 #' @param prompt_content character string containing the primary instruction or query for the model. This serves as the main input to the LLM.
-#' @param model Character string specifying the model to use (e.g., `'gpt-4-turbo'` for OpenAI or `'deepseek-chat'` for DeepSeek). To retrieve a list of valid models for each LLM, use the \code{get_llm_models()} function.
+#' @param model Character string specifying the model to use (e.g., `'gpt-4.1'` for OpenAI or `'deepseek-chat'` for DeepSeek).
+#' To retrieve a list of valid models for each LLM, use the \code{get_llm_models()} method.
 #'
 #' See the following documentation for valid models:
 #' - \href{https://platform.openai.com/docs/models}{OpenAI model list}
@@ -30,10 +31,10 @@
 #' models <- get_llm_models(api)
 #' }
 #'
-#' # Create a parameter object for OpenAI GPT-4 Turbo
+#' # Create a parameter object for OpenAI GPT-4.1
 #' params <- new_LlmPromptConfig(
 #'   prompt_content = 'Explain entropy in simple terms.',
-#'   model = 'gpt-4-turbo',
+#'   model = 'gpt-4.1',
 #'   temperature = 0.7,
 #'   max_tokens = 150
 #' )
```
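The documented parameters suggest a plain S3 constructor. A hedged sketch of how such a constructor might validate its inputs is below; the validation rules and the helper name are assumptions for illustration and are not taken from the package source:

```r
# Illustrative sketch in the style of new_LlmPromptConfig(); the
# checks and defaults below are assumed, not the package's actual code.
sketch_prompt_config <- function(prompt_content, model,
                                 temperature = 0.7, max_tokens = 150) {
  stopifnot(is.character(prompt_content), nzchar(prompt_content))
  stopifnot(is.character(model), nzchar(model))
  stopifnot(is.numeric(temperature), temperature >= 0, temperature <= 2)
  stopifnot(is.numeric(max_tokens), max_tokens > 0)
  structure(
    list(prompt_content = prompt_content, model = model,
         temperature = temperature, max_tokens = max_tokens),
    class = "LlmPromptConfig"
  )
}
```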

README.md

Lines changed: 59 additions & 6 deletions
````diff
@@ -4,12 +4,67 @@
 [![R-CMD-check](https://github.com/Pandora-IsoMemo/llmModule/actions/workflows/R-CMD-check.yaml/badge.svg)](https://github.com/Pandora-IsoMemo/llmModule/actions/workflows/R-CMD-check.yaml)
 <!-- badges: end -->
 
+`llmModule` provides a structured and extensible R interface for working with both remote
+([OpenAI](https://platform.openai.com), [DeepSeek](https://platform.deepseek.com)) and local
+(via Ollama) Large Language Model (LLM) APIs. It simplifies key workflows such as model selection,
+prompt configuration, and request handling through a consistent object-oriented interface.
+
+It simplifies interactions with chat-based LLMs by offering methods and S3 classes for:
+
+- API key management and validation
+- Prompt configuration
+- Sending chat prompts
+- Extracting responses
+
+## 🚀 Features
+
+- Modular, object-oriented interface using S3 classes:
+  - `RemoteLlmApi` for remote providers (OpenAI, DeepSeek)
+  - `LocalLlmApi` for local Ollama servers
+  - `LlmPromptConfig` to configure prompt messages and parameters
+  - `LlmResponse` for structured handling of responses
+- Comprehensive API validation:
+  - Validates API key format, provider match, and key functionality via a test request
+  - Clear error reporting with automatic suggestion of likely provider mismatches
+- Local model support (via [Ollama](https://ollama.com)):
+  - Allows pulling new models
+  - Allows excluding deprecated or irrelevant models via `exclude_pattern`
+- Unified method interface:
+  - `get_llm_models()` to fetch available models
+  - `send_prompt()` to submit prompts and retrieve responses
+- Optional Docker integration for local deployment (see below)
+
 ---
 
-## 🧠 Docker Installation (recommended)
+## 🧪 Quick Example
+
+```r
+# Create an LLM API object
+api <- new_RemoteLlmApi("~/.secrets/openai.txt", provider = "OpenAI")
+
+# Set up a prompt
+prompt <- new_LlmPromptConfig(
+  model = "gpt-4.1",
+  prompt_content = "What's the capital of Italy?"
+)
+
+# Send the prompt
+result <- send_prompt(api, prompt)
+
+# Extract the assistant's reply
+result$choices[[1]]$message$content
+```
+
+---
+
+## 📦 Docker Installation (recommended)
 
 Run this app in your browser with just one command! The Docker setup includes all components — the
-`llm-module` Shiny frontend and the `ollama` backend for local LLM model serving — no manual setup required.
+`llmModule` Shiny frontend and the `ollama` backend for local LLM model serving — no manual setup required.
 
 ### ✅ 1. Install the software Docker
````

```diff
@@ -56,8 +111,8 @@ These commands will:
 
 1. The first time you run this, it will download the necessary Docker images for
    - `ollama` (for model serving and its REST API) and
-   - the `llm-module` (the Shiny web frontend that controls Ollama and can also interact with other LLM APIs
-     like OpenAI, Deepseek).
+   - the `llm-module` (the Shiny web frontend that controls Ollama and can also interact with other
+     LLM APIs like OpenAI, DeepSeek).
 2. After the images are pulled, a Docker network and a Docker volume will be created, and both containers will start.
 3. The `llm-module` container hosts the application, which you can access in your web browser at `http://127.0.0.1:3838/`.
```

````diff
@@ -103,5 +158,3 @@ OLLAMA_LOCAL_MODELS_PATH=</path/to/your/models> docker compose up --build
 ```
 
 *Tip:* Use `docker compose down` to stop and clean up the containers when done.
-
-
````
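Taken together, the Docker lifecycle documented in this README reduces to two commands, both taken from the README itself (the `</path/to/your/models>` value is a placeholder you must replace with a real path):

```
# Start the llm-module frontend and the ollama backend, optionally
# pointing Ollama at an existing local models directory:
OLLAMA_LOCAL_MODELS_PATH=</path/to/your/models> docker compose up --build

# Stop and clean up the containers when done:
docker compose down
```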

man/new_LlmPromptConfig.Rd

Lines changed: 4 additions & 3 deletions
Generated file; the diff is not rendered by default.
