llmModule provides a structured and extensible R interface for interacting with both remote
(e.g., OpenAI, DeepSeek) and local (via Ollama) Large Language Model (LLM) APIs. It simplifies key
workflows such as model selection, prompt configuration, and request handling through a consistent
object-oriented interface.

The package offers methods and S3 classes for:
- API key management and validation
- Prompt configuration
- Sending chat prompts
- Extracting responses
- Modular, object-oriented interface using S3 classes:
  - `RemoteLlmApi` for remote providers (OpenAI, DeepSeek)
  - `LocalLlmApi` for local Ollama servers
  - `LlmPromptConfig` to configure prompt messages and parameters
  - `LlmResponse` for structured handling of responses
- Comprehensive API validation:
  - Validates API key format, provider match, and key functionality via a test request
  - Clear error reporting with automatic suggestion of likely provider mismatches
- Local model support (via Ollama):
  - Allows pulling new models
  - Allows excluding deprecated or irrelevant models via `exclude_pattern`
- Unified method interface:
  - `get_llm_models()` to fetch available models
  - `send_prompt()` to submit prompts and retrieve responses
- Optional Docker integration for local deployment (see below)
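As a sketch of the validation behavior described above, a mismatched provider can be caught at construction time. This assumes `new_RemoteLlmApi()` signals a standard R error on validation failure; the exact error message and behavior are assumptions, not confirmed by this README.

```r
library(llmModule)

# Constructing an API object validates the key file and the provider together.
# Passing the wrong provider for the key should fail with a descriptive error
# (hypothetical handling; the exact message may differ).
api <- tryCatch(
  new_RemoteLlmApi("~/.secrets/openai.txt", provider = "DeepSeek"),
  error = function(e) {
    message("Validation failed: ", conditionMessage(e))
    NULL
  }
)

# With a matching provider, the object is created and models can be listed
api <- new_RemoteLlmApi("~/.secrets/openai.txt", provider = "OpenAI")
get_llm_models(api)
```

Note that this requires a real API key file, so it cannot run offline.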
```r
# Create an LLM API object
api <- new_RemoteLlmApi("~/.secrets/openai.txt", provider = "OpenAI")

# Set up a prompt
prompt <- new_LlmPromptConfig(
  model = "gpt-4.1",
  prompt_content = "What's the capital of Italy?"
)

# Send the prompt
result <- send_prompt(api, prompt)

# Extract the assistant's reply
result$choices[[1]]$message$content
```

Run this app in your browser with just one command! The Docker setup includes all components (the
llmModule Shiny frontend and the ollama backend for local LLM model serving), with no manual setup required.
Download installation files from one of the links below and follow installation instructions:
After Docker is installed, you can pull and run the app manually.
Open a terminal (command line):
- Windows: open the Start menu or press `Windows key + R`, type `cmd` (or `cmd.exe`) in the Run command box, and press Enter.
- macOS: open the Terminal app.
- Linux: most Linux systems use the same default keyboard shortcut to start the command line: `Ctrl-Alt-T` or `Super-T`.
To start the app you need the `docker-compose.yml` of this repository. You can either:
Run directly without cloning the repo:

```shell
curl -sL https://raw.githubusercontent.com/Pandora-IsoMemo/llmModule/refs/heads/main/docker-compose.yml | docker compose -f - up
```

OR: Clone the repository and run in the project directory:

```shell
git clone https://github.com/Pandora-IsoMemo/llmModule.git
cd llmModule
docker compose up
```
These commands will:

- The first time you run this, download the necessary Docker images for
  - `ollama` (for model serving and its REST API) and
  - `llm-module` (the Shiny web frontend that controls Ollama and can also interact with other LLM APIs like OpenAI, DeepSeek).
- After the images are pulled, create a Docker network and a Docker volume and start both containers.
- The `llm-module` container hosts the application, which you can access in your web browser at http://127.0.0.1:3838/.
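The frontend at http://127.0.0.1:3838/ drives Ollama for you. If you instead want to talk to the local server from your own R session, a sketch might look like the following. The `new_LocalLlmApi()` constructor arguments, the Ollama URL/port, and the response structure for local models are assumptions, not confirmed by this README.

```r
library(llmModule)

# Hypothetical: point a LocalLlmApi at the Ollama server started by Docker.
# Ollama's default REST port is 11434; whether compose exposes it may vary.
api <- new_LocalLlmApi("http://127.0.0.1:11434")

# List locally available models, excluding irrelevant ones via exclude_pattern
get_llm_models(api, exclude_pattern = "embed")

# Send a prompt to a locally served model
prompt <- new_LlmPromptConfig(
  model = "llama3",
  prompt_content = "What's the capital of Italy?"
)
result <- send_prompt(api, prompt)

# Extract the reply (structure shown for the remote case; may differ locally)
result$choices[[1]]$message$content
```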
To use your own pre-downloaded Ollama models, specify a custom path by setting the
`OLLAMA_LOCAL_MODELS_PATH` environment variable.
This requires cloning the repository and running Docker Compose from the project directory.

```shell
OLLAMA_LOCAL_MODELS_PATH=/path/to/your/models docker compose up
```

Default locations for Ollama models:
- Linux: `/usr/share/ollama/.ollama`
- macOS: `~/.ollama`
- Windows: `C:\Users\<username>\.ollama`
This will mount your local models into the container for faster startup and persistent access.
To build and run the app locally:

```shell
docker compose up --build
```

Use `--build` if:
- You made changes to source code or Dockerfiles,
- Or you're testing a fresh environment.
To run with a custom models path:

```shell
OLLAMA_LOCAL_MODELS_PATH=</path/to/your/models> docker compose up --build
```

Tip: Use `docker compose down` to stop and clean up the containers when done.