
Getting Started


DHTI is a modular reference architecture for bringing Generative AI into healthcare workflows. It is a quick prototyping platform that wires together an EMR (OpenMRS), a GenAI app server (LangServe), optional self-hosted LLMs (Ollama), a vector store (Redis), monitoring (LangFuse), an MCP tool server, and supporting utilities (Neo4j, HAPI/CQL), so you can develop, test, and share “elixirs” (LangServe apps) and “conches” (OpenMRS O3 UI modules or CDS Hooks containers) without re-engineering your stack.

To get started, clone the repo and install the two required prerequisites: Node.js (LTS recommended) and Docker. Optionally, install Python 3.9+ if you plan to author LangServe elixirs or run the examples; the elixir base class provides everything you need to get started.
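Since an elixir is just a LangServe app, the following minimal sketch shows what one looks like. The `/echo` path, the file name, and the trivial runnable are illustrative assumptions, not DHTI's actual layout:

```python
# Minimal sketch of an "elixir": a LangServe app serving a LangChain runnable.
# The path, title, and echo function are illustrative, not DHTI's real layout.
from fastapi import FastAPI
from langchain_core.runnables import RunnableLambda
from langserve import add_routes

app = FastAPI(title="example-elixir")

# Any LangChain runnable can be served; a trivial echo stands in for a chain.
echo = RunnableLambda(lambda text: f"elixir received: {text}")

# LangServe generates /echo/invoke, /echo/stream, and /echo/playground.
add_routes(app, echo, path="/echo")

# Run locally (file saved as main.py):  uvicorn main:app --reload
```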

Prerequisites

| Required | Optional |
| --- | --- |
| Node.js (LTS) | Python 3.9+ for custom elixirs |
| Docker Engine / Desktop | uv, cookiecutter for scaffolding |
| | AI agent (Claude Desktop or similar) for vibe-coding |

No API keys needed for the demo (uses a mock LLM).
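To make that concrete, a mock LLM simply returns canned responses, so no provider keys are required. Below is a sketch of the pattern using LangChain's FakeListLLM; whether the demo wires its mock exactly this way is an assumption:

```python
# Sketch of the mock-LLM idea: canned responses, no API keys required.
# Whether DHTI's demo uses FakeListLLM exactly is an assumption here.
from langchain_community.llms import FakeListLLM

mock_llm = FakeListLLM(responses=["Hello! I am the demo's mock LLM."])
print(mock_llm.invoke("Any prompt at all"))  # always returns a canned response
```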

Watch this video

[Video: DHTI]

Ultra-quick start (1 minute)

git clone https://github.com/dermatologist/dhti.git
cd dhti
./demo.sh

AI-Powered Setup (Vibe Coding)

If you have an AI agent (like Claude Desktop) with the start-dhti skill, you can scaffold and start the entire stack by simply asking:

"Start a new DHTI project with a healthcare chatbot."

Quickstart

npx dhti-cli help                         # list commands
npx dhti-cli compose add -m langserve     # add LangServe to ~/dhti/docker-compose.yml
npx dhti-cli compose read                 # view generated compose
npx dhti-cli elixir install -g https://github.com/dermatologist/dhti-elixir.git -n dhti-elixir-schat -s packages/simple_chat
npx dhti-cli docker -n yourdockerhandle/genai-test:1.0 -t elixir
npx dhti-cli docker -u                    # start services from compose

Visit http://localhost/openmrs/spa/home (log in as admin / Admin123) to see the conch.

Add Real LLMs Later

Gemini, OpenAI, and OpenRouter are picked up automatically when the respective environment variables are set. Configure the LLM by editing ~/dhti/elixir/app/bootstrap.py (model name, provider, temperature, etc.). Ollama models can be pulled into the Ollama container and referenced there.
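As a rough illustration of the kind of provider selection you might write in bootstrap.py, here is a hedged sketch; the function name, model names, and fallback order are assumptions for illustration, not DHTI's actual code:

```python
# Hypothetical sketch of env-var-driven LLM selection for bootstrap.py.
# Model names and the fallback order are assumptions, not DHTI's defaults.
import os

def get_llm():
    if os.getenv("OPENAI_API_KEY"):
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model="gpt-4o-mini", temperature=0.2)
    if os.getenv("GOOGLE_API_KEY"):
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0.2)
    if os.getenv("OPENROUTER_API_KEY"):
        # OpenRouter speaks the OpenAI wire protocol, so ChatOpenAI works.
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(
            model="meta-llama/llama-3.1-8b-instruct",
            base_url="https://openrouter.ai/api/v1",
            api_key=os.environ["OPENROUTER_API_KEY"],
        )
    # Fall back to a model pulled into the Ollama container.
    from langchain_ollama import ChatOllama
    return ChatOllama(model="llama3", base_url="http://ollama:11434")
```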

Iterate Quickly

See Hot Reload for instructions on copying source/dist into running containers for rapid feedback.

Clean Up

Stop and remove all containers:

npx dhti-cli docker -d

Next Steps
