
ReactNativeLLM

ReactNativeLLM is a reference implementation for building offline-capable conversational AI experiences on iOS and Android with a single TypeScript codebase. It combines on-device inference, contextual retrieval, and a polished chat interface into a modular foundation that can be extended for production deployments.

Executive Summary

  • Offline-first pipeline powered by react-native-ai for running quantized language models on device.
  • Context orchestration that segments Markdown knowledge bases and surfaces relevant snippets per turn.
  • A clean React Native architecture with screen-level stores, reusable UI primitives, and typed service layers.
  • Documentation site (Docusaurus) covering integration guides, API contracts, and operations playbooks.

An annotated product walkthrough is bundled as demo.MP4 in the repository root.

Solution Capabilities

  • Model lifecycle management – download, cache, and activate models with progress feedback and network awareness.
  • Conversational workspace – Gifted Chat integration, adaptive theme support, and context toggles for human-in-the-loop control.
  • Context intelligence – Markdown parsing, embedding-friendly chunking, and relevance scoring via Fuse.js.
  • Resilience features – persisted session state, graceful fallback paths when context is unavailable, and telemetry hooks for diagnostics.
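The context-intelligence pipeline above can be sketched in a few lines: split a Markdown knowledge base into heading-delimited chunks, then rank chunks against the current turn. The repository scores relevance with Fuse.js; this dependency-free sketch substitutes a plain token-overlap score, and all names are illustrative rather than the repository's actual types.

```typescript
// Sketch of Markdown chunking plus per-turn relevance ranking.
// The real implementation uses Fuse.js for fuzzy scoring; a simple
// token-overlap score stands in here so the example is self-contained.

interface Chunk {
  heading: string;
  body: string;
}

// Split a Markdown document into one chunk per heading.
function chunkMarkdown(markdown: string): Chunk[] {
  const chunks: Chunk[] = [];
  let current: Chunk = { heading: '', body: '' };
  for (const line of markdown.split('\n')) {
    if (line.startsWith('#')) {
      if (current.heading || current.body.trim()) chunks.push(current);
      current = { heading: line.replace(/^#+\s*/, ''), body: '' };
    } else {
      current.body += line + '\n';
    }
  }
  if (current.heading || current.body.trim()) chunks.push(current);
  return chunks;
}

// Fraction of query tokens that appear anywhere in the chunk.
function scoreChunk(chunk: Chunk, query: string): number {
  const tokens = new Set(query.toLowerCase().split(/\W+/).filter(Boolean));
  const text = (chunk.heading + ' ' + chunk.body).toLowerCase();
  let hits = 0;
  for (const token of tokens) {
    if (text.includes(token)) hits++;
  }
  return tokens.size === 0 ? 0 : hits / tokens.size;
}

// Return the k most relevant chunks for a user turn.
function topChunks(markdown: string, query: string, k = 3): Chunk[] {
  return chunkMarkdown(markdown)
    .map((c) => ({ c, s: scoreChunk(c, query) }))
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map((x) => x.c);
}
```

Swapping the scorer for a Fuse.js index over the same chunks recovers the fuzzy matching the repository describes.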

Architecture Overview

  • src/screens hosts high-level navigation containers for model selection and chat workflows.
  • src/components provides composable UI focused on chat ergonomics and control surfaces.
  • src/hooks encapsulates domain-specific state machines (model preparation, context refresh, connectivity).
  • src/services packages file system utilities, context processing logic, and adapters to the model control plane.
  • src/theme manages cross-platform visual styling using a context-driven design token approach.
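The domain-specific state machines that src/hooks encapsulates can be modeled as pure reducers, which keeps them testable outside React. A minimal sketch of a model-preparation machine follows; the state and event names are assumptions for illustration, not the repository's actual API.

```typescript
// Illustrative model-preparation state machine, as might back a hook in
// src/hooks via React's useReducer. States/events are assumed names.

type ModelState =
  | { status: 'idle' }
  | { status: 'downloading'; progress: number }
  | { status: 'ready'; modelId: string }
  | { status: 'error'; message: string };

type ModelEvent =
  | { type: 'DOWNLOAD_STARTED' }
  | { type: 'PROGRESS'; progress: number }
  | { type: 'DOWNLOAD_COMPLETE'; modelId: string }
  | { type: 'FAILED'; message: string }
  | { type: 'RESET' };

// Pure transition function: no side effects, so it unit-tests trivially.
function modelReducer(state: ModelState, event: ModelEvent): ModelState {
  switch (event.type) {
    case 'DOWNLOAD_STARTED':
      return { status: 'downloading', progress: 0 };
    case 'PROGRESS':
      // Ignore stray progress events outside of an active download.
      return state.status === 'downloading'
        ? { status: 'downloading', progress: event.progress }
        : state;
    case 'DOWNLOAD_COMPLETE':
      return { status: 'ready', modelId: event.modelId };
    case 'FAILED':
      return { status: 'error', message: event.message };
    case 'RESET':
      return { status: 'idle' };
  }
}
```

A screen would dispatch these events from download callbacks and render progress feedback off the `downloading` state.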

Refer to docs/ for an extended architectural deep dive and sequence diagrams.

Getting Started

Prerequisites

  • Node.js 18 or later
  • React Native CLI environment (Xcode/iOS Simulator on macOS, Android Studio + SDK/NDK)
  • CocoaPods (macOS) for iOS dependencies

Install Tooling

git clone <repository-url>
cd ReactNativeLLM
npm install         # or: yarn install

# iOS only
(cd ios && pod install)

Launch the Application

# Start Metro in a dedicated terminal
npm start

# iOS simulator
npm run ios

# Android emulator or attached device
npm run android

Additional scripts: npm test (unit tests), npm run lint (linting), and npx tsc --noEmit (TypeScript type checking).

Development Workflow

  • Use the Model Selection screen to download or activate a model before entering the chat experience.
  • Long-press the context toggle to generate a sample context.md while prototyping.
  • Store curated Markdown knowledge in the documents directory; the context manager will reindex on demand.
  • Observe network status via the header indicator to gauge whether downloads are possible.
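The download-gating decision behind the header indicator can be isolated as a pure helper. This is a hypothetical sketch: the field names loosely mirror the shape reported by a connectivity library such as @react-native-community/netinfo, but they are assumptions, not the repository's actual code.

```typescript
// Hypothetical helper: decide whether a model download should proceed
// given the current connection state. Field names are assumptions.

interface ConnectionInfo {
  isConnected: boolean;
  type: 'wifi' | 'cellular' | 'none' | 'unknown';
}

function canDownloadModel(info: ConnectionInfo, allowCellular = false): boolean {
  if (!info.isConnected) return false;
  // Quantized model weights are large; default to Wi-Fi only and treat
  // unknown connection types conservatively.
  if (info.type === 'cellular') return allowCellular;
  return info.type === 'wifi';
}
```

The header indicator would then render a "downloads unavailable" affordance whenever this returns false.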

Quality and Tooling

  • Testing – Jest harness with React Native Testing Library (see __tests__/).
  • Linting & Formatting – ESLint with the React Native recommended baseline plus Prettier.
  • Type Safety – TypeScript strict mode across screens, hooks, and services.
  • CI Ready – Scripts are structured for easy adoption in GitHub Actions or Bitrise pipelines.
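As one way to wire the scripts above into a pipeline, a minimal GitHub Actions workflow might look like the following. This is a sketch, not a file shipped in the repository; job and step names are illustrative.

```yaml
# Illustrative CI sketch running the repository's existing scripts.
name: ci
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit
      - run: npm test
```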

Documentation

  • Product and integration guides live in docs/.
  • The Docusaurus site in website/ can be served locally with npm start after installing dependencies inside that directory.
  • API references are grouped by components, hooks, and services to simplify onboarding for new contributors.

Roadmap Candidates

  • Model lifecycle enhancements (removal, version pinning, cloud sync).
  • In-app context editing and Markdown validation.
  • Export pipelines for curated conversations and context audit trails.
  • Accessibility review including screen reader flows and high-contrast themes.
  • Optional server-side relay for hybrid on-device/cloud inference.

Security & Privacy Considerations

  • All inference occurs on device; no prompts or responses are transmitted externally by default.
  • Context files remain in the application sandbox and can be rotated by the end user.
  • Review mlc-config.json and related platform files before shipping to production stores.
