pjsilicon/airplane-mode-local-llm

✈️ Airplane Mode Local LLM

Chat with AI completely offline - No internet required!

Airplane Mode Local LLM is a user-friendly chat application that lets you have conversations with AI models that run entirely on your computer. Perfect for working on planes, in areas with poor internet, or when you value your privacy.


🌟 What Can This App Do?

  • 💬 Chat with AI completely offline - No internet connection needed once set up
  • 🔄 Multiple conversations - Organize your chats like browser tabs
  • 🤖 Switch between AI models - Try different models for different tasks
  • 🎨 Dark and light themes - Easy on your eyes, day or night
  • 💾 Auto-save conversations - Your chats are saved automatically
  • 📤 Export your chats - Save conversations as Markdown, PDF, or JSON
  • ⚡ Real-time responses - Watch replies stream in as the AI writes them
  • 📱 Works on any device - Desktop, tablet, or mobile

🚀 Quick Start (Easy Launch)

The easiest way to get started! We've created simple launcher scripts that do all the work for you. Just double-click and go!

First-Time Setup (3 Easy Steps)

If this is your first time using the app, follow these steps:

Step 1: Download an AI Model

Double-click: Download AI Model.command

What it does: Downloads an AI brain to your computer (you only do this once!)

What to expect:

  • A window will open asking which model you want
  • We recommend starting with llama3.2:1b (smallest and fastest)
  • The download takes 5-15 minutes depending on your internet
  • You'll see a progress bar showing the download
  • When done, it says "SUCCESS!" and you can close the window

Tip: You only need to do this step once! After that, the AI model stays on your computer forever (unless you delete it).

Step 2: Start Ollama (The AI Engine)

Double-click: Start Ollama.command

What it does: Starts the AI engine that powers the models

What to expect:

  • A terminal window opens with messages
  • You'll see "Listening on 127.0.0.1:11434" - this means it's working!
  • Keep this window open! Don't close it while using the app
  • The window looks "stuck" - that's normal! It's running in the background

If you see an error: Make sure you installed Ollama from ollama.com (see Prerequisites section below)
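
If you'd like to confirm Ollama is really up before moving on, you can probe it from a new terminal. This is a small sketch, assuming curl is installed and Ollama is on its default address (127.0.0.1:11434 - adjust if you changed OLLAMA_HOST):

```shell
# check_ollama URL - prints whether an Ollama server answers at that URL.
# 127.0.0.1:11434 is Ollama's default address (assumption: OLLAMA_HOST
# was not changed from the default).
check_ollama() {
  if curl -s --max-time 2 "$1" >/dev/null 2>&1; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

check_ollama http://127.0.0.1:11434/
```

If this prints "not reachable", go back to the Start Ollama.command step before launching the app.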

Step 3: Start the Chat App

Double-click: Start App.command

What it does: Launches the chat interface where you talk to AI

What to expect:

  • Another terminal window opens
  • It checks if everything is ready
  • You'll see "Local: http://localhost:3010"
  • Your web browser automatically opens to the chat app
  • Keep this window open! Don't close it while chatting
  • Start typing and chatting with AI!

Using the App Every Day

After the first-time setup, starting the app is super simple:

  1. Double-click: Start Ollama.command (wait until you see "Listening...")
  2. Double-click: Start App.command (your browser opens automatically)
  3. Start chatting! Type your messages and get AI responses

That's it! No typing commands, no complex setup.

How to Stop the App

When you're done chatting:

  1. Close your web browser tab
  2. In the "Start App.command" window, press Ctrl+C or just close the window
  3. In the "Start Ollama.command" window, press Ctrl+C or just close the window

Both terminal windows can now be closed. Your conversations are auto-saved!

Troubleshooting the Launchers

Problem: "Permission denied" when double-clicking

  • Right-click the file and choose "Open"
  • Click "Open" again in the security dialog
  • Mac will remember your choice for next time

Problem: Nothing happens when I double-click

  • Make sure the file ends with .command
  • Try right-clicking → Open With → Terminal

Problem: "Ollama is not installed" error

  • You need to install Ollama first from ollama.com
  • See the Prerequisites section below for detailed instructions

Problem: "Node.js is not installed" error

  • You need to install Node.js first from nodejs.org
  • Download the LTS version and run the installer

Problem: The app window closes immediately

  • Check if you have enough disk space (need at least 5GB)
  • Try running the script again
  • If it keeps failing, see the Manual Installation section below
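
To see how much free space you actually have, a small helper works on macOS and Linux. This is a sketch that assumes the POSIX two-line `df -Pk` output format:

```shell
# free_gb PATH - prints whole gigabytes free on the volume holding PATH.
# Uses df -Pk (POSIX format: available kilobytes in column 4 of line 2).
free_gb() {
  df -Pk "$1" | awk 'NR == 2 { printf "%d\n", $4 / 1024 / 1024 }'
}

free_gb "$HOME"
```

If the number printed is below 5, free up some disk space before downloading a model.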

🎯 Who Is This For?

This app is perfect for:

  • Beginners who want to try local AI without complex setup
  • Privacy-conscious users who want AI without sending data to the cloud
  • Frequent travelers who need AI access without internet
  • Developers who want a local AI assistant
  • Students who want a study companion that works offline

📋 What You'll Need (Prerequisites)

Before we start, you'll need to install two things on your computer:

1. Node.js (JavaScript Runtime)

What is it? Node.js lets you run JavaScript applications on your computer.

How to install:

  • Windows:

    1. Go to nodejs.org
    2. Download the "LTS" version (the green button)
    3. Run the installer and follow the prompts
    4. Click "Next" through all the screens (default settings are fine)
  • Mac:

    1. Go to nodejs.org
    2. Download the "LTS" version (the green button)
    3. Open the downloaded file and follow the installer

    OR use Homebrew (if you have it):

    brew install node
  • Linux (Ubuntu/Debian):

    curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
    sudo apt-get install -y nodejs

Check if it worked: Open Terminal (Mac/Linux) or Command Prompt (Windows) and type:

node --version
npm --version

You should see version numbers like v20.x.x and 10.x.x
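
If you want to double-check that your Node.js is new enough, you can parse the version string. This is a sketch; the v18 minimum is an assumption based on what modern Vite/React apps typically require:

```shell
# node_ok VERSION - prints "ok" if the major version is 18 or newer.
# The v18 minimum is an assumption, not a documented requirement of
# this app.
node_ok() {
  major=${1#v}          # strip the leading "v"
  major=${major%%.*}    # keep only the major version number
  if [ "$major" -ge 18 ]; then echo "ok"; else echo "too old"; fi
}

node_ok "$(node --version)"
```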

2. Ollama (AI Model Runtime)

What is it? Ollama is the engine that runs AI models on your computer.

How to install:

  • Mac:

    1. Go to ollama.com
    2. Click "Download" and get the Mac version
    3. Open the downloaded file and drag Ollama to Applications
    4. Open Ollama from Applications (you'll see an icon in your menu bar)
  • Windows:

    1. Go to ollama.com
    2. Click "Download" and get the Windows version
    3. Run the installer
    4. Ollama will start automatically
  • Linux:

    curl -fsSL https://ollama.com/install.sh | sh

Check if it worked: Open Terminal/Command Prompt and type:

ollama --version

You should see a version number like 0.x.x
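
If you'd rather check all the prerequisites at once, a tiny script can do it. This sketch uses the standard `command -v` lookup, so it works in any POSIX shell:

```shell
# have CMD - prints whether a command is installed and on your PATH.
have() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

have node
have npm
have ollama
```

Anything reported as "missing" needs to be installed before you continue.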


🚀 Installation Instructions (Step-by-Step)

Note: If you're using the Easy Launcher Scripts (recommended!), you can skip most of these steps. Just install the Prerequisites above, then jump back to the "Quick Start" section at the top!


Manual Installation (Advanced Users)

Don't worry if you're new to this - we'll go through each step together!

Step 1: Download This Application

Option A: Download as ZIP (Easiest)

  1. If you downloaded this as a ZIP file, unzip it to a location you'll remember (like your Desktop or Documents folder)
  2. Remember where you saved it!

Option B: Using Git (If you know how)

git clone <repository-url>
cd ollama_test

Step 2: Open Terminal or Command Prompt

On Mac:

  1. Press Cmd + Space to open Spotlight
  2. Type "Terminal" and press Enter

On Windows:

  1. Press Windows Key + R
  2. Type "cmd" and press Enter

On Linux:

  1. Press Ctrl + Alt + T

Step 3: Navigate to the Project Folder

In the Terminal/Command Prompt, you need to go to where you saved the app.

Example (adjust the path to where YOU saved it):

# If you saved it to your Desktop on Mac:
cd ~/Desktop/ollama_test

# If you saved it to your Desktop on Windows:
cd C:\Users\YourName\Desktop\ollama_test

# If you saved it to Documents:
cd ~/Documents/ollama_test

Tip: You can usually type cd (with a space) and then drag the folder into the Terminal window - it will fill in the path for you!

Step 4: Install Dependencies

This downloads all the pieces the app needs to run. Type this command and press Enter:

npm install

What to expect:

  • You'll see a lot of text scrolling by - this is normal!
  • It might take 2-5 minutes depending on your internet speed
  • When it's done, you'll see your cursor blinking again

If you see errors:

  • Make sure you're in the right folder (see Step 3)
  • Make sure Node.js is installed correctly (see Prerequisites)
  • Try closing and reopening Terminal/Command Prompt

🎮 How to Run the Application

RECOMMENDED: Use the Easy Launcher Scripts! (see "Quick Start" section above)

With the launcher scripts, just double-click Start Ollama.command and then Start App.command. The instructions below are for advanced users who want to run everything from the command line directly.


Manual Method (Advanced)

You need to start TWO things: Ollama (the AI engine) and the app (the interface).

Step 1: Start Ollama

On Mac/Linux: Open a Terminal window and type:

ollama serve

On Windows: Ollama usually starts automatically! If not:

  1. Look for Ollama in your system tray (bottom-right corner)
  2. Right-click and select "Start"

What to expect:

  • The terminal will show messages like "Listening on 127.0.0.1:11434"
  • Keep this window open - you need Ollama running in the background
  • You won't see a cursor - this is normal, it means Ollama is running!

Step 2: Download an AI Model

Open a NEW Terminal/Command Prompt window (keep Ollama running in the first one!) and type:

ollama pull llama3.2

What to expect:

  • This downloads an AI model (about 2GB)
  • It will show a progress bar
  • Takes 5-15 minutes depending on your internet speed
  • You only need to do this ONCE!

Other models you can try later:

ollama pull llama3.2:1b     # Smaller, faster (1.3GB)
ollama pull codellama       # Good for coding (3.8GB)
ollama pull mistral         # Alternative model (4.1GB)

To see all your installed models:

ollama list
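
If you ever want to script against that list (for example, grabbing the first installed model's name), the table can be parsed with awk. This is a sketch; the column layout of `ollama list` (model name in the first column, one header row) is an assumption about its current output format:

```shell
# first_model - reads `ollama list` style output on stdin and prints
# the first model name, skipping the header row.
first_model() {
  awk 'NR > 1 && NF > 0 { print $1; exit }'
}

# Sample input mimicking the table format:
printf 'NAME          ID            SIZE    MODIFIED\nllama3.2:1b   baf6a787fdff  1.3 GB  2 days ago\n' | first_model
# -> llama3.2:1b
```

In practice you'd pipe the real command: `ollama list | first_model`.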

Step 3: Start the Chat Application

In your Terminal/Command Prompt (in the project folder), type:

npm run dev

What to expect:

  • The terminal shows the dev server starting up
  • You'll see "Local: http://localhost:3010"
  • Keep this window open while you chat

Step 4: Open Your Web Browser

  1. Open your web browser (Chrome, Firefox, Safari, Edge - any works!)
  2. Go to: http://localhost:3010
  3. You should see the chat interface!

🎉 Congratulations! You're ready to chat!


💡 How to Use the App

Starting Your First Conversation

  1. Select a model - Click the dropdown at the top and choose a model (like "llama3.2")
  2. Type your message - Use the text box at the bottom
  3. Press Enter - Watch the AI respond in real-time!

Creating Multiple Conversations

  • Click the "+ New Chat" button in the sidebar
  • Each conversation is saved automatically
  • Click on any conversation to switch to it

Switching AI Models

  • Click the model dropdown at the top
  • Select a different model
  • Each conversation can use a different model

Changing the Theme

  • Click the settings icon (⚙️) in the top-right
  • Choose between:
    • 🌞 Light mode
    • 🌙 Dark mode
    • 🖥️ System (follows your computer's theme)

Exporting Your Conversations

  1. Click the three dots (...) menu in a conversation
  2. Choose your export format:
    • Markdown (.md) - Human-readable text format
    • PDF - Printable document
    • JSON - Computer-readable format

Renaming Conversations

  1. Hover over a conversation in the sidebar
  2. Click the pencil icon (✏️)
  3. Type a new name and press Enter

Deleting Conversations

  1. Hover over a conversation in the sidebar
  2. Click the trash icon (🗑️)
  3. Confirm you want to delete it

🔧 Troubleshooting Common Issues

Problem: "Cannot connect to Ollama"

Solutions:

  1. Make sure Ollama is running (see Step 1 in "How to Run")
  2. Check if Ollama is running: ollama list in Terminal
  3. Restart Ollama:
    # Stop it (Ctrl+C in the Ollama terminal)
    # Start it again
    ollama serve

Problem: "No models available"

Solution: You need to download at least one model:

ollama pull llama3.2

Problem: "Port 3010 is already in use"

Solution: Something else is using that port. Either:

  • Close the other application
  • Or use a different port:
    npm run dev -- --port 3011
    Then visit: http://localhost:3011
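
To find out whether a port is actually taken before switching, you can try to open a TCP connection to it. This sketch relies on bash's /dev/tcp feature (an assumption: your shell is bash, as on most macOS and Linux setups):

```shell
# port_status PORT - prints whether something on this machine is
# listening on 127.0.0.1:PORT (uses bash's /dev/tcp redirection).
port_status() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}

port_status 3010
```

Alternatively, `lsof -nP -iTCP:3010 -sTCP:LISTEN` (macOS/Linux) shows which process holds the port.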

Problem: App won't start - "npm: command not found"

Solution: Node.js isn't installed correctly. Revisit the Prerequisites section and reinstall Node.js.

Problem: Ollama command not found

Solution: Ollama isn't installed correctly. Revisit the Prerequisites section and reinstall Ollama.

Problem: The AI responses are very slow

Solutions:

  • Use a smaller model: ollama pull llama3.2:1b
  • Close other applications to free up memory
  • Restart your computer
  • Check if your computer meets minimum requirements (8GB RAM recommended)

Problem: I closed the terminal and everything stopped

Solution: That's normal! You need to keep the terminals open:

  • One terminal for Ollama (ollama serve)
  • One terminal for the app (npm run dev)

⌨️ Keyboard Shortcuts

Make chatting even faster with these shortcuts:

  • Enter - Send your message
  • Shift + Enter - Add a new line (without sending)
  • Ctrl/Cmd + N - New conversation
  • Ctrl/Cmd + , - Open settings
  • Escape - Close modals/dialogs

📁 Project Structure

Curious about how this works? Here's what's inside:

ollama_test/
├── src/                      # Application source code
│   ├── components/          # UI components (buttons, chat, sidebar)
│   ├── services/            # Ollama API communication
│   ├── store/               # App state management
│   ├── hooks/               # React custom hooks
│   ├── types.ts             # TypeScript definitions
│   └── App.tsx              # Main application
├── public/                  # Static files (icons, images)
├── package.json             # Project dependencies
└── README.md               # This file!

🛠️ Technology Stack

This app is built with modern web technologies:

  • React 18 - User interface framework
  • TypeScript - Type-safe JavaScript
  • Vite - Lightning-fast build tool
  • Tailwind CSS - Styling framework
  • Zustand - Lightweight state management
  • Ollama - Local AI model runtime
  • React Hot Toast - Notifications
  • Lucide React - Beautiful icons
  • Framer Motion - Smooth animations

🔒 Privacy & Security

Your data stays on your computer:

  • ✅ No internet connection required after setup
  • ✅ No data sent to external servers
  • ✅ All conversations stored locally
  • ✅ You own your data completely
  • ✅ Open source - you can inspect the code

Note: The AI models themselves are downloaded from Ollama's servers during initial setup, but after that, everything runs offline.


🆘 Getting More Help

Community

  • Search for existing issues on GitHub
  • Ask questions in online forums
  • Check video tutorials on YouTube for "Ollama installation"

Common Questions

Q: Can I use this without internet? A: Yes! After initial setup and model downloads, everything works offline.

Q: How much disk space do I need? A: At least 5-10GB for the app and models. Larger models need more space.

Q: Which model should I use? A: Start with llama3.2:1b (smallest, fastest) or llama3.2 (balanced). Try different models to see what works best!

Q: Can I use multiple models at once? A: Each conversation uses one model, but you can have different conversations using different models.

Q: Is this safe? A: Yes! Everything runs locally on your computer. No data is sent to external servers.

Q: How do I update to a newer version? A: Download the latest version and run npm install again in the project folder.


🎓 Next Steps

Once you're comfortable with the basics:

  1. Try different models - Each has different strengths

    ollama pull codellama    # Great for programming help
    ollama pull mistral      # Good for creative writing
  2. Customize the system prompt - In Settings, change how the AI behaves

  3. Export your conversations - Save important chats for later

  4. Explore keyboard shortcuts - Get faster at chatting

  5. Run as desktop app - Use the Electron version for a native experience


📝 Tips for Best Results

For better AI responses:

  • Be specific in your questions
  • Provide context when needed
  • Break complex questions into smaller parts
  • If you don't like a response, try rephrasing your question

For better performance:

  • Use smaller models for simple tasks
  • Use larger models for complex reasoning
  • Close other applications while using AI
  • Keep your conversations organized

🙏 Credits

This application uses:

  • Ollama - For running AI models locally
  • React - For the user interface
  • Open source libraries - Built by the community

📜 License

This project is licensed under the MIT License - you're free to use, modify, and distribute it!


Built with ❤️ for people who want AI without the cloud


🚦 Quick Start Checklist

Use this checklist to make sure everything is set up:

Easy Method (Using Launcher Scripts - Recommended)

  • Node.js installed (node --version works)
  • Ollama installed (ollama --version works)
  • Project files downloaded and unzipped
  • AI model downloaded (double-clicked Download AI Model.command)
  • Ollama is running (double-clicked Start Ollama.command)
  • App is running (double-clicked Start App.command)
  • Browser opened automatically to the chat interface
  • Successfully sent a test message to the AI

🎉 If you checked all these boxes, you're all set!


Manual Method (Command Line)

  • Node.js installed (node --version works)
  • Ollama installed (ollama --version works)
  • Project files downloaded and unzipped
  • Dependencies installed (npm install completed)
  • At least one AI model downloaded (ollama pull llama3.2)
  • Ollama is running (ollama serve in a terminal)
  • App is running (npm run dev in another terminal)
  • Browser opened to http://localhost:3010
  • Successfully sent a test message to the AI

🎉 If you checked all these boxes, you're all set!


Need help? Don't get stuck - reach out to the community or check the troubleshooting section above!

About

A completely offline AI chat interface powered by Ollama. Chat with local LLM models without internet - perfect for planes, privacy, and peace of mind.
