Chat with AI completely offline - No internet required!
Airplane Mode Local LLM is a user-friendly chat application that lets you have conversations with AI models that run entirely on your computer. Perfect for working on planes, in areas with poor internet, or when you value your privacy.
- 💬 Chat with AI completely offline - No internet connection needed once set up
- 🔄 Multiple conversations - Organize your chats like browser tabs
- 🤖 Switch between AI models - Try different models for different tasks
- 🎨 Dark and light themes - Easy on your eyes, day or night
- 💾 Auto-save conversations - Your chats are saved automatically
- 📤 Export your chats - Save conversations as Markdown, PDF, or JSON
- ⚡ Real-time responses - Watch the AI type responses in real-time
- 📱 Works on any device - Desktop, tablet, or mobile
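The real-time typing effect works because Ollama streams its reply as newline-delimited JSON chunks instead of one big response. A minimal sketch of assembling such a stream into the final text (the chunk shape follows Ollama's documented `/api/chat` streaming format; the function name is our own, not the app's actual code):

```typescript
// Each line of an Ollama streaming response is a JSON object like:
// {"message":{"role":"assistant","content":"Hel"},"done":false}
// The final chunk has "done": true. Concatenating the content
// fields reproduces the full reply.
interface StreamChunk {
  message?: { role: string; content: string };
  done: boolean;
}

function assembleStreamedReply(ndjson: string): string {
  let reply = "";
  for (const line of ndjson.split("\n")) {
    if (!line.trim()) continue; // skip blank lines
    const chunk: StreamChunk = JSON.parse(line);
    if (chunk.message) reply += chunk.message.content;
    if (chunk.done) break;
  }
  return reply;
}
```

In the app, each chunk is appended to the visible message as it arrives, which is why you see the AI "type".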
The easiest way to get started! We've created simple launcher scripts that do all the work for you. Just double-click and go!
If this is your first time using the app, follow these steps:
Double-click: `Download AI Model.command`
What it does: Downloads an AI brain to your computer (you only do this once!)
What to expect:
- A window will open asking which model you want
- We recommend starting with llama3.2:1b (smallest and fastest)
- The download takes 5-15 minutes depending on your internet
- You'll see a progress bar showing the download
- When done, it says "SUCCESS!" and you can close the window
Tip: You only need to do this step once! After that, the AI model stays on your computer forever (unless you delete it).
Double-click: `Start Ollama.command`
What it does: Starts the AI engine that powers the models
What to expect:
- A terminal window opens with messages
- You'll see "Listening on 127.0.0.1:11434" - this means it's working!
- Keep this window open! Don't close it while using the app
- The window looks "stuck" - that's normal! It's running in the background
If you see an error: Make sure you installed Ollama from ollama.com (see Prerequisites section below)
Double-click: `Start App.command`
What it does: Launches the chat interface where you talk to AI
What to expect:
- Another terminal window opens
- It checks if everything is ready
- You'll see "Local: http://localhost:3010"
- Your web browser automatically opens to the chat app
- Keep this window open! Don't close it while chatting
- Start typing and chatting with AI!
After the first-time setup, starting the app is super simple:
- Double-click: `Start Ollama.command` (wait until you see "Listening...")
- Double-click: `Start App.command` (your browser opens automatically)
- Start chatting! Type your messages and get AI responses

That's it! No typing commands, no complex setup.
When you're done chatting:
- Close your web browser tab
- In the "Start App.command" window, press `Ctrl+C` or just close the window
- In the "Start Ollama.command" window, press `Ctrl+C` or just close the window

Both terminal windows can now be closed. Your conversations are auto-saved!
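Auto-save in a browser app like this one typically means writing the conversation list to `localStorage` after every change - an assumption here, since we haven't traced the app's actual persistence code. A minimal sketch against a generic key-value store (the key name is hypothetical):

```typescript
interface Message { role: "user" | "assistant"; content: string }
interface Conversation { id: string; title: string; messages: Message[] }

// Minimal key-value interface so the sketch also runs outside a
// browser; in a real app this would be window.localStorage.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const STORAGE_KEY = "conversations"; // hypothetical key name

// Serialize the full conversation list after every change.
function saveConversations(store: KVStore, convos: Conversation[]): void {
  store.setItem(STORAGE_KEY, JSON.stringify(convos));
}

// Restore conversations on startup; an empty list if nothing saved.
function loadConversations(store: KVStore): Conversation[] {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Conversation[]) : [];
}
```

Because the data lives in your browser's storage, closing the terminals never loses a chat.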
Problem: "Permission denied" when double-clicking
- Right-click the file and choose "Open"
- Click "Open" again in the security dialog
- Mac will remember your choice for next time
Problem: Nothing happens when I double-click
- Make sure the file ends with `.command`
- Try right-clicking → Open With → Terminal
Problem: "Ollama is not installed" error
- You need to install Ollama first from ollama.com
- See the Prerequisites section below for detailed instructions
Problem: "Node.js is not installed" error
- You need to install Node.js first from nodejs.org
- Download the LTS version and run the installer
Problem: The app window closes immediately
- Check if you have enough disk space (need at least 5GB)
- Try running the script again
- If it keeps failing, see the Manual Installation section below
This app is perfect for:
- Beginners who want to try local AI without complex setup
- Privacy-conscious users who want AI without sending data to the cloud
- Frequent travelers who need AI access without internet
- Developers who want a local AI assistant
- Students who want a study companion that works offline
Before we start, you'll need to install two things on your computer:
What is it? Node.js lets you run JavaScript applications on your computer.
How to install:
**Windows:**
- Go to nodejs.org
- Download the "LTS" version (the green button)
- Run the installer and follow the prompts
- Click "Next" through all the screens (default settings are fine)

**Mac:**
- Go to nodejs.org
- Download the "LTS" version (the green button)
- Open the downloaded file and follow the installer

OR use Homebrew (if you have it):

```bash
brew install node
```

**Linux (Ubuntu/Debian):**

```bash
curl -fsSL https://deb.nodesource.com/setup_lts.x | sudo -E bash -
sudo apt-get install -y nodejs
```
Check if it worked: Open Terminal (Mac/Linux) or Command Prompt (Windows) and type:

```bash
node --version
npm --version
```

You should see version numbers like v20.x.x and 10.x.x.
What is it? Ollama is the engine that runs AI models on your computer.
How to install:
**Mac:**
- Go to ollama.com
- Click "Download" and get the Mac version
- Open the downloaded file and drag Ollama to Applications
- Open Ollama from Applications (you'll see an icon in your menu bar)

**Windows:**
- Go to ollama.com
- Click "Download" and get the Windows version
- Run the installer
- Ollama will start automatically

**Linux:**

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

Check if it worked: Open Terminal/Command Prompt and type:

```bash
ollama --version
```

You should see a version number like 0.x.x.
Note: If you're using the Easy Launcher Scripts (recommended!), you can skip most of these steps. Just install the Prerequisites below, then jump to the "Quick Start" section at the top!
Don't worry if you're new to this - we'll go through each step together!
Option A: Download as ZIP (Easiest)
- If you downloaded this as a ZIP file, unzip it to a location you'll remember (like your Desktop or Documents folder)
- Remember where you saved it!
Option B: Using Git (If you know how)
```bash
git clone <repository-url>
cd ollama_test
```

On Mac:
- Press `Cmd + Space` to open Spotlight
- Type "Terminal" and press Enter
On Windows:
- Press `Windows Key + R`
- Type "cmd" and press Enter
On Linux:
- Press `Ctrl + Alt + T`
In the Terminal/Command Prompt, you need to go to where you saved the app.
Example (adjust the path to where YOU saved it):
```bash
# If you saved it to your Desktop on Mac:
cd ~/Desktop/ollama_test

# If you saved it to your Desktop on Windows:
cd C:\Users\YourName\Desktop\ollama_test

# If you saved it to Documents:
cd ~/Documents/ollama_test
```

Tip: You can usually type `cd ` (with a space) and then drag the folder into the Terminal window - it will fill in the path for you!
This downloads all the pieces the app needs to run. Type this command and press Enter:
```bash
npm install
```

What to expect:
- You'll see a lot of text scrolling by - this is normal!
- It might take 2-5 minutes depending on your internet speed
- When it's done, you'll see your cursor blinking again
If you see errors:
- Make sure you're in the right folder (see Step 3)
- Make sure Node.js is installed correctly (see Prerequisites)
- Try closing and reopening Terminal/Command Prompt
RECOMMENDED: Use the Easy Launcher Scripts! (see "Quick Start" section above)
If you prefer the launcher scripts, just double-click `Start Ollama.command` and then `Start App.command`. The instructions below are for manual/advanced users who want to use the command line directly.
You need to start TWO things: Ollama (the AI engine) and the app (the interface).
On Mac/Linux: Open a Terminal window and type:
```bash
ollama serve
```

On Windows: Ollama usually starts automatically! If not:
- Look for Ollama in your system tray (bottom-right corner)
- Right-click and select "Start"
What to expect:
- The terminal will show messages like "Listening on 127.0.0.1:11434"
- Keep this window open - you need Ollama running in the background
- You won't see a cursor - this is normal, it means Ollama is running!
Open a NEW Terminal/Command Prompt window (keep Ollama running in the first one!) and type:
```bash
ollama pull llama3.2
```

What to expect:
- This downloads an AI model (about 2GB)
- It will show a progress bar
- Takes 5-15 minutes depending on your internet speed
- You only need to do this ONCE!
Other models you can try later:
```bash
ollama pull llama3.2:1b   # Smaller, faster (1.3GB)
ollama pull codellama     # Good for coding (3.8GB)
ollama pull mistral       # Alternative model (4.1GB)
```

To see all your installed models:

```bash
ollama list
```

In your Terminal/Command Prompt (in the project folder), type:
```bash
npm run dev
```

What to expect:
- You'll see messages like "Local: http://localhost:3010"
- The app is now running!
- Keep this window open
- Open your web browser (Chrome, Firefox, Safari, Edge - any works!)
- Go to: http://localhost:3010
- You should see the chat interface!
🎉 Congratulations! You're ready to chat!
- Select a model - Click the dropdown at the top and choose a model (like "llama3.2")
- Type your message - Use the text box at the bottom
- Press Enter - Watch the AI respond in real-time!
- Click the "+ New Chat" button in the sidebar
- Each conversation is saved automatically
- Click on any conversation to switch to it
- Click the model dropdown at the top
- Select a different model
- Each conversation can use a different model
- Click the settings icon (⚙️) in the top-right
- Choose between:
- 🌞 Light mode
- 🌙 Dark mode
- 🖥️ System (follows your computer's theme)
- Click the three dots (...) menu in a conversation
- Choose your export format:
- Markdown (.md) - Human-readable text format
- PDF - Printable document
- JSON - Computer-readable format
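To give a feel for what the Markdown export contains, here is a sketch of rendering a conversation as a Markdown transcript (the field names and speaker labels are illustrative, not the app's actual schema):

```typescript
interface ChatMessage { role: "user" | "assistant"; content: string }

// Render a conversation as a Markdown transcript: a title heading,
// then one bolded speaker label per message.
function toMarkdown(title: string, messages: ChatMessage[]): string {
  const lines = [`# ${title}`, ""];
  for (const m of messages) {
    lines.push(`**${m.role === "user" ? "You" : "AI"}:**`, "", m.content, "");
  }
  return lines.join("\n");
}
```

The result opens in any text editor, which is what makes the Markdown format "human-readable".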
- Hover over a conversation in the sidebar
- Click the pencil icon (✏️)
- Type a new name and press Enter
- Hover over a conversation in the sidebar
- Click the trash icon (🗑️)
- Confirm you want to delete it
Solutions:
- Make sure Ollama is running (see Step 1 in "How to Run")
- Check if Ollama is running: run `ollama list` in Terminal
- Restart Ollama:

  ```bash
  # Stop it (Ctrl+C in the Ollama terminal)
  # Start it again
  ollama serve
  ```
Solution: You need to download at least one model:

```bash
ollama pull llama3.2
```

Solution: Something else is using that port. Either:
- Close the other application
- Or use a different port:

  ```bash
  npm run dev -- --port 3011
  ```

Then visit: http://localhost:3011
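If you want the new port to stick without passing a flag every time, a Vite project can set it in its config file. This is a sketch assuming the project uses a standard `vite.config.ts` - adjust to match what's actually in the repo:

```typescript
// vite.config.ts - "server.port" sets the dev-server port
import { defineConfig } from "vite";

export default defineConfig({
  server: { port: 3011 },
});
```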
Solution: Node.js isn't installed correctly. Revisit the Prerequisites section and reinstall Node.js.
Solution: Ollama isn't installed correctly. Revisit the Prerequisites section and reinstall Ollama.
Solutions:
- Use a smaller model: `ollama pull llama3.2:1b`
- Close other applications to free up memory
- Restart your computer
- Check if your computer meets minimum requirements (8GB RAM recommended)

Solution: That's normal! You need to keep the terminals open:
- One terminal for Ollama (`ollama serve`)
- One terminal for the app (`npm run dev`)
Make chatting even faster with these shortcuts:
- Enter - Send your message
- Shift + Enter - Add a new line (without sending)
- Ctrl/Cmd + N - New conversation
- Ctrl/Cmd + , - Open settings
- Escape - Close modals/dialogs
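The Enter / Shift+Enter split above comes down to one check in the message box's key handler. A sketch (the helper name is ours, not the app's code):

```typescript
// Decide whether a keypress in the message box should send the
// message (plain Enter) or fall through to insert a newline
// (Shift+Enter) or type a character (any other key).
function shouldSendMessage(key: string, shiftKey: boolean): boolean {
  return key === "Enter" && !shiftKey;
}
```

In a React component, this check would run in the textarea's `onKeyDown` handler, calling `preventDefault()` and submitting when it returns true.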
Curious about how this works? Here's what's inside:
ollama_test/
├── src/ # Application source code
│ ├── components/ # UI components (buttons, chat, sidebar)
│ ├── services/ # Ollama API communication
│ ├── store/ # App state management
│ ├── hooks/ # React custom hooks
│ ├── types.ts # TypeScript definitions
│ └── App.tsx # Main application
├── public/ # Static files (icons, images)
├── package.json # Project dependencies
└── README.md # This file!
This app is built with modern web technologies:
- React 18 - User interface framework
- TypeScript - Type-safe JavaScript
- Vite - Lightning-fast build tool
- Tailwind CSS - Styling framework
- Zustand - Lightweight state management
- Ollama - Local AI model runtime
- React Hot Toast - Notifications
- Lucide React - Beautiful icons
- Framer Motion - Smooth animations
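As a taste of how the `services` layer talks to Ollama: the API accepts a POST to `/api/chat` with the model name and message history. The helper below sketches that request shape (the endpoint and body fields match Ollama's documented API; the helper function itself is our own, not the app's actual code):

```typescript
interface ChatMessage { role: "system" | "user" | "assistant"; content: string }

// Build the JSON body for Ollama's POST /api/chat endpoint.
// stream=true asks Ollama to return newline-delimited chunks.
function buildChatRequest(model: string, messages: ChatMessage[], stream = true) {
  return { model, messages, stream };
}

// Usage with fetch (not executed here):
// fetch("http://localhost:11434/api/chat", {
//   method: "POST",
//   body: JSON.stringify(buildChatRequest("llama3.2", history)),
// });
```

Sending the full message history with each request is what gives the model its conversation memory - the model itself is stateless between calls.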
Your data stays on your computer:
- ✅ No internet connection required after setup
- ✅ No data sent to external servers
- ✅ All conversations stored locally
- ✅ You own your data completely
- ✅ Open source - you can inspect the code
Note: The AI models themselves are downloaded from Ollama's servers during initial setup, but after that, everything runs offline.
- Ollama Docs: ollama.com/docs
- Node.js Help: nodejs.org/en/docs
- Search for existing issues on GitHub
- Ask questions in online forums
- Check video tutorials on YouTube for "Ollama installation"
Q: Can I use this without internet? A: Yes! After initial setup and model downloads, everything works offline.
Q: How much disk space do I need? A: At least 5-10GB for the app and models. Larger models need more space.
Q: Which model should I use?
A: Start with llama3.2:1b (smallest, fastest) or llama3.2 (balanced). Try different models to see what works best!
Q: Can I use multiple models at once? A: Each conversation uses one model, but you can have different conversations using different models.
Q: Is this safe? A: Yes! Everything runs locally on your computer. No data is sent to external servers.
Q: How do I update to a newer version?
A: Download the latest version and run `npm install` again in the project folder.
Once you're comfortable with the basics:

- Try different models - Each has different strengths

  ```bash
  ollama pull codellama   # Great for programming help
  ollama pull mistral     # Good for creative writing
  ```

- Customize the system prompt - In Settings, change how the AI behaves
- Export your conversations - Save important chats for later
- Explore keyboard shortcuts - Get faster at chatting
- Run as desktop app - Use the Electron version for a native experience
For better AI responses:
- Be specific in your questions
- Provide context when needed
- Break complex questions into smaller parts
- If you don't like a response, try rephrasing your question
For better performance:
- Use smaller models for simple tasks
- Use larger models for complex reasoning
- Close other applications while using AI
- Keep your conversations organized
This application uses:
- Ollama - For running AI models locally
- React - For the user interface
- Open source libraries - Built by the community
This project is licensed under the MIT License - you're free to use, modify, and distribute it!
Built with ❤️ for people who want AI without the cloud
Use this checklist to make sure everything is set up:
- Node.js installed (`node --version` works)
- Ollama installed (`ollama --version` works)
- Project files downloaded and unzipped
- AI model downloaded (double-clicked `Download AI Model.command`)
- Ollama is running (double-clicked `Start Ollama.command`)
- App is running (double-clicked `Start App.command`)
- Browser opened automatically to the chat interface
- Successfully sent a test message to the AI
🎉 If you checked all these boxes, you're all set!
- Node.js installed (`node --version` works)
- Ollama installed (`ollama --version` works)
- Project files downloaded and unzipped
- Dependencies installed (`npm install` completed)
- At least one AI model downloaded (`ollama pull llama3.2`)
- Ollama is running (`ollama serve` in a terminal)
- App is running (`npm run dev` in another terminal)
- Browser opened to http://localhost:3010
- Successfully sent a test message to the AI
🎉 If you checked all these boxes, you're all set!
Need help? Don't get stuck - reach out to the community or check the troubleshooting section above!