Last Signum

When the last signal fades, we stay.

Last Signum is an offline AI survival agent powered by OpenAI's gpt-oss models. It's designed to provide clear, verified, step-by-step instructions on first aid, repairs, food, water and shelter — even with no internet connection.

The project was submitted to the OpenAI Open Model Hackathon.


✨ Features

  • Fully Offline: The entire system runs locally on your device, ensuring privacy and availability without a network connection.
  • Expert Knowledge Base: Answers are based on a curated library of survival guides, not the model's general knowledge, preventing dangerous "hallucinations".
  • Smart Synthesis: Uses an advanced RAG (Retrieval-Augmented Generation) pipeline to understand complex, multi-part questions and provide comprehensive, prioritized answers.
  • Persistent Chat History: Your conversation is automatically saved in your browser, so you can close the app and continue right where you left off.
  • Clean & Thematic UI: A responsive, forest-themed interface that's easy to read and use in any situation.

🧠 How It Works

Architecture Diagram

Last Signum uses an advanced RAG architecture to provide accurate and safe answers.

  1. Knowledge Base: The agent's knowledge is stored in a local library of text files (/knowledge_base).
  2. Smart Search (Retrieval): When a user asks a complex question, the system first uses the gpt-oss model to break it down into several simpler sub-questions. It then uses a specialized embedding model (nomic-embed-text) to find the most relevant information for each sub-question in the knowledge base.
  3. Answer Synthesis (Generation): The gpt-oss model receives the user's original question along with all the retrieved information. Following a strict set of safety rules, it synthesizes this information into a clear, actionable, step-by-step answer, citing the sources it used.
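The retrieval step (2) can be sketched in plain Python. In the real pipeline, nomic-embed-text produces the vectors via Ollama and ChromaDB performs the similarity search; here, toy hand-written vectors and an explicit cosine-similarity ranking stand in for both, purely to illustrate the mechanism.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, chunks, k=2):
    # Rank knowledge-base chunks by similarity to the query embedding
    # and return the top k; ChromaDB does this search in the real app.
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

# Toy stand-ins for embeddings that nomic-embed-text would produce.
chunks = [
    {"text": "Boil water for at least one minute to purify it.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Apply direct pressure to stop bleeding.",          "vec": [0.1, 0.9, 0.1]},
    {"text": "Build a lean-to shelter against the wind.",        "vec": [0.0, 0.2, 0.9]},
]
print(retrieve([0.8, 0.2, 0.1], chunks, k=1))
```

In the actual app the same ranking happens once per sub-question generated in step 2, and the union of top chunks is what step 3 synthesizes from.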

🛠️ Tech Stack

  • Frontend: React, Vite, Nginx
  • Backend: Python, FastAPI
  • Containerization: Docker, Docker Compose
  • AI Framework: LangChain
  • Vector Store: ChromaDB
  • Model Runner: Ollama
  • Core Models:
    • Generative: gpt-oss:20b
    • Embedding: nomic-embed-text

📂 Project Structure

last-signum/
├── backend/              # Python
│   ├── knowledge_base/   # AI's source of truth
│   │   ├── first_aid.txt
│   │   └── ...
│   ├── .dockerignore
│   ├── .env.example      # Example environment variables
│   ├── Dockerfile
│   ├── main.py           # FastAPI server & RAG logic
│   └── requirements.txt
│
├── frontend/             # React UI
│   ├── src/
│   ├── .dockerignore
│   ├── .env.example      # Example environment variables
│   ├── Dockerfile
│   ├── index.html
│   ├── nginx.conf
│   └── package.json
│
├── docker-compose.yml
├── LICENSE
└── README.md             # Docs
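Before the knowledge_base/ files can be searched, each one is split into overlapping chunks and embedded. The project delegates this to LangChain's text splitters; the sketch below is a minimal stand-in, with the chunk size and overlap chosen arbitrarily for illustration.

```python
def chunk_text(text, size=200, overlap=50):
    # Split text into windows of `size` characters that overlap by
    # `overlap`, so a fact near a boundary still appears whole in
    # at least one chunk.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

guide = "Boil water for one minute. " * 20  # stand-in for first_aid.txt content
chunks = chunk_text(guide)
print(len(chunks), len(chunks[0]))
```

Each resulting chunk is embedded once at startup and stored in ChromaDB, so queries at runtime only pay for embedding the question itself.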

⚙️ Installation & Run

Recommended: Docker Setup

This is the fastest and most reliable way to run the project.

Prerequisites:

  • Git
  • Docker and Docker Compose (must be installed and running)
  • Ollama (must be installed and running)

1. Clone the repository:

git clone https://github.com/vero-code/last-signum.git
cd last-signum

2. Download the required AI models:

ollama pull gpt-oss:20b
ollama pull nomic-embed-text

3. Set up environment variables:

In both the backend and frontend folders, create a .env file from the .env.example template.
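For reference, a .env file is just plain KEY=VALUE lines. The key shown below (OLLAMA_HOST) is hypothetical, not necessarily one defined in .env.example; the sketch only illustrates the format these files use.

```python
import os
import tempfile

def load_env(path):
    # Parse simple KEY=VALUE lines, skipping blanks and # comments.
    env = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

# Hypothetical example; the real .env.example defines its own keys.
fd, path = tempfile.mkstemp(suffix=".env")
with os.fdopen(fd, "w") as f:
    f.write("# backend settings\nOLLAMA_HOST=http://localhost:11434\n")
print(load_env(path))
```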

4. Run the application:

docker-compose up --build

The first build might take a few minutes. Subsequent launches will be much faster.

5. Open the application:

Open your browser and go to http://localhost:5173.

To stop the application, press Ctrl + C in the terminal, and then run docker-compose down.

Alternatively: Manual/Development Setup

This method is for development and allows for hot-reloading of both frontend and backend code.

Prerequisites:

  • Git, Python v3.13.4, Node.js v22.16.0

  • Ollama (must be installed and running)

1. Clone and set up environment: Follow steps 1-3 from the Docker method above.

2. Set up and run the backend:

cd backend
python -m venv .venv
# On Windows (PowerShell)
.venv\Scripts\Activate.ps1
# On macOS/Linux
# source .venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --reload

3. Set up and run the frontend (in a new terminal):

cd frontend
npm install
npm run dev

4. Open the application:

Open your browser and go to http://localhost:5173.


🧪 Testing

Here are a few examples you can use to test the capabilities of Last Signum.

Direct Question

How do I purify water by boiling?

I found some dandelions in the forest, are they safe to eat?

Tests the model's ability to retrieve and format specific information from a single source file.

Synthesis Question

I have a deep cut on my arm and I feel cold. What should I do?

Tests the model's ability to retrieve information from multiple sources and synthesize a prioritized, step-by-step plan.

Out-of-Scope Question

What is the capital of Japan?

Tests the model's safety guardrails. It should correctly state that this information is not in its knowledge base, proving it doesn't "hallucinate" answers.
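One common way to implement this kind of guardrail is a retrieval-confidence threshold: if no knowledge-base chunk scores above a cutoff, the agent declines rather than letting the model answer from general knowledge. The sketch below shows the idea only; the threshold value is arbitrary, and the project's actual safety rules live in its prompt and backend logic.

```python
def answer_or_decline(scores, threshold=0.5):
    # `scores` are retrieval similarities for the best-matching chunks.
    # Below the cutoff, refuse instead of letting the model guess.
    if not scores or max(scores) < threshold:
        return "I don't have that information in my knowledge base."
    return "ANSWER_FROM_RETRIEVED_CHUNKS"

print(answer_or_decline([0.82, 0.61]))  # in-scope survival question
print(answer_or_decline([0.12, 0.08]))  # out-of-scope, e.g. world trivia
```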


🛡️ Safety

  • All advice is for reference only — not a substitute for professional help.

  • The AI's knowledge is limited to the provided text files.


📜 License

MIT License. See LICENSE for details.


🙏 Acknowledgements

  • AI & User Avatars by Icons8.
  • Background photo generated by AI.
