This project is an offline standalone chatbot built with the Llama model and Flask. The chatbot, named Raahi, is designed to provide interactive conversations without requiring an internet connection. It uses the Llama3 model, served locally through Ollama and accessed via LangChain's `langchain-ollama` integration, to generate responses based on user input.
- Offline Functionality: Runs Llama3 locally through Ollama, so conversations work without an internet connection.
- Interactive Conversations: Maintains context for dynamic and meaningful responses.
- Lightweight Server: Built using Flask for simplicity and efficiency.
- Customizable Templates: The chatbot template can be tailored for various conversational styles.
The chatbot consists of two main components:
- Frontend: A client interface for users to send and receive messages.
- Backend: Flask server handling requests and generating responses using the Llama model.
- Python: Ensure Python 3.8+ is installed.
- Dependencies: Install the required libraries:

```bash
pip install flask flask-cors langchain-ollama
```
- Ollama: Ollama must be installed on the system, with the Llama3 model pulled locally (e.g., `ollama pull llama3`).
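With the prerequisites in place, a quick check such as the following confirms that Ollama and the model are reachable before the server is started. This is a minimal sketch; it assumes the model name `llama3` and a default local Ollama install:

```python
# Optional smoke test: verify that Ollama and the Llama3 model respond.
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3")  # assumes the model was pulled via `ollama pull llama3`
print(llm.invoke("Reply with a single word: ready?"))
```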
- Clone the repository:

```bash
git clone <repository-url>
cd <repository-directory>
```

- Start the backend:

```bash
python app.py
```
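For orientation, a backend along these lines wires Flask to the model. This is a minimal sketch, not the project's exact `app.py`; the model name and port are assumptions, while the `/predict` endpoint and payload match the API described below:

```python
from flask import Flask, request, jsonify
from flask_cors import CORS
from langchain_ollama import OllamaLLM

app = Flask(__name__)
CORS(app)  # allow the frontend to call the API from another origin

# Assumed model name; any model pulled into Ollama can be used here.
llm = OllamaLLM(model="llama3")

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON payload of the form {"text": "..."}
    user_text = request.get_json().get("text", "")
    response = llm.invoke(user_text)
    return jsonify({"response": response})

if __name__ == "__main__":
    app.run(port=5000)  # assumed port
```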
- Endpoint: `/predict`
- Method: `POST`
- Description: Processes user input and returns a chatbot response.
- Payload:

```json
{ "text": "Your message here" }
```

- Response:

```json
{ "response": "Chatbot's response here" }
```
- User inputs a message.
- The message is sent to the `/predict` endpoint.
- The Flask backend processes the input using the Llama model.
- A response is generated and returned to the user.
Architecture diagram (components): User Interface, Chat Processing Module, Llama3 Model, Flask Server.
Edit the template in `chat.py` to change the chatbot's conversational tone and style.
Replace the Llama3 model with another compatible model by changing the `OllamaLLM` initialization (see the sketch below).
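As an illustration of both customization points, a template-plus-model setup in `chat.py` might look like the following. This is a sketch; the template wording and the alternative model name are assumptions, not the project's defaults:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import OllamaLLM

# Tailor this template to change Raahi's tone and style (assumed wording).
template = """You are Raahi, a friendly offline assistant.

Conversation so far: {context}
User: {question}
Raahi:"""

prompt = ChatPromptTemplate.from_template(template)

# Swap in any other Ollama-compatible model here, e.g. "mistral" (assumed example).
llm = OllamaLLM(model="llama3")

chain = prompt | llm  # format the prompt, then call the model
answer = chain.invoke({"context": "", "question": "Introduce yourself."})
print(answer)
```

Because the chain only depends on the template variables and the model name, swapping either one changes the chatbot's persona or backend without touching the Flask server.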

