AI Chatbot

Overview

This project is a web-based AI chatbot built with Python and Streamlit, designed to provide an interactive conversational interface powered by large language models hosted on Hugging Face. The application allows users to select different models and inference providers, manage chat history locally, and interact with the model in real time through a clean and configurable UI.

The project follows a separation-of-concerns structure: configuration, utility functions, and application logic live in separate modules, which keeps the codebase maintainable and easy to extend.

Screenshots (in images_readme/): home screen and chat interface, each in light and dark themes.

Key Features

  • Interactive chat interface built with Streamlit.
  • Integration with Hugging Face Inference API via huggingface_hub.
  • Support for multiple language models and inference providers.
  • Persistent chat history stored locally.
  • Sidebar-based configuration for model and provider selection.
  • Environment-based API key management using .env files.

Technology Stack

  • Python 3.10+
  • Streamlit
  • Hugging Face Inference API (huggingface_hub)
  • dotenv
  • Standard Python libraries for file handling and serialization

Project Structure

AI_Chatbot/
│
├── app.py               # Application entry point
├── config.py            # Centralized configuration (models, providers, constants)
├── utils.py             # Core utilities and business logic
├── requirements.txt     # Project dependencies
├── .env                 # Environment variables (not committed)
├── .gitignore           # Git ignore rules
│
├── chats/               # Stored chat history files
├── images_readme/       # Assets used in documentation
└── .venv/               # Local virtual environment (optional)

Application Flow

  1. Load the API token from environment variables.
  2. Configure the Streamlit page and UI layout.
  3. Initialize the sidebar with model and provider options.
  4. Instantiate the Hugging Face inference client.
  5. Load and display existing chat history.
  6. Accept user input and generate AI responses.
  7. Persist conversations locally for future sessions.
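Steps 5 and 7 (loading and persisting chat history) could be sketched as below, assuming conversations are stored as JSON files in the chats/ folder. Function names, file layout, and the message format are illustrative assumptions, not code from the repository.

```python
import json
from pathlib import Path

CHATS_DIR = Path("chats")  # matches the chats/ folder in the project structure


def load_history(chat_id: str) -> list[dict]:
    """Load a chat's message list, or start fresh if none exists."""
    path = CHATS_DIR / f"{chat_id}.json"
    if path.exists():
        return json.loads(path.read_text(encoding="utf-8"))
    return []


def save_history(chat_id: str, messages: list[dict]) -> None:
    """Persist the message list so it survives across sessions."""
    CHATS_DIR.mkdir(exist_ok=True)
    path = CHATS_DIR / f"{chat_id}.json"
    path.write_text(json.dumps(messages, ensure_ascii=False, indent=2),
                    encoding="utf-8")


# Round trip: append a user/assistant exchange, save, and reload.
messages = load_history("demo")
messages.append({"role": "user", "content": "Hello!"})
messages.append({"role": "assistant", "content": "Hi, how can I help?"})
save_history("demo", messages)
```

The role/content message shape mirrors the format expected by chat-style inference APIs, so the stored history can be passed back to the model directly on the next turn.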

Configuration

Model, provider, and application-level settings are defined in config.py. This includes:

  • Available language models
  • Inference providers
  • UI-related constants
  • Timeout settings
  • Default greeting messages
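A config.py with the settings listed above might look like the following sketch. Every name and value here is illustrative (the model IDs, provider names, and constants are assumptions, not copied from the repository).

```python
# config.py — one possible shape for the centralized settings.

# Display name -> Hugging Face model ID (illustrative entries)
MODELS = {
    "Llama 3.1 8B Instruct": "meta-llama/Llama-3.1-8B-Instruct",
    "Mistral 7B Instruct": "mistralai/Mistral-7B-Instruct-v0.3",
}

# Inference providers selectable in the sidebar (illustrative)
PROVIDERS = ["hf-inference", "together", "fireworks-ai"]

# UI-related constants
PAGE_TITLE = "AI Chatbot"
REQUEST_TIMEOUT = 60  # seconds to wait for an inference response
GREETING = "Hi! Pick a model in the sidebar and ask me anything."
```

Keeping these in one module means adding a model or provider is a one-line change, with no edits to the application logic.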

Environment variables are managed via a .env file. At minimum, the following variable is required:

HF_TOKEN=your_huggingface_api_token
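In the app this is typically handled by python-dotenv's load_dotenv(), which reads .env and populates the process environment. A minimal stdlib-only equivalent, shown purely to illustrate what that step does (the demo .env content below is made up):

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Minimal .env loader: KEY=value lines become environment variables.
    python-dotenv's load_dotenv() does this (and more) in the real app."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Like load_dotenv's default, don't override variables already set.
        os.environ.setdefault(key.strip(), value.strip())


Path(".env").write_text("HF_TOKEN=example_token\n", encoding="utf-8")  # demo file
load_env()
token = os.environ["HF_TOKEN"]  # passed to the Hugging Face client
```

Because the token comes from the environment rather than source code, .env can stay out of version control (it is listed in .gitignore).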

Installation

  1. Clone the repository:
git clone <repository-url>
cd AI_Chatbot
  2. Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate  # Linux/macOS
.venv\Scripts\activate     # Windows
  3. Install dependencies:
pip install -r requirements.txt
  4. Create a .env file and add your Hugging Face token (HF_TOKEN).

Running the Application

Start the Streamlit app with:

streamlit run app.py

The application will be available locally, typically at http://localhost:8501.

Extensibility

The project is designed to be easily extended. Common extension points include:

  • Adding new models or providers in config.py.
  • Customizing the chat UI and sidebar layout.
  • Implementing conversation analytics or logging.
  • Integrating authentication or user management.

Best Practices Followed

  • Clear separation between configuration, utilities, and application logic.
  • Environment-based secret management.
  • Modular and readable codebase.
  • Type hints and docstrings for core functions.

License

This project is provided for educational and experimental purposes. Review and define an appropriate license before using it in production environments.

Author

Renato Perussi
