This project is a web-based AI chatbot built with Python and Streamlit, designed to provide an interactive conversational interface powered by large language models hosted on Hugging Face. The application allows users to select different models and inference providers, manage chat history locally, and interact with the model in real time through a clean and configurable UI.
The project is structured around separation of concerns, with configuration, utility functions, and application logic kept in separate modules to facilitate maintainability and extensibility.
| Light | Dark |
|---|---|
| ![]() | ![]() |

| Light | Dark |
|---|---|
| ![]() | ![]() |
- Interactive chat interface built with Streamlit.
- Integration with the Hugging Face Inference API via `huggingface_hub`.
- Support for multiple language models and inference providers.
- Persistent chat history stored locally.
- Sidebar-based configuration for model and provider selection.
- Environment-based API key management using `.env` files.
- Python 3.10+
- Streamlit
- Hugging Face Inference API (`huggingface_hub`)
- dotenv
- Standard Python libraries for file handling and serialization
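Given this stack, a minimal `requirements.txt` might look like the following (the exact entries and any version pins are assumptions, not taken from the project):

```text
streamlit
huggingface_hub
python-dotenv
```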
AI_Chatbot/
│
├── app.py # Application entry point
├── config.py # Centralized configuration (models, providers, constants)
├── utils.py # Core utilities and business logic
├── requirements.txt # Project dependencies
├── .env # Environment variables (not committed)
├── .gitignore # Git ignore rules
│
├── chats/ # Stored chat history files
├── images_readme/ # Assets used in documentation
└── .venv/ # Local virtual environment (optional)
- Load the API token from environment variables.
- Configure the Streamlit page and UI layout.
- Initialize the sidebar with model and provider options.
- Instantiate the Hugging Face inference client.
- Load and display existing chat history.
- Accept user input and generate AI responses.
- Persist conversations locally for future sessions.
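The persistence step above could be implemented along these lines in `utils.py` (the function names and JSON layout here are assumptions for illustration; the actual implementation may differ):

```python
import json
from pathlib import Path

CHATS_DIR = Path("chats")  # matches the chats/ directory in the project layout


def save_chat(chat_id: str, messages: list[dict]) -> Path:
    """Serialize a conversation (a list of {'role', 'content'} dicts) to JSON."""
    CHATS_DIR.mkdir(exist_ok=True)
    path = CHATS_DIR / f"{chat_id}.json"
    path.write_text(json.dumps(messages, ensure_ascii=False, indent=2))
    return path


def load_chat(chat_id: str) -> list[dict]:
    """Load a previously saved conversation; return an empty list if none exists."""
    path = CHATS_DIR / f"{chat_id}.json"
    if not path.exists():
        return []
    return json.loads(path.read_text())
```

Storing each conversation as its own JSON file keeps history human-readable and makes it easy to list past sessions in the sidebar.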
Model, provider, and application-level settings are defined in `config.py`. This includes:
- Available language models
- Inference providers
- UI-related constants
- Timeout settings
- Default greeting messages
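A `config.py` along these lines would cover the items above (the specific model IDs, provider names, and constant names are illustrative assumptions):

```python
# Centralized application settings; app.py and utils.py import from here.

AVAILABLE_MODELS = [
    "meta-llama/Llama-3.1-8B-Instruct",   # example entries; the actual list may differ
    "mistralai/Mistral-7B-Instruct-v0.3",
]

INFERENCE_PROVIDERS = ["hf-inference", "together"]  # illustrative provider ids

PAGE_TITLE = "AI Chatbot"        # UI-related constants
REQUEST_TIMEOUT_SECONDS = 60     # timeout for inference calls

DEFAULT_GREETING = "Hello! How can I help you today?"
```

Keeping these values in one module means new models or providers can be added without touching the application logic.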
Environment variables are managed via a `.env` file. At minimum, the following variable is required:
HF_TOKEN=your_huggingface_api_token
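The project loads this file with dotenv; the minimal stand-in below sketches what that loading does under the hood (the `load_env` helper is a hypothetical illustration, not the project's code):

```python
import os
from pathlib import Path


def load_env(env_file: str = ".env") -> None:
    """Tiny stand-in for python-dotenv's load_dotenv(): parse KEY=VALUE lines."""
    path = Path(env_file)
    if not path.exists():
        return
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())


load_env()
HF_TOKEN = os.environ.get("HF_TOKEN")  # later passed to the Hugging Face client
```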
- Clone the repository:
git clone <repository-url>
cd AI_Chatbot
- Create and activate a virtual environment:
python -m venv .venv
source .venv/bin/activate # Linux/macOS
.venv\Scripts\activate # Windows
- Install dependencies:
pip install -r requirements.txt
- Create a `.env` file and add your Hugging Face token.
Start the Streamlit app with:
streamlit run app.py
The application will be available locally, typically at http://localhost:8501.
The project is designed to be easily extended. Common extension points include:
- Adding new models or providers in `config.py`.
- Customizing the chat UI and sidebar layout.
- Implementing conversation analytics or logging.
- Integrating authentication or user management.
- Clear separation between configuration, utilities, and application logic.
- Environment-based secret management.
- Modular and readable codebase.
- Type hints and docstrings for core functions.
This project is provided for educational and experimental purposes. Review and define an appropriate license before using it in production environments.
Renato Perussi



