AI Agent for ABSA is a full-featured AI agent that detects product or service features (aspects) in texts and scores the sentiment (positive, negative, neutral) for each feature. It has a powerful backend developed with FastAPI and a user-friendly interface created with Streamlit. It performs semantic analysis in the background using large language models (LLMs) like Google Gemini.
- Live Demo: HuggingFace Spaces
- Documentation: GitHub Pages
- GitHub Repository: hanifekaptan/ai-agent-for-aspect-based-sentiment-analysis
- Advanced Analysis: Processes comments from text inputs or CSV files to perform aspect-based sentiment analysis.
- RESTful API: A FastAPI backend offering comprehensive endpoints and automatic OpenAPI documentation.
- Modern Interface: A responsive Streamlit frontend that supports modes like single text, CSV file upload, and sample data analysis.
- Scalability: High performance through concurrent LLM calls and asynchronous operations.
- Error Handling: Robust error management in the API for situations like quota exceeded and connection errors.
- Docker Support: Easily deployable via a `Dockerfile` and an `entrypoint.sh` script.
- Testing: Smoke tests written with `pytest` verify basic API functions.
- Python 3.12
- `pip` and `venv`
- Clone the repository:

  ```bash
  git clone https://github.com/hanifekaptan/ai-agent-for-aspect-based-sentiment-analysis.git
  cd ai-agent-for-aspect-based-sentiment-analysis
  ```
- Create and activate a virtual environment:

  ```bash
  # Windows
  python -m venv .venv
  .venv\Scripts\activate

  # Linux/Mac
  source .venv/bin/activate
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Environment Variables: Create a file named `.env` in the project root directory and add your Google Gemini API key:

  ```env
  GOOGLE_API_KEY="YOUR_GEMINI_API_KEY"
  MODEL_NAME="gemini-1.5-flash-latest"
  ```
- Run the Backend (FastAPI):

  ```bash
  uvicorn app.main:app --reload --host 127.0.0.1 --port 8000
  ```

  The API will be accessible at http://localhost:8000, with interactive documentation at `/docs`.
- Run the Frontend (Streamlit) (in a new terminal):

  ```bash
  streamlit run frontend/app.py --server.port 8501
  ```

  The application interface will be available at http://localhost:8501.
```
aspect-based-sentiment-analyzer-agent/
├── app/                      # FastAPI backend application
│   ├── api/                  # API routes and endpoints
│   │   ├── analyze.py        # Analysis endpoint
│   │   └── health.py         # Health check endpoint
│   ├── core/                 # Core configuration and helpers
│   │   └── logging.py        # Logging setup
│   ├── llm/                  # LLM client logic
│   │   └── client.py         # Gemini API calls
│   ├── prompting/            # Prompt management
│   │   └── manager.py        # Loading and rendering prompt templates
│   ├── prompts/              # Prompt templates (YAML)
│   ├── schemas/              # Pydantic data models
│   ├── services/             # Business logic services
│   │   └── absa_service.py   # ABSA analysis logic
│   ├── utils/                # Utility functions
│   └── main.py               # FastAPI application entry point
├── frontend/                 # Streamlit frontend application
│   ├── api/
│   │   └── client.py         # Backend API client
│   ├── components/           # Reusable UI components
│   └── streamlit_app/
│       └── app.py            # Streamlit application entry point
├── docker/                   # Docker configurations
│   ├── entrypoint.sh         # Script to start both backend and frontend
│   └── space.Dockerfile      # Combined Dockerfile for HuggingFace
├── tests/                    # Test suite
│   └── test_smoke.py         # API smoke tests
├── Dockerfile                # Main Dockerfile for HuggingFace Space
├── requirements.txt          # Python dependencies
└── README.md
```
- Analysis Engine: Makes API calls to Google's Gemini model using `langchain-google-genai`.
- Asynchronous Operations: Manages concurrent LLM requests with `asyncio` and a `Semaphore`, which improves performance.
- Data Parsing: Uses `pandas` to process data from single texts, CSV files, or other formats.
- Service Layer: Provides a clean separation between API routes, business logic, and LLM calls.
- API Documentation: Automatically generated OpenAPI (Swagger) documentation, available at `/docs`.
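As an illustration, bounding concurrent LLM requests with `asyncio` and a `Semaphore` can be sketched as follows. This is a minimal, hypothetical example: the placeholder `call_llm` stands in for the real `langchain-google-genai` call in `app/llm/client.py`, and the concurrency limit is an assumption.

```python
import asyncio

MAX_CONCURRENCY = 5  # assumed limit; the real value is project configuration
semaphore = asyncio.Semaphore(MAX_CONCURRENCY)

async def call_llm(batch: list[str]) -> str:
    # Placeholder standing in for the real Gemini API call.
    await asyncio.sleep(0.01)
    return f"analyzed {len(batch)} items"

async def analyze_batch(batch: list[str]) -> str:
    # At most MAX_CONCURRENCY coroutines perform the LLM call at once.
    async with semaphore:
        return await call_llm(batch)

async def analyze_all(batches: list[list[str]]) -> list[str]:
    # Fan out every batch concurrently; the semaphore caps in-flight requests.
    return await asyncio.gather(*(analyze_batch(b) for b in batches))

results = asyncio.run(analyze_all([["good screen"], ["bad battery"]]))
```

Note that `asyncio.gather` preserves input order, so results line up with the submitted batches even though the calls overlap.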
- Component-Based: Modular UI components for search, file upload, and result visualization.
- API Client: A `requests`-based HTTP client with error handling to communicate with the backend.
- State Management: Uses `st.session_state` to manage analysis results and API quota errors.
- User-Friendly Design: A clean, understandable, and interactive interface.
- The user enters text or uploads a CSV file via the Streamlit interface.
- The frontend sends the request to the `/analyze` endpoint.
- The backend receives the input, cleans it, and splits it into batches for analysis.
- The `absa_service` generates a prompt for each batch and sends it to the LLM.
- The LLM identifies the aspects and sentiments in the texts and returns a structured response.
- The backend parses the LLM response and sends it to the frontend in JSON format.
- The frontend visualizes the results with metrics, charts, and tables.
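The cleaning-and-batching step of this flow could look roughly like the helper below. It is a hypothetical sketch: the actual batching logic lives in the backend service, and the batch size shown is an assumption, not a documented value.

```python
def make_batches(comments: list[str], batch_size: int = 10) -> list[list[str]]:
    """Drop empty entries and surrounding whitespace, then chunk the rest."""
    cleaned = [c.strip() for c in comments if c and c.strip()]
    return [cleaned[i:i + batch_size] for i in range(0, len(cleaned), batch_size)]

batches = make_batches(["The screen is great ", "", "Battery drains fast"], batch_size=2)
```

Each resulting batch is then turned into a single prompt, which keeps the number of LLM round-trips proportional to the number of batches rather than the number of comments.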
The project includes basic smoke tests written with pytest:
```bash
# Run all tests
pytest
```

- Smoke Tests: Verify the basic functionality of the API by testing the `/health` and mocked `/analyze` endpoints.
This project is configured to run both the backend and frontend in a single container. This simplifies deployment, especially on platforms like HuggingFace Spaces.
- Build the Docker Image:

  ```bash
  docker build -t aspect-based-sentiment-analyzer .
  ```

- Run the Container:

  ```bash
  docker run -p 8000:8000 -p 8501:7860 -e GOOGLE_API_KEY="YOUR_GEMINI_API_KEY" aspect-based-sentiment-analyzer
  ```

  - The backend API will be accessible at http://localhost:8000.
  - The Streamlit interface will be accessible at http://localhost:8501.
The `Dockerfile` in the project root is compatible with HuggingFace Spaces. You just need to link your repository to a Space and add `GOOGLE_API_KEY` as a secret.
- Backend:
- FastAPI
- Uvicorn
- langchain-google-genai
- Pandas, NumPy
- Pydantic
- Frontend:
- Streamlit
- Requests
- Language: Python 3.12
- Testing:
- pytest
- pytest-asyncio
- DevOps:
- Docker
- HuggingFace Spaces
The `/health` endpoint is used to check the status of the service.
- Response:

  ```json
  { "status": "ok", "timestamp": 1678886400.0 }
  ```
The `/analyze` endpoint performs analysis on text or a CSV file.
- Request (Text): `text` field in `multipart/form-data`.
- Request (File): `upload_file` field in `multipart/form-data`.
- Successful Response:

  ```json
  {
    "items_submitted": 5,
    "batches_sent": 1,
    "results": [
      {
        "id": "1",
        "aspects": [
          { "term": "screen", "sentiment": "positive" },
          { "term": "battery", "sentiment": "negative" }
        ]
      }
    ],
    "duration_seconds": 5.43
  }
  ```

- Error Response (Quota Exceeded):

  ```json
  {
    "detail": {
      "error": "Upstream quota exceeded or rate-limit received",
      "message": "An error related to the quota was returned from the model provider. Please try again later.",
      "upstream_error": "429 Quota exceeded for model..."
    }
  }
  ```
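On the frontend side, a successful response of the shape documented above can be flattened into table rows for display. The helper below is a minimal sketch using only the fields shown in the sample response; the project's actual rendering code may differ.

```python
def flatten_results(response: dict) -> list[dict]:
    """One row per (comment id, aspect) pair, ready for a table or chart."""
    rows = []
    for item in response.get("results", []):
        for aspect in item.get("aspects", []):
            rows.append({
                "id": item["id"],
                "aspect": aspect["term"],
                "sentiment": aspect["sentiment"],
            })
    return rows

# Sample payload mirroring the successful response documented above.
sample = {
    "results": [
        {"id": "1", "aspects": [
            {"term": "screen", "sentiment": "positive"},
            {"term": "battery", "sentiment": "negative"},
        ]}
    ]
}
rows = flatten_results(sample)
```

A flat list of dicts like this drops straight into `pandas.DataFrame(rows)` for the metrics and charts described in the request flow.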
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
Hanife Kaptan - hanifekaptan.dev@gmail.com
Project Link: https://github.com/hanifekaptan/ai-agent-for-aspect-based-sentiment-analysis
⭐ Don't forget to star this repository if you find it useful!