Generative AI models are widely used for research, learning, and decision-making.
However, these systems often generate confident but factually incorrect information, including fake citations, non-existent references, and misleading links that appear legitimate but cannot be verified.
This lack of reliability makes it difficult for users to trust AI-generated content and can lead to:
- Misinformation
- Academic and research errors
- Legal and ethical risks
There is a strong need for a system that can detect, flag, and explain unreliable AI-generated claims and citations in a transparent and user-friendly manner.
TruthLens
Tech Team
https://drive.google.com/drive/folders/1040BcrSvTXqV4PpJ0EGRVvO2VCgjfwg4
TruthLens is a web-based application that analyzes AI-generated text to detect hallucinations, misleading claims, and unreliable citations.
Instead of acting as a black-box verifier, TruthLens focuses on sentence-level explainability, allowing users to understand why certain content is considered trustworthy or risky.
- Sentence-level claim analysis
- Detection of misleading or unverifiable citations
- Trust score with animated confidence bar
- Contextual explanation of score (low / medium / high)
- Apple-style scroll storytelling UI
- Transparent and explainable output
TruthLens is designed for students, researchers, journalists, educators, and anyone relying on AI-generated content.
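To make the sentence-level approach concrete, the sketch below scores each sentence with simple regex heuristics. Everything in it — the risk patterns, the 0.4 penalty per marker, the naive sentence splitter — is an illustrative assumption, not TruthLens's actual pipeline:

```python
import re

# Hypothetical risk markers; the real TruthLens detectors are not specified here.
RISK_PATTERNS = [
    r"\bet al\.",        # citation-like phrasing that may be fabricated
    r"https?://\S+",     # links that may not resolve or verify
    r"\b(19|20)\d{2}\b", # year references typical of invented citations
]

def analyze(text: str):
    """Split text into sentences and score each one (1.0 = no risk markers)."""
    # Naive splitter: breaks on whitespace following ., !, or ?
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    results = []
    for s in sentences:
        hits = sum(bool(re.search(p, s)) for p in RISK_PATTERNS)
        score = max(0.0, 1.0 - 0.4 * hits)  # assumed penalty per marker
        results.append({"sentence": s, "risk_markers": hits, "score": score})
    return results

report = analyze("Water boils at 100 C. See https://example.com/ref for details.")
```

In this toy run, the first sentence carries no risk markers while the second is penalized for its link — the kind of per-sentence transparency the tool aims for.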
- Python 3.8 or above
- pip
- A modern web browser
- Git
- Navigate to the project directory:
cd citation
- Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate
- Install dependencies:
pip install -r requirements.txt
- Start the backend server:
python main.py
- Navigate to the frontend folder:
cd Frontend
- Open index.html using:
• VS Code Live Server (recommended), or
• Any local static server, or
• Directly in a browser
1. Open the TruthLens web interface.
2. Paste AI-generated text into the input box.
3. Click “Verify with TruthLens”.
4. TruthLens will:
• Analyze each sentence individually
• Detect misleading or unsupported claims
• Compute a trust score
• Display a confidence bar
• Explain why the score is low, medium, or high
• Highlight risky sentences clearly
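The trust score in step 4 has to aggregate per-sentence results into one number for the confidence bar. A minimal sketch, assuming a simple average on a 0-100 scale (the project's actual aggregation formula is not documented here):

```python
def trust_score(sentence_scores):
    """Aggregate per-sentence scores (each 0.0-1.0) into a 0-100 trust score.

    Averaging is an illustrative assumption, not TruthLens's actual method.
    """
    if not sentence_scores:
        return 0
    return round(100 * sum(sentence_scores) / len(sentence_scores))

overall = trust_score([1.0, 0.6])  # two sentences, one flagged
```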
• 🟢 High Confidence – Claims are well-supported or low-risk
• 🟡 Medium Confidence – Some claims lack strong evidence
• 🔴 Low Confidence – Misleading or unverifiable citations detected
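Mapping a numeric trust score onto these three tiers could look like the sketch below; the 75 and 40 cutoffs are assumed for illustration and are not TruthLens's documented thresholds:

```python
def confidence_tier(score: int) -> str:
    """Map a 0-100 trust score to a tier label (assumed cutoffs)."""
    if score >= 75:
        return "high"    # 🟢 well-supported or low-risk claims
    if score >= 40:
        return "medium"  # 🟡 some claims lack strong evidence
    return "low"         # 🔴 misleading or unverifiable citations
```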