GFGBQ-Team-tech-team

Problem Statement

Generative AI models are widely used for research, learning, and decision-making.
However, these systems often generate confident but factually incorrect information, including fake citations, non-existent references, and misleading links that appear legitimate but cannot be verified.

This lack of reliability makes it difficult for users to trust AI-generated content and can lead to:

  • Misinformation
  • Academic and research errors
  • Legal and ethical risks

There is a strong need for a system that can detect, flag, and explain unreliable AI-generated claims and citations in a transparent and user-friendly manner.


Project Name

TruthLens


Team Name

Tech Team


Demonstration Video Link

https://drive.google.com/drive/folders/1040BcrSvTXqV4PpJ0EGRVvO2VCgjfwg4


PPT Link

👉 https://docs.google.com/presentation/d/1-jLeurC3QEws75AUyOn4pY07YYAe_QKR/edit?usp=drivesdk&ouid=103076608127478627289&rtpof=true&sd=true


Project Overview

TruthLens is a web-based application that analyzes AI-generated text to detect hallucinations, misleading claims, and unreliable citations.

Instead of acting as a black-box verifier, TruthLens focuses on sentence-level explainability, allowing users to understand why certain content is considered trustworthy or risky.

Key Features

  • Sentence-level claim analysis
  • Detection of misleading or unverifiable citations
  • Trust score with animated confidence bar
  • Contextual explanation of score (low / medium / high)
  • Apple-style scroll storytelling UI
  • Transparent and explainable output

TruthLens is designed for students, researchers, journalists, educators, and anyone relying on AI-generated content.
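To illustrate the idea of sentence-level analysis with a transparent trust score, here is a minimal, purely hypothetical sketch. It is not the actual TruthLens pipeline: the sentence splitter, the citation-pattern heuristic, and the scoring formula are all simplified assumptions made for this example.

```python
import re

# Hypothetical citation/link pattern: URLs, "(Author, 2024)"-style
# references, or "[12]"-style numeric citations.
CITATION_PATTERN = re.compile(r"https?://\S+|\(\w+,\s*\d{4}\)|\[\d+\]")

def split_sentences(text):
    # Naive splitter on end-of-sentence punctuation; a real system
    # would use a proper NLP sentence segmenter.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def analyze(text):
    """Flag sentences containing unverified citation-like patterns
    and derive a simple trust score (fraction of unflagged sentences)."""
    sentences = split_sentences(text)
    flagged = [s for s in sentences if CITATION_PATTERN.search(s)]
    score = 1.0 if not sentences else 1 - len(flagged) / len(sentences)
    return {"sentences": sentences, "flagged": flagged, "trust_score": score}

result = analyze(
    "Einstein published special relativity in 1905. "
    "See https://example.com/fake-paper for proof. "
    "Water boils at 100 C at sea level."
)
print(round(result["trust_score"], 2))  # 0.67 (1 of 3 sentences flagged)
```

In a real verifier each flagged sentence would also carry an explanation of why it was flagged, which is the explainability goal described above.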


Setup and Installation Instructions

Prerequisites

  • Python 3.8 or above
  • pip
  • A modern web browser
  • Git

Backend Setup

  1. Navigate to the project directory:
     cd citation
  2. Create and activate a virtual environment:
     python -m venv venv
     source venv/bin/activate   # Windows: venv\Scripts\activate
  3. Install dependencies:
     pip install -r requirements.txt
  4. Start the backend server:
     python main.py

Frontend Setup

  1. Navigate to the frontend folder:
     cd Frontend
  2. Open index.html using:
     • VS Code Live Server (recommended), or
     • Any local static server, or
     • Directly in a browser

Usage Instructions

1.	Open the TruthLens web interface.
2.	Paste AI-generated text into the input box.
3.	Click “Verify with TruthLens”.
4.	TruthLens will:
•	Analyze each sentence individually
•	Detect misleading or unsupported claims
•	Compute a trust score
•	Display a confidence bar
•	Explain why the score is low, medium, or high
•	Highlight risky sentences clearly

Score Interpretation

•	🟢 High Confidence – Claims are well-supported or low-risk
•	🟡 Medium Confidence – Some claims lack strong evidence
•	🔴 Low Confidence – Misleading or unverifiable citations detected
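The three bands above can be expressed as a simple threshold mapping. The cut-off values below are assumptions chosen for illustration; the actual TruthLens thresholds may differ.

```python
def confidence_band(trust_score):
    """Map a 0.0-1.0 trust score to one of the three confidence bands.
    Thresholds (0.75 and 0.40) are assumed for this sketch."""
    if trust_score >= 0.75:
        return "High"
    if trust_score >= 0.40:
        return "Medium"
    return "Low"

print(confidence_band(0.90))  # High
print(confidence_band(0.50))  # Medium
print(confidence_band(0.20))  # Low
```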

Screenshots

(Screenshots of the TruthLens interface are included in the repository.)

About

Repository for tech team - Vibe Coding Hackathon
