Sentiment-Aware AI Text Generator

Overview

This project is an AI-powered text generator that produces paragraphs or essays based on the sentiment of the input prompt. The system detects positive, negative, or neutral sentiment and generates text aligned with that sentiment.

App Preview

Here is a screenshot of the Sentiment-Aware AI Text Generator in action:

[Screenshot: Sentiment Text Generator interface]


Features

  • Automatic sentiment detection of user prompts
  • Sentiment-aligned text generation
  • Interactive frontend using Streamlit
  • Manual sentiment override
  • Adjustable output length and creativity parameters

Technical Approach

1. Sentiment Analysis

  • Utilizes the Hugging Face transformers pipeline (see the sketch after this list):

    pipeline("sentiment-analysis")
  • Default model: distilbert-base-uncased-finetuned-sst-2-english

  • Outputs positive, negative, or neutral sentiment with confidence scores.
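A minimal sketch of this step (not the repository's exact code) is shown below. The default model only emits POSITIVE/NEGATIVE labels, so mapping low-confidence results to neutral is an illustrative assumption:

# Sketch of the sentiment-detection step; the neutral threshold is an assumption.
from transformers import pipeline

sentiment_analyzer = pipeline("sentiment-analysis")  # distilbert-base-uncased-finetuned-sst-2-english

def detect_sentiment(prompt, neutral_threshold=0.75):
    """Return (label, confidence), where label is 'positive', 'negative', or 'neutral'."""
    result = sentiment_analyzer(prompt)[0]   # e.g. {'label': 'POSITIVE', 'score': 0.98}
    label, score = result["label"].lower(), result["score"]
    if score < neutral_threshold:
        return "neutral", score              # assumed fallback for low-confidence results
    return label, score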

2. Text Generation

  • Uses GPT-2 (pipeline("text-generation", model="gpt2")) for generating coherent paragraphs.

  • Prompt conditioning: prepends a sentiment instruction to user input, e.g.:

 "Write a positive paragraph about: <user_prompt>"
  • Adjustable parameters:

  • max_new_tokens – controls paragraph length

  • temperature – controls creativity

  • top_k and top_p – control diversity of output
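A rough sketch of how the generation step fits together (illustrative, not the repository's exact code; the default parameter values are assumptions):

# Sketch of sentiment-conditioned generation with GPT-2.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def generate_text(user_prompt, sentiment,
                  max_new_tokens=120, temperature=0.9, top_k=50, top_p=0.95):
    # Prompt conditioning: prepend the sentiment instruction to the user's input.
    conditioned = f"Write a {sentiment} paragraph about: {user_prompt}"
    output = generator(
        conditioned,
        max_new_tokens=max_new_tokens,  # paragraph length
        temperature=temperature,        # creativity
        top_k=top_k,                    # diversity
        top_p=top_p,                    # diversity
        do_sample=True,
    )
    return output[0]["generated_text"]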

3. Frontend

Built with Streamlit (a minimal sketch follows this list):

  • Input text area for prompt

  • Detect sentiment button

  • Manual sentiment override checkbox

  • Generate text button

  • Output display and download option
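A minimal Streamlit sketch of this layout is shown below. The widget labels and the detect_sentiment / generate_text helpers (from the sketches above) are assumptions, not the repository's exact code:

# Illustrative Streamlit layout; widget labels are assumptions.
import streamlit as st

st.title("Sentiment-Aware AI Text Generator")
prompt = st.text_area("Enter your prompt")

# Sidebar sliders for output length and creativity.
max_new_tokens = st.sidebar.slider("Output length (tokens)", 50, 300, 120)
temperature = st.sidebar.slider("Creativity (temperature)", 0.1, 1.5, 0.9)

# Optional manual sentiment override.
override = st.checkbox("Manually choose sentiment")
chosen = st.selectbox("Sentiment", ["positive", "negative", "neutral"]) if override else None

if st.button("Detect sentiment") and prompt:
    label, score = detect_sentiment(prompt)          # from the sketch above
    st.session_state["sentiment"] = label
    st.write(f"Detected sentiment: {label} ({score:.2f})")

if st.button("Generate text") and prompt:
    sentiment = chosen or st.session_state.get("sentiment", "neutral")
    text = generate_text(prompt, sentiment,
                         max_new_tokens=max_new_tokens, temperature=temperature)
    st.write(text)
    st.download_button("Download output", text, file_name="generated.txt")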

Installation

1. Clone or download the repository:

git clone <your-repo-url>
cd Ai_text_generator

2. Install Dependencies

pip install -r requirements.txt
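The repository's requirements.txt is the authoritative list; for the stack described above it would need at least Streamlit, the transformers library, and a backend such as PyTorch, roughly:

# Assumed minimal dependency set; check the repo's requirements.txt for exact pins.
streamlit
transformers
torch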

3. Running the App

Start the Streamlit app:

streamlit run streamlit_app.py
  • Open the URL in your browser (default: http://localhost:8501)

  • Type a prompt → Detect sentiment → Generate text

Note: The first run may take 1–2 minutes while GPT-2 and the sentiment-analysis model are downloaded from Hugging Face. Subsequent runs will be faster.

Usage Notes

  • Manual sentiment override allows forcing a specific sentiment for generation.

  • Sliders in the sidebar control text length and creativity.

  • Models are cached locally after the first run to improve performance (see the caching sketch below).
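Downloads are cached by Hugging Face automatically (under ~/.cache/huggingface by default). Inside the app, one common way to avoid rebuilding the pipelines on every Streamlit rerun is st.cache_resource; whether the repository uses this exact pattern is an assumption:

# Common caching pattern for Streamlit apps (assumed, not confirmed from the repo).
import streamlit as st
from transformers import pipeline

@st.cache_resource
def load_pipelines():
    sentiment = pipeline("sentiment-analysis")
    generator = pipeline("text-generation", model="gpt2")
    return sentiment, generator

sentiment_analyzer, generator = load_pipelines()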

Project Challenges & Reflections

  • Model size and load time: GPT-2 is ~500MB; the first download can take 1–2 minutes.

  • Sentiment alignment: Occasionally, GPT-2 may slightly deviate from the detected sentiment. Mitigation: prompt prefixing; further improvement possible via fine-tuning.

  • Windows symlink warnings: These are harmless and relate to the Hugging Face caching system.

  • Resource considerations: Small GPT-2 works on CPU; larger models may require GPU for faster generation.
