# Sentiment-Aware AI Text Generator

This project is an AI-powered text generator that produces paragraphs or essays aligned with the sentiment of the input prompt. The system detects positive, negative, or neutral sentiment and generates text to match.

*(Screenshot: the Sentiment-Aware AI Text Generator in action.)*

## Features
- Automatic sentiment detection of user prompts
- Sentiment-aligned text generation
- Interactive frontend using Streamlit
- Manual sentiment override
- Adjustable output length and creativity parameters
## Sentiment Detection

- Uses the Hugging Face `transformers` pipeline: `pipeline("sentiment-analysis")`
- Default model: `distilbert-base-uncased-finetuned-sst-2-english`
- Outputs positive, negative, or neutral sentiment with a confidence score
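Detection along these lines can be sketched with the `transformers` pipeline. One caveat: the default SST-2 model is binary (positive/negative), so mapping low-confidence predictions to neutral, as the `neutral_threshold` heuristic below does, is an assumption about how the app derives its third class, not necessarily its exact logic:

```python
def label_from_result(result, neutral_threshold=0.6):
    """Map a raw pipeline result like {'label': 'POSITIVE', 'score': 0.99}
    to 'positive' / 'negative' / 'neutral'.

    SST-2 is binary, so treating low-confidence predictions as neutral
    (the threshold here) is an assumed heuristic.
    """
    if result["score"] < neutral_threshold:
        return "neutral"
    return result["label"].lower()


def detect_sentiment(text, neutral_threshold=0.6):
    """Run the sentiment pipeline and return (label, confidence)."""
    from transformers import pipeline  # model downloads on first use

    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    result = classifier(text)[0]
    return label_from_result(result, neutral_threshold), result["score"]
```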
## Text Generation

- Uses GPT-2 (`pipeline("text-generation", model="gpt2")`) to generate coherent paragraphs
- Prompt conditioning: prepends a sentiment instruction to the user input, e.g. `"Write a positive paragraph about: <user_prompt>"`
- Adjustable parameters:
  - `max_new_tokens` – controls paragraph length
  - `temperature` – controls creativity
  - `top_k` and `top_p` – control the diversity of the output
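Putting the pieces above together, a minimal sketch of sentiment-conditioned generation might look like this. The parameter defaults are illustrative, not the app's actual values:

```python
def build_prompt(user_prompt, sentiment):
    """Prepend the sentiment instruction to the user's prompt."""
    return f"Write a {sentiment} paragraph about: {user_prompt}"


def generate_paragraph(user_prompt, sentiment,
                       max_new_tokens=120, temperature=0.9,
                       top_k=50, top_p=0.95):
    """Generate a sentiment-conditioned paragraph with GPT-2."""
    from transformers import pipeline  # GPT-2 downloads on first use

    generator = pipeline("text-generation", model="gpt2")
    out = generator(
        build_prompt(user_prompt, sentiment),
        max_new_tokens=max_new_tokens,  # paragraph length
        temperature=temperature,        # creativity
        top_k=top_k,                    # diversity
        top_p=top_p,
        do_sample=True,
    )
    return out[0]["generated_text"]
```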
## Frontend

Built with Streamlit:

- Input text area for the prompt
- "Detect sentiment" button
- Manual sentiment override checkbox
- "Generate text" button
- Output display with a download option

## Installation
```bash
git clone <your-repo-url>
cd Ai_text_generator
pip install -r requirements.txt
```

## Usage

Start the Streamlit app:

```bash
streamlit run streamlit_app.py
```

Then:

- Open the URL in your browser (default: http://localhost:8501)
- Type a prompt → Detect sentiment → Generate text
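The prompt → detect → generate flow can be sketched as a minimal Streamlit script. Widget labels, slider ranges, and layout here are illustrative, not the repo's actual `streamlit_app.py`:

```python
def resolve_sentiment(detected_label, override=None):
    """Prefer the manual override when given, else the detected label."""
    return override if override else detected_label.lower()


def main():
    import streamlit as st
    from transformers import pipeline

    @st.cache_resource  # models are cached after the first load
    def load_models():
        return (
            pipeline("sentiment-analysis"),
            pipeline("text-generation", model="gpt2"),
        )

    classifier, generator = load_models()

    prompt = st.text_area("Prompt")
    override = None
    if st.checkbox("Manual sentiment override"):
        override = st.selectbox("Sentiment",
                                ["positive", "negative", "neutral"])
    length = st.sidebar.slider("Length (tokens)", 20, 300, 120)
    temperature = st.sidebar.slider("Creativity", 0.1, 1.5, 0.9)

    if st.button("Generate text") and prompt:
        detected = classifier(prompt)[0]["label"]
        sentiment = resolve_sentiment(detected, override)
        result = generator(
            f"Write a {sentiment} paragraph about: {prompt}",
            max_new_tokens=length,
            temperature=temperature,
            do_sample=True,
        )[0]["generated_text"]
        st.write(result)
        st.download_button("Download", result)


if __name__ == "__main__":
    main()
```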
Note: The first run may take 1–2 minutes as GPT-2 and the sentiment analysis models are downloaded from Hugging Face. Subsequent runs will be faster.
## Notes

- Manual sentiment override allows forcing a specific sentiment for generation
- Sliders in the sidebar control text length and creativity
- Models are cached locally after the first run to improve performance
## Limitations

- Model size and load time: GPT-2 is ~500 MB; the first download can take 1–2 minutes
- Sentiment alignment: GPT-2 may occasionally deviate from the detected sentiment. Mitigation: prompt prefixing; further improvement is possible via fine-tuning
- Windows symlink warnings: these are harmless and relate to the Hugging Face caching system
- Resource considerations: the small GPT-2 model works on CPU; larger models may require a GPU for faster generation
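On the resource point: the `device` argument of `pipeline` selects GPU vs. CPU. The helper below is a sketch; the try/except fallback is an assumption for environments without PyTorch, not the app's code:

```python
def pick_device():
    """Return the `device` argument for a transformers pipeline:
    0 for the first CUDA GPU, -1 for CPU."""
    try:
        import torch
        return 0 if torch.cuda.is_available() else -1
    except ImportError:
        return -1


def build_generator():
    """Create a GPT-2 generation pipeline on the best available device."""
    from transformers import pipeline

    return pipeline("text-generation", model="gpt2", device=pick_device())
```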
