🐦 Tweeting Agent

An Autonomous AI System for Generating Humorous & Viral Tweets using Open-Source LLMs


📌 Overview

Tweeting Agent is an AI-powered system designed to help users generate humorous, engaging, and potentially viral tweets on any given topic. It leverages open-source Large Language Models (LLMs) orchestrated through LangGraph, enabling an iterative generate → evaluate → optimize workflow.

Unlike single-prompt tweet generators, Tweeting Agent behaves like a creative loop, refining tweets over multiple iterations until the best possible version emerges.


🚀 Key Features

  • 🤖 Multi-Agent Architecture using LangGraph
  • 🔁 Iterative Improvement Loop (Generator → Evaluator → Optimizer)
  • 🧠 Fully Open-Source LLM Stack via Ollama
  • 🎯 Optimized for humor, virality, clarity, and engagement
  • ⚡ Modular, extensible, and framework-agnostic design
  • 🧪 Easy experimentation with different models, prompts, and scoring logic

🧠 System Architecture

The Tweeting Agent uses three specialized LLM roles, each optimized for a specific task:

1. Generator Agent

  • Model: gpt-oss:20b

  • Purpose: Generates creative, humorous tweets based on the user-provided topic.

  • Strengths:

    • Strong generative creativity
    • Produces multiple candidate tweets
    • Explores diverse humorous angles
```python
from langchain_ollama import ChatOllama

generator_model = ChatOllama(model="gpt-oss:20b")
```

2. Evaluator Agent

  • Model: llama3.1:latest

  • Purpose: Critically evaluates generated tweets based on predefined quality metrics.

  • Evaluation Criteria:

    • Humor
    • Virality potential
    • Clarity
    • Brevity
    • Relevance to topic
```python
from langchain_ollama import ChatOllama

evaluator_model = ChatOllama(model="llama3.1:latest")
```
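The evaluation criteria above can be captured in a simple score structure. The following is a hypothetical sketch, not code from this repository: the class, field names, and threshold are illustrative stand-ins for however the Evaluator Agent's output is actually parsed.

```python
from dataclasses import dataclass

# Hypothetical container for the Evaluator Agent's feedback; field names
# mirror the criteria listed above but are illustrative, not from the repo.
@dataclass
class TweetEvaluation:
    humor: int       # 1-10
    virality: int    # 1-10
    clarity: int     # 1-10
    brevity: int     # 1-10
    relevance: int   # 1-10
    feedback: str    # free-text notes passed on to the Optimizer Agent

    def average(self) -> float:
        """Mean score across the five criteria."""
        return (self.humor + self.virality + self.clarity
                + self.brevity + self.relevance) / 5

    def passes(self, threshold: float = 8.0) -> bool:
        """True when the tweet meets the quality threshold."""
        return self.average() >= threshold
```

A structure like this makes the quality check in the workflow a single comparison rather than another round-trip to the model.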

3. Optimizer Agent

  • Model: llama3.2:3b

  • Purpose: Refines tweets using evaluator feedback, improving punchlines, wording, and structure.

  • Why a smaller model?

    • Faster iterations
    • Lower compute cost
    • Ideal for targeted rewriting and polishing
```python
from langchain_ollama import ChatOllama

optimizer_model = ChatOllama(model="llama3.2:3b")
```

🔁 LangGraph Workflow

The entire system is orchestrated using LangGraph, enabling a clean, stateful, and iterative agent flow.

Workflow Steps:

  1. Input Topic
  2. Tweet Generation
  3. Tweet Evaluation
  4. Tweet Optimization
  5. Quality Check
  6. Repeat until the quality threshold is met
```mermaid
graph TD
    A[User Topic] --> B[Generator Agent]
    B --> C[Evaluator Agent]
    C -->|Needs Improvement| D[Optimizer Agent]
    D --> C
    C -->|Meets Quality Threshold| E[Final Tweet]
```
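The loop above can be sketched, independently of LangGraph, as plain control flow. This is a dependency-free sketch: `generate`, `evaluate`, and `optimize` are stubs standing in for the three ChatOllama agents, and the threshold and iteration cap are assumed values, not taken from the repository.

```python
# Dependency-free sketch of the generate → evaluate → optimize loop.
# Each stub stands in for an LLM call made by the corresponding agent.

def generate(topic: str) -> str:
    """Stub for the Generator Agent: drafts a tweet on the topic."""
    return f"Draft tweet about {topic}"

def evaluate(tweet: str) -> float:
    """Stub for the Evaluator Agent: returns a 0-10 quality score.
    The real agent would score humor, virality, clarity, brevity,
    and relevance; here length alone drives the toy score."""
    return min(10.0, len(tweet) / 4)

def optimize(tweet: str, score: float) -> str:
    """Stub for the Optimizer Agent: rewrites using evaluator feedback."""
    return tweet + " (punchier)"

def run(topic: str, threshold: float = 8.0, max_iterations: int = 5) -> str:
    tweet = generate(topic)
    for _ in range(max_iterations):
        score = evaluate(tweet)
        if score >= threshold:          # quality check: exit the loop
            break
        tweet = optimize(tweet, score)  # otherwise refine and re-evaluate
    return tweet
```

In the LangGraph version, the same shape is expressed as graph nodes with a conditional edge from the evaluator, which keeps the loop stateful and inspectable rather than buried in a `for` loop.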

🛠️ Tech Stack

| Component       | Technology  |
| --------------- | ----------- |
| Agent Framework | LangGraph   |
| LLM Provider    | Ollama      |
| Generator Model | gpt-oss:20b |
| Evaluator Model | llama3.1    |
| Optimizer Model | llama3.2:3b |
| Language        | Python      |

⚙️ Installation & Setup

Prerequisites

  • Python 3.9+
  • Ollama installed locally
  • Required models pulled via Ollama
```shell
ollama pull gpt-oss:20b
ollama pull llama3.1
ollama pull llama3.2:3b
```

▶️ Usage

```shell
python main.py --topic "AI replacing human jobs"
```

Example Output

“AI won’t take your job. It’ll just ask for your help… then quietly stop replying.”


🎯 Benefits

✅ For Developers

  • Demonstrates agentic AI design patterns
  • Real-world use of LangGraph
  • Easy to extend with more agents or evaluation criteria

✅ For Content Creators

  • Generates high-quality, funny tweets
  • Reduces creative fatigue
  • Increases engagement potential

✅ For Researchers

  • Experiment with self-improving LLM systems
  • Compare multi-model collaboration vs single-model prompting

⭐ Acknowledgements

  • LangGraph for agent orchestration
  • Ollama for local, open-source LLM inference
  • The open-source LLM community ❤️