Create autonomous AI agents with deep personalities in minutes
Tell us about personality and vibe. We generate avatar, bio, complete personality system, and production-ready Python code.
A framework for building fully autonomous AI agents on Twitter. Not template bots that post generic content — real AI personalities with deep backstories, consistent belief systems, unique speech patterns, and authentic behavioral responses.
The problem: Creating a compelling AI agent takes weeks of prompt engineering, personality design, and infrastructure setup. Most end up feeling like obvious bots.
Our solution: Describe your character in plain language. Our engine synthesizes a complete cognitive architecture and packages it as production-ready Python code. Deploy in minutes, not weeks.
Built by the $DOT team — friends of Pippin, one of the most recognized AI agents in crypto (reached $300M market cap, currently at $200-220M).
We spent months researching what makes AI characters feel alive:
- How agents form and express beliefs consistently
- What creates personality coherence across thousands of interactions
- Why some AI characters build communities while others get ignored
- How to balance authenticity with engagement
This framework is that research, productized.
**DESCRIBE** — Tell us who your agent is. A sarcastic trading cat? A philosophical robot from 2847? A wholesome meme curator? A few sentences or a detailed spec — the engine handles both.

▼

**SYNTHESIZE** — Our cognitive engine generates a complete character model: origin story, belief systems, emotional responses, speech patterns, behavioral rules. Not a simple prompt — a full personality architecture.

▼

**PACKAGE** — Download a ready-to-run Python project with your agent's personality baked in. Modular, typed, documented — you own the code completely.

▼

**DEPLOY** — Add your API keys, run the script. Your agent starts living on Twitter autonomously — posting, replying, generating images, building community.
This isn't a basic "you are a funny bot" prompt. We create deeply crafted characters with four interconnected layers:
Identity — Origin story, backstory, core motivations, formative experiences. Who is this character and where did they come from?
Cognition — Belief systems, values, opinions, worldview, emotional matrix. How do they think and what do they care about?
Expression — Voice, tone, vocabulary, humor style, topic preferences. How do they communicate?
Behavior — Posting patterns, engagement rules, response strategies. When and how do they act?
Each layer feeds into the next. Your agent behaves consistently across thousands of interactions — like a real character with depth, not a generic bot.
The system supports two modes of operation:
Two separate autonomous agents running on different schedules:
| Scheduled Posts (Agent) | Mention Responses (Agent) |
|---|---|
| Cron-based (configurable interval) | Polling-based (configurable interval) |
| Agent creates plan → executes tools → generates post | Agent selects mentions → plans per mention → generates replies |
| Dynamic tool usage (web search, image generation) | 3 LLM calls per mention (select → plan → reply) |
| Posts to Twitter with optional media | Tracks tools used per reply |
A single agent that handles both posting and replying in one cycle:
| Feature | Description |
|---|---|
| Single cycle | Agent decides what to do (post, reply, or both) |
| Tool-based actions | Uses tools like get_mentions, create_post, create_reply |
| Step-by-step | LLM decides after each tool execution |
| Rate limiting | Self-imposed daily limits for posts and replies |
Enable with `USE_UNIFIED_AGENT=true` in environment variables.
Auto-Discovery Tools: Tools are organized into folders (shared/, legacy/, unified/) and automatically discovered on startup. Each tool has a TOOL_CONFIG with description that's injected into prompts.
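A minimal sketch of what that discovery could look like, assuming each tool module exposes a `TOOL_CONFIG` dict and discovery walks a package with `pkgutil` (the generated registry may differ in detail):

```python
# Sketch: folder-based tool auto-discovery. Assumes each tool module
# defines a module-level TOOL_CONFIG dict; modules without one are skipped.
import importlib
import pkgutil
from typing import Any

def discover_tools(package_name: str) -> dict[str, dict[str, Any]]:
    """Import every module in a package and collect its TOOL_CONFIG, keyed by module name."""
    package = importlib.import_module(package_name)
    tools: dict[str, dict[str, Any]] = {}
    for mod_info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{mod_info.name}")
        config = getattr(module, "TOOL_CONFIG", None)  # skip modules without a config
        if config is not None:
            tools[mod_info.name] = config
    return tools
```

Dropping a new module into the folder is then enough for the agent to see it; its `description` can be injected straight into the prompt.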
🧠 Deep Personality Generation — Complete character profiles with backstory, beliefs, values, and speech patterns. Not templates — synthesized personalities.
🐦 Autonomous Posting — Schedule-based or trigger-based content generation. Your agent posts in its authentic voice without manual intervention.
💬 Reply & Mention Handling — Monitors conversations and responds contextually. LLM decides whether to reply, use tools, or ignore. Requires Twitter API Basic tier or higher for mention access.
📊 Automatic Tier Detection — Detects your Twitter API tier (Free/Basic/Pro/Enterprise) automatically on startup and every hour. Blocks unavailable features and warns when approaching limits.
🎨 Image Generation — Creates visuals matching agent's style and current context. Supports multiple providers.
🔧 Extensible Tools — Plug in web search, profile lookup, conversation history, and more. Add custom tools to the appropriate folder and they're auto-discovered.
📦 Production-Ready — Clean async Python with type hints. Add API keys and deploy — no additional setup required.
Python 3.10+ with async I/O, full type hints, and modular architecture. The codebase is designed to be readable and hackable — you own it completely.
Core libraries:
- `fastapi` — HTTP server for webhooks
- `uvicorn` — ASGI server
- `apscheduler` — Cron-based job scheduling
- `httpx` — Async HTTP client
- `tweepy` — Twitter API v2 integration
- `asyncpg` — Async PostgreSQL driver
- `pydantic` + `pydantic-settings` — Settings and validation
All language model calls go through OpenRouter, giving you access to multiple providers through a single API:
- Claude Sonnet 4.5 — Primary model for personality synthesis and content generation
- GPT-5 — Alternative provider with strong reasoning capabilities
- Gemini 3 Pro — Fast inference, good for high-volume interactions
Model selection is configurable per-agent. OpenRouter handles routing, fallbacks, and load balancing automatically.
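A hedged sketch of what a call through OpenRouter looks like: the endpoint and OpenAI-compatible chat schema are OpenRouter's public API, while the model slug, helper names, and env var are illustrative:

```python
# Sketch: calling OpenRouter. The request body is the OpenAI-compatible
# chat schema; helper names and the model slug are assumptions.
import os

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(model: str, system_prompt: str, user_message: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

async def generate(payload: dict) -> str:
    import httpx  # imported here so the sketch stays importable without httpx installed
    headers = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}
    async with httpx.AsyncClient(timeout=60.0) as client:
        resp = await client.post(OPENROUTER_URL, json=payload, headers=headers)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
```

Because the schema is provider-agnostic, swapping models is a one-string change in the payload.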
Visual content generation supports two providers:
- Nano Banana 2 Pro (Gemini 3 Pro Image) — Our default. Fast, high quality, excellent prompt following
- GPT-5 Image — Native OpenAI generation with strong context awareness
Real-time web search capability powered by OpenRouter's native plugins:
- OpenRouter Web Plugin — Native web search via the `plugins: [{"id": "web"}]` request field. Returns real search results with source citations (URLs, titles, snippets). Supports multiple search engines, including native provider search and Exa.ai.
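In request terms, enabling the plugin is one extra field on the payload; a tiny illustrative helper:

```python
# Sketch: OpenRouter's web plugin is enabled per-request by adding a
# "plugins" field to the chat-completions payload.
def with_web_search(payload: dict) -> dict:
    """Return a copy of the request payload with the web plugin enabled."""
    return {**payload, "plugins": [{"id": "web"}]}
```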
Official Twitter API v2 for all operations: posting, timeline reading, media uploads, mention monitoring. We don't use unofficial endpoints or scraping.
Runs anywhere Python runs: VPS, Railway, Render, Docker, your laptop. Stateless design means easy horizontal scaling if needed.
Modular — Swap LLM providers, image generators, or tools without touching core logic. Each component has clean interfaces.
Local credentials — Your API keys never leave your machine. We generate code, not hosted services.
Stateless — Agent state serializes to JSON. Easy to backup, migrate, or run multiple instances.
Clean code — Readable, typed, documented. This is your codebase now — you should be able to understand and modify it.
When you generate an agent, you receive a complete Python project:
```
my-agent/
├── assets/                           # Reference images for generation
│
├── config/
│   ├── settings.py                   # Environment & configuration
│   ├── models.py                     # Model configuration (LLM, Image models)
│   ├── schemas.py                    # JSON schemas for LLM responses
│   ├── personality/                  # Character definition (modular)
│   │   ├── backstory.py              # Origin story
│   │   ├── beliefs.py                # Values and priorities
│   │   └── instructions.py           # Communication style
│   └── prompts/                      # LLM prompts (modular)
│       ├── agent_autopost.py         # Agent planning prompt
│       ├── unified_agent.py          # Unified agent instructions (v1.4)
│       ├── mention_selector_agent.py # Agent mention selection (v1.3)
│       └── mention_reply_agent.py    # Agent reply planning (v1.3)
│
├── utils/
│   └── api.py                        # OpenRouter API configuration
│
├── services/
│   ├── autopost.py                   # Agent-based scheduled posting
│   ├── mentions.py                   # Mention/reply handler
│   ├── unified_agent.py              # Unified agent (v1.4)
│   ├── tier_manager.py               # Twitter API tier detection
│   ├── llm.py                        # OpenRouter client (generate, chat)
│   ├── twitter.py                    # Twitter API v2 integration
│   └── database.py                   # PostgreSQL for history + metrics
│
├── tools/
│   ├── registry.py                   # Auto-discovery from subfolders
│   ├── shared/                       # Tools for both modes
│   │   ├── web_search.py             # Web search via OpenRouter
│   │   ├── get_twitter_profile.py    # Get user profile info
│   │   └── get_conversation_history.py # Chat history with user
│   ├── legacy/                       # Legacy mode only
│   │   └── image_generation.py       # Image gen with references
│   └── unified/                      # Unified agent only
│       ├── create_post.py            # Post with optional image
│       ├── create_reply.py           # Reply to mention
│       ├── get_mentions.py           # Fetch unread mentions
│       └── finish_cycle.py           # End agent cycle
│
├── main.py                           # FastAPI + APScheduler entry point
├── requirements.txt                  # Dependencies
├── .env.example                      # API keys template
└── ARCHITECTURE.md                   # AI-readable technical documentation
```
Everything is modular. Swap the LLM provider, add new tools, adjust posting schedules — the architecture supports it.
The ARCHITECTURE.md file is specifically designed for AI assistants (ChatGPT, Claude, Cursor, Copilot). Feed it to your AI tool of choice and it will understand the entire codebase structure, data flows, and how to extend the bot. This enables AI-assisted development and customization.
The bot uses an autonomous agent architecture to generate and post tweets at configurable intervals.
How the agent works:
- Agent receives context (previous 50 posts to avoid repetition)
- Agent creates a plan — decides which tools to use:
  - `web_search` — to find current information, news, prices
  - `generate_image` — to create a visual for the post
  - Or no tools at all if it already has a good idea
- Agent executes tools step by step, with results feeding back into the conversation
- Agent generates final tweet text based on all gathered information
- Tweet is posted with optional image
- Saved to database for future context
Example agent flow:

```
Agent thinks: "I want to post about crypto trends with a visual"
→ Plan: [web_search("crypto market today"), generate_image("abstract chart art")]
→ Executes web_search, gets current market info
→ Executes generate_image, creates matching visual
→ Generates tweet: "the market is just vibes at this point..."
→ Posts with image
```
Key features:
- Dynamic tool selection — Agent decides when tools are needed
- Continuous conversation — Tool results inform the final tweet
- Modular tools — Drop new tools into a `tools/` subfolder; `tools/registry.py` auto-discovers them and the agent uses them automatically
Configuration:
- `POST_INTERVAL_MINUTES` — Time between auto-posts (default: 30)
- `ENABLE_IMAGE_GENERATION` — Set to `false` to disable image generation (hides the tool from the agent)
- `ALLOW_MENTIONS` — Set to `false` to disable mentions (hides mention tools from the agent)
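A matching `.env` fragment for the autopost agent might look like this (values illustrative):

```
POST_INTERVAL_MINUTES=30
ENABLE_IMAGE_GENERATION=true
ALLOW_MENTIONS=true
```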
A single agent that handles both posting and replying in one cycle.
How it works:
- Agent loads context (recent actions, rate limits, tier info)
- Agent decides what to do using available tools
- Loop until `finish_cycle` is called:
  - LLM decides the next action via structured output
  - Execute the tool (`get_mentions`, `create_post`, `create_reply`, etc.)
  - Add the result to the conversation
- Repeat next cycle after configured interval
Available tools:
- `get_mentions` — fetch unread Twitter mentions
- `create_post` — post with optional image
- `create_reply` — reply to a mention with an optional image
- `web_search` — search the web for current info
- `get_twitter_profile` — get user profile info
- `get_conversation_history` — get chat history with a user
- `finish_cycle` — end the current cycle
Configuration:
- `USE_UNIFIED_AGENT` — Set to `true` to enable (default: true)
- `AGENT_INTERVAL_MINUTES` — Time between agent cycles (default: 30)
- Daily limits are tier-based (posts/replies — Free: 15/0, Basic: 50/50, Pro: 500/500)
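The unified agent's decide-execute loop can be sketched as follows; the function names and shapes are assumptions, not the generated project's exact API:

```python
# Sketch: the unified agent loop. The LLM (decide_next) picks one tool per
# step via structured output; the loop ends when it picks "finish_cycle".
from typing import Any, Callable

def run_cycle(
    decide_next: Callable[[list], dict],
    tools: dict[str, Callable[..., Any]],
    max_steps: int = 10,
) -> list:
    """Repeatedly let the LLM pick a tool until it calls finish_cycle."""
    conversation: list = []
    for _ in range(max_steps):
        action = decide_next(conversation)  # e.g. {"tool": "create_post", "args": {...}}
        if action["tool"] == "finish_cycle":
            break
        result = tools[action["tool"]](**action.get("args", {}))
        conversation.append({"tool": action["tool"], "result": result})
    return conversation
```

The `max_steps` cap is a safety budget so a confused model cannot loop forever within one cycle.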
Agent-based mention processing with 3 LLM calls per mention (v1.3).
How it works:
- Polls Twitter API for new mentions every 20 minutes (configurable)
- Filters out already-processed mentions using database
- LLM #1: Selection — Evaluates all mentions and returns an array of those worth replying to, with priorities
- For EACH selected mention:
- Gets user conversation history from database
- LLM #2: Planning — Creates plan (which tools to use)
- Executes tools (web_search, generate_image)
- LLM #3: Reply — Generates final reply text
- Uploads image if generated, posts reply
- Saves interaction with tools_used tracking
- Returns batch summary
Why agent architecture: Instead of a single LLM call for all mentions, each mention gets individual attention. The agent can use tools to research topics, generate custom images, and craft contextually appropriate replies. User conversation history enables personalized interactions.
Configuration:
- `MENTIONS_INTERVAL_MINUTES` — Time between mention checks (default: 20)
- `MENTIONS_WHITELIST` — Optional list of usernames for testing (empty = all users)
- Requires Twitter API Basic tier or higher for mention access
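The three-LLM-call pipeline described above can be sketched with the LLM stages as injected callables (all names illustrative):

```python
# Sketch: select -> per-mention plan -> execute tools -> reply.
# The three callables stand in for the three LLM calls per mention.
from typing import Any, Callable, Iterable

def process_mentions(
    mentions: Iterable[dict],
    select: Callable[[list], list],      # LLM #1: which mentions deserve a reply
    plan: Callable[[dict], list],        # LLM #2: which tool steps to run for one mention
    run_tool: Callable[[dict], Any],     # executes e.g. web_search / generate_image
    reply: Callable[[dict, list], str],  # LLM #3: final reply text
) -> list[str]:
    replies = []
    for mention in select(list(mentions)):
        tool_results = [run_tool(step) for step in plan(mention)]
        replies.append(reply(mention, tool_results))
    return replies
```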
Generates images using Gemini 3 Pro via OpenRouter, with support for reference images.
How assets/ folder works (v1.3):
- Place reference images in the `assets/` folder (supports: png, jpg, jpeg, gif, webp, jfif)
- The bot uses ALL reference images (not a random selection) for maximum consistency
- Reference images are sent to the model along with the generation prompt
- If `assets/` is empty, images are generated without reference (pure text-to-image)
- Use reference images to maintain a consistent character appearance across posts
Auto-discovery: The tool exports `TOOL_SCHEMA` and is automatically available to agents.
Example use case: Place photos of your bot's character/avatar in assets/. The model will use all of them as reference when generating new images, keeping the visual style consistent.
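A sketch of how such a loader might gather references from `assets/` (the actual implementation may differ; the allowed extensions are the ones listed above):

```python
# Sketch: collect every allowed reference image from assets/ as base64,
# so it can be attached to the generation request alongside the prompt.
import base64
from pathlib import Path

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".jfif"}

def load_reference_images(assets_dir: str = "assets") -> list[str]:
    """Return every allowed image as base64; an empty list means pure text-to-image."""
    root = Path(assets_dir)
    if not root.is_dir():
        return []
    images = []
    for path in sorted(root.iterdir()):
        if path.is_file() and path.suffix.lower() in ALLOWED_EXTENSIONS:
            images.append(base64.b64encode(path.read_bytes()).decode("ascii"))
    return images
```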
Modular character definition split into three files for easier editing:
backstory.py — Origin story and background
- Who the character is
- Where they come from
- Core identity
beliefs.py — Values and priorities
- Personality traits
- Topics of interest
- Worldview
instructions.py — Communication style
- How to write (tone, grammar, punctuation)
- What NOT to do
- Example tweets
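The three files above are concatenated into one system prompt; a minimal sketch of that assembly, using placeholder text and assumed variable names:

```python
# Sketch: combining the three personality modules into one system prompt.
# The real files export full character text; these are placeholders.
BACKSTORY = "Origin story and core identity..."
BELIEFS = "Values, priorities, and worldview..."
INSTRUCTIONS = "Tone, grammar, what NOT to do, example tweets..."

def build_system_prompt(*parts: str) -> str:
    """Join the personality sections with blank lines, mirroring __init__.py."""
    return "\n\n".join(parts)

SYSTEM_PROMPT = build_system_prompt(BACKSTORY, BELIEFS, INSTRUCTIONS)
```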
All parts are combined into `SYSTEM_PROMPT` automatically via `__init__.py`:

```python
from config.personality import SYSTEM_PROMPT  # Gets the combined prompt
from config.personality import BACKSTORY      # Or individual parts
```

Automatic Twitter API tier detection and limit management.
How it works:
- On startup, calls the Twitter Usage API (`GET /2/usage/tweets`)
- Determines the tier from `project_cap`: Free (100), Basic (10K), Pro (1M), Enterprise (10M+)
- Checks the tier every hour to detect subscription upgrades
- Blocks unavailable features (e.g., mentions on Free tier)
- Auto-pauses operations when monthly cap reached
- Logs warnings at 80% and 90% usage
Tier features:
| Tier | Mentions | Post Limit | Read Limit |
|---|---|---|---|
| Free | ❌ | 500/month | 100/month |
| Basic | ✅ | 3,000/month | 10,000/month |
| Pro | ✅ | 300,000/month | 1,000,000/month |
Endpoints:
- `GET /tier-status` — Current tier, usage stats, available features
- `POST /tier-refresh` — Force tier re-detection (after a subscription change)
PostgreSQL storage for post history and mention tracking, enabling context-aware generation.
Tables:
- `posts` — Stores all posted tweets (text, tweet_id, include_picture, created_at)
- `mentions` — Stores mention interactions (tweet_id, author_handle, author_text, our_reply, action)
Why it matters:
- Post history lets the bot reference previous tweets and avoid repetition. The LLM sees the last 50 posts as context.
- Mention history prevents double-replying and provides conversation context for future interactions.
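A sketch of how post history might be turned into anti-repetition context for the LLM (helper name assumed):

```python
# Sketch: format the most recent posts as a bullet list so the LLM can see
# what it already said and avoid repeating itself.
def recent_posts_context(posts: list[str], limit: int = 50) -> str:
    """Format the most recent `limit` posts, newest last."""
    recent = posts[-limit:]
    return "\n".join(f"- {text}" for text in recent)
```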
Async client for OpenRouter API with structured output support.
Features:
- Uses Claude Sonnet 4.5 by default (configurable)
- Supports structured JSON output for reliable parsing
- Handles both simple text generation and complex formatted responses
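A sketch of the structured-output request shape, using the OpenAI-style `response_format` field that OpenRouter supports (schema content and helper name illustrative):

```python
# Sketch: ask the model for strictly-validated JSON via response_format,
# so tool-selection and reply payloads parse reliably.
def structured_payload(model: str, messages: list[dict], schema: dict) -> dict:
    return {
        "model": model,
        "messages": messages,
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "response", "strict": True, "schema": schema},
        },
    }
```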
Handles all Twitter API interactions using tweepy.
Capabilities:
- Post tweets (API v2)
- Upload media (API v1.1 — required for images)
- Reply to tweets
- Fetch mentions (polling-based)
- Get authenticated user info
- Automatic rate limit handling
The workflow below describes the end-to-end process:
- Access — Visit pippinlovesdot.com, describe your agent's personality and style
- Generate — Engine creates personality profile + complete Python codebase
- Configure — Download the package, add your API credentials to `.env`
- Deploy — Run `python main.py` on any Python 3.10+ environment
- Iterate — Monitor performance, refine personality, expand tool integrations
- OpenRouter API Key — For LLM inference. Gives access to Claude, GPT, Gemini through one endpoint.
- Twitter API v2 — For posting and reading. Free tier works for posting; Basic tier needed for mentions. Pro tier increases rate limits.
- PostgreSQL — For conversation history. Any provider works (Railway, Supabase, Neon, self-hosted).
- Python 3.10+ — Runtime environment with async support.
- Core personality synthesis engine
- Twitter automation pipeline
- Multi-model LLM support via OpenRouter
- Image generation integration
- Mention handling with tool calling
- Web platform launch
MIT — use it, modify it, build on it.