Just shipped CodeMode Unified - a production-ready code execution platform for AI agents! 🚀
Multi-runtime support (Bun/QuickJS/Deno), full MCP integration, and 1000+ req/sec throughput.
Open source & ready to supercharge your AI workflows. 🧵
#AI #OpenSource #MCP
AI agents are powerful, but they're limited by what they can actually do.
They can think, plan, and generate code... but executing it safely? That's the hard part.
Enter CodeMode Unified 👇
CodeMode Unified lets AI agents execute JavaScript/TypeScript in secure sandboxes with FULL access to:
✅ External APIs (fetch)
✅ MCP tools (AutoMem, Context7, etc)
✅ Knowledge graphs
✅ Real-time data
All while maintaining security & performance 🔒⚡
Built with flexibility in mind:
MCP Server Mode → Direct integration with Claude Code via stdio
HTTP Backend → REST API for any AI platform
Same powerful engine, two ways to use it! 🎯
Choose the right runtime for your use case:
🥇 Bun - Full async, 50-100ms startup (recommended)
🪶 QuickJS - Ultra-light, 5-10ms startup
🔒 Deno - Secure by default
🎮 isolated-vm - V8 isolates
☁️ E2B - Cloud sandboxes
The magic happens with MCP tool access:
// AI executes this with full MCP access
const pokemon = await fetch('...')
  .then(r => r.json());

// Store in knowledge graph
await mcp.automem.store_memory({
  content: `Pokemon: ${pokemon.name}`,
  tags: ['pokemon'],
  metadata: {...}
});

One execution. Infinite possibilities.
Real metrics from production testing:
✅ 6/6 workflows passed (100% success)
⚡ 150-300ms avg execution time
🚀 1000+ requests/second
🧠 Full MCP tool integration
🔒 Sandboxed with memory/CPU limits
No compromises on speed OR security.
What can you build?
📊 API aggregation (parallel fetches)
🧠 Knowledge graph building
📚 Research workflows (docs + analysis)
💬 Customer data analysis
📝 Content automation
🔄 Workflow orchestration
The only limit is your imagination ✨
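That first use case, API aggregation via parallel fetches, is ordinary JavaScript. A minimal sketch of the pattern (the data sources here are stand-in async functions, not CodeMode APIs or real endpoints):

```javascript
// Parallel-aggregation sketch: plain JavaScript, no CodeMode APIs involved.
// fetchSource stands in for fetch(url).then(r => r.json()).
async function fetchSource(name, delayMs) {
  return new Promise(resolve =>
    setTimeout(() => resolve({ source: name, ok: true }), delayMs));
}

async function aggregate() {
  // Fire all requests at once; total latency ≈ the slowest request, not the sum.
  const results = await Promise.all([
    fetchSource('users', 30),
    fetchSource('orders', 10),
    fetchSource('inventory', 20),
  ]);
  // Collapse into one keyed object for the agent to reason over.
  return Object.fromEntries(results.map(r => [r.source, r.ok]));
}

aggregate().then(summary => console.log(summary));
// → { users: true, orders: true, inventory: true }
```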
For Claude Code users:
{
  "mcpServers": {
    "codemode-unified": {
      "command": "node",
      "args": ["path/to/mcp-server.js"]
    }
  }
}

For HTTP API:

npm install && npm run build
npm start  # Port 3001

That's it! 🎉
Clean separation of concerns:
MCP Server  ←→     Executor      ←→   Runtime Engine
    ↓                  ↓                    ↓
  stdio          Orchestration       Bun/QuickJS/Deno
    ↓                  ↓                    ↓
  Claude        MCP Aggregator        Sandboxed Code
Modular, testable, extensible 💪
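The dispatch layer in that diagram can be pictured as a plain object registry behind a single execute() entry point. A hypothetical sketch (names and stubbed behavior are illustrative, not CodeMode Unified's actual API):

```javascript
// Hypothetical sketch of runtime selection behind one execute() entry point.
// Names and the stubbed run() behavior are illustrative, not the project's API.
const runtimes = {
  bun:     { run: async (code) => `bun ran ${code.length} chars` },
  quickjs: { run: async (code) => `quickjs ran ${code.length} chars` },
};

// The executor resolves a runtime by name and delegates execution to it.
async function execute(runtimeName, code) {
  const runtime = runtimes[runtimeName];
  if (!runtime) throw new Error(`unknown runtime: ${runtimeName}`);
  return runtime.run(code);
}

execute('bun', 'console.log(1)').then(out => console.log(out));
// → "bun ran 14 chars"
```

Keeping each runtime behind the same run() shape is what makes adding a new one (Deno, isolated-vm, E2B) a drop-in change.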
Huge shoutout to my testing partner who helped catch critical bugs:
✅ Return statement wrapping
✅ MCP response parsing
✅ Config loading edge cases
✅ Process cleanup
Quality comes from relentless testing! 🙏
Open Source & MIT Licensed 🎉
📦 GitHub: github.com/danieliser/codemode-unified
📚 Docs: Full architecture, examples, guides
💬 Issues/Discussions: Feedback welcome!
Built with ❤️ for the AI agent ecosystem.
Star if you find it useful! ⭐
Benchmark breakdown:
Bun Runtime:
• Startup: 50-100ms
• Execution: Native JS speed
• Memory: 20-50MB/sandbox

QuickJS Runtime:
• Startup: 5-10ms
• Execution: 2-4x slower
• Memory: 5-20MB/sandbox
Perfect for different use cases! ⚖️
How we keep things safe:
✅ Memory limits (default 128MB)
✅ CPU quotas (default 50%)
✅ Execution timeouts (default 30s)
✅ Network access controls
✅ Filesystem isolation
✅ OAuth 2.1 + JWT auth
Enterprise-ready security 🛡️
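Of those controls, execution timeouts are the easiest to picture. A generic pattern (shown for illustration, not CodeMode Unified's actual implementation) races the untrusted work against a deadline:

```javascript
// Generic timeout wrapper: reject if the wrapped work outlives its deadline.
// A common pattern, shown here for illustration only.
function withTimeout(promise, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; always clear the pending timer.
  return Promise.race([promise, deadline]).finally(() => clearTimeout(timer));
}

// Fast work resolves normally...
withTimeout(Promise.resolve('done'), 100).then(v => console.log(v)); // → "done"
// ...slow work is rejected at the deadline.
withTimeout(new Promise(r => setTimeout(r, 500)), 50)
  .catch(err => console.log(err.message)); // → "timed out after 50ms"
```

A promise race can't interrupt CPU-bound code on its own, which is why sandbox-level memory and CPU limits like the ones listed above matter too.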
Looking for:
🤝 Contributors (runtime implementations!)
🐛 Bug reports & feedback
💡 Feature requests
📖 Documentation improvements
🎨 Example workflows
Let's build the future of AI agents together! 🚀
Content: Terminal showing the Pokemon showcase workflow output with:
- Console logs showing API fetch
- AutoMem storage confirmation
- Parallel starter fetches
- Final structured return object
Caption: "From API fetch to knowledge graph in 261ms ⚡"
Content: The ASCII architecture diagram from README
- Showing dual service design
- Runtime selection flow
- MCP tool integration
Caption: "Clean, modular architecture for maximum flexibility 🏗️"
Content: Bar chart or table showing:
- Runtime startup times
- Execution speeds
- Memory usage
- Throughput metrics
Caption: "Choose the right runtime for your use case 🎯"
Content: Well-formatted JSON config with syntax highlighting
- Shows mcpServers setup
- Environment variables
- Clean, copy-paste ready
Caption: "Integration is this simple 🎉"
Content: The Pokemon example with beautiful syntax highlighting
- Showing fetch, MCP calls, return statement
- Annotated with comments
Caption: "Write natural JavaScript with full MCP access 💻"
Content: Terminal showing 6/6 workflows passing
- Green checkmarks
- Execution times
- Success rate: 100%
Caption: "Production-tested and battle-ready ✅"
- Show terminal with npm start
- Execute Pokemon workflow via curl
- Show structured response
- Highlight speed (< 300ms)
Music: Upbeat, tech-forward
Overlay: Key metrics as text
- Show project structure
- Highlight key files (executor, mcp-server, runtimes)
- Visual flow of execution
- End with "Now you can build this too"
Before: AI agent limited to text generation
After: Same agent executing code, fetching APIs, building knowledge graphs
Message: "This is what AI agents can do now"
Primary: #AI #OpenSource #MCP #TypeScript #JavaScript
Secondary: #AIAgents #CodeExecution #ClaudeCode #AIEngineering #DevTools
Niche: #ModelContextProtocol #AITooling #AgenticAI #LLMEngineering
Community: #100DaysOfCode #BuildInPublic #OpenSourceProject
Ask questions:
- "What would YOU build with this?"
- "Which runtime would you use for X use case?"
Share wins:
- "Just hit 1000 req/sec in testing! 🎉"
- "Someone just built X with CodeMode!"
Behind the scenes:
- Share debugging stories
- Architecture decisions
- Performance optimization journeys
Tutorial thread:
- "How to build your first AI workflow with CodeMode"
- Step-by-step with code examples
Comparison thread:
- "CodeMode vs Docker containers"
- "Why MCP > traditional APIs"
Day 1: Main thread (1-12)
Day 2: Performance deep dive bonus tweet
Day 3: Security model bonus tweet
Day 4: Tutorial: "Build your first workflow"
Day 5: Community call + retweet user feedback
Day 6: Architecture walkthrough thread
Day 7: Week recap + call for contributors
Timing:
- Morning: 9-11 AM EST (max engagement)
- Evening: 6-8 PM EST (secondary boost)
Cadence:
- Don't spam - space out by 2-3 hours
- Engage with replies immediately
- Quote-tweet interesting responses
For developers: "Star the repo if this could help your AI projects!"
For AI enthusiasts: "Imagine what Claude could build with this... 🤔"
For contributors: "Want to add a new runtime? Issues labeled 'good first issue' 🚀"
For users: "Try it out and let me know what you build! 💬"
For investors/companies: "DM for enterprise support & custom integrations 📧"
When someone asks "How is this different from X?" "Great question! CodeMode focuses on [key differentiator]. X does [their thing], we optimize for [our thing]. Both have their place!"
When someone reports a bug: "Thanks for the report! 🐛 Can you open an issue with details? We'll get it fixed ASAP!"
When someone shares what they built: "This is AMAZING! 🎉 Mind if I feature this in our examples repo?"
When someone asks for help: "Check out [link to docs]. If still stuck, feel free to open a discussion on GitHub!"
Ready to go viral? Let's ship it! 🚀