TID-Recon-Dog is an advanced deception platform built to trap, track, and analyze malicious intrusions using a powerful blend of honeypots and local AI agents.
Custom AI Model Hosting (Coming Soon)
We will be introducing RD-AI β a custom LLM trained specifically for deception & response tactics.
π§ We are training and hosting our own fine-tuned LLM for deception. ReconDog-AI will provide advanced, evasive, and intelligent responses across all honeypot services β deployable locally or via API.
You can use our AI LLM model or bring your own β such as Mistral, TinyLLaMA, GPT4All, or any OpenAI-compatible API.
Simulates real-world services like SSH, HTTP, FTP, and PostgreSQL, delivering highly believable responses powered by LLMs.
β¨ Key Features
π§ AI-Powered Deception
Local or remote LLMs simulate system responses, banners, and output with deceptive realism.
π‘οΈ Multi-Protocol Honeypots
Simulates SSH, HTTP, FTP, and PostgreSQL with authentic endpoint behavior.
ποΈ File Uploads & Listings
Attackers can interact with fake files and directories.
π΅οΈ Advanced Logging
IP, headers, auth attempts, uploaded files, and commands β geo-tagged and enriched.
π‘ External Ready (DMZ / Edge)
Deploy in any DMZ, network boundary, or deceptive edge.
π§± Modular & AI-Pluggable
Switch AI models, rotate fake content, and extend new services easily.
π Web App & Server Integration
Embed TID-Recon-Dog into existing web applications or public-facing servers to simulate realistic attack surfaces and monitor intrusion attempts.
πΌ Enterprise & Cloud Use
| Feature | Supported |
|---|---|
| π DMZ / Perimeter Deploy | β |
| π³ Docker / Compose Ready | β |
| βοΈ Cloud-Native (K8s) | β |
| π§ Local LLMs (Offline) | β |
| π SIEM Integrations (WIP) | β |
π‘ Use Cases
- Threat Intelligence Gathering
- Honeynet Deployments
- Red Team / Blue Team Defense
- AI/LLM Deception Research
- Early-Stage Recon / Fingerprinting
- Endpoint Simulation in Wargames
π¦ Tech Stack
- Node.js / TypeScript
- LangChain + Mistral, TinyLLaMA, GPT4All
- Docker / Kubernetes / LM Studio / Ollama
- Pino (Logging), Express.js, FTP-Srv
π Project Structure
TID-Recon-Dog/
βββ dist/ # Compiled TypeScript output
βββ logs/ # Stored logs from interactions
βββ models/ # AI models (Mistral, GPT4All)
βββ src/
β βββ services/
β β βββ httpService.ts # HTTP honeypot
β β βββ sshService.ts # SSH honeypot
β β βββ ftpService.ts # FTP honeypot
β β βββ pgService.ts # PostgreSQL honeypot
β βββ ai/
β β βββ aiResponder.ts # AI response engine
β βββ utils/
β β βββ logger.ts # Logging system
β βββ config/
β β βββ config.ts # Configuration file
β βββ index.ts # Entry point
βββ docker-compose.yml # Docker setup
βββ Dockerfile # Docker build instructions
βββ package.json # Dependencies
βββ tsconfig.json # TypeScript settings
βββ README.md # Documentation
git clone https://github.com/TangoisdownHQ/TID-Recon-Dog.git
cd TID-Recon-Dog
npm installnpx tscnode dist/index.jsdocker-compose up --build -d
docker logs -f tid-recon-dogTo stop:
docker-compose downDeploy TID-Recon-Dog as a microservice in your Kubernetes honeynet cluster.
- Expose services via Ingress or NodePort
- Configure baseURL for LLM in environment config
| Service | Port |
|---|---|
| HTTP | 3000 |
| SSH | 2222 |
| FTP | 2121 |
| PostgreSQL | 5432 |
Expose these via Ngrok, reverse proxy (Nginx), or Kubernetes ingress.
TID-Recon-Dog supports multiple ways to run LLMs:
- Launch LM Studio
- Load Mistral or TinyLLaMA model
- Update
.env:OPENAI_API_BASE=http://localhost:1234/v1
pip install llama-cpp-python[server]
python -m llama_cpp.server --model ./models/mistral.gguf --port 1234Set .env:
OPENAI_API_BASE=https://api.together.xyz/v1
OPENAI_API_KEY=your_api_key_hereollama run mistralSet base URL to http://localhost:11434/v1
curl http://localhost:3000
curl -X POST http://localhost:3000/upload
curl -X POST http://localhost:3000/shell -H "Content-Type: application/json" -d '{"cmd":"whoami"}'
ssh fake@localhost -p 2222
ftp localhost
psql -h localhost -p 5432 -U honeypottail -f logs/connections.log- SMB / RDP Fake Services
- Web Dashboard for Activity
- SIEM Log Forwarding (Elastic / Splunk)
- Real-time AI Threat Scoring
- Alert Webhooks / Email / Slack
- Decoy Container API tokens, Secrets
This project is commercially licensed.
- π GitHub Issues
- π§ͺ Test Portal (coming soon)
Do not deploy in environments without proper authorization.
Use at your own risk. Complies with legal deceptive defense strategies under cybersecurity frameworks.
β Like This Project?
β Star the repo
π Share with Red Teams
π Integrate it into your SOC / honeynet
