TangoisDown edited this page Mar 30, 2025 · 4 revisions

Welcome to the TID-Recon-Dog wiki!

🧠 TID-Recon-Dog Installation & Integration Wiki
πŸ–₯️ 1. Installing on Linux
🧱 Prerequisites:
Node.js v18+

npm

git

🧩 Setup:
```bash
git clone https://github.com/TangoisdownHQ/TID-Recon-Dog.git
cd TID-Recon-Dog
npm install
npx tsc
node dist/index.js
```

πŸͺŸ 2. Installing on Windows
πŸ› οΈ Prerequisites:
Node.js

[Git](https://git-scm.com/)

[LM Studio](https://lmstudio.ai/) (Optional for local AI)

🧩 Setup:
```bash
git clone https://github.com/TangoisdownHQ/TID-Recon-Dog.git
cd TID-Recon-Dog
npm install
npx tsc
node dist/index.js
```

πŸ’‘ Use PowerShell or Git Bash to run commands.

🐳 3. Installing via Docker
🧱 Requirements:
Docker

Docker Compose

πŸ“¦ Setup:
```bash
docker-compose up -d --build
```

πŸͺͺ View logs:
```bash
docker logs -f tid-recon-dog
```
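For reference, a minimal `docker-compose.yml` consistent with the commands above might look like the following. This is a hypothetical sketch, not necessarily the repo's actual file; the build context, container name, and port mappings are assumptions based on the ports listed in the configuration section.

```yaml
# Hypothetical compose file; the repo's actual docker-compose.yml may differ.
services:
  tid-recon-dog:
    build: .
    container_name: tid-recon-dog   # matches the `docker logs` command above
    ports:
      - "3000:3000"   # HTTP decoy
      - "2222:2222"   # SSH decoy
      - "2121:2121"   # FTP decoy
      - "5432:5432"   # Postgres decoy
    restart: unless-stopped
```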

☁️ 4. Deploying on Kubernetes Cluster
🧱 Requirements:
Kubernetes cluster (Minikube, EKS, GKE, etc.)

kubectl & helm

Docker image of this repo (build locally or pull from the GitHub Container Registry)

βš™οΈ Sample deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tid-recon-dog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tid-recon-dog
  template:
    metadata:
      labels:
        app: tid-recon-dog
    spec:
      containers:
      - name: tid-recon-dog
        image: your-docker-repo/tid-recon-dog
        ports:
        - containerPort: 3000
```


πŸ’‘ You can expose this using a LoadBalancer or Ingress for public interaction.


🌐 5. Integration in Web Apps / Web Servers
🧠 Purpose:
Simulate decoy APIs and endpoints to detect scans, tampering, and injection attempts against APIs, web servers, and even security layers such as WAFs, proxies, and gateways.
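The idea can be illustrated with a minimal sketch (not the project's actual code): a decoy endpoint checks incoming request paths against common scanner and injection signatures and would alert on a match. The signature list and function names here are illustrative assumptions.

```typescript
// Illustrative sketch only -- not TID-Recon-Dog's actual implementation.
// Flags request paths that match common scanner / injection signatures.
const PROBE_SIGNATURES: RegExp[] = [
  /\/\.env$/,          // secret-file probing
  /\/wp-login\.php$/,  // CMS scanning
  /\.\.\//,            // path traversal
  /union\s+select/i,   // SQL injection
  /<script/i,          // reflected XSS payloads
];

export function looksLikeProbe(path: string): boolean {
  return PROBE_SIGNATURES.some((sig) => sig.test(path));
}

// A decoy server would log and alert on matches, e.g.:
// if (looksLikeProbe(req.url ?? "")) { logAlert(req); }
```

In a real deployment these decoy routes would sit alongside legitimate ones, so any hit is a strong signal of reconnaissance.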




_(This section is incomplete.)_
βš™οΈ 6. Configuration
Modify src/config/config.ts:

```ts
export const config = {
  services: {
    http: { port: 3000, host: "0.0.0.0" },
    ssh: { port: 2222, host: "0.0.0.0" },
    ftp: { port: 2121, host: "0.0.0.0" },
    postgres: { port: 5432, host: "0.0.0.0" },
  },
  ai: {
    model: "mistral-7b-instruct-v0.3",
    baseURL: "http://localhost:1234/v1",
    apiKey: "not-needed",
  },
};
```

πŸ“ Use .env for secrets, keys, model paths if needed.

πŸ§ͺ 7. Verify Setup
```bash
curl http://localhost:3000
ssh test@localhost -p 2222
ftp localhost 2121
psql -h localhost -p 5432 -U rootadmin_user
```

🧠 AI Models
Mistral

TinyLlama

GPT4All

LM Studio / Ollama / OllamaHub


All of the above are compatible with OpenAI-style API endpoints.
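Because these backends speak the OpenAI chat-completions format, a call to the `baseURL` from the configuration section can be sketched as below. The payload builder is pure so it is easy to test; the prompt and temperature are illustrative assumptions.

```typescript
// Sketch of talking to an OpenAI-compatible local endpoint (e.g. LM Studio
// serving http://localhost:1234/v1). Not the project's actual client code.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

export function buildChatRequest(model: string, messages: ChatMessage[]) {
  return { model, messages, temperature: 0.2 };
}

// Usage (requires a running backend; Node 18+ provides global fetch):
// const res = await fetch("http://localhost:1234/v1/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(
//     buildChatRequest("mistral-7b-instruct-v0.3", [
//       { role: "user", content: "Summarize this scan attempt." },
//     ])
//   ),
// });
```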
