5. Running Reconnaissance
The reconnaissance pipeline is RedAmon's core scanning engine — a fully automated, six-phase process that maps your target's entire attack surface. This page explains how to launch a scan, monitor its progress, and understand the results.
Make sure you have:
- A user selected (see User Management)
- A project created with a target domain configured (see Creating a Project)
- The Graph Dashboard open with your project selected (see The Graph Dashboard)
- On the Graph Dashboard, locate the Recon Actions group (blue) in the toolbar
- Click the "Start Recon" button
A confirmation modal appears showing:
- Your project name and target domain
- Current graph statistics (how many nodes of each type already exist, if any)

- Click "Confirm" to start the scan
The "Start Recon" button changes to a spinner while the scan is running.
Once the scan starts, a Logs button (terminal icon) appears in the Recon Actions group.
- Click the Logs button to open the Logs Drawer on the right side
- Watch the real-time output as each phase progresses

The logs drawer shows:
- Current phase with phase number (e.g., "Phase 3: HTTP Probing")
- Log messages streaming in real time as the scan progresses
- A Clear button to reset the log display
While the reconnaissance runs, the graph canvas auto-refreshes every 5 seconds. You'll see nodes appearing and connecting in real time:
- First, Domain and Subdomain nodes appear (Phase 1)
- Then IP nodes connect to subdomains (Phase 1)
- Port nodes attach to IPs (Phase 2)
- BaseURL, Service, and Technology nodes appear (Phase 3)
- Endpoint and Parameter nodes branch out (Phase 4)
- Vulnerability and CVE nodes connect to affected resources (Phase 5-6)
When the scan completes:
- The spinner stops and the "Start Recon" button reappears
- A Download button (download icon) appears in the Recon Actions group
- Click it to download the complete results as a JSON file (`recon_{projectId}.json`)
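The internal layout of the exported `recon_{projectId}.json` file isn't documented on this page; assuming ordinary JSON, a quick way to get an overview is to count the items under each top-level key. The helper name below is hypothetical:

```python
import json
from pathlib import Path

def summarize_recon(path: str) -> dict:
    """Report each top-level key of a downloaded recon JSON file,
    replacing list/dict values with their item counts.

    The file's internal structure is not documented here, so this
    only inspects the top level.
    """
    data = json.loads(Path(path).read_text())
    return {
        key: (len(value) if isinstance(value, (list, dict)) else value)
        for key, value in data.items()
    }
```

This gives a one-glance summary (e.g., how many subdomains or ports were recorded) without assuming anything about the nested structure.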
Each phase builds on the previous one's output. You can control which modules run via the Scan Modules setting in your project configuration.
Phase 1: Subdomain Discovery
Purpose: Map the target's subdomain landscape.
Techniques used:
- Certificate Transparency via crt.sh — finds certificates issued for the domain
- HackerTarget API — passive DNS lookup
- Knockpy — active subdomain brute-forcing (if `useBruteforceForSubdomains` is enabled)
- WHOIS Lookup — registrar, dates, contacts, name servers
- DNS Resolution — A, AAAA, MX, NS, TXT, CNAME, SOA records for every discovered subdomain
Output: Domain, Subdomain, IP, and DNSRecord nodes in the graph.
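The Certificate Transparency lookup via crt.sh can be sketched in a few lines: query crt.sh's public JSON endpoint and deduplicate the hostnames it returns. This is a simplified illustration, not RedAmon's implementation:

```python
import json
import urllib.request

def crtsh_subdomains(domain: str, timeout: int = 30) -> set[str]:
    """Query crt.sh's Certificate Transparency index for a domain."""
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_crtsh(resp.read().decode(), domain)

def parse_crtsh(body: str, domain: str) -> set[str]:
    """crt.sh returns one JSON entry per certificate; name_value may hold
    several newline-separated hostnames, so split and deduplicate."""
    names = set()
    for entry in json.loads(body):
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard prefixes
            if name.endswith(domain):
                names.add(name)
    return names
```

Splitting fetching from parsing keeps the deduplication logic testable without network access.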
If a specific `subdomainList` is configured, the pipeline skips active discovery and resolves only those subdomains.
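A project configuration fragment using the two settings named above might look like the following — the field names come from this page, but the surrounding structure and example values are assumptions:

```json
{
  "subdomainList": ["www.example.com", "api.example.com"],
  "useBruteforceForSubdomains": false
}
```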
Phase 2: Port Scanning
Purpose: Discover open ports on all resolved IP addresses.
Capabilities:
- SYN scanning (default) with CONNECT fallback
- Top-N port selection (100, 1000, or custom ranges)
- CDN/WAF detection (Cloudflare, Akamai, AWS CloudFront)
- Passive mode via Shodan InternetDB (no packets sent)
- IANA service name mapping (15,000+ entries)
Output: Port nodes linked to IP nodes.
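The CONNECT fallback mentioned above completes a full TCP handshake, so it needs no raw-socket privileges (unlike SYN scanning, which does). A minimal sketch:

```python
import socket

def connect_scan(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP CONNECT check: True if a full handshake succeeds.

    connect_ex returns an error code instead of raising, so closed
    or filtered ports simply yield False.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0
```

The trade-off is speed and stealth: a full handshake is logged by the target and slower than half-open SYN probing, which is why it serves only as the fallback.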
Phase 3: HTTP Probing
Purpose: Determine which services are live and what software they run.
httpx probing:
- Status codes, content types, page titles, server headers
- TLS certificate inspection (subject, issuer, expiry, ciphers, JARM)
- Response times, word counts, line counts
Technology detection (dual engine):
- httpx built-in fingerprinting for major frameworks
- Wappalyzer second pass (6,000+ fingerprints) for CMS plugins, JS libraries, analytics tools
Banner grabbing:
- Raw socket connections for non-HTTP services (SSH, FTP, SMTP, MySQL, Redis)
- Protocol-specific probe strings for version extraction
Output: BaseURL, Service, Technology, Certificate, Header nodes.
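Banner grabbing for non-HTTP services boils down to connecting and reading whatever the service announces. A simplified sketch — RedAmon's protocol-specific probe strings are not listed on this page, so the `probe` parameter is left generic:

```python
import socket

def grab_banner(host: str, port: int, probe: bytes = b"", timeout: float = 3.0) -> str:
    """Connect to a service and read its announcement.

    Many services (SSH, FTP, SMTP) send a version banner immediately;
    others answer only after a protocol-specific probe string.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        if probe:
            sock.sendall(probe)
        return sock.recv(1024).decode(errors="replace").strip()
```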
Phase 4: Endpoint Discovery
Purpose: Discover every reachable endpoint. Three tools run simultaneously.
| Tool | Type | Description |
|---|---|---|
| Katana | Active | Web crawler following links to configurable depth, optionally with JavaScript rendering |
| GAU | Passive | Queries Wayback Machine, Common Crawl, AlienVault OTX, URLScan.io for historical URLs |
| Kiterunner | Active | API brute-forcer testing REST/GraphQL route wordlists |
Results are merged, deduplicated, and classified:
- Categories: auth, file_access, api, dynamic, static, admin
- Parameter typing: id, file, search, auth_param
Output: Endpoint and Parameter nodes linked to BaseURL nodes.
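The classification step can be illustrated with simple keyword rules. The actual rules RedAmon applies are not documented on this page, so the patterns below are assumptions:

```python
import re

# Illustrative rules only — the pipeline's real classification logic is not documented here.
CATEGORY_RULES = [
    ("auth", re.compile(r"/(login|logout|signin|signup|oauth|token)", re.I)),
    ("admin", re.compile(r"/(admin|dashboard|manage)", re.I)),
    ("api", re.compile(r"/(api|graphql|v\d+)/", re.I)),
    ("file_access", re.compile(r"\.(pdf|zip|bak|sql|log)$|/(download|upload|export)", re.I)),
    ("dynamic", re.compile(r"\.(php|asp|aspx|jsp)$|\?", re.I)),
]

def classify_endpoint(path: str) -> str:
    """Return the first matching category, falling back to 'static'."""
    for category, pattern in CATEGORY_RULES:
        if pattern.search(path):
            return category
    return "static"
```

Ordering matters: higher-risk categories (auth, admin) are checked first so an endpoint like `/admin/login` lands in the most security-relevant bucket.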
Phase 5: Vulnerability Scanning
Purpose: Test discovered endpoints for security vulnerabilities.
Capabilities:
- 9,000+ community templates for known CVEs, misconfigurations, exposed panels
- DAST mode — active fuzzing with XSS, SQLi, RCE, LFI, SSRF, SSTI payloads
- Severity filtering — scan for critical, high, medium, and/or low findings
- Interactsh — out-of-band detection for blind vulnerabilities
- CVE enrichment — cross-references findings against NVD for CVSS scores
30+ custom security checks (configurable individually):
- Direct IP access, missing security headers (CSP, HSTS, etc.)
- TLS certificate expiry, DNS security (SPF, DMARC, DNSSEC, zone transfer)
- Open services (Redis no-auth, Kubernetes API, SMTP open relay)
- Insecure form actions, missing rate limiting
Output: Vulnerability and CVE nodes linked to Endpoints and Parameters.
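One of the header checks listed above (CSP, HSTS, etc.) can be sketched as a pure function over response headers. The expected-header set below is an assumption, not RedAmon's exact list:

```python
# Headers a hardened response is expected to carry — illustrative subset only.
EXPECTED_HEADERS = {
    "content-security-policy": "CSP",
    "strict-transport-security": "HSTS",
    "x-content-type-options": "MIME sniffing protection",
    "x-frame-options": "clickjacking protection",
}

def missing_security_headers(headers: dict) -> list[str]:
    """Return labels for expected headers absent from a response.

    Header names are case-insensitive per RFC 9110, so compare lowercased.
    """
    present = {name.lower() for name in headers}
    return [label for name, label in EXPECTED_HEADERS.items() if name not in present]
```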
Phase 6: CWE/CAPEC Mapping
Purpose: Map every CVE to its corresponding CWE weakness and CAPEC attack patterns.
- Uses the CVE2CAPEC repository (auto-updated with 24-hour cache TTL)
- Provides attack pattern classification for every vulnerability found
Output: MitreData (CWE) and Capec nodes linked to CVE nodes.
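The 24-hour cache TTL can be modeled as a small wrapper that re-fetches the CVE2CAPEC data only once the cached copy expires. This is a sketch with an injectable clock for testing, not the project's actual implementation:

```python
import time

class TTLCache:
    """Re-run a fetch function only after the TTL (24 h for CVE2CAPEC data) expires."""

    def __init__(self, fetch, ttl_seconds: float = 24 * 3600, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._clock = clock      # injectable so tests can fake the passage of time
        self._value = None
        self._fetched_at = None  # None means nothing cached yet

    def get(self):
        now = self._clock()
        if self._fetched_at is None or now - self._fetched_at >= self._ttl:
            self._value = self._fetch()
            self._fetched_at = now
        return self._value
```

Injecting the clock keeps the expiry logic deterministic under test, while `time.monotonic` avoids surprises from wall-clock adjustments in production.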
Duration varies based on target size, network conditions, and scan settings:
| Target Type | Approximate Duration |
|---|---|
| Small (1-5 subdomains, few ports) | 5-15 minutes |
| Medium (10-50 subdomains) | 15-45 minutes |
| Large (100+ subdomains) | 1-3 hours |
Key factors affecting duration:
- Bruteforce for subdomains adds significant time for large domains
- Katana depth > 2 increases crawling time exponentially
- DAST mode doubles vulnerability scanning time
- GAU with verification adds 30-60 seconds per domain
Once the scan is complete, you can:
- Explore the graph — click nodes to inspect their properties, filter by type using the bottom bar
- Switch to Data Table — view all findings in a searchable, sortable table with Excel export
- Run GVM scan — complement web-layer findings with network-level vulnerability testing (see GVM Vulnerability Scanning)
- Run GitHub Hunt — search for leaked secrets (see GitHub Secret Hunting)
- Use the AI Agent — ask the agent to analyze findings, identify attack paths, and exploit vulnerabilities (see AI Agent Guide)