
samuele edited this page Feb 21, 2026 · 1 revision

Running Reconnaissance

The reconnaissance pipeline is RedAmon's core scanning engine — a fully automated, six-phase process that maps your target's entire attack surface. This page explains how to launch a scan, monitor its progress, and understand the results.


Before You Start

Make sure you have:

  1. A user selected (see User Management)
  2. A project created with a target domain configured (see Creating a Project)
  3. The Graph Dashboard open with your project selected (see The Graph Dashboard)

Step 1: Start the Reconnaissance

  1. On the Graph Dashboard, locate the Recon Actions group (blue) in the toolbar
  2. Click the "Start Recon" button

A confirmation modal appears showing:

  • Your project name and target domain
  • Current graph statistics (how many nodes of each type already exist, if any)

Recon Confirmation Modal

  3. Click "Confirm" to start the scan

The "Start Recon" button changes to a spinner while the scan is running.


Step 2: Monitor Real-Time Logs

Once the scan starts, a Logs button (terminal icon) appears in the Recon Actions group.

  1. Click the Logs button to open the Logs Drawer on the right side
  2. Watch the real-time output as each phase progresses

Recon Logs Drawer

The logs drawer shows:

  • Current phase with phase number (e.g., "Phase 3: HTTP Probing")
  • Log messages streaming in real-time as the scan progresses
  • A Clear button to reset the log display

Step 3: Watch the Graph Build

While the reconnaissance runs, the graph canvas auto-refreshes every 5 seconds. You'll see nodes appearing and connecting in real-time:

  • First, Domain and Subdomain nodes appear (Phase 1)
  • Then IP nodes connect to subdomains (Phase 1)
  • Port nodes attach to IPs (Phase 2)
  • BaseURL, Service, and Technology nodes appear (Phase 3)
  • Endpoint and Parameter nodes branch out (Phase 4)
  • Vulnerability and CVE nodes connect to affected resources (Phases 5-6)

Step 4: Download Results

When the scan completes:

  1. The spinner stops and the "Start Recon" button reappears
  2. A Download button (download icon) appears in the Recon Actions group
  3. Click it to download the complete results as a JSON file (recon_{projectId}.json)
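As a sketch of working with the downloaded file, the snippet below counts nodes by type. It assumes a hypothetical layout with a top-level "nodes" list whose entries carry a "type" field — the actual recon_{projectId}.json schema may differ.

```python
import json
from collections import Counter

def summarize_recon(path):
    """Count graph nodes by type in a downloaded recon result file.

    Assumes a hypothetical schema with a top-level "nodes" list whose
    entries have a "type" field; adjust to the real file layout.
    """
    with open(path) as f:
        data = json.load(f)
    return Counter(node["type"] for node in data.get("nodes", []))
```

A quick tally like this is useful for comparing scans of the same target over time.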

The Six Phases

Each phase builds on the previous one's output. You can control which modules run via the Scan Modules setting in your project configuration.

Phase 1: Domain Discovery

Purpose: Map the target's subdomain landscape.

Techniques used:

  • Certificate Transparency via crt.sh — finds certificates issued for the domain
  • HackerTarget API — passive DNS lookup
  • Knockpy — active subdomain brute-forcing (if useBruteforceForSubdomains is enabled)
  • WHOIS Lookup — registrar, dates, contacts, name servers
  • DNS Resolution — A, AAAA, MX, NS, TXT, CNAME, SOA records for every discovered subdomain

Output: Domain, Subdomain, IP, and DNSRecord nodes in the graph.

If a specific subdomainList is configured, the pipeline skips active discovery and only resolves those subdomains.
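The resolution step at the end of Phase 1 can be sketched with the standard library alone. This minimal version covers only A records via `socket.getaddrinfo`; the other record types the pipeline collects (AAAA, MX, NS, TXT, CNAME, SOA) require a full DNS client.

```python
import socket

def resolve_a_records(hostnames):
    """Resolve each hostname to its IPv4 addresses (A records only).

    A minimal sketch of the Phase 1 resolution step; unresolvable
    subdomains map to an empty list instead of raising.
    """
    results = {}
    for host in hostnames:
        try:
            infos = socket.getaddrinfo(host, None, socket.AF_INET,
                                       socket.SOCK_STREAM)
            # infos yields (family, type, proto, canonname, sockaddr);
            # sockaddr[0] is the IP string
            results[host] = sorted({info[4][0] for info in infos})
        except socket.gaierror:
            results[host] = []  # NXDOMAIN or resolution failure
    return results
```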


Phase 2: Port Scanning (Naabu)

Purpose: Discover open ports on all resolved IP addresses.

Capabilities:

  • SYN scanning (default) with CONNECT fallback
  • Top-N port selection (100, 1000, or custom ranges)
  • CDN/WAF detection (Cloudflare, Akamai, AWS CloudFront)
  • Passive mode via Shodan InternetDB (no packets sent)
  • IANA service name mapping (15,000+ entries)

Output: Port nodes linked to IP nodes.
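Naabu's default SYN scan needs raw sockets and elevated privileges; its CONNECT fallback, which any unprivileged process can run, looks roughly like this sketch:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Report which TCP ports accept a connection (CONNECT-style scan).

    A simplified illustration of the CONNECT fallback: a full TCP
    handshake is attempted against each port, so the scan is noisier
    than SYN scanning but requires no special privileges.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means handshake completed
                open_ports.append(port)
    return open_ports
```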


Phase 3: HTTP Probing & Technology Detection

Purpose: Determine which services are live and what software they run.

httpx probing:

  • Status codes, content types, page titles, server headers
  • TLS certificate inspection (subject, issuer, expiry, ciphers, JARM)
  • Response times, word counts, line counts

Technology detection (dual engine):

  • httpx built-in fingerprinting for major frameworks
  • Wappalyzer second pass (6,000+ fingerprints) for CMS plugins, JS libraries, analytics tools

Banner grabbing:

  • Raw socket connections for non-HTTP services (SSH, FTP, SMTP, MySQL, Redis)
  • Protocol-specific probe strings for version extraction

Output: BaseURL, Service, Technology, Certificate, Header nodes.
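The banner-grabbing step can be illustrated with a raw socket read. This sketch only handles services that speak first on connect (SSH, FTP, SMTP announce themselves); protocols that wait for the client (e.g. Redis) need the protocol-specific probe strings mentioned above.

```python
import socket

def grab_banner(host, port, timeout=2.0, max_bytes=1024):
    """Read the greeting banner a service sends immediately on connect."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(max_bytes).decode(errors="replace").strip()
        except socket.timeout:
            return ""  # service expects the client to send a probe first
```

Version strings can then be extracted from the banner text (e.g. "SSH-2.0-OpenSSH_9.6" identifies both the protocol version and the server software).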


Phase 4: Resource Enumeration (Parallel)

Purpose: Discover every reachable endpoint. Three tools run simultaneously.

  • Katana (active) — web crawler that follows links to a configurable depth, optionally with JavaScript rendering
  • GAU (passive) — queries the Wayback Machine, Common Crawl, AlienVault OTX, and URLScan.io for historical URLs
  • Kiterunner (active) — API brute-forcer that tests REST/GraphQL route wordlists

Results are merged, deduplicated, and classified:

  • Categories: auth, file_access, api, dynamic, static, admin
  • Parameter typing: id, file, search, auth_param

Output: Endpoint and Parameter nodes linked to BaseURL nodes.
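The classification step can be approximated with keyword rules. The patterns below are hypothetical stand-ins for RedAmon's internal heuristics, chosen only to show the shape of the category and parameter-type mapping:

```python
import re
from urllib.parse import urlparse, parse_qs

# Illustrative rules only; the real classifier's heuristics may differ.
CATEGORY_RULES = [
    ("auth", re.compile(r"/(login|logout|signin|register|oauth)", re.I)),
    ("admin", re.compile(r"/(admin|dashboard|manage)", re.I)),
    ("api", re.compile(r"/(api|graphql|rest)/", re.I)),
    ("file_access", re.compile(r"/(download|upload|files?)/", re.I)),
    ("static", re.compile(r"\.(css|js|png|jpg|svg|woff2?)$", re.I)),
]
PARAM_TYPES = {
    "id": re.compile(r"^(id|uid|user_?id)$", re.I),
    "file": re.compile(r"^(file|path|document)$", re.I),
    "search": re.compile(r"^(q|query|search)$", re.I),
    "auth_param": re.compile(r"^(token|session|api_?key)$", re.I),
}

def classify_endpoint(url):
    """Assign a category to a URL and type each of its query parameters."""
    parsed = urlparse(url)
    category = "dynamic" if parsed.query else "static"
    for name, pattern in CATEGORY_RULES:
        if pattern.search(parsed.path):
            category = name
            break
    params = {}
    for param in parse_qs(parsed.query):
        params[param] = next(
            (ptype for ptype, pat in PARAM_TYPES.items() if pat.match(param)),
            "generic",
        )
    return category, params
```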


Phase 5: Vulnerability Scanning (Nuclei)

Purpose: Test discovered endpoints for security vulnerabilities.

Capabilities:

  • 9,000+ community templates for known CVEs, misconfigurations, exposed panels
  • DAST mode — active fuzzing with XSS, SQLi, RCE, LFI, SSRF, SSTI payloads
  • Severity filtering — scan for critical, high, medium, and/or low findings
  • Interactsh — out-of-band detection for blind vulnerabilities
  • CVE enrichment — cross-references findings against NVD for CVSS scores

30+ custom security checks (configurable individually):

  • Direct IP access, missing security headers (CSP, HSTS, etc.)
  • TLS certificate expiry, DNS security (SPF, DMARC, DNSSEC, zone transfer)
  • Open services (Redis no-auth, Kubernetes API, SMTP open relay)
  • Insecure form actions, missing rate limiting

Output: Vulnerability and CVE nodes linked to Endpoints and Parameters.
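Severity filtering amounts to keeping findings at or above a threshold. The sketch below operates on hypothetical finding dicts with a "severity" key, not the actual Nuclei output format:

```python
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def filter_findings(findings, minimum="medium"):
    """Keep findings at or above a severity threshold, most severe first.

    `findings` is an illustrative list of dicts with a "severity" key;
    unknown severities are treated as "info".
    """
    floor = SEVERITY_ORDER[minimum]
    kept = [f for f in findings if SEVERITY_ORDER.get(f["severity"], 0) >= floor]
    return sorted(kept, key=lambda f: SEVERITY_ORDER[f["severity"]], reverse=True)
```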


Phase 6: MITRE Enrichment

Purpose: Map every CVE to its corresponding CWE weakness and CAPEC attack patterns.

  • Uses the CVE2CAPEC repository (auto-updated with 24-hour cache TTL)
  • Provides attack pattern classification for every vulnerability found

Output: MitreData (CWE) and Capec nodes linked to CVE nodes.
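Conceptually the enrichment is a dictionary lookup against the cached dataset. In this sketch, `mapping` stands in for the parsed CVE2CAPEC data (which the real pipeline refreshes on its 24-hour cache TTL); the dict shape is assumed for illustration:

```python
def enrich_cves(cve_ids, mapping):
    """Attach CWE and CAPEC identifiers to each CVE.

    `mapping` is an assumed structure of the form
    {cve_id: {"cwe": [...], "capec": [...]}}; CVEs absent from the
    dataset get empty lists rather than being dropped.
    """
    return {
        cve: mapping.get(cve, {"cwe": [], "capec": []})
        for cve in cve_ids
    }
```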


Scan Duration Estimates

Duration varies based on target size, network conditions, and scan settings:

  • Small (1-5 subdomains, few ports): 5-15 minutes
  • Medium (10-50 subdomains): 15-45 minutes
  • Large (100+ subdomains): 1-3 hours

Key factors affecting duration:

  • Subdomain brute-forcing adds significant time for large domains
  • Katana depth > 2 increases crawling time exponentially
  • DAST mode doubles vulnerability scanning time
  • GAU with verification adds 30-60 seconds per domain
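The factors above can be combined into a back-of-the-envelope estimator. This is a toy model: the constants are illustrative, not measured, and only the listed rules of thumb are encoded.

```python
def estimate_minutes(subdomains, bruteforce=False, katana_depth=2,
                     dast=False, gau_verify=False):
    """Rough scan-time estimate in minutes (illustrative constants only)."""
    base = 5 + 0.5 * subdomains            # assumed baseline per-subdomain cost
    if bruteforce:
        base *= 2                          # brute-forcing adds significant time
    base *= 2 ** max(0, katana_depth - 2)  # crawl time grows with depth
    if dast:
        base *= 2                          # DAST doubling, applied to the whole
                                           # toy estimate for simplicity
    if gau_verify:
        base += subdomains * 0.75          # ~30-60 s verification per domain
    return round(base)
```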

After Reconnaissance

Once the scan is complete, you can:

  1. Explore the graph — click nodes to inspect their properties, filter by type using the bottom bar
  2. Switch to Data Table — view all findings in a searchable, sortable table with Excel export
  3. Run GVM scan — complement web-layer findings with network-level vulnerability testing (see GVM Vulnerability Scanning)
  4. Run GitHub Hunt — search for leaked secrets (see GitHub Secret Hunting)
  5. Use the AI Agent — ask the agent to analyze findings, identify attack paths, and exploit vulnerabilities (see AI Agent Guide)
