Yohay Ohayon edited this page Apr 18, 2025 · 28 revisions

๐Ÿ•ท๏ธ Welcome to the wiki of Broken Links Crawler (BLC)


๐Ÿ” Project Overview

BLC (Broken Links Crawler) is a Python-based command-line tool developed as part of an academic project.

The tool is built to scan websites and detect broken or problematic hyperlinks, addressing practical needs while showcasing key concepts in modern software development, including multithreading, modular design, automation, and robust error handling.


⚡ What BLC Offers

Although developed in an academic setting, BLC is a fully functional, production-aware tool. It's designed to be performant, configurable, and extensible, making it suitable for developers, sysadmins, QA engineers, and anyone responsible for maintaining link integrity across digital content.


✅ Key Features

  • 🚀 High-performance, multi-threaded crawling
    Uses a producer-consumer pattern to scan sites efficiently in parallel.

  • 🛑 Detection of common link issues:

    • 404 Not Found
    • DNS resolution errors
    • HTTP-to-HTTPS mismatches
    • "False 200 OK" responses (e.g., custom error pages served with a 200 status)

  • 🌐 External link validation
    Verifies that referenced external links are reachable, without recursing into external sites.

  • 🎛️ Flexible configuration:

    • Crawl depth control
    • Adjustable thread count
    • Output in JSON, HTML, or human-readable formats

  • 📬 Email-based reporting
    Automatically sends results based on customizable triggers:

    • Always
    • Only on error
    • Never

  • 🖥️ Cross-platform support

    • Built for Linux (Ubuntu) and Windows
    • Can be packaged into a standalone executable (blc, blc.exe)

  • 🔓 Open-source & automation-ready

    • Easily integrated into CI/CD pipelines, scheduled audits, or link monitoring tools
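To illustrate the producer-consumer crawling and the "False 200 OK" detection described above, here is a minimal sketch using only Python's standard `queue` and `threading` modules. This is not BLC's actual implementation: the function names, the `fetch` callback, and the soft-404 marker strings are all illustrative assumptions.

```python
import queue
import threading

# Illustrative heuristic: a 200 response whose body contains one of these
# phrases is likely a custom error page ("False 200 OK").
SOFT_404_MARKERS = ("page not found", "does not exist")

def classify(status_code, body_snippet=""):
    """Map an HTTP response to a link verdict (categories are illustrative)."""
    if status_code == 404:
        return "404 Not Found"
    if status_code == 200 and any(m in body_snippet.lower() for m in SOFT_404_MARKERS):
        return "False 200 OK"
    return "OK"

def check_links(urls, fetch, num_workers=4):
    """Producer-consumer: the main thread produces URLs, workers consume them.

    `fetch` is any callable returning (status_code, body_snippet) for a URL;
    in a real crawler it would wrap something like requests.get.
    """
    tasks = queue.Queue()
    results = {}
    lock = threading.Lock()  # guards the shared results dict

    def worker():
        while True:
            url = tasks.get()
            if url is None:          # sentinel: shut this worker down
                tasks.task_done()
                return
            status, body = fetch(url)
            verdict = classify(status, body)
            with lock:
                results[url] = verdict
            tasks.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for url in urls:                 # produce work items
        tasks.put(url)
    for _ in threads:                # one sentinel per worker
        tasks.put(None)
    tasks.join()
    for t in threads:
        t.join()
    return results
```

With a thread-safe `queue.Queue` carrying the work items, adding parallelism is just a matter of raising `num_workers`; the sentinel `None` values give each worker a clean shutdown signal.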

🎓 Key Concepts and Engineering Guidelines

This project demonstrates:

  • Clean and modular code structure
  • Effective use of concurrency and thread-safe data structures
  • Real-world exception handling and resilience
  • Compliance with web standards (robots.txt, SSL, email protocols)
  • Practical usage of third-party libraries (e.g., requests, certifi, PyInstaller)
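As one concrete example of the web-standards compliance listed above, robots.txt rules can be honored with Python's standard `urllib.robotparser`. The rules, user-agent string, and URLs below are illustrative; this is a sketch of the technique, not BLC's actual code.

```python
from urllib.robotparser import RobotFileParser

# Parse an illustrative robots.txt (normally fetched from the target site
# via set_url() + read(); here we feed the lines in directly).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# A polite crawler checks can_fetch() before requesting each URL.
print(rp.can_fetch("BLC", "https://example.com/private/page"))  # False
print(rp.can_fetch("BLC", "https://example.com/public/page"))   # True
```

A crawler that consults `can_fetch()` before every request stays within the site owner's stated crawling policy at essentially no extra cost, since the robots.txt file only needs to be fetched once per host.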

๐Ÿ“ Get Started

Feel free to start with the 📄 Sample Outputs page.

Visit the 🚀 Usage Instructions page to learn how to configure, run, and customize BLC.

Explore the 📊 Initial Software Requirements and 📐 High-Level Design sections to learn about the project's origin and architecture.

Check out 🛠️ Implementation Notes for insights into the tools, technologies, and key implementation decisions.

Crawling the web isn't as straightforward as it might seem. See what challenges came up and how they were handled in 🔧 Crawler Fetch Failures & Workarounds, and see how BLC deals with blocked access in 🚫 Sites That Restrict Automated Crawling.

A discussion of thread-count optimization can be found in 🚀 Thread Count Optimization.


๐Ÿง‘โ€๐Ÿ’ป Contribute & Explore

Feel free to explore or extend the project further. You can find the full source code, issue tracker, and documentation in the GitHub repository.


Thank you for visiting, and here's to chasing broken links and finishing what we started. 🎓✨
