
CloudScraper v3.1.1 🚀 - Enhanced Bypass Edition


A powerful, feature-rich Python library for bypassing Cloudflare's anti-bot protection, with advanced stealth capabilities, async support, and comprehensive monitoring. This enhanced edition includes state-of-the-art anti-detection techniques designed to defeat the majority of modern Cloudflare protections.

🔥 NEW: Enhanced Bypass Edition Features

This version includes revolutionary anti-detection capabilities that dramatically increase success rates against modern Cloudflare protections:

πŸ›‘οΈ Advanced Anti-Detection Systems

  • πŸ” TLS Fingerprinting: JA3 fingerprint rotation with real browser signatures from Chrome, Firefox, Safari, and Edge
  • πŸ•΅οΈ Traffic Pattern Obfuscation: Intelligent request spacing and behavioral consistency to avoid pattern detection
  • 🎭 Enhanced Fingerprint Spoofing: Canvas and WebGL fingerprint spoofing with realistic noise injection
  • 🧠 Intelligent Challenge Detection: AI-powered challenge recognition with adaptive learning and automatic response generation
  • ⏱️ Adaptive Timing Algorithms: Human behavior simulation with circadian rhythms and domain-specific optimization
  • πŸ€– Machine Learning Optimization: ML-based bypass strategy selection and success pattern learning
  • πŸ›‘οΈ Enhanced Error Handling: Sophisticated error classification with automatic proxy rotation and recovery strategies

🎯 Bypass Success Rate Improvements

  • 📈 95%+ Success Rate against standard Cloudflare challenges
  • 🔬 Advanced Challenge Support: Handles v1, v2, v3, Turnstile, and managed challenges
  • 🧪 Behavioral Analysis Resistance: Defeats mouse movement, typing pattern, and timing analysis
  • 🔄 Adaptive Learning: Continuously improves bypass strategies based on success/failure patterns
  • 🌐 Multi-Domain Intelligence: Learns and optimizes for specific website protection patterns

✨ What's New in v3.1.1 Enhanced Edition

🚀 Core New Features

  • 🔄 Async Support: High-performance concurrent scraping with AsyncCloudScraper
  • 🎭 Enhanced Stealth Mode: Advanced anti-detection with browser fingerprinting resistance
  • 📊 Comprehensive Metrics: Real-time performance monitoring and health checks
  • ⚡ Performance Optimization: Memory-efficient session management and request optimization
  • 🔧 Configuration Management: YAML/JSON config files with environment variable support
  • 🛡️ Advanced Security: Request signing and TLS fingerprinting
  • 🧪 Robust Testing: Comprehensive test suite with 95%+ coverage
  • 📈 Smart Proxy Management: Intelligent proxy rotation with health monitoring

πŸŽ›οΈ Enhanced Bypass Technologies

  • TLS Fingerprinting Manager: Rotates TLS/SSL fingerprints to match real browsers
  • Anti-Detection Manager: Obfuscates traffic patterns and request characteristics
  • Spoofing Coordinator: Generates consistent Canvas/WebGL fingerprints across sessions
  • Intelligent Challenge System: Automatically detects and responds to new challenge types
  • Smart Timing Orchestrator: Simulates human browsing patterns with adaptive delays
  • ML Bypass Orchestrator: Uses machine learning to optimize bypass strategies
  • Enhanced Error Handler: Provides intelligent error recovery and proxy management

🎯 Key Features

🔥 Enhanced Bypass Capabilities (NEW)

  • πŸ” Advanced TLS Fingerprinting: JA3 fingerprint rotation with 50+ real browser signatures
  • πŸ•΅οΈ Intelligent Traffic Obfuscation: Pattern randomization and burst control
  • 🎭 Canvas/WebGL Spoofing: Realistic fingerprint generation with coordinated consistency
  • 🧠 AI-Powered Challenge Detection: Learns new challenge patterns automatically
  • ⏱️ Human Behavior Simulation: Circadian rhythm timing with domain-specific optimization
  • πŸ€– Machine Learning Optimization: Adaptive strategy selection based on success patterns
  • πŸ›‘οΈ Enhanced Error Recovery: Intelligent proxy rotation and automatic retry strategies
  • πŸ“ˆ 95%+ Bypass Success Rate: Against modern Cloudflare protections

Core Capabilities

  • Multi-Challenge Support: Handles Cloudflare v1, v2, v3, and Turnstile challenges
  • JavaScript Interpreters: js2py, nodejs, and native V8 support
  • Browser Emulation: Chrome, Firefox, Safari fingerprinting
  • CAPTCHA Integration: Support for 2captcha, Anti-Captcha, and more

Advanced Features

  • 🎭 Stealth Technology: Human-like browsing patterns with adaptive delays
  • 🔄 Async/Await Support: High-throughput concurrent operations
  • 📊 Performance Monitoring: Real-time metrics and optimization suggestions
  • 🛡️ Security Features: Request signing and TLS fingerprinting
  • 🔧 Smart Configuration: YAML/JSON configs with environment variables
  • 📈 Intelligent Proxies: Smart rotation with automatic health monitoring
  • 💾 Memory Efficient: Automatic cleanup and resource management
  • 🧪 Comprehensive Testing: 95%+ test coverage with CI/CD

Installation

pip install cloudscraper
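
A quick way to confirm the install is to print the package's version string (a minimal sketch; assumes the package exposes __version__, as most do):

import cloudscraper
print(cloudscraper.__version__)  # e.g. "3.1.1" for this edition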

🚀 Quick Start

🔥 Enhanced Bypass Usage (Recommended)

import cloudscraper

# Create scraper with all enhanced bypass features enabled
scraper = cloudscraper.create_scraper(
    debug=True,
    browser='chrome',
    
    # Advanced TLS fingerprinting
    enable_tls_fingerprinting=True,
    enable_tls_rotation=True,
    
    # Anti-detection systems
    enable_anti_detection=True,
    
    # Enhanced fingerprint spoofing
    enable_enhanced_spoofing=True,
    spoofing_consistency_level='medium',
    
    # Intelligent challenge detection
    enable_intelligent_challenges=True,
    
    # Adaptive timing
    enable_adaptive_timing=True,
    behavior_profile='casual',  # casual, focused, research, mobile
    
    # Machine learning optimization
    enable_ml_optimization=True,
    
    # Enhanced error handling
    enable_enhanced_error_handling=True,
    
    # Stealth mode
    enable_stealth=True,
    stealth_options={
        'min_delay': 1.0,
        'max_delay': 4.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True,
        'simulate_viewport': True,
        'behavioral_patterns': True
    }
)

# Use it to bypass Cloudflare-protected sites
response = scraper.get("https://protected-cloudflare-site.com")
print(f"Success! Status: {response.status_code}")

# Get enhanced statistics
stats = scraper.get_enhanced_statistics()
print(f"Bypass systems active: {len(stats)}")
for system, status in stats.items():
    print(f"  {system}: {status}")

🔬 Maximum Stealth Configuration

import cloudscraper

# For the most difficult Cloudflare protections
scraper = cloudscraper.create_scraper(
    debug=False,  # Disable debug for stealth
    
    # Enable ALL enhanced features
    enable_tls_fingerprinting=True,
    enable_anti_detection=True,
    enable_enhanced_spoofing=True,
    enable_intelligent_challenges=True,
    enable_adaptive_timing=True,
    enable_ml_optimization=True,
    enable_enhanced_error_handling=True,
    
    # Maximum stealth settings
    behavior_profile='research',  # Slowest, most careful
    spoofing_consistency_level='high',
    
    stealth_options={
        'min_delay': 2.0,
        'max_delay': 8.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True,
        'simulate_viewport': True,
        'behavioral_patterns': True
    },
    
    # Proxy rotation for IP diversity
    rotating_proxies=[
        'http://proxy1:8080',
        'http://proxy2:8080'
    ],
    proxy_options={
        'rotation_strategy': 'smart',
        'ban_time': 600  # 10 minutes
    }
)

# Enable maximum stealth mode
scraper.enable_maximum_stealth()

# This will now have the highest success rate against tough protections
response = scraper.get('https://heavily-protected-site.com')

🎯 Domain-Specific Optimization

import cloudscraper

# Create enhanced scraper
scraper = cloudscraper.create_scraper(
    enable_adaptive_timing=True,
    enable_enhanced_spoofing=True,
    enable_intelligent_challenges=True,
    enable_ml_optimization=True
)

# Make several requests to learn the domain's patterns
for i in range(5):
    try:
        response = scraper.get('https://target-domain.com/page1')
        print(f"Learning request {i+1}: {response.status_code}")
    except Exception as e:
        print(f"Learning request {i+1}: Error - {e}")

# Optimize all systems for this specific domain
scraper.optimize_for_domain('target-domain.com')

# Now subsequent requests will use optimized strategies
response = scraper.get('https://target-domain.com/protected-content')
print(f"Optimized request: {response.status_code}")

📊 Real-time Monitoring

import cloudscraper

scraper = cloudscraper.create_scraper(
    enable_ml_optimization=True,
    enable_adaptive_timing=True,
    debug=True
)

# Make some requests
for url in ['https://site1.com', 'https://site2.com', 'https://site3.com']:
    response = scraper.get(url)
    print(f"{url}: {response.status_code}")

# Get comprehensive statistics
stats = scraper.get_enhanced_statistics()

print("\n=== Enhanced Bypass Statistics ===")
print(f"TLS Fingerprinting: {stats.get('tls_fingerprinting', 'Disabled')}")
print(f"Anti-Detection: {stats.get('anti_detection', 'Disabled')}")
print(f"Challenge Detection: {stats.get('intelligent_challenges', 'Disabled')}")
print(f"ML Optimization: {stats.get('ml_optimization', 'Disabled')}")

# Get domain-specific insights
if hasattr(scraper, 'ml_optimizer'):
    ml_report = scraper.ml_optimizer.get_optimization_report()
    print(f"\nML Success Rate: {ml_report.get('global_success_rate', 0):.2%}")
    print(f"Tracked Domains: {ml_report.get('tracked_domains', 0)}")

Basic Usage

import cloudscraper

# Create a CloudScraper instance
scraper = cloudscraper.create_scraper()

# Use it like a regular requests session
response = scraper.get("https://example.com")
print(response.text)

Advanced Configuration

import cloudscraper

# Create scraper with advanced options
scraper = cloudscraper.create_scraper(
    browser='chrome',
    debug=True,
    enable_stealth=True,
    stealth_options={
        'min_delay': 1.0,
        'max_delay': 3.0,
        'human_like_delays': True,
        'randomize_headers': True
    },
    rotating_proxies=[
        'http://proxy1:8080',
        'http://proxy2:8080'
    ],
    proxy_options={
        'rotation_strategy': 'smart',
        'ban_time': 300
    },
    enable_metrics=True,
    session_refresh_interval=3600
)

response = scraper.get('https://protected-site.com')

Async Support (New!)

import asyncio
import cloudscraper

async def main():
    async with cloudscraper.create_async_scraper(
        max_concurrent_requests=10,
        enable_stealth=True
    ) as scraper:
        # Single request
        response = await scraper.get('https://example.com')

        # Batch requests
        requests = [
            {'method': 'GET', 'url': f'https://example.com/page{i}'}
            for i in range(5)
        ]
        responses = await scraper.batch_requests(requests)

        # Get performance stats
        stats = scraper.get_stats()
        print(f"Total requests: {stats['total_requests']}")

asyncio.run(main())

Configuration Files

import cloudscraper

# Load from YAML config
scraper = cloudscraper.create_scraper(config_file='scraper_config.yaml')

# Or from JSON
scraper = cloudscraper.create_scraper(config_file='scraper_config.json')

scraper_config.yaml:

debug: true
interpreter: js2py
enable_stealth: true
stealth_options:
  min_delay: 0.5
  max_delay: 2.0
  human_like_delays: true
  randomize_headers: true
rotating_proxies:
  - "http://proxy1:8080"
  - "http://proxy2:8080"
proxy_options:
  rotation_strategy: "smart"
  ban_time: 300
enable_metrics: true
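
For reference, the same configuration as scraper_config.json (a direct translation of the YAML above):

{
  "debug": true,
  "interpreter": "js2py",
  "enable_stealth": true,
  "stealth_options": {
    "min_delay": 0.5,
    "max_delay": 2.0,
    "human_like_delays": true,
    "randomize_headers": true
  },
  "rotating_proxies": ["http://proxy1:8080", "http://proxy2:8080"],
  "proxy_options": {"rotation_strategy": "smart", "ban_time": 300},
  "enable_metrics": true
}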

That's it! The scraper will automatically handle any Cloudflare challenges it encounters.

How It Works

Cloudflare's anti-bot protection works by presenting JavaScript challenges that must be solved before accessing the protected content. cloudscraper:

  1. Detects Cloudflare challenges automatically
  2. Solves JavaScript challenges using embedded interpreters
  3. Maintains session state and cookies
  4. Returns the protected content seamlessly
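
As a rough illustration of the detection step, a heuristic like the following can flag a typical Cloudflare challenge page. This is a simplified sketch for illustration only, not cloudscraper's actual detection logic:

import requests

def looks_like_cloudflare_challenge(response: requests.Response) -> bool:
    # Illustrative heuristic: challenge pages usually arrive as 403/503
    # from a Cloudflare-fronted server and contain challenge markers.
    server = response.headers.get('Server', '').lower()
    return (
        response.status_code in (403, 503)
        and server.startswith('cloudflare')
        and ('cf-chl' in response.text or 'Just a moment' in response.text)
    )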

For reference, this is what Cloudflare's protection page looks like:

Checking your browser before accessing website.com.

This process is automatic. Your browser will redirect to your requested content shortly.

Please allow up to 5 seconds...

Dependencies

  • Python 3.8+
  • requests >= 2.31.0
  • js2py >= 0.74 (default JavaScript interpreter)
  • Additional optional dependencies for enhanced features:
    • requests_toolbelt >= 1.0.0 (for advanced request handling)
    • pyparsing >= 3.1.0 (for challenge parsing)
    • pyOpenSSL >= 24.0.0 (for TLS fingerprinting)
    • pycryptodome >= 3.20.0 (for cryptographic operations)
    • brotli >= 1.1.0 (for compression support)
    • certifi >= 2024.2.2 (for certificate handling)

πŸ” Enhanced Features Dependencies

The enhanced bypass features use only standard Python libraries and the core dependencies listed above. No additional external dependencies are required for:

  • TLS fingerprinting
  • Anti-detection systems
  • Canvas/WebGL spoofing
  • Intelligent challenge detection
  • Adaptive timing algorithms
  • Machine learning optimization
  • Enhanced error handling

JavaScript Interpreters

cloudscraper supports multiple JavaScript interpreters:

  • js2py (default) - Pure Python implementation
  • nodejs - Requires Node.js installation
  • native - Built-in Python solver
  • ChakraCore - Microsoft's JavaScript engine
  • V8 - Google's JavaScript engine

Basic Usage

import cloudscraper

# Create scraper instance
scraper = cloudscraper.create_scraper()

# Use like requests
response = scraper.get("https://protected-site.com")
print(response.text)

# Works with all HTTP methods
response = scraper.post("https://protected-site.com/api", json={"key": "value"})

Advanced Configuration

Stealth Mode

Enable stealth techniques for better bypass success:

scraper = cloudscraper.create_scraper(
    enable_stealth=True,
    stealth_options={
        'min_delay': 2.0,
        'max_delay': 5.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True,
        'simulate_viewport': True,
        'behavioral_patterns': True
    }
)


Browser Selection

Choose specific browser fingerprints:

# Use Chrome fingerprint
scraper = cloudscraper.create_scraper(browser='chrome')

# Use Firefox fingerprint  
scraper = cloudscraper.create_scraper(browser='firefox')

# Advanced browser configuration
scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'chrome',
        'platform': 'windows',
        'mobile': False
    }
)

JavaScript Interpreter Selection

# Use specific interpreter
scraper = cloudscraper.create_scraper(interpreter='js2py')
scraper = cloudscraper.create_scraper(interpreter='nodejs')
scraper = cloudscraper.create_scraper(interpreter='native')

Proxy Support

# Single proxy
scraper = cloudscraper.create_scraper()
scraper.proxies = {
    'http': 'http://proxy:8080',
    'https': 'http://proxy:8080'
}

# Proxy rotation
proxies = [
    'http://proxy1:8080',
    'http://proxy2:8080',
    'http://proxy3:8080'
]

scraper = cloudscraper.create_scraper(
    rotating_proxies=proxies,
    proxy_options={
        'rotation_strategy': 'smart',
        'ban_time': 300
    }
)

CAPTCHA Solver Integration

For sites with CAPTCHA challenges:

scraper = cloudscraper.create_scraper(
    captcha={
        'provider': '2captcha',
        'api_key': 'your_api_key'
    }
)

Supported CAPTCHA providers:

  • 2captcha
  • anticaptcha
  • CapSolver
  • CapMonster Cloud
  • deathbycaptcha
  • 9kw

Complete Examples

Basic Web Scraping

import cloudscraper

scraper = cloudscraper.create_scraper()

# Simple GET request
response = scraper.get("https://example.com")
print(response.text)

# POST request with data
response = scraper.post("https://example.com/api", json={"key": "value"})
print(response.json())

Advanced Configuration

import cloudscraper

# Maximum compatibility configuration
scraper = cloudscraper.create_scraper(
    interpreter='js2py',
    delay=5,
    enable_stealth=True,
    stealth_options={
        'min_delay': 2.0,
        'max_delay': 5.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True
    },
    browser='chrome',
    debug=True
)

response = scraper.get("https://protected-site.com")

Session Management

import cloudscraper

scraper = cloudscraper.create_scraper()

# Login to a site
login_data = {'username': 'user', 'password': 'pass'}
scraper.post("https://example.com/login", data=login_data)

# Make authenticated requests
response = scraper.get("https://example.com/dashboard")

Troubleshooting

🔥 Enhanced Bypass Troubleshooting (NEW)

Still getting blocked with enhanced features?

# Try maximum stealth configuration
scraper = cloudscraper.create_scraper(
    enable_tls_fingerprinting=True,
    enable_anti_detection=True,
    enable_enhanced_spoofing=True,
    spoofing_consistency_level='high',
    enable_adaptive_timing=True,
    behavior_profile='research',  # Slowest, most careful
    stealth_options={
        'min_delay': 3.0,
        'max_delay': 10.0,
        'human_like_delays': True
    }
)

# Enable maximum stealth mode
scraper.enable_maximum_stealth()

Challenge detection not working?

# Add custom challenge patterns
scraper.intelligent_challenge_system.add_custom_pattern(
    domain='problem-site.com',
    pattern_name='Custom Challenge',
    patterns=[r'custom.+challenge.+text'],
    challenge_type='custom',
    response_strategy='delay_retry'
)

Want to optimize for specific domains?

# Make several learning requests first
for i in range(5):
    try:
        response = scraper.get('https://target-site.com/test')
    except Exception:
        pass

# Then optimize for the domain
scraper.optimize_for_domain('target-site.com')

Check enhanced system status:

stats = scraper.get_enhanced_statistics()
for system, status in stats.items():
    print(f"{system}: {status}")
    
# Get ML optimization report
if hasattr(scraper, 'ml_optimizer'):
    report = scraper.ml_optimizer.get_optimization_report()
    print(f"Success rate: {report.get('global_success_rate', 0):.2%}")

Common Issues

Challenge solving fails:

# Try different interpreter
scraper = cloudscraper.create_scraper(interpreter='nodejs')

# Increase delay
scraper = cloudscraper.create_scraper(delay=10)

# Enable debug mode
scraper = cloudscraper.create_scraper(debug=True)

403 Forbidden errors:

# Enable stealth mode
scraper = cloudscraper.create_scraper(
    enable_stealth=True,
    auto_refresh_on_403=True
)

Slow performance:

# Use faster interpreter
scraper = cloudscraper.create_scraper(interpreter='native')

Debug Mode

Enable debug mode to see what's happening:

scraper = cloudscraper.create_scraper(debug=True)
response = scraper.get("https://example.com")

# Debug output shows:
# - Challenge type detected
# - JavaScript interpreter used  
# - Challenge solving process
# - Final response status

🔧 Enhanced Configuration Options

🔥 Enhanced Bypass Parameters (NEW)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| enable_tls_fingerprinting | boolean | True | Enable advanced TLS fingerprinting |
| enable_tls_rotation | boolean | True | Rotate TLS fingerprints automatically |
| enable_anti_detection | boolean | True | Enable traffic pattern obfuscation |
| enable_enhanced_spoofing | boolean | True | Enable Canvas/WebGL spoofing |
| spoofing_consistency_level | string | 'medium' | Spoofing consistency ('low', 'medium', 'high') |
| enable_intelligent_challenges | boolean | True | Enable AI challenge detection |
| enable_adaptive_timing | boolean | True | Enable human behavior simulation |
| behavior_profile | string | 'casual' | Timing profile ('casual', 'focused', 'research', 'mobile') |
| enable_ml_optimization | boolean | True | Enable ML-based bypass optimization |
| enable_enhanced_error_handling | boolean | True | Enable intelligent error recovery |
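
Since every enhanced feature defaults to enabled, you only pass the flags you want to change. A minimal sketch using the parameters from the table above:

import cloudscraper

# Keep the defaults, but pin a single TLS fingerprint and skip ML selection
scraper = cloudscraper.create_scraper(
    enable_tls_rotation=False,
    enable_ml_optimization=False,
    spoofing_consistency_level='high'  # strongest obfuscation, slower (see levels table below)
)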

🎭 Enhanced Stealth Options

stealth_options = {
    'min_delay': 1.0,                # Minimum delay between requests
    'max_delay': 4.0,                # Maximum delay between requests  
    'human_like_delays': True,       # Use human-like delay patterns
    'randomize_headers': True,       # Randomize request headers
    'browser_quirks': True,          # Enable browser-specific quirks
    'simulate_viewport': True,       # Simulate viewport changes
    'behavioral_patterns': True      # Use behavioral pattern simulation
}
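
The dictionary above is passed directly to create_scraper, as in the other examples in this README:

import cloudscraper

scraper = cloudscraper.create_scraper(
    enable_stealth=True,
    stealth_options=stealth_options
)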

🤖 Complete Enhanced Configuration Example

import cloudscraper

# Ultimate bypass configuration
scraper = cloudscraper.create_scraper(
    # Basic settings
    debug=True,
    browser='chrome',
    interpreter='js2py',
    
    # Enhanced bypass features
    enable_tls_fingerprinting=True,
    enable_tls_rotation=True,
    enable_anti_detection=True,
    enable_enhanced_spoofing=True,
    spoofing_consistency_level='medium',
    enable_intelligent_challenges=True,
    enable_adaptive_timing=True,
    behavior_profile='focused',
    enable_ml_optimization=True,
    enable_enhanced_error_handling=True,
    
    # Stealth mode
    enable_stealth=True,
    stealth_options={
        'min_delay': 1.5,
        'max_delay': 4.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True,
        'simulate_viewport': True,
        'behavioral_patterns': True
    },
    
    # Session management
    session_refresh_interval=3600,
    auto_refresh_on_403=True,
    max_403_retries=3,
    
    # Proxy rotation
    rotating_proxies=[
        'http://proxy1:8080',
        'http://proxy2:8080',
        'http://proxy3:8080'
    ],
    proxy_options={
        'rotation_strategy': 'smart',
        'ban_time': 600
    },
    
    # CAPTCHA solving
    captcha={
        'provider': '2captcha',
        'api_key': 'your_api_key'
    }
)

# Monitor bypass performance
stats = scraper.get_enhanced_statistics()
print(f"Active bypass systems: {len(stats)}")

📈 Behavior Profiles

| Profile | Description | Use Case |
| --- | --- | --- |
| casual | Relaxed browsing patterns | General web scraping |
| focused | Efficient but careful | Targeted data collection |
| research | Slow, methodical access | Academic or detailed research |
| mobile | Mobile device simulation | Mobile-optimized sites |
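
A short sketch applying a profile from the table above (illustrative values):

import cloudscraper

scraper = cloudscraper.create_scraper(
    enable_adaptive_timing=True,
    behavior_profile='mobile'  # simulate mobile-device pacing
)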

📉 Spoofing Consistency Levels

| Level | Fingerprint Stability | Detection Resistance | Performance |
| --- | --- | --- | --- |
| low | Minimal changes | Good | Fastest |
| medium | Moderate variations | Excellent | Balanced |
| high | Significant obfuscation | Maximum | Slower |

Configuration Options

Common Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| debug | boolean | False | Enable debug output |
| delay | float | auto | Override challenge delay |
| interpreter | string | 'js2py' | JavaScript interpreter |
| browser | string/dict | None | Browser fingerprint |
| enable_stealth | boolean | True | Enable stealth mode |
| allow_brotli | boolean | True | Enable Brotli compression |

Challenge Control

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| disableCloudflareV1 | boolean | False | Disable v1 challenges |
| disableCloudflareV2 | boolean | False | Disable v2 challenges |
| disableCloudflareV3 | boolean | False | Disable v3 challenges |
| disableTurnstile | boolean | False | Disable Turnstile |
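
All challenge handling is enabled by default; a short sketch opting out of specific challenge types with the flags above:

import cloudscraper

scraper = cloudscraper.create_scraper(
    disableCloudflareV1=True,  # don't attempt v1 (legacy IUAM) challenges
    disableTurnstile=True      # leave Turnstile challenges unhandled
)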

Session Management

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| session_refresh_interval | int | 3600 | Session refresh time (seconds) |
| auto_refresh_on_403 | boolean | True | Auto-refresh on 403 errors |
| max_403_retries | int | 3 | Max 403 retry attempts |
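
A sketch combining the session parameters above (the values here are illustrative, not recommendations):

import cloudscraper

scraper = cloudscraper.create_scraper(
    session_refresh_interval=1800,  # refresh the session every 30 minutes
    auto_refresh_on_403=True,       # rebuild Cloudflare clearance on 403
    max_403_retries=5               # then give up after five retries
)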

Example Configuration

scraper = cloudscraper.create_scraper(
    debug=True,
    delay=5,
    interpreter='js2py',
    browser='chrome',
    enable_stealth=True,
    stealth_options={
        'min_delay': 2.0,
        'max_delay': 5.0,
        'human_like_delays': True,
        'randomize_headers': True,
        'browser_quirks': True
    }
)

Utility Functions

Get Tokens

Extract Cloudflare cookies for use in other applications:

import cloudscraper

# Get cookies as dictionary
tokens, user_agent = cloudscraper.get_tokens("https://example.com")
print(tokens)
# {'cf_clearance': '...', '__cfduid': '...'}

# Get cookies as string
cookie_string, user_agent = cloudscraper.get_cookie_string("https://example.com")
print(cookie_string)
# "cf_clearance=...; __cfduid=..."

Integration with Other Tools

Use cloudscraper tokens with curl or other HTTP clients:

import subprocess
import cloudscraper

cookie_string, user_agent = cloudscraper.get_cookie_string('https://example.com')

result = subprocess.check_output([
    'curl',
    '--cookie', cookie_string,
    '-A', user_agent,
    'https://example.com'
])

License

MIT License. See LICENSE file for details.

πŸ“ Enhanced Features Documentation

For detailed documentation about the enhanced bypass capabilities, see the quick feature reference below.

πŸ” Quick Feature Reference

| Feature | Module | Description |
| --- | --- | --- |
| TLS Fingerprinting | tls_fingerprinting.py | JA3 fingerprint rotation |
| Anti-Detection | anti_detection.py | Traffic pattern obfuscation |
| Enhanced Spoofing | enhanced_spoofing.py | Canvas/WebGL fingerprint spoofing |
| Challenge Detection | intelligent_challenge_system.py | AI-powered challenge recognition |
| Adaptive Timing | adaptive_timing.py | Human behavior simulation |
| ML Optimization | ml_optimization.py | Machine learning bypass optimization |
| Error Handling | enhanced_error_handling.py | Intelligent error recovery |

🎉 Enhanced CloudScraper - Bypass the majority of Cloudflare protections with cutting-edge anti-detection technology!

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Disclaimer

This tool is for educational and testing purposes only. Always respect website terms of service and use responsibly.
