launch_jobs.sh is a production-grade script for submitting large batches of jobs to the ffmpeg-rtmp distributed transcoding system. It includes comprehensive error handling, progress monitoring, batch processing, and detailed reporting.
- Batch Processing: Submit jobs in configurable batches to avoid overwhelming the master
- Progress Monitoring: Real-time progress bar and detailed logging
- Error Handling: Graceful error recovery and detailed failure reporting
- Health Checks: Pre-flight verification of master server availability
- Flexible Configuration: Support for random or fixed job parameters
- Dry Run Mode: Test without actually submitting jobs
- JSON Output: Machine-readable results for automation
- Performance Metrics: Submission rate and timing statistics
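The pre-flight health check can also be run on its own before a big batch. A minimal sketch, assuming the master exposes `GET /health` (the endpoint the script's own check uses); `check_master` is an illustrative helper, not part of launch_jobs.sh:

```shell
#!/bin/bash
# Standalone pre-flight check, mirroring the script's health check.
# check_master is an illustrative helper; it assumes GET /health exists.
check_master() {
  local url="${1:-http://localhost:8080}"
  # -s silences progress output, -f fails on HTTP error codes,
  # and --max-time bounds the check at 5 seconds.
  if curl -sf --max-time 5 "$url/health" > /dev/null; then
    echo "[INFO] Master healthy at $url"
  else
    echo "[ERROR] Master server health check failed at $url/health" >&2
    return 1
  fi
}
```

Usage: `check_master "$MASTER_URL" || exit 1` before submitting a large batch.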
Submit 1000 jobs with default settings:

```bash
./scripts/launch_jobs.sh
```

Submit 100 jobs:

```bash
./scripts/launch_jobs.sh --count 100
```

Target a specific master server:

```bash
./scripts/launch_jobs.sh --master https://master.example.com:8080
```

Submit 500 high-priority 4K jobs:

```bash
./scripts/launch_jobs.sh --count 500 --scenario "4K60-h264" --priority high
```

Submit a large batch with custom batching and an output file:

```bash
./scripts/launch_jobs.sh \
  --count 10000 \
  --batch-size 100 \
  --delay 50 \
  --output large_batch_results.json
```

Submit a mixed workload:

```bash
./scripts/launch_jobs.sh \
  --count 1000 \
  --scenario random \
  --priority mixed \
  --queue mixed \
  --engine auto
```

Dry run with verbose logging:

```bash
./scripts/launch_jobs.sh --count 100 --dry-run --verbose
```

| Option | Description | Default |
|---|---|---|
| `--count N` | Number of jobs to submit | 1000 |
| `--master URL` | Master server URL | http://localhost:8080 |
| `--scenario NAME` | Job scenario (see list below) | random |
| `--batch-size N` | Jobs per batch | 50 |
| `--delay MS` | Milliseconds between batches | 100 |
| `--priority LEVEL` | Priority: high, medium, low, mixed | mixed |
| `--queue TYPE` | Queue: live, default, batch, mixed | mixed |
| `--engine ENGINE` | Engine: auto, ffmpeg, gstreamer | auto |
| `--output FILE` | Output file for results | job_launch_results.json |
| `--dry-run` | Test mode without submission | false |
| `--verbose` | Enable debug logging | false |
| `--help` | Show help message | - |
The script supports the following predefined scenarios:
- `4K60-h264` - 4K resolution at 60fps using H.264
- `4K60-h265` - 4K resolution at 60fps using H.265
- `4K30-h264` - 4K resolution at 30fps using H.264
- `1080p60-h264` - Full HD at 60fps
- `1080p30-h264` - Full HD at 30fps
- `720p60-h264` - HD at 60fps
- `720p30-h264` - HD at 30fps
- `480p30-h264` - SD at 30fps
- `random` - Randomly select from all scenarios
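The `random` scenario amounts to a uniform pick from the predefined list. A sketch of what that selection might look like; `pick_scenario` is illustrative, not the script's actual function:

```shell
#!/bin/bash
# Pick a scenario uniformly at random, as the "random" setting implies.
# SCENARIOS mirrors the predefined list; pick_scenario is an illustrative helper.
SCENARIOS=(
  "4K60-h264" "4K60-h265" "4K30-h264"
  "1080p60-h264" "1080p30-h264"
  "720p60-h264" "720p30-h264" "480p30-h264"
)

pick_scenario() {
  # $RANDOM is bash's built-in pseudo-random integer
  echo "${SCENARIOS[RANDOM % ${#SCENARIOS[@]}]}"
}
```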
When using random or mixed values, the script automatically varies parameters:
- Duration: 30-300 seconds
- Bitrate: Appropriate for the scenario's resolution
  - 4K: 10-25 Mbps
  - 1080p: 4-10 Mbps
  - 720p: 2-5 Mbps
  - 480p: 1-3 Mbps
- Priority: Randomly distributed (when set to "mixed")
- Queue: Randomly distributed (when set to "mixed")
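The resolution-to-bitrate mapping above can be sketched as a case statement. The ranges come from the list; `random_bitrate` itself is an illustrative helper, not part of the script's interface:

```shell
#!/bin/bash
# Map a scenario name to a random bitrate (Mbps) within its documented range.
# random_bitrate is an illustrative helper, not launch_jobs.sh's own code.
random_bitrate() {
  local lo hi
  case "$1" in
    4K*)    lo=10; hi=25 ;;
    1080p*) lo=4;  hi=10 ;;
    720p*)  lo=2;  hi=5  ;;
    480p*)  lo=1;  hi=3  ;;
    *) echo "unknown scenario: $1" >&2; return 1 ;;
  esac
  # Uniform pick in [lo, hi] using bash's $RANDOM
  echo $(( lo + RANDOM % (hi - lo + 1) ))
}
```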
The script generates a JSON file with detailed results:
```json
[
  {
    "id": "job-uuid-1",
    "sequence_number": 1,
    "scenario": "4K60-h264",
    "confidence": "auto",
    "status": "queued",
    "queue": "default",
    "priority": "medium",
    "created_at": "2026-01-05T10:30:00Z"
  },
  {
    "id": "job-uuid-2",
    "sequence_number": 2,
    ...
  }
]
```

Extract job IDs:

```bash
jq -r '.[] | select(.id != null) | .id' job_launch_results.json
```

Count successful submissions:

```bash
jq '[.[] | select(.error == null)] | length' job_launch_results.json
```

Count failures:

```bash
jq '[.[] | select(.error != null)] | length' job_launch_results.json
```

High-throughput submission with large, fast batches:

```bash
./scripts/launch_jobs.sh \
  --count 10000 \
  --batch-size 200 \
  --delay 20
```

Gentler pacing for a busy master:

```bash
./scripts/launch_jobs.sh \
  --count 5000 \
  --batch-size 25 \
  --delay 200
```

Slow, verbose run for debugging:

```bash
./scripts/launch_jobs.sh \
  --count 10 \
  --batch-size 1 \
  --delay 1000 \
  --verbose
```

The script provides:
- Real-time Progress Bar

  ```
  [INFO] Progress: [=========================                         ] 50% (500/1000)
  ```

- Batch Completion Logs (in verbose mode)

  ```
  [DEBUG] Completed batch 10, sleeping 100ms...
  ```

- Error Notifications

  ```
  [ERROR] Job #532 failed with HTTP 503: Service temporarily unavailable
  ```
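A progress line in this format can be rendered with simple integer arithmetic. A sketch; `progress_bar` and the 50-column bar width are illustrative assumptions, not the script's actual code:

```shell
#!/bin/bash
# Render a progress line in the same format as the script's output.
# progress_bar is an illustrative helper; the 50-column width is an assumption.
progress_bar() {
  local completed=$1 total=$2 width=50
  local pct=$(( completed * 100 / total ))
  local filled=$(( completed * width / total ))
  local bar
  # Build a run of '=' characters of length $filled
  bar=$(printf '%*s' "$filled" '' | tr ' ' '=')
  # %-*s left-justifies the bar inside the fixed width
  printf '[INFO] Progress: [%-*s] %d%% (%d/%d)\n' \
    "$width" "$bar" "$pct" "$completed" "$total"
}
```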
Run a daily load test at 2 AM via cron (note that a crontab entry must be a single line, and `%` must be escaped):

```bash
# crontab entry: daily load test at 2 AM
0 2 * * * cd /opt/ffmpeg-rtmp && ./scripts/launch_jobs.sh --count 5000 --output "/var/log/ffmpeg-rtmp/jobs_$(date +\%Y\%m\%d).json"
```
```yaml
# GitHub Actions example
- name: Load Test with 100 Jobs
  run: |
    ./scripts/launch_jobs.sh \
      --count 100 \
      --master ${{ secrets.MASTER_URL }} \
      --output test_results/load_test.json

- name: Verify Job Submission
  run: |
    success_count=$(jq '[.[] | select(.error == null)] | length' test_results/load_test.json)
    if [ "$success_count" -lt 95 ]; then
      echo "Too many failures: only $success_count/100 succeeded"
      exit 1
    fi
```

Benchmark different batch sizes:

```bash
#!/bin/bash
# Benchmark different batch sizes
for batch_size in 10 25 50 100 200; do
  echo "Testing batch size: $batch_size"
  ./scripts/launch_jobs.sh \
    --count 1000 \
    --batch-size "$batch_size" \
    --output "benchmark_${batch_size}.json"
  sleep 60  # Cool down between tests
done
```

```
[ERROR] Master server health check failed at http://localhost:8080/health
```
Solution:

- Verify the master is running: `docker compose ps master`
- Check master logs: `docker compose logs master`
- Verify the URL is correct

```
[WARN] Total failed: 450
```

Solution:

- Increase `--delay` to reduce load
- Decrease `--batch-size` for gentler submission
- Check master server resources (CPU, memory)
- Review error details in the output JSON

Solution:

- Use `--verbose` to see detailed logs
- Check for network connectivity issues
- Verify the script has execute permissions: `chmod +x scripts/launch_jobs.sh`
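When failures persist, the failed entries can be pulled out of the results file for closer inspection. A sketch, assuming jq is installed and failed entries carry a non-null `error` field as in the output format above; `list_failures` is an illustrative helper:

```shell
#!/bin/bash
# List failed jobs (sequence number and error message) from a results file.
# Assumes the results JSON layout documented above, where failed entries
# have a non-null "error" field. list_failures is an illustrative helper.
list_failures() {
  jq -r '.[] | select(.error != null) | "\(.sequence_number)\t\(.error)"' "$1"
}
```

Usage: `list_failures job_launch_results.json` prints one tab-separated line per failed job.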
- Start Small: Test with `--count 10` before large batches
- Use Dry Run: Verify configuration with `--dry-run` first
- Monitor Resources: Watch master server CPU/memory during submission
- Tune Batching: Adjust `--batch-size` and `--delay` based on your infrastructure
- Save Results: Always specify `--output` for audit trails
- Health Check First: Manually verify `curl $MASTER_URL/health` before large batches
Set the default master URL via an environment variable:

```bash
export MASTER_URL="https://production-master.example.com:8080"
./scripts/launch_jobs.sh --count 1000
```

Submit to multiple masters simultaneously:

```bash
./scripts/launch_jobs.sh --master http://master1:8080 --count 500 --output master1.json &
./scripts/launch_jobs.sh --master http://master2:8080 --count 500 --output master2.json &
wait
```

Edit the SCENARIOS array in the script to add custom scenarios:
```bash
SCENARIOS=(
  "4K60-h264"
  "custom-8K-h265"   # Add your custom scenario
  "custom-HDR-av1"   # Another custom scenario
)
```

- bash 4.0+
- curl for HTTP requests
- jq (optional) for JSON parsing
- Running master server with an accessible API
For issues or questions:
- Check the main project README
- Review master server logs
- Open an issue on GitHub with:
  - Script output (with `--verbose`)
  - Master server version
  - Output JSON file
- v1.0.0 (2026-01-05): Initial production release
  - Batch processing
  - Progress monitoring
  - Error handling
  - JSON output