
Video Transcript Summarizer

Transcribe and summarize videos from YouTube, Instagram, TikTok, Twitter, Reddit, Facebook, Google Drive, Dropbox, and local files.

Works with any OpenAI-compatible LLM provider (even locally hosted).

Interfaces

| Interface | Command |
| --- | --- |
| CLI | `python -m summarizer --source <source>` |
| Streamlit GUI | `python -m streamlit run app.py` |
| Docker | `docker compose up -d`, then open http://localhost:8501 |
| SKILL for AI agents | `.agent/skills/summarize/SKILL.md` lets agents drive the CLI |

How It Works

               +--------------------+
               |  Video URL/Path    |
               +---------+----------+
                         |
                         v
               +---------+----------+
               |    Source Type?     |
               +---------+----------+
                         |
       +-----------------+-------------+
       |                 |             |
       |             X.com/IG     Local File
    YouTube           TikTok     Google Drive
       |                etc.       Dropbox
       |                 |             |
       v            +----+-----+       |
+------+----------+ | Cobalt   |       |
| Captions Exist? | +----+-----+       |
+----+----+-------+      |             |
    Yes   No             |             |
     |    +--------------+--------+----+
     |                            |
     |                            v
     |                   +--------+--------+
     |                   |     Whisper     |
     |                   |    endpoint?    |
     |                   +--------+--------+
     |                            |
     |                +-----------+-----------+
     |                |                       |
     |           Cloud Whisper          Local Whisper
     |                |                       |
     |                +----------+------------+
     |                           |
     +---------------------------+
                                 |
                            Transcript
                                 |
                                 v
                    +------------+----------+
 summarizer.yaml -> |    Prompt + LLM       |
 prompts.json    -> |    Merge              |
 .env            -> +------------+----------+
                                 |
                                 v
                          +------+-------+
                          |    Output    |
                          +--------------+
  • summarizer.yaml: Provider settings (base_url, model, chunk-size) and defaults
  • .env: API keys matched by URL keyword
  • prompts.json: Summary style templates

Notes:

  • Cloud Whisper uses Groq Cloud API (requires free Groq API key)
  • Docker image does not include Local Whisper (designed for VPS deployment without GPU)

Installation and usage

Step 0 - CLI installation:

```shell
git clone https://github.com/martinopiaggi/summarize.git
cd summarize
pip install -e .
```

Step 1 - Run the CLI:

```shell
python -m summarizer --source "https://youtube.com/watch?v=VIDEO_ID"
```

The summary is saved to `summaries/watch_YYYYMMDD_HHMMSS.md`. That's it!

Streamlit GUI

```shell
python -m streamlit run app.py
```

Then open http://localhost:8501 in your browser.

Docker

```shell
git clone https://github.com/martinopiaggi/summarize.git
cd summarize
# Create .env with your API keys, then:
docker compose up -d
```

Open http://localhost:8501 for the GUI. Summaries are saved to ./summaries/.

CLI via Docker: `docker compose run --rm summarizer python -m summarizer --source "URL"`

Cobalt standalone: `docker compose -f docker-compose.cobalt.yml up -d`

Configuration

Providers (summarizer.yaml)

Define your LLM providers and defaults. CLI flags override everything.

```yaml
default_provider: gemini

providers:
  gemini:
    base_url: https://generativelanguage.googleapis.com/v1beta/openai
    model: gemini-2.5-flash-lite
    chunk-size: 128000

  groq:
    base_url: https://api.groq.com/openai/v1
    model: openai/gpt-oss-20b

  ollama:
    base_url: http://localhost:11434/v1
    model: qwen3:8b

  openrouter:
    base_url: https://openrouter.ai/api/v1
    model: google/gemini-2.0-flash-001

defaults:
  prompt-type: Questions and answers
  chunk-size: 10000
  parallel-calls: 30
  max-tokens: 4096
  output-dir: summaries
```
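
The `chunk-size` and `parallel-calls` defaults control how a long transcript is split and how many LLM requests run at once. As a rough sketch of that map step (illustrative only, not the project's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(text, chunk_size):
    """Split a transcript into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def summarize_chunks(text, summarize_one, chunk_size=10000, parallel_calls=30):
    """Summarize every chunk concurrently; the per-chunk results are merged afterwards."""
    chunks = split_into_chunks(text, chunk_size)
    with ThreadPoolExecutor(max_workers=parallel_calls) as pool:
        return list(pool.map(summarize_one, chunks))
```

A larger `chunk-size` means fewer, longer LLM calls; `parallel-calls` caps how many of them are in flight at once.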

API Keys (.env)

```
# Required for Cloud Whisper transcription (free tier available)
groq = gsk_YOUR_KEY

# LLM providers (choose one or more)
openai = sk-proj-YOUR_KEY
generativelanguage = YOUR_GOOGLE_KEY
deepseek = YOUR_DEEPSEEK_KEY
openrouter = YOUR_OPENROUTER_KEY
perplexity = YOUR_PERPLEXITY_KEY
hyperbolic = YOUR_HYPERBOLIC_KEY

# Optional: Webshare proxy for YouTube transcript fetching
# (helps avoid IP bans when running from cloud/VPS)
WEBSHARE_PROXY_USERNAME = YOUR_WEBSHARE_USERNAME
WEBSHARE_PROXY_PASSWORD = YOUR_WEBSHARE_PASSWORD
```

If you pass an endpoint URL with the `--base-url` CLI flag, the API key is picked from `.env` by matching a keyword in the URL: for example, `https://generativelanguage.googleapis.com/...` matches the `generativelanguage` key.
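
The keyword lookup can be sketched like this (illustrative only; `match_api_key` is a hypothetical name, not the project's actual code):

```python
def match_api_key(base_url, env):
    """Return the value of the first .env entry whose name occurs in the URL.

    env maps key names (as written in .env) to key values.
    """
    for name, value in env.items():
        if name.lower() in base_url.lower():
            return value
    return None  # no matching key found
```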

Prompts (prompts.json)

Select a style with `--prompt-type` in the CLI or from the dropdown in the web interface. Add custom styles by editing `prompts.json`; use `{text}` as the transcript placeholder.
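
For example, a custom entry might look like this (the "Bullet Points" style name is made up for illustration):

```json
{
  "Bullet Points": "Summarize the following transcript as concise bullet points:\n\n{text}"
}
```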

Extra

Local Whisper

Runs transcription on your machine instead of using Cloud Whisper (Groq API). No Groq API key needed, but slower without a GPU.

```shell
# Install with GPU support
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# Use it
python -m summarizer --source "URL" --force-download --transcription "Local Whisper" --whisper-model "small"
```

Why not in Docker? Local Whisper is left out of the Docker image because GPUs are typically unavailable on a VPS, and Local Whisper without a GPU is too slow for production use. In Docker, use Cloud Whisper (Groq API; a free tier is available), or install locally with a GPU.

Model sizes: tiny (fastest) / base / small / medium / large (most accurate). A GPU is auto-detected when available.

CLI Examples

With a configured summarizer.yaml, the CLI is simple:

```shell
# Uses default provider from YAML
python -m summarizer --source "https://youtube.com/watch?v=VIDEO_ID"

# Specify a provider
python -m summarizer --source "https://youtube.com/watch?v=VIDEO_ID" --provider groq

# Fact-check claims with Perplexity (use Summarize skill for AI agents)
python -m summarizer \
  --source "https://youtube.com/watch?v=VIDEO_ID" \
  --base-url "https://api.perplexity.ai" \
  --model "sonar-pro" \
  --prompt-type "Fact Checker"

# Extract key insights
python -m summarizer \
  --source "https://youtube.com/watch?v=VIDEO_ID" \
  --provider gemini \
  --prompt-type "Distill Wisdom"

# Generate a Mermaid diagram
python -m summarizer \
  --source "https://youtube.com/watch?v=VIDEO_ID" \
  --provider openrouter \
  --prompt-type "Mermaid Diagram"

# Multiple videos
python -m summarizer --source "URL1" "URL2" "URL3"

# Local files
python -m summarizer --type "Local File" --source "./lecture.mp4"

# Non-YouTube (requires Cobalt running)
python -m summarizer --type "Video URL" --source "https://www.instagram.com/reel/..."

# Specify language for YouTube captions
python -m summarizer --source "URL" --prompt-type "Distill Wisdom" --language "it"
```

Without YAML, pass --base-url and --model explicitly:

```shell
python -m summarizer \
  --source "https://youtube.com/watch?v=VIDEO_ID" \
  --base-url "https://generativelanguage.googleapis.com/v1beta/openai" \
  --model "gemini-2.5-flash-lite"
```

CLI Reference

| Flag | Description | Default |
| --- | --- | --- |
| `--source` | Video URLs or file paths (multiple allowed) | Required |
| `--provider` | Provider name from YAML | `default_provider` |
| `--base-url` | API endpoint (overrides provider) | From YAML |
| `--model` | Model identifier (overrides provider) | From YAML |
| `--api-key` | API key (overrides `.env`) | - |
| `--type` | `YouTube Video`, `Video URL`, `Local File`, `Google Drive`, `Dropbox` | `YouTube Video` |
| `--prompt-type` | Summary style (see prompts.json) | `Questions and answers` |
| `--chunk-size` | Input text chunk size (chars) | `10000` |
| `--force-download` | Skip captions, download audio | `False` |
| `--transcription` | `Cloud Whisper` (Groq API) or `Local Whisper` | `Cloud Whisper` |
| `--whisper-model` | `tiny`, `base`, `small`, `medium`, `large` | `tiny` |
| `--language` | Language code for YouTube captions (useful when YouTube doesn't pick the correct captions) | `auto` |
| `--parallel-calls` | Concurrent API requests | `30` |
| `--max-tokens` | Max output tokens per chunk | `4096` |
| `--output-dir` | Output directory | `summaries` |
| `--no-save` | Print only, no file output | `False` |
| `--verbose`, `-v` | Detailed output | `False` |

License

This project is licensed under the MIT License - see the LICENSE file for details.
