Mitmproxy LLM Better View

A Tampermonkey script for LLM API Request/Response Visualization in mitmweb.

简体中文 README (Simplified Chinese version)

Screenshots


Why Do You Need This?

When using AI applications such as Claude Code or OpenCode, seeing exactly what context these tools aggregate and send to the LLM, and what the LLM returns, helps you understand how they work. It also gives you a clearer idea of what to provide to the LLM to get the results you want.

Usage

Step 1. Install mitmproxy

Make sure you have mitmproxy installed.

mitmweb is the web-based GUI of mitmproxy.
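A minimal install-and-launch sketch (assuming pipx is available; plain pip works too; mitmweb's web UI defaults to http://127.0.0.1:8081):

```shell
# Install mitmproxy in an isolated environment
pipx install mitmproxy

# Start the web UI: it serves http://127.0.0.1:8081
# and listens for proxied traffic on port 8080 by default
mitmweb
```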


Step 2. Configure the Traffic Flow

Set up your request flow like this:

Your Application  →  mitmproxy  →  Target API

1) Reverse Proxy (opencode / cline / claude code / codex / cherrystudio / vercel ai sdk / others)

Use this mode when your client supports replacing API endpoint/base URL (for example: baseURL, endpoint, api_base).

1️⃣ Configure a Reverse Proxy in mitmweb

Set up a reverse proxy in mitmweb:

(screenshot: mitmweb reverse-proxy settings)
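Alternatively, the same reverse proxy can be configured from the command line when launching mitmweb; the upstream URL below is only an example, so substitute your provider's endpoint:

```shell
# Serve the web UI on 8081, accept client traffic on 9091,
# and forward it to the example upstream API
mitmweb --web-port 8081 --listen-port 9091 --mode reverse:https://api.anthropic.com
```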

2️⃣ Replace Your Client Endpoint

Point your client endpoint to your mitmproxy reverse proxy address.

Generic example:

{
  "baseURL": "http://localhost:9091/v1/"
}

⚠️ Note: the reverse proxy endpoint is usually plain http in a local setup, even when the upstream API is https.
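Many clients can also read the base URL from an environment variable instead of a JSON config. For example, the official OpenAI SDKs honor OPENAI_BASE_URL (variable names differ between tools, so check your client's documentation):

```shell
# Point an OpenAI-compatible client at the local reverse proxy
export OPENAI_BASE_URL=http://localhost:9091/v1
```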


2) Forward Proxy

For gemini-cli, codex, or claude code under official subscription plans, the API base URL is often fixed and cannot be replaced; in that case, use a forward proxy.

1️⃣ Follow mitmproxy certificate setup guide

Generate/install certificates by following mitmproxy docs:
https://docs.mitmproxy.org/stable/concepts-certificates/

2️⃣ Add the mitmproxy CA certificate to host trust store

Trust the mitmproxy CA on your OS (and runtime if needed), otherwise HTTPS requests may fail TLS verification.
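As a sketch for Debian/Ubuntu hosts (other platforms differ; follow the mitmproxy docs linked above), the CA that mitmproxy generates under ~/.mitmproxy can be trusted system-wide like this:

```shell
# Copy the mitmproxy CA into the system trust store
# (update-ca-certificates requires the .crt extension)
sudo cp ~/.mitmproxy/mitmproxy-ca-cert.pem /usr/local/share/ca-certificates/mitmproxy.crt
sudo update-ca-certificates
```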

3️⃣ Export proxy env vars and run your client

export HTTP_PROXY=http://127.0.0.1:8080
export HTTPS_PROXY=http://127.0.0.1:8080
# optional for local loopback/no-proxy targets
export NO_PROXY=localhost,127.0.0.1

Then start your client in the same shell session.


Step 3. Install the Tampermonkey Script

If your mitmweb UI is not running on localhost:8081 or localhost:9091, you must manually add a matching @match rule to the Tampermonkey script header, for example:

// @match  http://127.0.0.1:8081/*

From GreasyFork (Recommended)

Install directly from:
https://greasyfork.org/scripts/540917-mitmproxy-llm-better-view

Manual Installation

Download from the release page and install it manually.

As noted above, if mitmweb is not running on port 8081 or 9091, you must update the @match URL inside the Tampermonkey script.

  1. Install the Tampermonkey script
  2. Open mitmweb in your browser

Step 4. Use and View

Once everything is configured:

  1. mitmproxy should capture the requests.
  2. Click any captured request.
  3. You will see the enhanced panel on the right side.

view

Features

  • Request/Response Visualization: Formatted display of messages, tools, and metadata
  • Multi-Platform Support: OpenAI Chat Completion, Claude, and Gemini
  • SSE Support: Handles streaming responses
  • Collapsible Sections: Fold/unfold messages and tool calls

Support

  • OpenAI
    • /chat/completions API
  • Gemini
    • /v1beta API
  • Claude
    • /v1/messages API

TODO

  • Ollama API
  • OpenAI Response API

Notes

This project is developed and tested based on the author's own usage patterns, so it may not cover all API usage scenarios. Please open an issue to request support for missing parameters/types.

Related Projects

  • mitmproxy - An interactive TLS-capable intercepting HTTP proxy
  • Tampermonkey - The world's most popular userscript manager
  • If you do not want to install the Tampermonkey script, you can also visualize requests/responses online with ai-api-visualizer
