# PostHog - Tracking LLM Usage Analytics

## What is PostHog?

PostHog is an open-source product analytics platform that helps you track and analyze how users interact with your product. For LLM applications, PostHog's AI features let you track model usage, performance, and how users interact with your AI functionality.

## Usage with LiteLLM Proxy (LLM Gateway)

**Step 1**: Create a `config.yaml` file and set `success_callback` and `failure_callback` under `litellm_settings`

```yaml
model_list:
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: gpt-3.5-turbo

litellm_settings:
  success_callback: ["posthog"]
  failure_callback: ["posthog"]
```

**Step 2**: Set required environment variables

```shell
export POSTHOG_API_KEY="your-posthog-api-key"
# Optional, defaults to https://app.posthog.com
export POSTHOG_API_URL="https://app.posthog.com"
```

**Step 3**: Start the proxy and make a test request

Start the proxy:

```shell
litellm --config config.yaml --debug
```

Test request:

```shell
curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Content-Type: application/json' \
    --data '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "what llm are you"
      }
    ],
    "metadata": {
      "user_id": "user-123",
      "custom_field": "custom_value"
    }
}'
```
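
The same test request can be built in Python with only the standard library. This is a sketch: it assumes the proxy from Step 3 is running on `0.0.0.0:4000`, and the actual send is commented out so the snippet runs without it.

```python
import json
import urllib.request

# Same payload as the curl example above.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "what llm are you"}],
    "metadata": {"user_id": "user-123", "custom_field": "custom_value"},
}

req = urllib.request.Request(
    "http://0.0.0.0:4000/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With the proxy running, send it with:
# response = json.load(urllib.request.urlopen(req))
```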
| 57 | + |
| 58 | +## Usage with LiteLLM Python SDK |
| 59 | + |
| 60 | +### Quick Start |
| 61 | + |
| 62 | +Use just 2 lines of code, to instantly log your responses **across all providers** with PostHog: |
| 63 | + |
| 64 | +```python |
| 65 | +litellm.success_callback = ["posthog"] |
| 66 | +litellm.failure_callback = ["posthog"] # logs errors to posthog |
| 67 | +``` |
| 68 | +```python |
| 69 | +import litellm |
| 70 | +import os |
| 71 | + |
| 72 | +# from PostHog |
| 73 | +os.environ["POSTHOG_API_KEY"] = "" |
| 74 | +# Optional, defaults to https://app.posthog.com |
| 75 | +os.environ["POSTHOG_API_URL"] = "" # optional |
| 76 | + |
| 77 | +# LLM API Keys |
| 78 | +os.environ['OPENAI_API_KEY']="" |
| 79 | + |
| 80 | +# set posthog as a callback, litellm will send the data to posthog |
| 81 | +litellm.success_callback = ["posthog"] |
| 82 | + |
| 83 | +# openai call |
| 84 | +response = litellm.completion( |
| 85 | + model="gpt-3.5-turbo", |
| 86 | + messages=[ |
| 87 | + {"role": "user", "content": "Hi - i'm openai"} |
| 88 | + ], |
| 89 | + metadata = { |
| 90 | + "user_id": "user-123", # set posthog user ID |
| 91 | + } |
| 92 | +) |
| 93 | +``` |
| 94 | + |
| 95 | +### Advanced |
| 96 | + |
| 97 | +#### Set User ID and Custom Metadata |
| 98 | + |
| 99 | +Pass `user_id` in `metadata` to associate events with specific users in PostHog: |
| 100 | + |
| 101 | +**With LiteLLM Python SDK:** |
| 102 | + |
| 103 | +```python |
| 104 | +import litellm |
| 105 | + |
| 106 | +litellm.success_callback = ["posthog"] |
| 107 | + |
| 108 | +response = litellm.completion( |
| 109 | + model="gpt-3.5-turbo", |
| 110 | + messages=[ |
| 111 | + {"role": "user", "content": "Hello world"} |
| 112 | + ], |
| 113 | + metadata={ |
| 114 | + "user_id": "user-123", # Add user ID for PostHog tracking |
| 115 | + "custom_field": "custom_value" # Add custom metadata |
| 116 | + } |
| 117 | +) |
| 118 | +``` |
| 119 | + |
| 120 | +**With LiteLLM Proxy using OpenAI Python SDK:** |
| 121 | + |
| 122 | +```python |
| 123 | +import openai |
| 124 | + |
| 125 | +client = openai.OpenAI( |
| 126 | + api_key="sk-1234", # Your LiteLLM Proxy API key |
| 127 | + base_url="http://0.0.0.0:4000" # Your LiteLLM Proxy URL |
| 128 | +) |
| 129 | + |
| 130 | +response = client.chat.completions.create( |
| 131 | + model="gpt-3.5-turbo", |
| 132 | + messages=[ |
| 133 | + {"role": "user", "content": "Hello world"} |
| 134 | + ], |
| 135 | + extra_body={ |
| 136 | + "metadata": { |
| 137 | + "user_id": "user-123", # Add user ID for PostHog tracking |
| 138 | + "project_name": "my-project", # Add custom metadata |
| 139 | + "environment": "production" |
| 140 | + } |
| 141 | + } |
| 142 | +) |
| 143 | +``` |
| 144 | + |
| 145 | +#### Disable Logging for Specific Calls |
| 146 | + |
| 147 | +Use the `no-log` flag to prevent logging for specific calls: |
| 148 | + |
| 149 | +```python |
| 150 | +import litellm |
| 151 | + |
| 152 | +litellm.success_callback = ["posthog"] |
| 153 | + |
| 154 | +response = litellm.completion( |
| 155 | + model="gpt-3.5-turbo", |
| 156 | + messages=[ |
| 157 | + {"role": "user", "content": "This won't be logged"} |
| 158 | + ], |
| 159 | + metadata={"no-log": True} |
| 160 | +) |
| 161 | +``` |
| 162 | + |
| 163 | +## What's Logged to PostHog? |
| 164 | + |
| 165 | +When LiteLLM logs to PostHog, it captures detailed information about your LLM usage: |
| 166 | + |
| 167 | +### For Completion Calls |
| 168 | +- **Model Information**: Provider, model name, model parameters |
| 169 | +- **Usage Metrics**: Input tokens, output tokens, total cost |
| 170 | +- **Performance**: Latency, completion time |
| 171 | +- **Content**: Input messages, model responses (respects privacy settings) |
| 172 | +- **Metadata**: Custom fields, user ID, trace information |
| 173 | + |
| 174 | +### For Embedding Calls |
| 175 | +- **Model Information**: Provider, model name |
| 176 | +- **Usage Metrics**: Input tokens, total cost |
| 177 | +- **Performance**: Latency |
| 178 | +- **Content**: Input text (respects privacy settings) |
| 179 | +- **Metadata**: Custom fields, user ID, trace information |
| 180 | + |
| 181 | +### For Errors |
| 182 | +- **Error Details**: Error type, error message, stack trace |
| 183 | +- **Context**: Model, provider, input that caused the error |
| 184 | +- **Timing**: When the error occurred, request duration |
| 185 | + |

## Environment Variables

| Variable | Required | Description |
|----------|----------|-------------|
| `POSTHOG_API_KEY` | Yes | Your PostHog project API key |
| `POSTHOG_API_URL` | No | PostHog API URL (defaults to `https://app.posthog.com`) |
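
Your own startup code can mirror this required/optional split before initializing anything. A minimal sketch (the `posthog_settings` helper is illustrative, not part of LiteLLM; the default URL matches the table above):

```python
import os

def posthog_settings():
    """Resolve PostHog settings, failing fast if the required key is missing."""
    api_key = os.environ.get("POSTHOG_API_KEY")
    if not api_key:
        raise RuntimeError("POSTHOG_API_KEY is not set")
    # Optional; falls back to the hosted default.
    api_url = os.environ.get("POSTHOG_API_URL", "https://app.posthog.com")
    return api_key, api_url
```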
| 192 | + |
| 193 | +## Troubleshooting |
| 194 | + |
| 195 | +### 1. Missing API Key |
| 196 | +``` |
| 197 | +Error: POSTHOG_API_KEY is not set |
| 198 | +``` |
| 199 | + |
| 200 | +Set your PostHog API key: |
| 201 | +```python |
| 202 | +import os |
| 203 | +os.environ["POSTHOG_API_KEY"] = "your-api-key" |
| 204 | +``` |
| 205 | + |
| 206 | +### 2. Custom PostHog Instance |
| 207 | +If you're using a self-hosted PostHog instance: |
| 208 | +```python |
| 209 | +import os |
| 210 | +os.environ["POSTHOG_API_URL"] = "https://your-posthog-instance.com" |
| 211 | +``` |
| 212 | + |
| 213 | +### 3. Events Not Appearing |
| 214 | +- Check that your API key is correct |
| 215 | +- Verify network connectivity to PostHog |
| 216 | +- Events may take a few minutes to appear in PostHog dashboard |