
Commit 5730f0c

feat: impact metrics
1 parent 41adbe3

File tree

1 file changed: +52 −0 lines changed

README.md

Lines changed: 52 additions & 0 deletions
@@ -319,6 +319,58 @@ client.is_enabled("testFlag")
```

### Impact metrics
Impact metrics are lightweight, application-level time-series metrics stored and visualized directly inside Unleash. They allow you to connect specific application data, such as request counts, error rates, or latency, to your feature flags and release plans.
These metrics help validate feature impact and automate release processes. For instance, you can monitor usage patterns or performance to determine if a feature meets its goals.
The SDK automatically attaches context labels to metrics: `appName` and `environment`.
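For illustration only, a counter sample with its automatic labels could be pictured as the dict below. This is a hypothetical shape, not the SDK's wire format; the `"my-app"` and `"production"` values stand in for whatever `app_name` and `environment` the client was configured with:

```python
# Hypothetical shape, NOT the SDK's wire format -- just a way to picture
# the labels the SDK attaches automatically.
sample = {
    "name": "request_count",
    "value": 42,
    "labels": {
        "appName": "my-app",          # from the client's app_name
        "environment": "production",  # from the client's environment
    },
}

print(sorted(sample["labels"]))  # ['appName', 'environment']
```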
#### Counters
Use counters for cumulative values that only increase (total requests, errors):
```python
client.impact_metrics.define_counter(
    "request_count",
    "Total number of HTTP requests processed"
)

client.impact_metrics.increment_counter("request_count")
```
#### Gauges
Use gauges for fluctuating values like memory usage or active connections:
```python
import psutil

client.impact_metrics.define_gauge(
    "memory_usage",
    "Current memory usage in bytes"
)

current_memory = psutil.Process().memory_info().rss
client.impact_metrics.update_gauge("memory_usage", current_memory)
```
#### Histograms
Use histograms to measure the distribution of values, such as request duration or response size:
```python
client.impact_metrics.define_histogram(
    "request_time_ms",
    "Time taken to process a request in milliseconds"
)

client.impact_metrics.observe_histogram("request_time_ms", 125)
```
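A typical pattern is to time the operation being measured and record the elapsed milliseconds. The sketch below is self-contained for illustration: `handle_request` is a placeholder, and in real code the final value would be passed to `client.impact_metrics.observe_histogram` as shown above.

```python
import time

def handle_request():
    # Placeholder for real request handling.
    time.sleep(0.01)

start = time.perf_counter()
handle_request()
elapsed_ms = (time.perf_counter() - start) * 1000

# With a real client:
# client.impact_metrics.observe_histogram("request_time_ms", elapsed_ms)
```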
Impact metrics are batched and sent using the same interval as standard SDK metrics.
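Conceptually, the batching can be pictured as a thread-safe accumulator that is drained once per flush interval. The sketch below is illustrative only, not the SDK's implementation; `MetricsBatcher` is a hypothetical name:

```python
import threading

class MetricsBatcher:
    """Illustrative sketch: accumulate counter increments, drain on flush."""

    def __init__(self, flush_interval=60):
        # Seconds between flushes, analogous to the SDK's metrics interval.
        self.flush_interval = flush_interval
        self._counts = {}
        self._lock = threading.Lock()

    def increment(self, name, amount=1):
        with self._lock:
            self._counts[name] = self._counts.get(name, 0) + amount

    def flush(self):
        # Atomically swap out the accumulated batch; the SDK would send
        # this payload to Unleash alongside its standard metrics.
        with self._lock:
            batch, self._counts = self._counts, {}
        return batch
```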
### Custom cache

By default, the Python SDK stores feature flags in an on-disk cache using fcache. If you need a different storage backend, for example, Redis, memory-only, or a custom database, you can provide your own cache implementation.
