Merged

42 commits
426c39e
Add Python tracing span metrics documentation
Mar 21, 2025
302e288
add new documentation to and reorganize Python tracing docs
Mar 25, 2025
2f2680d
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Mar 26, 2025
dd4ad91
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Mar 26, 2025
0397fcf
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Mar 26, 2025
3c601a2
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Mar 26, 2025
2cdca6b
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Mar 26, 2025
6d4e3bf
Update docs/platforms/python/tracing/span-metrics/index.mdx
sfanahata Mar 26, 2025
d6e2490
Update docs/platforms/python/tracing/span-metrics/performance-metrics…
sfanahata Mar 26, 2025
4a3d732
updates based on PR feedback
Mar 26, 2025
084a25e
comment out python troubleshootredirect
Mar 26, 2025
a192aec
resolving yarn.lock issues
Mar 27, 2025
9770d5a
update yarn.lock - adding a blank line at the end
Mar 27, 2025
f69c5ea
fixing redirect 404 for Python profiling page
Mar 27, 2025
d808dd9
break out span lifecycle doc and resolve feedback from PR
Apr 1, 2025
c430801
resolving broken link in instrumentation index file
Apr 1, 2025
2f791b2
Update docs/platforms/python/tracing/instrumentation/index.mdx
szokeasaurusrex Apr 1, 2025
03f5d6a
order span lifecycle doc
Apr 1, 2025
137dbb8
order span lifecycle doc
Apr 1, 2025
5117a30
Update index.mdx
sfanahata Apr 1, 2025
44289dd
Update index.mdx
sfanahata Apr 1, 2025
c8dba3b
Update docs/platforms/python/tracing/span-lifecycle/index.mdx
sfanahata Apr 1, 2025
92a9a3d
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Apr 1, 2025
407c560
Update docs/platforms/python/tracing/instrumentation/index.mdx
sfanahata Apr 1, 2025
0a89965
Update index.mdx
sfanahata Apr 1, 2025
b131bbf
Update python.mdx
sfanahata Apr 1, 2025
fef0ae1
Update index.mdx
sfanahata Apr 1, 2025
61dd75e
Update index.mdx
sfanahata Apr 1, 2025
1a9da90
Update docs/platforms/python/tracing/span-metrics/index.mdx
sfanahata Apr 1, 2025
dd5d44a
Update docs/platforms/python/tracing/span-metrics/index.mdx
sfanahata Apr 1, 2025
8b00344
Update index.mdx
sfanahata Apr 1, 2025
ab9ebf7
fix span.attributes mix-up, more feedback from reviewers
Apr 3, 2025
6de450f
update span lifecycle
Apr 3, 2025
e9bb6af
updated examples based on feedback. Reverted yarn.lock change
Apr 4, 2025
e39d413
Reset yarn.lock to match master
Apr 4, 2025
d8e046c
Merge remote-tracking branch 'origin/master' into ShannonA-python-tra…
Apr 4, 2025
7e5c6e4
yarn.lock revert
Apr 4, 2025
d09e2b4
Force commit for yarn.lock
Apr 4, 2025
1519f54
update yarn.lock
Apr 4, 2025
8f17aa2
update yarn.lock
Apr 4, 2025
a2e0e4d
reset yarn.lock to match master
Apr 4, 2025
75033d4
Update yarn.lock from master
Apr 4, 2025
2 changes: 1 addition & 1 deletion docs/platforms/python/profiling/index.mdx
@@ -53,7 +53,7 @@ For Profiling to work, you have to first enable [Sentry’s tracing](/concepts/k

### Upgrading from Older Python SDK Versions

Profiling was experimental in SDK versions `1.17.0` and older. Learn how to upgrade <PlatformLink to="/profiling/troubleshooting/#ipgrading-from-older-sdk-versions">here</PlatformLink>.
Profiling was experimental in SDK versions `1.17.0` and older. Learn how to upgrade <PlatformLink to="/troubleshooting">here</PlatformLink>.
Member
We can probably remove this section entirely at this point. 1.17.0 is a very old Python SDK version – I am sure we have other pages that describe how to upgrade the SDK


## Enable Continuous Profiling

288 changes: 288 additions & 0 deletions docs/platforms/python/tracing/configure-sampling/index.mdx
@@ -0,0 +1,288 @@
---
title: Configure Sampling
description: "Learn how to configure sampling in your app."
sidebar_order: 30
---

Sentry's tracing functionality helps you monitor application performance by capturing distributed traces, attaching attributes, and adding span performance metrics across your application. However, capturing traces for every transaction can generate significant volumes of data. Sampling allows you to control the amount of spans that are sent to Sentry from your application.

Effective sampling is key to getting the most value from Sentry's performance monitoring while minimizing overhead. The `traces_sampler` function gives you precise control over which transactions to record, allowing you to focus on the most important parts of your application.
Member
We should already be explaining the benefits of sampling on the top-level page, so we can greatly simplify this introductory paragraph and more quickly get to the point of sampling. I would also omit the second paragraph, since traces_sampler is a more advanced feature, and it is described in depth further down the page.

Suggested change
Sentry's tracing functionality helps you monitor application performance by capturing distributed traces, attaching attributes, and adding span performance metrics across your application. However, capturing traces for every transaction can generate significant volumes of data. Sampling allows you to control the amount of spans that are sent to Sentry from your application.
Effective sampling is key to getting the most value from Sentry's performance monitoring while minimizing overhead. The `traces_sampler` function gives you precise control over which transactions to record, allowing you to focus on the most important parts of your application.
If you find that Sentry's tracing functionality is generating too much data – for example, if you notice your spans quota is being quickly exhausted – you can opt to sample your traces.

Contributor Author
Fair enough. I like how to the point it is about exhausting your quota!


## Sampling Configuration Options

The Python SDK provides two main options for controlling the sampling rate:
Member
Suggested change
The Python SDK provides two main options for controlling the sampling rate:
The Python SDK provides two ways to control how your traces are sampled.


1. Uniform Sample Rate (`traces_sample_rate`)

This option sets a fixed percentage of transactions to be captured:
Member
A couple things here:

  • First of all, it is incorrect to call the traces_sample_rate a percentage, since it is a value between 0 and 1, not 0% and 100%.
  • It would be more accurate to state that this controls the "probability" with which each transaction is sampled
  • I would use "sampled" rather than "captured"

Something like the following would work:

Suggested change
This option sets a fixed percentage of transactions to be captured:
`traces_sample_rate` is a floating-point value between `0.0` and `1.0`, inclusive, which controls the probability with which each transaction will be sampled:

Contributor Author
Great suggestion, and thanks for clarifying the functionality.


<PlatformContent includePath="/performance/traces-sample-rate" />

With `traces_sample_rate` set to `0.25`, approximately 25% of transactions will be recorded and sent to Sentry. This provides an even cross-section of transactions regardless of where in your app they occur.
Member
[nit] I would reformulate as follows to make this more technically accurate and specific

Suggested change
With `traces_sample_rate` set to `0.25`, approximately 25% of transactions will be recorded and sent to Sentry. This provides an even cross-section of transactions regardless of where in your app they occur.
With `traces_sample_rate` set to `0.25`, each transaction in your application is randomly sampled with a probability of `0.25`, so you can expect that one in every four transactions will be sent to Sentry.

Contributor Author
This is much clearer. Thanks!
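For reference, here's a minimal sketch of the `traces_sample_rate` configuration described above (the DSN value is a placeholder):

```python
import sentry_sdk

# Randomly sample roughly one in four transactions.
sentry_sdk.init(
    dsn="your-dsn",  # placeholder DSN
    traces_sample_rate=0.25,
)
```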


2. Sampling Function (`traces_sampler`)

For more granular control, you can use the `traces_sampler` function. This approach allows you to:
Member
Current wording may be confusing. traces_sampler is a function that the user has to define and provide; saying they can "use the" function makes it sound like we provide a function called traces_sampler that users can use (we don't).

Suggested change
For more granular control, you can use the `traces_sampler` function. This approach allows you to:
For more granular control, you can provide a `traces_sampler` function. This approach allows you to:


- Apply different sampling rates to different types of transactions
- Filter out specific transactions entirely
- Make sampling decisions based on transaction data
- Control the inheritance of sampling decisions in distributed traces

<PlatformContent includePath="/performance/traces-sampler-as-sampler" />

### Trace Sampler Examples
Member
[nit] I think we might be including too many examples here, for a page which should be an introductory page. I would include at most one simple example on this page. We can create and link to a separate page to list more examples for users who are interested

Contributor Author
I'm going to leave the examples in, but I collapsed them under an expandable section.


1. Prioritizing Critical User Flows

```python
import sentry_sdk


def traces_sampler(sampling_context):
    ctx = sampling_context.get("transaction_context", {})
    name = ctx.get("name", "")

    # Sample all checkout transactions
    if name and ('/checkout' in name or ctx.get("op") == 'checkout'):
        return 1.0

    # Sample 50% of login transactions
    if name and ('/login' in name or ctx.get("op") == 'login'):
        return 0.5

    # Sample 10% of everything else
    return 0.1


sentry_sdk.init(
    dsn="your-dsn",
    traces_sampler=traces_sampler,
)
```

2. Handling Different Environments and Error Rates

```python
import os

import sentry_sdk


def traces_sampler(sampling_context):
    ctx = sampling_context.get("transaction_context", {})
    environment = os.environ.get("ENVIRONMENT", "development")

    # Sample all transactions in development
    if environment == "development":
        return 1.0

    # Sample more transactions if there are recent errors
    if ctx.get("data", {}).get("hasRecentErrors"):
Contributor
Where is the hasRecentErrors coming from? If it's a custom attribute, this part is a bit confusing since it makes it seems like it's something the SDK sets.

Contributor Author
This is meant to be an example of a custom attribute to customize sampling decisions. Are you saying it reads like it's something the SDK sets because it's provided in the example, or is it the way the method is written that makes it confusing?

Contributor
Basically if I was coming to this example as someone who hasn't used the SDK yet, I'd see no distinction between the data that the SDK puts in the sampling_context vs what I myself as a user need to put there explicitly. I'd expect some introductory paragraph that clarifies that not all the keys in this example will be there out of the box. See also #13125 (comment)

        return 0.8

    # Sample based on environment
    if environment == "production":
        return 0.05  # 5% in production
    elif environment == "staging":
        return 0.2  # 20% in staging

    return 0.1  # 10% default


sentry_sdk.init(
    dsn="your-dsn",
    traces_sampler=traces_sampler,
)
```

3. Controlling Sampling Based on User and Transaction Properties

```python
import sentry_sdk


def traces_sampler(sampling_context):
Contributor
In this example it's also not 100% clear what's coming from the SDK and what the user needs to have set for it to appear in the sampling context (user.tier, hasRecentErrors) -- might lead to misunderstandings if folks expect this data to be there from the get go. Maybe we can clarify that some of these are custom attributes that need to be set by the user?

Contributor Author
Thanks for the feedback! I'll add more details in the sample comments that makes it clearer.

Contributor Author
I also added a bullet at the top of the section calling out the use of custom attributes with traces_sampler.

Contributor
@sentrivana Apr 4, 2025
Hey, have the changes already been applied? As is, the examples are still mixing prepopulated keys with custom ones, and it's still not clear which is which. If I were a new user, I'd be confused as to why there's a parent_sampled in my sampling_context but no data.hasRecentErrors.

I feel like the current page https://docs.sentry.io/platforms/python/configuration/sampling/#sampling-context-data does a great job of making the distinction as well as making it clear how you can get custom data into the sampling_context. I'm not sure if this section stays as is after the reorganization -- if not, can we somehow port it to this page? And if it does, can we reference it at the start of this page when mentioning custom attributes?

Contributor Author
@sfanahata Apr 4, 2025
Thanks for that example. I see what you mean now! The custom_sampling_context would be a better way to group the custom data together in the examples. I'll update them, and link to the sampling page.

Contributor Author
I changed the examples to more explicitly use transaction.set_data and added a note about set_data vs custom_sampling_context, with a link to the sampling page.

    ctx = sampling_context.get("transaction_context", {})
    data = ctx.get("data", {})

    # Always sample for premium users
    if data.get("user", {}).get("tier") == "premium":
        return 1.0

    # Sample more transactions for users experiencing errors
    if data.get("hasRecentErrors"):
        return 0.8

    # Sample less for high-volume, low-value paths
    if ctx.get("name", "").startswith("/api/metrics"):
        return 0.01

    # Sample more for slow transactions
    if data.get("duration_ms", 0) > 1000:  # Transactions over 1 second
        return 0.5

    # If there's a parent sampling decision, respect it
    if sampling_context.get("parent_sampled") is not None:
        return sampling_context["parent_sampled"]

    # Default sampling rate
    return 0.2


sentry_sdk.init(
    dsn="your-dsn",
    traces_sampler=traces_sampler,
)
```

4. Complex Business Logic Sampling

```python
import sentry_sdk


def traces_sampler(sampling_context):
    ctx = sampling_context.get("transaction_context", {})
    data = ctx.get("data", {})

    # Always sample critical business operations
    if ctx.get("op") in ["payment.process", "order.create", "user.verify"]:
        return 1.0

    # Sample based on user segment
    user_segment = data.get("user", {}).get("segment")
    if user_segment == "enterprise":
        return 0.8
    elif user_segment == "premium":
        return 0.5

    # Sample based on transaction value
    transaction_value = data.get("transaction", {}).get("value", 0)
    if transaction_value > 1000:  # High-value transactions
        return 0.7

    # Sample based on error rate in the service
    error_rate = data.get("service", {}).get("error_rate", 0)
    if error_rate > 0.05:  # Error rate above 5%
        return 0.9

    # Inherit parent sampling decision if available
    if sampling_context.get("parent_sampled") is not None:
        return sampling_context["parent_sampled"]

    # Default sampling rate
    return 0.1


sentry_sdk.init(
    dsn="your-dsn",
    traces_sampler=traces_sampler,
)
```

5. Performance-Based Sampling

```python
import sentry_sdk


def traces_sampler(sampling_context):
Contributor
I'd also make clear the attributes here have to be added and are not there by default (db_connections etc.).

Contributor
I only picked db_connections randomly, the other attrs are also custom (duration_ms, memory_usage_mb, cpu_percent) -- so we should probably also mark them as such.

Contributor Author
I added a line to the section's intro noting that these examples use custom attributes, but happy to add notes in the examples too.

Contributor Author
Done

    ctx = sampling_context.get("transaction_context", {})
    data = ctx.get("data", {})

    # Sample all slow transactions
    if data.get("duration_ms", 0) > 2000:  # Over 2 seconds
        return 1.0

    # Sample more transactions with high memory usage
    if data.get("memory_usage_mb", 0) > 500:  # Over 500MB
        return 0.8

    # Sample more transactions with high CPU usage
    if data.get("cpu_percent", 0) > 80:  # Over 80% CPU
        return 0.8

    # Sample more transactions with high database load
    if data.get("db_connections", 0) > 100:  # Over 100 connections
        return 0.7

    # Default sampling rate
    return 0.1


sentry_sdk.init(
    dsn="your-dsn",
    traces_sampler=traces_sampler,
)
```

## The Sampling Context Object

When the `traces_sampler` function is called, the Sentry SDK passes a `sampling_context` object with information from the relevant span to help make sampling decisions:

```python
{
    "transaction_context": {
        "name": str,  # transaction title at creation time
        "op": str,  # short description of transaction type, like "http.request"
        # other transaction data...
    },
    "parent_sampled": bool,  # whether the parent transaction was sampled (if any)
Member
Looks like this was incorrect in the docs before, as well

Suggested change
"parent_sampled": bool, # whether the parent transaction was sampled (if any)
"parent_sampled": bool | None, # whether the parent transaction was sampled, `None` if no parent

"parent_sample_rate": float, # the sample rate used by the parent (if any)
# Custom context as passed to start_transaction
}
```

The sampling context contains:

- `transaction_context`: Includes the transaction name, operation type, and other metadata
Member
I'd either remove the code comments from the snippet above or delete this section. We don't need to have the same information twice

- `parent_sampled`: Whether the parent transaction was sampled (for distributed tracing)
- `parent_sample_rate`: The sample rate used in the parent transaction
- Any custom sampling context data passed to `start_transaction` (see the sketch below)
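As a rough illustration of the last bullet, the following sketch passes custom data into the sampling context through the `custom_sampling_context` argument of `start_transaction`. The `is_vip` key is a hypothetical custom attribute set by this code, not something the SDK provides (see the sampling configuration page referenced in the discussion above for the canonical description):

```python
import sentry_sdk

def traces_sampler(sampling_context):
    # "is_vip" is only present because we pass it below; the SDK does not set it.
    if sampling_context.get("is_vip"):
        return 1.0
    return 0.1

sentry_sdk.init(
    dsn="your-dsn",  # placeholder DSN
    traces_sampler=traces_sampler,
)

# Custom keys passed here are merged into the sampling_context seen by traces_sampler.
with sentry_sdk.start_transaction(
    name="checkout",
    op="http.server",
    custom_sampling_context={"is_vip": True},
):
    ...  # handle the request here
```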

## Inheritance in Distributed Tracing

In distributed systems, trace information is propagated between services. You can implement inheritance logic like this:

```python
def traces_sampler(sampling_context):
    # Examine provided context data
    if "transaction_context" in sampling_context:
        name = sampling_context["transaction_context"].get("name", "")

        # Apply specific rules first
        if "critical-path" in name:
            return 1.0  # Always sample

    # Inherit parent sampling decision if available
    if sampling_context.get("parent_sampled") is not None:
        return sampling_context["parent_sampled"]

    # Otherwise use a default rate
    return 0.1
```

This approach ensures consistent sampling decisions across your entire distributed trace. All transactions in a given trace will share the same sampling decision, preventing broken or incomplete traces.
Member
As I mentioned in a previous comment, our introductory examples should include the code to ensure inheritance in distributed tracing. Most people will probably only look at the first example (assuming that is the default recommendation), and that default recommendation should include the code for inheritance, since that is the best practice.

My opinion is we should completely delete this section.


We can perhaps replace this section with a section about overriding the parent sampling decision. However, such a section likely belongs on a more advanced page, not on the introductory page.

In short: the default best practice is to always inherit the parent sampling decision. The current structure makes it look like this is something which is "nice to have" for more advanced users. In fact, the opposite is true: novice users should inherit the parent sampling decision; overriding this decision is an advanced topic

Contributor Author
I left it for now, to explain how it works, but I also added the snippet from the above comment higher up in the docs under the explainer on traces_sampler

Member
Idk, I do still think having this snippet adds too much complexity to the page

what do you all think @antonpirker @sentrivana?

Contributor
The section reads a bit confusing to me. The headline says it's about Inheritance in Distributed Tracing, but the example then explicitly ignores the parent sampling decision, so out of all examples on this page (which always start with: if there's a parent decision, just use that), this actually uses inheritance the least? Maybe we need a better headline/description for this section that's actually about when -- if ever -- it makes sense to ignore the parent sampling decision.

Contributor Author
Sounds like it actually makes the most sense to remove this section as an explicit call-out, and add a bit more description higher up the page with the call-out of the parent_sampled method. I'll give that a try.


## Sampling Decision Precedence

When multiple sampling mechanisms could apply, Sentry follows this order of precedence:

1. If a sampling decision is passed to `start_transaction`, that decision is used (see the sketch below)
2. If `traces_sampler` is defined, its decision is used (can consider parent sampling)
Member
Suggested change
2. If `traces_sampler` is defined, its decision is used (can consider parent sampling)
2. If `traces_sampler` is defined, its decision is used. Although the `traces_sampler` can override the parent sampling decision, most users will want to ensure their `traces_sampler` respects the parent sampling decision.

3. If no `traces_sampler` but parent sampling is available, parent decision is used
Member
Suggested change
3. If no `traces_sampler` but parent sampling is available, parent decision is used
3. If no `traces_sampler` is defined, but there is a parent sampling decision from an incoming distributed trace, we use the parent sampling decision

4. If neither of the above, `traces_sample_rate` is used
5. If none of the above are set, no transactions are sampled (0%)
Member
Suggested change
5. If none of the above are set, no transactions are sampled (0%)
5. If none of the above are set, no transactions are sampled. This is equivalent to setting `traces_sample_rate=0.0`.
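To illustrate step 1 of the precedence list above, here's a minimal sketch of forcing a sampling decision at the call site. It assumes the `sampled` keyword argument of `start_transaction`; `run_cleanup` is a hypothetical function:

```python
import sentry_sdk

sentry_sdk.init(
    dsn="your-dsn",  # placeholder DSN
    traces_sample_rate=0.1,
)

# An explicit decision passed to start_transaction takes precedence over
# traces_sample_rate (and over traces_sampler, if one were configured).
with sentry_sdk.start_transaction(name="nightly-cleanup", op="task", sampled=True):
    run_cleanup()  # hypothetical work function
```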


## How Sampling Propagates in Distributed Traces

Sentry uses a "head-based" sampling approach:

- A sampling decision is made in the originating service (the "head")
- This decision is propagated to all downstream services via HTTP headers
Member
Technically, the decision can also be propagated via other means (e.g. headers in task queue, environment variables). HTTP headers are only used when the downstream service is contacted over the network via HTTP request; although this might be the most common case, it is not the only one.

As this is an introductory page, we can omit the details of how the trace is propagated

Suggested change
- This decision is propagated to all downstream services via HTTP headers
- This decision is propagated to all downstream services


The two key headers are:
- `sentry-trace`: Contains trace ID, span ID, and sampling decision
- `baggage`: Contains additional trace metadata including sample rate

The Sentry Python SDK automatically attaches these headers to outgoing HTTP requests when using auto-instrumentation with libraries like `requests`, `urllib3`, or `httpx`. For other communication channels, you can manually propagate trace information:

```python
import json

import sentry_sdk

# Extract trace data from the current scope
trace_data = sentry_sdk.get_current_scope().get_trace_context()
sentry_trace_header = trace_data.get("sentry-trace")
baggage_header = trace_data.get("baggage")

# Add to your custom request (example using a message queue)
message = {
    "data": "Your data here",
    "metadata": {
        "sentry_trace": sentry_trace_header,
        "baggage": baggage_header,
    },
}
queue.send(json.dumps(message))
```
Member
This is really advanced stuff – not sure we should mention it at all on this page tbh

Member
For most users, the distributed tracing gets handled automatically by the SDK

Member

Just noticed there is a separate page about distributed tracing. This entire section should therefore be deleted from this page. If needed, we can move it to the distributed tracing page.

Contributor Author
I cleaned this up to add a brief explainer about how sampling shows up in distributed tracing, and linked to that section.


By implementing a thoughtful sampling strategy, you'll get the performance insights you need without overwhelming your systems or your Sentry quota.
Member
Not sure this line is needed here

Contributor Author
Agreed.

@@ -1,6 +1,6 @@
---
title: Custom Instrumentation
Contributor
@sfanahata - In the updated JavaScript tracing docs, we changed this section to be "Custom Trace Propagation" to make it clearer what someone was getting out of this doc. We should match that.

sidebar_order: 40
sidebar_order: 10
---

<PlatformContent includePath="distributed-tracing/custom-instrumentation/" />
@@ -1,10 +1,10 @@
---
title: Trace Propagation
title: Set Up Distributed Tracing
description: "Learn how to connect events across applications/services."
sidebar_order: 3000
sidebar_order: 20
---

If the overall application landscape that you want to observe with Sentry consists of more than just a single service or application, distributed tracing can add a lot of value.
<PlatformContent includePath="distributed-tracing/explanation" />

## What is Distributed Tracing?
