# Telemetry

See [aws-toolkit-common/telemetry](https://github.com/aws/aws-toolkit-common/tree/main/telemetry#telemetry) for full details about defining telemetry metrics.

- You can define new metrics during development by adding items to
  [telemetry/vscodeTelemetry.json](https://github.com/aws/aws-toolkit-vscode/blob/21ca0fca26d677f105caef81de2638b2e4796804/src/shared/telemetry/vscodeTelemetry.json) (see the sketch after this list).
    - Building the project will trigger the `generateClients` build task, which generates new symbols in `shared/telemetry/telemetry`; import them via:
      ```
      import { telemetry } from '../../shared/telemetry/telemetry'
      ```
    - The metrics defined in `vscodeTelemetry.json` should be upstreamed to [aws-toolkit-common](https://github.com/aws/aws-toolkit-common/blob/main/telemetry/definitions/commonDefinitions.json) after launch (at the latest).
- Metrics are dropped (not posted to the service) if the extension is running in [CI or other
  automation tasks](https://github.com/aws/aws-toolkit-vscode/blob/21ca0fca26d677f105caef81de2638b2e4796804/src/shared/vscode/env.ts#L71-L73).
    - You can always _test_ telemetry via [assertTelemetry()](https://github.com/aws/aws-toolkit-vscode/blob/21ca0fca26d677f105caef81de2638b2e4796804/src/test/testUtil.ts#L164), regardless of the current environment (see the test sketch at the end of this section).
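
As a sketch, a new entry in `vscodeTelemetry.json` might look like the following. The `metric_setupThing` metric and its fields are illustrative, not an existing definition; see the aws-toolkit-common definitions linked above for the full schema:

```json
{
    "metrics": [
        {
            "name": "metric_setupThing",
            "description": "Records the result of setting up a thing",
            "metadata": [{ "type": "result" }, { "type": "reason", "required": false }]
        }
    ]
}
```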
### Incrementally Building a Metric

In certain scenarios, you may have some code that has multiple stages/steps in its execution.

For example, `setupThing()` has multiple steps until it is completed, ending with `lastSetupStep()`.

```typescript
function setupThing() {
    setupStep1()
    setupStep2()
    ...
    lastSetupStep()
}
```

<br>

If we want to send a metric event, let's call it `metric_setupThing`, the code could look something like this:

```typescript
function setupThing() {
    try {
        ...
        lastSetupStep()
        telemetry.metric_setupThing.emit({ result: 'Succeeded', ... })
    } catch (e) {
        telemetry.metric_setupThing.emit({ result: 'Failed', reason: 'Not Really Sure Why', ... })
    }
}
```

Here we emit a final metric based on the success or failure of the entire execution.

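For the success path, the emitted event would look something like this (illustrative; it mirrors the failure example shown at the end of this section):

```
{
    "metadata.metricName": "metric_setupThing",
    "result": "Succeeded",
    ...
}
```
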
<br>

But code is usually not flat, and there are many nested calls. If something goes wrong during execution, it is useful to have more specific information about where the failure occurred. For this we can use `run()` along with `record()`.

`run()` takes a callback; while the callback executes, any use of `record()` within it updates the attributes of that specific metric. When the callback finishes, a single metric is emitted with the final attributes.

For example:

```typescript
setupThing()

function setupThing() {
    // Start the run() for metric_setupThing
    telemetry.metric_setupThing.run(span => {
        // Update the metric with initial attributes
        span.record({ result: 'Failed', reason: 'This is the start so it is not successful yet' })
        ...
        setupStep2()
        ...
        // Update the metric with the final success attributes since it made it to the end
        span.record({ result: 'Succeeded', ... })
    })
    // At this point the final values from the record() calls are used to emit the final metric
}

function setupStep2() {
    try {
        // do work
    } catch (e) {
        // Here we can update the metric with more specific information about the failure.

        // Also notice we can use `telemetry.metric_setupThing` instead of `span`.
        // This works because `metric_setupThing` was added to the "context" by the
        // run() callback above, so record() below updates the same metric that
        // span.record() does.

        // Keep in mind that record() must run inside the callback argument of run()
        // for the attributes of that specific metric to be updated.
        telemetry.metric_setupThing.record({
            reason: 'Something failed in setupStep2()'
        })
    }
}
```

<br>

Finally, if `setupStep2()` was the step that failed, we would see a metric like:

```
{
    "metadata.metricName": "metric_setupThing",
    "result": "Failed",
    "reason": "Something failed in setupStep2()",
    ...
}
```
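
In a test, you can verify this outcome with `assertTelemetry()` regardless of the current environment. A minimal sketch, assuming `metric_setupThing` is defined in `vscodeTelemetry.json` and that the import path matches your test file's location:

```typescript
import { assertTelemetry } from '../testUtil'

describe('setupThing', function () {
    it('emits metric_setupThing with failure details', function () {
        // setupThing() is the example function from this section
        setupThing()

        // Asserts that the most recent metric_setupThing event has these attributes
        assertTelemetry('metric_setupThing', {
            result: 'Failed',
            reason: 'Something failed in setupStep2()',
        })
    })
})
```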