1. For now, let's test a **GET** request. Fill in the URL field. You can use one of your own, or one of ours like [https://frontend-eu.splunko11y.com](https://frontend-eu.splunko11y.com), [https://frontend-us.splunko11y.com](https://frontend-us.splunko11y.com), or [https://www.splunk.com](https://www.splunk.com).
1. Click {{% button %}}Try now{{% /button %}} to validate that the endpoint is accessible from the selected location before saving the test. {{% button %}}Try now{{% /button %}} runs do not count against your subscription, so this is good practice to make sure you're not wasting test runs on a misconfigured test.
1. Make sure "Round-robin" is on so the test will run from one location at a time, rather than from all locations at once.
- If an endpoint is **highly** critical, consider whether it is worth testing from every location at the same time every single minute. If you have automations built on a webhook from a detector, or strict SLAs you need to track, the extra coverage *could* be worth it. But if your investigations are more manual, or if this is a less critical endpoint, you could be wasting test runs that keep executing while an issue is being investigated.
- Remember that your license is based on the number of test runs per month. Turning Round-robin off multiplies the number of test runs by the number of locations you have selected (see the quick run-count math after these steps).
1. When you are ready for the test to start running, make sure "Active" is on, then scroll down and click {{% button style="blue" %}}Submit{{% /button %}} to save the test configuration. Now the test will start running with your saved configuration. Take a water break, then we'll look at the results!
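Before moving on, it can help to sanity-check how quickly test runs add up under each scheduling choice. The sketch below is only back-of-the-envelope math with assumed example values (a 1-minute frequency and 5 locations), not anything specific to your subscription:

```python
# Rough monthly run-count estimate for an Uptime test.
# Example values only: plug in your own test frequency and location count.
minutes_per_month = 60 * 24 * 30   # ~43,200 scheduled runs at a 1-minute frequency
locations = 5                      # hypothetical number of selected locations

round_robin_runs = minutes_per_month               # one location per scheduled run
all_location_runs = minutes_per_month * locations  # every location on every run

print(f"Round-robin on:  {round_robin_runs:,} runs/month")   # 43,200
print(f"Round-robin off: {all_location_runs:,} runs/month")  # 216,000
```

The gap is exactly the location multiplier mentioned above, which is why Round-robin is usually the right default for all but the most critical endpoints.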
content/en/scenarios/3-optimize-end-user-experiences/1-synthetics/1-uptime/2-understand-uptime-results.md (6 additions, 6 deletions)
---
title: Understanding results
linkTitle: 1.2 Understanding results
weight: 2
---
1. Click into a test summary view and play with the [Performance KPIs chart](https://docs.splunk.com/observability/en/synthetics/uptime-test/uptime-test-results.html#customize-the-performance-kpis-chart) filters to see how you can slice and dice your data. This is a good place to get started understanding trends in your data. Later, we will see what custom charts look like, so that you can have a consistent view of what you care about most.
{{% notice title="Workshop Question: Using the Performance KPIs chart" style="tip" icon="question" %}}
What metrics are available? Is your data consistent across time and locations? Do certain locations run slower than others? Are there any spikes or failures?
{{% /notice %}}
1. Click into a recent run either in the chart or in the table below.
1. Here we have information about this particular test run, including whether it succeeded or failed, the location, timestamp, and duration, in addition to the other Uptime test metrics. Click through to see the response, request, and connection info as well.
1. If there are failures, look at the response to see whether you need to add a response code assertion (302 is a common one), provide authorization, or add different request headers (the sketch at the end of this page shows how to reproduce the request outside the product).
If you need to edit the test for it to run successfully, click the test name in the top left breadcrumb on this run result page, then click {{% button %}}Edit test{{% /button %}} on the top right of the test overview page. Remember to scroll down and click {{% button style="blue" %}}Submit{{% /button %}} to save your changes after editing the test configuration.
1. Go back to the test overview page and change the Performance KPIs chart to display First Byte time, and change the interval if needed to better see trends in the data.

In the example above, we can see that TTFB varies consistently between locations. Knowing this, we can keep location in mind when reporting on metrics. We could also improve the experience, for example by serving users in those locations an endpoint hosted closer to them, which should reduce network latency. We can also see some slight variations in the results over time, but overall we already have a good idea of our baseline for this endpoint's KPIs. When we have a baseline, we can alert on worsening metrics as well as visualize improvements.
We are not setting a detector on this test yet; first we want to be sure the test itself is running consistently and successfully. If you are testing a highly critical endpoint and want to be alerted on it ASAP (and can tolerate some potential alert noise), jump to Single Test Detectors.
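If a run fails on an unexpected status code, such as the 302 mentioned in the steps above, it can also help to reproduce the request outside the product before editing the test. Here is a minimal sketch using Python's `requests` library; the URL is one of the workshop endpoints standing in for your own, and whether it actually redirects depends on the endpoint you test:

```python
import requests

url = "https://www.splunk.com"  # substitute the endpoint your Uptime test hits

# Turn off automatic redirect-following so we see the raw status code,
# which is what the Uptime test asserts against.
resp = requests.get(url, allow_redirects=False, timeout=10)

print("Status code:", resp.status_code)                  # e.g. 200, or 302 for a redirect
print("Location header:", resp.headers.get("Location"))  # redirect target, if any
print("Time to response:", resp.elapsed.total_seconds(), "s")  # rough first-byte-ish timing

# If this prints 302, the test needs either a 302 response-code assertion
# or a URL that returns 200 directly.
```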
content/en/scenarios/3-optimize-end-user-experiences/1-synthetics/1-uptime/_index.md (1 addition, 1 deletion)
## Introduction
The simplest way to keep an eye on endpoint availability is with an [Uptime test](https://docs.splunk.com/observability/en/synthetics/uptime-test/uptime-test.html). This lightweight test can run internally or externally around the world, as frequently as every minute. Because this is the easiest (and cheapest!) test to set up, and because it is ideal for monitoring the availability of your most critical endpoints and ports, let's start here.
content/en/scenarios/3-optimize-end-user-experiences/1-synthetics/2-api-test/1-global-variables.md (2 additions, 2 deletions)
---
title: Global Variables
linkTitle: 2.1 Global Variables
weight: 1
---
[Global variables](https://docs.splunk.com/observability/en/synthetics/test-config/global-variables.html) allow us to use stored strings in multiple tests, so we only need to update them in one place.
View the global variable that we'll use to perform our API test. Click on **Global Variables** under the cog. The global variable named `env.encoded_auth` is the one we'll use to build the Spotify API transaction.
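The workshop provides `env.encoded_auth` for you, so you don't need to build it yourself. For context, Spotify's token endpoint expects HTTP Basic authentication, which is a Base64-encoded `client_id:client_secret` string, so the stored value most likely looks like the output of a sketch such as this (the credentials below are placeholders):

```python
import base64

# Placeholder credentials: a real value would come from your own Spotify app.
client_id = "your-client-id"
client_secret = "your-client-secret"

# Basic auth value: base64("<client_id>:<client_secret>")
encoded_auth = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
print(encoded_auth)  # the kind of string stored in env.encoded_auth
```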
Create a new API test by clicking on the {{< button style="blue" >}}Add new test{{< /button >}} button and select **API test** from the dropdown. Name the test using **your initials** followed by **Spotify API**, e.g. **RWC - Spotify API**.
To validate the test before saving, change the location as needed and click {{< button >}}Try now{{< /button >}}. See the docs for more information on the [try now feature](https://docs.splunk.com/observability/en/synthetics/test-config/try-now.html).

When the validation is successful, click on {{< button style="blue" >}}< Return to test{{< /button >}} to return to the test configuration page. And then click {{< button style="blue" >}}Save{{< /button >}} to save the API test.
Have more time to work on this test? Take a look at the Response Body in one of your run results. What additional steps would make this test more thorough? Edit the test, and use the {{< button >}}Try now{{< /button >}} feature to validate any changes you make before you save the test.
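For reference, the transaction you just built most likely mirrors Spotify's standard client-credentials flow: exchange the encoded credentials for a bearer token, then call an API endpoint with that token. The sketch below reproduces those two steps with Python's `requests` library; the search query and the specific assertions are only examples of the kind of extra checks that could make the test more thorough:

```python
import requests

encoded_auth = "<value of env.encoded_auth>"  # base64(client_id:client_secret)

# Step 1: exchange the client credentials for an access token.
token_resp = requests.post(
    "https://accounts.spotify.com/api/token",
    headers={"Authorization": f"Basic {encoded_auth}"},
    data={"grant_type": "client_credentials"},
    timeout=10,
)
token_resp.raise_for_status()
token_body = token_resp.json()
assert "access_token" in token_body               # candidate extra assertion
assert token_body.get("token_type") == "Bearer"   # candidate extra assertion

# Step 2: call a Spotify API endpoint with the bearer token.
search_resp = requests.get(
    "https://api.spotify.com/v1/search",
    headers={"Authorization": f"Bearer {token_body['access_token']}"},
    params={"q": "observability", "type": "track", "limit": 1},
    timeout=10,
)
search_resp.raise_for_status()
print(search_resp.json()["tracks"]["items"][0]["name"])  # sanity-check the payload
```

Translating checks like these into assertions on the API test steps is a good way to use any extra time.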
content/en/scenarios/3-optimize-end-user-experiences/1-synthetics/2-api-test/_index.md (3 additions, 1 deletion)
The API Test provides a flexible way to check the functionality and performance of API endpoints. The shift toward API-first development has magnified the necessity to monitor the back-end services that provide your core front-end functionality.
Whether you're interested in testing multi-step API interactions or you want to gain visibility into the performance of your endpoints, the API Test can help you accomplish your goals.
This exercise will walk through a multi-step test on the Spotify API. You can also use it as an example when you are building tests on your own APIs or on those of your critical third parties.