Commit dead575

[solutions] Fix external links (#683)
In the migration tool, external URLs that contained `.html` were incorrectly modified to use `.md`. To attempt to fix these links:

1. I ran a script to find URLs that started with `http` (that is, not an internal or cross-repo link) and included `.md` somewhere in the URL.
2. I ran all the resulting URLs through an external link checker.
   i. If I got a [`200` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/200) back, I left the link unchanged.
   ii. If I got a [`404` status code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404) back, I swapped the `.md` for `.html`.
3. I reran the link checker to make sure all the updated links now returned `200`.

Co-authored-by: Liam Thompson <[email protected]>
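The actual script is not part of this commit, but the procedure described above can be sketched roughly as follows. All names here are hypothetical, and the status check is injectable so the decision logic can be exercised without network access:

```python
# Hypothetical sketch of the link-fixing procedure from the commit message.
# The real script is not included in this commit; names and regex are assumptions.
import re
import urllib.request
import urllib.error

# Step 1: external URLs (starting with http) that contain ".md" somewhere.
EXTERNAL_MD_URL = re.compile(r'https?://\S+?\.md(?=[#)\s]|$)')

def status_of(url: str) -> int:
    """Return the HTTP status code for `url` (step 2's link check)."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

def fix_link(url: str, get_status=status_of) -> str:
    """Keep a URL that returns 200; swap `.md` for `.html` when it 404s."""
    if get_status(url) == 404:
        candidate = url.replace('.md', '.html')
        # Step 3: recheck that the updated link now returns 200.
        if get_status(candidate) == 200:
            return candidate
    return url
```

A fake status table makes the rewrite rule easy to verify: a `.md` URL that 404s while its `.html` twin returns 200 is rewritten, and a working `.md` URL is left alone.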
1 parent 57ea966 commit dead575

28 files changed: +62 −62 lines

solutions/observability/apps/create-custom-links.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -138,7 +138,7 @@ This link creates a new task on the Engineering board in Jira. It populates the
 | Label | `Open a task in Jira` |
 | Link | `https://test-site-33.atlassian.net/secure/CreateIssueDetails!init.jspa?pid=10000&issuetype=10001&summary=Created+via+APM&description=Investigate+the+following+APM+trace%3A%0D%0A%0D%0Aservice.name%3A+{{service.name}}%0D%0Atransaction.id%3A+{{transaction.id}}%0D%0Acontainer.id%3A+{{container.id}}%0D%0Aurl.full%3A+{{url.full}}` |
 
-See the [Jira application administration knowledge base](https://confluence.atlassian.com/jirakb/how-to-create-issues-using-direct-html-links-in-jira-server-159474.md) for a full list of supported query parameters.
+See the [Jira application administration knowledge base](https://confluence.atlassian.com/jirakb/how-to-create-issues-using-direct-html-links-in-jira-server-159474.html) for a full list of supported query parameters.
 
 
 ### Dashboards [custom-links-examples-kib]
```

solutions/observability/apps/create-upload-source-maps-rum.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -58,7 +58,7 @@ It can also be any other unique string that indicates a specific version of your
 
 ## Generate a source map [apm-source-map-rum-generate]
 
-To be compatible with Elastic APM, source maps must follow the [source map revision 3 proposal spec](https://sourcemaps.info/spec.md).
+To be compatible with Elastic APM, source maps must follow the [source map revision 3 proposal spec](https://sourcemaps.info/spec.html).
 
 Source maps can be generated and configured in many different ways. For example, parcel automatically generates source maps by default. If you’re using webpack, some configuration may be needed to generate a source map:
 
```

solutions/observability/apps/monitoring-aws-lambda-functions.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@ AWS Lambda uses a special execution model to provide a scalable, on-demand compu
 1. To avoid data loss, APM data collected by APM agents needs to be flushed before the execution environment of a lambda function is frozen.
 2. Flushing APM data must be fast so as not to impact the response times of lambda function requests.
 
-To accomplish the above, Elastic APM agents instrument AWS Lambda functions and dispatch APM data via an [AWS Lambda extension](https://docs.aws.amazon.com/lambda/latest/dg/using-extensions.md).
+To accomplish the above, Elastic APM agents instrument AWS Lambda functions and dispatch APM data via an [AWS Lambda extension](https://docs.aws.amazon.com/lambda/latest/dg/using-extensions.html).
 
 Normally, during the execution of a Lambda function, there’s only a single language process running in the AWS Lambda execution environment. With an AWS Lambda extension, Lambda users run a *second* process alongside their main service/application process.
 
```

solutions/observability/apps/tutorial-monitor-java-application.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -917,7 +917,7 @@ You have now learned about parsing logs in either {{beats}} or {{es}}. What if w
 
 Writing out logs as plain text works and is easy to read for humans. However, first writing them out as plain text, parsing them using the `dissect` processors, and then creating a JSON again sounds tedious and burns unneeded CPU cycles.
 
-While log4j2 has a [JSONLayout](https://logging.apache.org/log4j/2.x/manual/layouts.md#JSONLayout), you can go further and use a Library called [ecs-logging-java](https://github.com/elastic/ecs-logging-java). The advantage of ECS logging is that it uses the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/index.md). ECS defines a standard set of fields used when storing event data in {{es}}, such as logs and metrics.
+While log4j2 has a [JSONLayout](https://logging.apache.org/log4j/2.x/manual/layouts.html#JSONLayout), you can go further and use a Library called [ecs-logging-java](https://github.com/elastic/ecs-logging-java). The advantage of ECS logging is that it uses the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/index.md). ECS defines a standard set of fields used when storing event data in {{es}}, such as logs and metrics.
 
 1. Instead of writing our logging standard, use an existing one. Let’s add the logging dependency to our Javalin application.
 
@@ -1561,7 +1561,7 @@ A programmatic setup allows you to attach the agent via a line of java in your s
 
 This looks much better, having differences between endpoints.
 
-4. Add another endpoint to see the power of transactions, which polls another HTTP service. You may have heard of [wttr.in](https://wttr.in/), a service to poll weather information from. Let’s implement a proxy HTTP method that forwards the request to that endpoint. Let’s use [Apache HTTP client](https://hc.apache.org/httpcomponents-client-4.5.x/quickstart.md), one of the most typical HTTP clients out there.
+4. Add another endpoint to see the power of transactions, which polls another HTTP service. You may have heard of [wttr.in](https://wttr.in/), a service to poll weather information from. Let’s implement a proxy HTTP method that forwards the request to that endpoint. Let’s use [Apache HTTP client](https://hc.apache.org/httpcomponents-client-4.5.x/quickstart.html), one of the most typical HTTP clients out there.
 
 ```gradle
 implementation 'org.apache.httpcomponents:fluent-hc:4.5.12'
````

solutions/observability/cicd.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -370,7 +370,7 @@ To learn more, see the [integration of Maven builds with Elastic {{observability
 
 The Ansible OpenTelemetry plugin integration provides visibility into all your Ansible playbooks. The plugin generates traces for each run and performance metrics to help you understand which Ansible tasks or roles are run the most, how often they fail, and how long they take to complete.
 
-You can configure your Ansible playbooks with the [Ansible OpenTelemetry callback plugin](https://docs.ansible.com/ansible/latest/collections/community/general/opentelemetry_callback.md). It’s required to install the OpenTelemetry python libraries and configure the callback as stated in the [examples](https://docs.ansible.com/ansible/latest/collections/community/general/opentelemetry_callback.md#examples) section.
+You can configure your Ansible playbooks with the [Ansible OpenTelemetry callback plugin](https://docs.ansible.com/ansible/latest/collections/community/general/opentelemetry_callback.html). It’s required to install the OpenTelemetry python libraries and configure the callback as stated in the [examples](https://docs.ansible.com/ansible/latest/collections/community/general/opentelemetry_callback.html#examples) section.
 
 The context propagation from the Jenkins job or pipeline is passed to the Ansible run. Therefore, everything that happens in the CI is also shown in the traces.
 
@@ -492,7 +492,7 @@ pytest --otel-session-name='My_Test_cases'
 
 ### Concourse CI [ci-cd-concourse-ci]
 
-To configure Concourse CI to send traces, refer to the [tracing](https://concourse-ci.org/tracing.md) docs. In the Concourse configuration, you just need to define `CONCOURSE_TRACING_OTLP_ADDRESS` and `CONCOURSE_TRACING_OTLP_HEADERS`.
+To configure Concourse CI to send traces, refer to the [tracing](https://concourse-ci.org/tracing.html) docs. In the Concourse configuration, you just need to define `CONCOURSE_TRACING_OTLP_ADDRESS` and `CONCOURSE_TRACING_OTLP_HEADERS`.
 
 ```bash
 CONCOURSE_TRACING_OTLP_ADDRESS=elastic-apm-server.example.com:8200
````

solutions/observability/cloud/monitor-amazon-cloud-compute-ec2.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -15,7 +15,7 @@ Amazon EC2 instances can be run in various locations. The location is composed o
 
 Like most AWS services, Amazon EC2 sends its metrics to Amazon CloudWatch. The Elastic [Amazon EC2 integration](https://docs.elastic.co/en/integrations/aws/ec2) collects metrics from Amazon CloudWatch using {{agent}}.
 
-CloudWatch, by default, uses basic monitoring that publishes metrics at five-minute intervals. You can enable detailed monitoring to increase that resolution to one-minute, at an additional cost. To learn how to enable detailed monitoring, refer to the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.md).
+CloudWatch, by default, uses basic monitoring that publishes metrics at five-minute intervals. You can enable detailed monitoring to increase that resolution to one-minute, at an additional cost. To learn how to enable detailed monitoring, refer to the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html).
 
 CloudWatch does not expose metrics related to EC2 instance memory. You can install {{agent}} on the EC2 instances to collect detailed system metrics.
 
@@ -159,7 +159,7 @@ Here are the key status check metrics you should monitor and what to look for:
 
 
 `aws.ec2.metrics.StatusCheckFailed_Instance.avg`
-: This check monitors the software and network configuration of the instance. Problems that can cause instance status checks to fail may include: incorrect networking or startup configuration, exhausted memory, corrupted file system, incompatible kernel, and so on. When an instance status check fails, you typically must address the problem yourself. You may need to reboot the instance or make instance configuration changes. To troubleshoot instances with failed status checks, refer to the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.md).
+: This check monitors the software and network configuration of the instance. Problems that can cause instance status checks to fail may include: incorrect networking or startup configuration, exhausted memory, corrupted file system, incompatible kernel, and so on. When an instance status check fails, you typically must address the problem yourself. You may need to reboot the instance or make instance configuration changes. To troubleshoot instances with failed status checks, refer to the [Amazon EC2 documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html).
 
 This check returns 0 (passed) if an instance passes the system status check or 1 (failed) if it fails.
 
```
solutions/observability/cloud/monitor-amazon-kinesis-data-streams.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -19,7 +19,7 @@ By default, Kinesis Data Streams sends stream-level (basic level) metrics to Clo
 aws kinesis enable-enhanced-monitoring --stream-name samplestream --shard-level-metrics ALL
 ```
 
-For more details, refer to the [EnableEnhancedMonitoring](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_EnableEnhancedMonitoring.md) documentation.
+For more details, refer to the [EnableEnhancedMonitoring](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_EnableEnhancedMonitoring.html) documentation.
 
 
 ## Get started [get-started-kinesis]
````

solutions/observability/cloud/monitor-amazon-simple-storage-service-s3.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -21,7 +21,7 @@ With the Amazon S3 integration, you can collect these S3 metrics from CloudWatch
 
 ## Get started [get-started-s3]
 
-If you plan to collect request metrics, enable them for the S3 buckets you want to monitor. To learn how, refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-request-metrics-bucket.md).
+If you plan to collect request metrics, enable them for the S3 buckets you want to monitor. To learn how, refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/configure-request-metrics-bucket.html).
 
 To collect S3 metrics, you typically need to install the Elastic [Amazon S3 integration](https://docs.elastic.co/en/integrations/aws/s3) and deploy an {{agent}} locally or on an EC2 instance.
 
```

solutions/observability/cloud/monitor-amazon-web-services-aws-with-amazon-data-firehose.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -66,13 +66,13 @@ From the **Destination settings** panel, specify the following settings:
 
 ## Step 4: Send data to the Firehose delivery stream [firehose-step-four]
 
-You can configure a variety of log sources to send data to Firehose streams directly for example VPC flow logs. Some services don’t support publishing logs directly to Firehose but they do support publishing logs to CloudWatch logs, such as CloudTrail and Lambda. Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.md) for more information.
+You can configure a variety of log sources to send data to Firehose streams directly for example VPC flow logs. Some services don’t support publishing logs directly to Firehose but they do support publishing logs to CloudWatch logs, such as CloudTrail and Lambda. Refer to the [AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AWS-logs-and-resource-policy.html) for more information.
 
 For example, a typical workflow for sending CloudTrail logs to Firehose would be the following:
 
-* Publish CloudTrail logs to a Cloudwatch log group. Refer to the AWS documentation [about publishing CloudTrail logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.md).
-* Create a subscription filter in the CloudWatch log group to the Firehose stream. Refer to the AWS documentation [about using subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.md#FirehoseExample).
+* Publish CloudTrail logs to a Cloudwatch log group. Refer to the AWS documentation [about publishing CloudTrail logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/monitor-cloudtrail-log-files-with-cloudwatch-logs.html).
+* Create a subscription filter in the CloudWatch log group to the Firehose stream. Refer to the AWS documentation [about using subscription filters](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html#FirehoseExample).
 
-We also added support for sending CloudWatch monitoring metrics to Elastic using Firehose. For example, you can configure metrics ingestion by creating a metric stream through CloudWatch. You can select an existing Firehose stream by choosing the option **Custom setup with Firehose**. For more information, refer to the AWS documentation [about the custom setup with Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-setup-datalake.md).
+We also added support for sending CloudWatch monitoring metrics to Elastic using Firehose. For example, you can configure metrics ingestion by creating a metric stream through CloudWatch. You can select an existing Firehose stream by choosing the option **Custom setup with Firehose**. For more information, refer to the AWS documentation [about the custom setup with Firehose](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-metric-streams-setup-datalake.html).
 
 For more information on Amazon Data Firehose, you can also check the [Amazon Data Firehose Integrations documentation](https://docs.elastic.co/integrations/awsfirehose).
```

solutions/observability/cloud/monitor-amazon-web-services-aws-with-beats.md

Lines changed: 5 additions & 5 deletions

````diff
@@ -30,7 +30,7 @@ Create an [{{ech}}](https://cloud.elastic.co/registration?page=docs&placement=do
 With this tutorial, we assume that your logs and your infrastructure data are already shipped to CloudWatch. We are going to show you how you can stream your data from CloudWatch to {{es}}. If you don’t know how to put your AWS logs and infrastructure data in CloudWatch, Amazon provides a lot of documentation around this specific topic:
 
 * Collect your logs and infrastructure data from specific [AWS services](https://www.youtube.com/watch?v=vAnIhIwE5hY)
-* Export your logs [to an S3 bucket](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.md)
+* Export your logs [to an S3 bucket](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html)
 
 
 ## Step 1: Create an S3 Bucket [aws-step-one]
@@ -269,7 +269,7 @@ Edit the `modules.d/aws.yml` file with the following configurations.
 ```
 
 1. Enables the `ec2` fileset.
-2. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.md).
+2. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
 3. Add the URL to the queue containing notifications around the bucket containing the EC2 logs
 
 
@@ -293,7 +293,7 @@ Make sure that the AWS user used to collect the logs from S3 has at least the fo
 }
 ```
 
-You can now upload your logs to the S3 bucket. If you are using CloudWatch, make sure to edit the policy of your bucket as shown in [step 3 of the AWS user guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.md). This will help you avoid permissions issues.
+You can now upload your logs to the S3 bucket. If you are using CloudWatch, make sure to edit the policy of your bucket as shown in [step 3 of the AWS user guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html). This will help you avoid permissions issues.
 
 Start {{filebeat}} to collect the logs.
 
@@ -344,7 +344,7 @@ Copy the URL of the queue you created. Edit the `modules.d/aws.yml`file with the
 ```
 
 1. Enables the `ec2` fileset.
-2. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.md).
+2. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
 3. Add the URL to the queue containing notifications around the bucket containing the EC2 logs
 4. Add the URL to the queue containing notifications around the bucket containing the S3 access logs
 
@@ -532,7 +532,7 @@ To collect metrics from your AWS infrastructure, we’ll use the [{{metricbeat}}
 1. Defines the module that is going to be used.
 2. Defines the period at which the metrics are going to be collected
 3. Defines the metricset that is going to be used.
-4. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.md).
+4. This is the AWS profile defined following the [AWS standard](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html).
 
 
 Make sure that the AWS user used to collect the metrics from CloudWatch has at least the following permissions attached to it:
````
