diff --git a/troubleshoot/deployments/esf/elastic-serverless-forwarder.md b/troubleshoot/deployments/esf/elastic-serverless-forwarder.md
index f905b53f34..6e4339d205 100644
--- a/troubleshoot/deployments/esf/elastic-serverless-forwarder.md
+++ b/troubleshoot/deployments/esf/elastic-serverless-forwarder.md
@@ -25,7 +25,7 @@ For example, if you don’t increase the visibility timeout for an SQS queue as

 ## Prevent unexpected costs [preventing-unexpected-costs]

-It is important to monitor the Elastic Serverless Forwarder Lambda function for timeouts to prevent unexpected costs. You can use the [AWS Lambda integration](https://docs.elastic.co/en/integrations/aws/lambda) for this. If the timeouts are constant, you should throttle the Lambda function to stop its execution before proceeding with any troubleshooting steps. In most cases, constant timeouts will cause the records and messages from the event triggers to go back to their sources and trigger the function again, which will cause further timeouts and force a loop that will incure unexpected high costs. For more information on throttling Lambda functions, refer to [AWS docs](https://docs.aws.amazon.com/lambda/latest/operatorguide/throttling.md).
+It is important to monitor the Elastic Serverless Forwarder Lambda function for timeouts to prevent unexpected costs. You can use the [AWS Lambda integration](https://docs.elastic.co/en/integrations/aws/lambda) for this. If the timeouts are constant, you should throttle the Lambda function to stop its execution before proceeding with any troubleshooting steps. In most cases, constant timeouts will cause the records and messages from the event triggers to go back to their sources and trigger the function again, which will cause further timeouts and force a loop that will incur unexpectedly high costs. For more information on throttling Lambda functions, refer to [AWS docs](https://docs.aws.amazon.com/lambda/latest/operatorguide/throttling.html).

 ## Increase debug information [_increase_debug_information]

diff --git a/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md b/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md
index ded112925c..5d3be9b6ab 100644
--- a/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md
+++ b/troubleshoot/elasticsearch/elasticsearch-client-java-api-client/typed-keys-serialization.md
@@ -5,7 +5,7 @@ mapped_pages:

 # Typed keys serialization [serialize-without-typed-keys]

-{{es}} search requests accept a `typed_key` parameter that allow returning type information along with the name in aggregation and suggestion results (see the [aggregations documentation](https://www.elastic.co/guide/en/elasticsearch/reference/master/search-aggregations.html#return-agg-type) for additional details).
+{{es}} search requests accept a `typed_key` parameter that allows returning type information along with the name in aggregation and suggestion results (see the [aggregations documentation](/explore-analyze/query-filter/aggregations.md#return-agg-type) for additional details).

 The Java API Client always adds this parameter to search requests, as type information is needed to know the concrete class that should be used to deserialize aggregation and suggestion results.
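The file above documents the Java API Client, but it can help to see the same typed-keys behaviour from a scripting client. Below is a minimal sketch using the Python Elasticsearch client; the cluster URL, index name, field, and aggregation name are illustrative placeholders, not anything taken from the docs being patched.

```python
# Minimal sketch: the effect of the typed_keys parameter on a search response.
# Assumes a reachable local cluster and a "products" index with a keyword
# field named "category"; both are placeholders.
from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")

resp = client.search(
    index="products",
    size=0,
    typed_keys=True,  # ask Elasticsearch to prefix aggregation names with their type
    aggs={"categories": {"terms": {"field": "category"}}},
)

# With typed_keys=True the aggregation comes back as "sterms#categories";
# the type prefix is what lets strongly typed clients such as the Java API
# Client pick the right class when deserializing the result.
print(list(resp["aggregations"].keys()))
```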
diff --git a/troubleshoot/elasticsearch/security/trb-security-kerberos.md b/troubleshoot/elasticsearch/security/trb-security-kerberos.md
index 4ace23459d..c74f1477b5 100644
--- a/troubleshoot/elasticsearch/security/trb-security-kerberos.md
+++ b/troubleshoot/elasticsearch/security/trb-security-kerberos.md
@@ -36,7 +36,7 @@ Make sure that:
 * You have installed curl version 7.49 or above as older versions of curl have known Kerberos bugs.
 * The curl installed on your machine has `GSS-API`, `Kerberos` and `SPNEGO` features listed when you invoke command `curl -V`. If not, you will need to compile `curl` version with this support.

-To download latest curl version visit [https://curl.haxx.se/download.html](https://curl.haxx.se/download.md)
+To download the latest curl version, visit [https://curl.haxx.se/download.html](https://curl.haxx.se/download.html)

 As Kerberos logs are often cryptic in nature and many things can go wrong as it depends on external services like DNS and NTP. You might have to enable additional debug logs to determine the root cause of the issue.
diff --git a/troubleshoot/elasticsearch/security/trb-security-maccurl.md b/troubleshoot/elasticsearch/security/trb-security-maccurl.md
index 05b815db41..aa58551d45 100644
--- a/troubleshoot/elasticsearch/security/trb-security-maccurl.md
+++ b/troubleshoot/elasticsearch/security/trb-security-maccurl.md
@@ -12,7 +12,7 @@ mapped_pages:

 **Resolution:**

-Apple’s integration of `curl` with their keychain technology disables the `--cacert` option. See [http://curl.haxx.se/mail/archive-2013-10/0036.html](http://curl.haxx.se/mail/archive-2013-10/0036.md) for more information.
+Apple’s integration of `curl` with their keychain technology disables the `--cacert` option. See [http://curl.haxx.se/mail/archive-2013-10/0036.html](http://curl.haxx.se/mail/archive-2013-10/0036.html) for more information.

 You can use another tool, such as `wget`, to test certificates. Alternately, you can add the certificate for the signing certificate authority MacOS system keychain, using a procedure similar to the one detailed at the [Apple knowledge base](http://support.apple.com/kb/PH14003). Be sure to add the signing CA’s certificate and not the server’s certificate.
diff --git a/troubleshoot/observability/amazon-data-firehose.md b/troubleshoot/observability/amazon-data-firehose.md
index 6ccc62d680..33331def57 100644
--- a/troubleshoot/observability/amazon-data-firehose.md
+++ b/troubleshoot/observability/amazon-data-firehose.md
@@ -16,7 +16,7 @@ The backup settings in the delivery stream specify how failed delivery requests

 ## Scaling [aws-firehose-troubleshooting-scaling]

-Firehose can [automatically scale](https://docs.aws.amazon.com/firehose/latest/dev/limits.md) to handle very high throughput. If your Elastic deployment is not properly configured for the data volume coming from Firehose, it could cause a bottleneck, which may lead to increased ingest times or indexing failures.
+Firehose can [automatically scale](https://docs.aws.amazon.com/firehose/latest/dev/limits.html) to handle very high throughput. If your Elastic deployment is not properly configured for the data volume coming from Firehose, it could cause a bottleneck, which may lead to increased ingest times or indexing failures.

 There are several facets to optimizing the underlying Elasticsearch performance, but Elastic Cloud provides several ready-to-use hardware profiles which can provide a good starting point.
 Other factors which can impact performance are [shard sizing](../../deploy-manage/production-guidance/optimize-performance/size-shards.md), [indexing configuration](../../deploy-manage/production-guidance/optimize-performance/indexing-speed.md), and [index lifecycle management (ILM)](../../manage-data/lifecycle/index-lifecycle-management.md).
diff --git a/troubleshoot/observability/apm-agent-python/apm-python-agent.md b/troubleshoot/observability/apm-agent-python/apm-python-agent.md
index 1400b9fa82..b0f1744078 100644
--- a/troubleshoot/observability/apm-agent-python/apm-python-agent.md
+++ b/troubleshoot/observability/apm-agent-python/apm-python-agent.md
@@ -14,12 +14,12 @@ Below are some resources and tips for troubleshooting and debugging the python a
 * [Disable the Agent](#disable-agent)

-## Easy Fixes [easy-fixes]
+## Easy Fixes [easy-fixes]

 Before you try anything else, go through the following sections to ensure that the agent is configured correctly. This is not an exhaustive list, but rather a list of common problems that users run into.

-### Debug Mode [debug-mode]
+### Debug Mode [debug-mode]

 Most frameworks support a debug mode. Generally, this mode is intended for non-production environments and provides detailed error messages and logging of potentially sensitive data. Because of these security issues, the agent will not collect traces if the app is in debug mode by default.
@@ -34,7 +34,7 @@ apm = ElasticAPM(app, service_name="flask-app")
 ```

-### `psutil` for Metrics [psutil-metrics]
+### `psutil` for Metrics [psutil-metrics]

 To get CPU and system metrics on non-Linux systems, `psutil` must be installed. The agent should automatically show a warning on start if it is not installed, but sometimes this warning can be suppressed. Install `psutil` and metrics should be collected by the agent and sent to the APM Server.
@@ -43,19 +43,19 @@ python3 -m pip install psutil
 ```

-### Credential issues [apm-server-credentials]
+### Credential issues [apm-server-credentials]

 In order for the agent to send data to the APM Server, it may need an [`API_KEY`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-api-key) or a [`SECRET_TOKEN`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-secret-token). Double check your APM Server settings and make sure that your credentials are configured correctly. Additionally, check that [`SERVER_URL`](asciidocalypse://docs/apm-agent-python/docs/reference/configuration.md#config-server-url) is correct.

-## Django `check` and `test` [django-test]
+## Django `check` and `test` [django-test]

 When used with Django, the agent provides two management commands to help debug common issues. Head over to the [Django troubleshooting section](asciidocalypse://docs/apm-agent-python/docs/reference/django-support.md#django-troubleshooting) for more information.

-## Agent logging [agent-logging]
+## Agent logging [agent-logging]

-To get the agent to log more data, all that is needed is a [Handler](https://docs.python.org/3/library/logging.md#handler-objects) which is attached either to the `elasticapm` logger or to the root logger.
+To get the agent to log more data, all that is needed is a [Handler](https://docs.python.org/3/library/logging.html#handler-objects) which is attached either to the `elasticapm` logger or to the root logger.
 Note that if you attach the handler to the root logger, you also need to explicitly set the log level of the `elasticapm` logger:
@@ -66,7 +66,7 @@ apm_logger.setLevel(logging.DEBUG)
 ```

-### Django [django-agent-logging]
+### Django [django-agent-logging]

 The simplest way to log more data from the agent is to add a console logging Handler to the `elasticapm` logger. Here’s a (very simplified) example:
@@ -88,14 +88,14 @@ LOGGING = {
 ```

-### Flask [flask-agent-logging]
+### Flask [flask-agent-logging]

 Flask [recommends using `dictConfig()`](https://flask.palletsprojects.com/en/1.1.x/logging/) to set up logging. If you’re using this format, adding logging for the agent will be very similar to the [instructions for Django above](#django-agent-logging). Otherwise, you can use the [generic instructions below](#generic-agent-logging).

-### Generic instructions [generic-agent-logging]
+### Generic instructions [generic-agent-logging]

 Creating a console Handler and adding it to the `elasticapm` logger is easy:
@@ -119,10 +119,10 @@ console_handler.setLevel(logging.DEBUG)
 logger.addHandler(console_handler)
 ```

-See the [python logging docs](https://docs.python.org/3/library/logging.md) for more details about Handlers (and information on how to format your logs using Formatters).
+See the [python logging docs](https://docs.python.org/3/library/logging.html) for more details about Handlers (and information on how to format your logs using Formatters).

-## Disable the Agent [disable-agent]
+## Disable the Agent [disable-agent]

 In the unlikely event the agent causes disruptions to a production application, you can disable the agent while you troubleshoot.
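To make the last point concrete, here is a rough sketch of disabling the Python agent in a Flask app while troubleshooting. It assumes an agent version that supports the `enabled` configuration option (environment variable `ELASTIC_APM_ENABLED`); the app and service name are placeholders.

```python
# Minimal sketch: turning the agent off while you troubleshoot.
# Assumes the standard Flask integration and an agent version that supports
# the `enabled` option; "flask-app" is a placeholder service name.
from flask import Flask
from elasticapm.contrib.flask import ElasticAPM

app = Flask(__name__)

# Disable in code while you investigate...
apm = ElasticAPM(app, service_name="flask-app", enabled=False)

# ...or, without touching code, set the environment variable before the
# process starts:
#   ELASTIC_APM_ENABLED=false flask run
```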