diff --git a/404.md b/404.md
deleted file mode 100644
index 5fe1bb5c11..0000000000
--- a/404.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-layout: not-found
----
-
-# Page not found
diff --git a/raw-migrated-files/docs-content/serverless/attack-discovery.md b/raw-migrated-files/docs-content/serverless/attack-discovery.md
deleted file mode 100644
index 4768bc5785..0000000000
--- a/raw-migrated-files/docs-content/serverless/attack-discovery.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Attack Discovery [attack-discovery]
-
-::::{warning}
-This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features.
-::::
-
-
-Attack Discovery leverages large language models (LLMs) to analyze alerts in your environment and identify threats. Each "discovery" represents a potential attack and describes relationships among multiple alerts to tell you which users and hosts are involved, how alerts correspond to the MITRE ATT&CK matrix, and which threat actor might be responsible. This can help make the most of each security analyst’s time, fight alert fatigue, and reduce your mean time to respond.
-
-For a demo, refer to the following video (click to view).
-
-[](https://videos.elastic.co/watch/eT92arEbpRddmSM4JeyzdX?)
-
-
-This page describes:
-
-* [How to generate discoveries](../../../solutions/security/ai/attack-discovery.md#attack-discovery-generate-discoveries)
-* [What information each discovery includes](../../../solutions/security/ai/attack-discovery.md#attack-discovery-what-info)
-* [How you can interact with discoveries to enhance {{elastic-sec}} workflows](../../../solutions/security/ai/attack-discovery.md#attack-discovery-workflows)
-
-
-## Role-based access control (RBAC) for Attack Discovery [attack-discovery-rbac]
-
-The `Attack Discovery: All` privilege allows you to use Attack Discovery.
-
-:::{image} ../../../images/serverless-attck-disc-rbac.png
-:alt: Attack Discovery's RBAC settings
-:::
-
-
-## Generate discoveries [attack-discovery-generate-discoveries]
-
-When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md). To get started:
-
-1. Click **Attack Discovery** in {{elastic-sec}}'s navigation menu.
-2. Select an existing connector from the dropdown menu, or add a new one.
-
- ::::{admonition} Recommended models
- While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best.
-
- ::::
-
-
- :::{image} ../../../images/serverless-attck-disc-select-model-empty.png
- :alt: attck disc select model empty
- :::
-
-3. Once you’ve selected a connector, click **Generate** to start the analysis.
-
-It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected.
-
-::::{important}
-By default, Attack Discovery analyzes up to 100 alerts within this timeframe, but you can expand this up to 500 by clicking the settings icon next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error.
-::::
-
-
-:::{image} ../../../images/serverless-attck-disc-alerts-number-menu.png
-:alt: Attack Discovery's settings menu
-:::
-
-::::{important}
-Attack Discovery uses the same data anonymization settings as [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md). To configure which alert fields are sent to the LLM and which of those fields are obfuscated, use the Elastic AI Assistant settings. Consider the privacy policies of third-party LLMs before sending them sensitive data.
-::::
-
-
-Once the analysis is complete, any threats it identifies will appear as discoveries. Click each one’s title to expand or collapse it. Click **Generate** at any time to start the Attack Discovery process again with the most current alerts.
-
-
-## What information does each discovery include? [attack-discovery-what-info]
-
-Each discovery includes the following information describing the potential threat, generated by the connected LLM:
-
-1. A descriptive title and a summary of the potential threat.
-2. The number of associated alerts and which parts of the [MITRE ATT&CK matrix](https://attack.mitre.org/) they correspond to.
-3. The implicated entities (users and hosts), and what suspicious activity was observed for each.
-
-:::{image} ../../../images/serverless-attck-disc-example-disc.png
-:alt: Attack Discovery detail view
-:::
-
-
-## Incorporate discoveries with other workflows [attack-discovery-workflows]
-
-There are several ways you can incorporate discoveries into your {{elastic-sec}} workflows:
-
-* Click an entity’s name to open the user or host details flyout and view more details that may be relevant to your investigation.
-* Hover over an entity’s name to either add the entity to Timeline or copy its field name and value to the clipboard.
-* Click **Take action**, then select **Add to new case** or **Add to existing case** to add a discovery to a [case](../../../solutions/security/investigate/cases.md). This makes it easy to share the information with your team and other stakeholders.
-* Click **Investigate in timeline** to explore the discovery in [Timeline](../../../solutions/security/investigate/timeline.md).
-* Click **View in AI Assistant** to attach the discovery to a conversation with AI Assistant. You can then ask follow-up questions about the discovery or associated alerts.
-
-:::{image} ../../../images/serverless-add-discovery-to-assistant.gif
-:alt: Attack Discovery view in AI Assistant
-:::
diff --git a/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md b/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md
deleted file mode 100644
index d4fa16bb94..0000000000
--- a/raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md
+++ /dev/null
@@ -1,213 +0,0 @@
-# Connect to your own local LLM [connect-to-byo-llm]
-
-This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within {{elastic-sec}}. You’ll first need to set up a reverse proxy to communicate with {{elastic-sec}}, then set up LM Studio on a server, and finally configure the connector in your Elastic deployment. [Learn more about the benefits of using a local LLM](https://www.elastic.co/blog/ai-assistant-locally-hosted-models).
-
-This example uses a single server hosted in GCP to run the following components:
-
-* LM Studio with the [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) model
-* A reverse proxy using Nginx to authenticate to Elastic Cloud
-
-:::{image} ../../../images/serverless-lms-studio-arch-diagram.png
-:alt: Architecture diagram for this guide
-:::
-
-::::{note}
-For testing, you can use alternatives to Nginx such as [Azure Dev Tunnels](https://learn.microsoft.com/en-us/azure/developer/dev-tunnels/overview) or [Ngrok](https://ngrok.com/), but using Nginx makes it easy to collect additional telemetry and monitor its status by using Elastic’s native Nginx integration. While this example uses cloud infrastructure, it could also be replicated locally without an internet connection.
-::::
-
-
-::::{note}
-For information about the performance of open-source models on tasks within {{elastic-sec}}, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md).
-::::
-
-
-
-## Configure your reverse proxy [_configure_your_reverse_proxy]
-
-::::{note}
-If your Elastic instance is on the same host as LM Studio, you can skip this step. Also, check out our [blog post](https://www.elastic.co/blog/herding-llama-3-1-with-elastic-and-lm-studio) that walks through the whole process of setting up a single-host implementation.
-::::
-
-
-You need to set up a reverse proxy to enable communication between LM Studio and Elastic. For more complete instructions, refer to a guide such as [this one](https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-as-a-reverse-proxy-on-ubuntu-22-04).
-
-The following is an example Nginx configuration file:
-
-```txt
-server {
- listen 80;
- listen [::]:80;
- server_name <yourdomainname>;
- server_tokens off;
- add_header x-xss-protection "1; mode=block" always;
- add_header x-frame-options "SAMEORIGIN" always;
- add_header X-Content-Type-Options "nosniff" always;
- return 301 https://$server_name$request_uri;
-}
-
-server {
-
- listen 443 ssl http2;
- listen [::]:443 ssl http2;
- server_name <yourdomainname>;
- server_tokens off;
- ssl_certificate /etc/letsencrypt/live/<yourdomainname>/fullchain.pem;
- ssl_certificate_key /etc/letsencrypt/live/<yourdomainname>/privkey.pem;
- ssl_session_timeout 1d;
- ssl_session_cache shared:SSL:50m;
- ssl_session_tickets on;
- ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';
- ssl_protocols TLSv1.3 TLSv1.2;
- ssl_prefer_server_ciphers on;
- add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
- add_header x-xss-protection "1; mode=block" always;
- add_header x-frame-options "SAMEORIGIN" always;
- add_header X-Content-Type-Options "nosniff" always;
- add_header Referrer-Policy "strict-origin-when-cross-origin" always;
- ssl_stapling on;
- ssl_stapling_verify on;
- ssl_trusted_certificate /etc/letsencrypt/live/<yourdomainname>/fullchain.pem;
- resolver 1.1.1.1;
- location / {
-
- if ($http_authorization != "Bearer <secrettoken>") {
- return 401;
-}
-
- proxy_pass http://localhost:1234/;
- }
-
-}
-```
-
-::::{important}
-If using the example configuration file above, you must replace several values:
-
-* Replace `<secrettoken>` with your actual token, and keep it safe since you’ll need it to set up the {{elastic-sec}} connector.
-* Replace `<yourdomainname>` with your actual domain name.
-* Update the `proxy_pass` value at the bottom of the configuration if you decide to change the port number in LM Studio to something other than 1234.
-
-::::
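-
-Once the proxy is running and LM Studio is serving a model (set up in the following sections), you can smoke-test the path end to end before configuring {{elastic-sec}}. The following is a minimal sketch using Python’s `requests` package; the domain and token are the placeholders from the configuration above, and it assumes LM Studio is listening on port 1234:
-
-```python
-import requests
-
-URL = "https://<yourdomainname>/v1/chat/completions"  # your Nginx server_name
-TOKEN = "<secrettoken>"  # the Bearer token from your Nginx config
-
-response = requests.post(
-    URL,
-    headers={"Authorization": f"Bearer {TOKEN}"},
-    json={
-        "model": "local-model",  # LM Studio serves whichever model is loaded
-        "messages": [{"role": "user", "content": "Reply with OK."}],
-    },
-    timeout=120,
-)
-response.raise_for_status()
-print(response.json()["choices"][0]["message"]["content"])
-```
-
-A `401` response means the token doesn’t match the one in your Nginx configuration; a connection error usually means LM Studio isn’t running or is listening on a different port.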
-
-
-
-### (Optional) Set up performance monitoring for your reverse proxy [_optional_set_up_performance_monitoring_for_your_reverse_proxy]
-
-You can use Elastic’s [Nginx integration](https://docs.elastic.co/en/integrations/nginx) to monitor performance and populate monitoring dashboards in the {{security-app}}.
-
-
-## Configure LM Studio and download a model [_configure_lm_studio_and_download_a_model]
-
-First, install [LM Studio](https://lmstudio.ai/). LM Studio supports the OpenAI SDK, which makes it compatible with Elastic’s OpenAI connector, allowing you to connect to any model available in the LM Studio marketplace.
-
-You must launch the application from its GUI before you can control it from the CLI. For example, use Chrome RDP with an [X Window System](https://cloud.google.com/architecture/chrome-desktop-remote-on-compute-engine). After you’ve opened the application for the first time using the GUI, you can start it from the CLI with `sudo lms server start`.
-
-Once you’ve launched LM Studio:
-
-1. Go to LM Studio’s Search window.
-2. Search for an LLM (for example, `Mistral-Nemo-Instruct-2407`). Your chosen model must include `instruct` in its name in order to work with Elastic.
-3. After you find a model, view its download options and select a recommended version (marked in green). For best performance on your hardware, select a version marked with a thumbs-up icon.
-4. Download one or more models.
-
-::::{important}
-For security reasons, before downloading a model, verify that it comes from a trusted source. It can be helpful to review community feedback on the model (for example, on a site like Hugging Face).
-::::
-
-
-:::{image} ../../../images/serverless-lms-model-select.png
-:alt: The LM Studio model selection interface
-:::
-
-In this example, we used [`mistralai/Mistral-Nemo-Instruct-2407`](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407). It has 12B total parameters, a 128,000 token context window, and uses GGUF [quantization](https://huggingface.co/docs/transformers/main/en/quantization/overview). For more information about model names and formats, refer to the following table.
-
-| Model Name | Parameter Size | Tokens/Context Window | Quantization Format |
-| --- | --- | --- | --- |
-| Name of model, sometimes with a version number. | LLMs are often compared by their number of parameters; higher counts generally indicate more capable models. | Tokens are small chunks of input information. Tokens do not necessarily correspond to characters. You can use [Tokenizer](https://platform.openai.com/tokenizer) to see how many tokens a given prompt might contain. | Quantization reduces the precision of a model’s parameters, which shrinks the model and helps it run faster, but can reduce accuracy. |
-| Examples: Llama, Mistral, Phi-3, Falcon. | The number of parameters is a measure of the size and the complexity of the model. The more parameters a model has, the more data it can process, learn from, generate, and predict. | The context window defines how much information the model can process at once. If the number of input tokens exceeds this limit, input gets truncated. | Specific quantization formats vary; most models now support GPU rather than CPU offloading. |
-
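-To build intuition for the Tokens/Context Window column, you can count tokens locally. Here is a small sketch using the `tiktoken` package; it implements OpenAI-style tokenizers, so counts for Mistral or Llama models will differ somewhat and should be treated as estimates:
-
-```python
-import tiktoken  # pip install tiktoken
-
-# cl100k_base is the encoding used by several OpenAI chat models.
-enc = tiktoken.get_encoding("cl100k_base")
-
-prompt = "Summarize the suspicious PowerShell activity observed on this host."
-tokens = enc.encode(prompt)
-print(f"{len(tokens)} tokens")  # token counts differ from character counts
-```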
-
-## Load a model in LM Studio [_load_a_model_in_lm_studio]
-
-After downloading a model, load it in LM Studio using the GUI or LM Studio’s [CLI tool](https://lmstudio.ai/blog/lms).
-
-
-### Option 1: load a model using the CLI (Recommended) [_option_1_load_a_model_using_the_cli_recommended]
-
-It is a best practice to download models from the marketplace using the GUI, and then load or unload them using the CLI. You can search for models either in the GUI or with `lms get` in the CLI, but the CLI provides the better interface for loading and unloading.
-
-Once you’ve downloaded a model, use the following commands in your CLI:
-
-1. Verify LM Studio is installed: `lms`
-2. Check LM Studio’s status: `lms status`
-3. List all downloaded models: `lms ls`
-4. Load a model: `lms load`
-
-:::{image} ../../../images/serverless-lms-cli-welcome.png
-:alt: The CLI interface during execution of initial LM Studio commands
-:::
-
-Run `lms load`, then select a model using the arrow and **Enter** keys. After the model loads, you should see a `Model loaded successfully` message in the CLI.
-
-:::{image} ../../../images/serverless-lms-studio-model-loaded-msg.png
-:alt: The CLI message that appears after a model loads
-:::
-
-To verify which model is loaded, use the `lms ps` command.
-
-:::{image} ../../../images/serverless-lms-ps-command.png
-:alt: The CLI message that appears after running lms ps
-:::
-
-If your host uses NVIDIA GPUs, you can check GPU performance with the `sudo nvidia-smi` command.
-
-
-### Option 2: load a model using the GUI [_option_2_load_a_model_using_the_gui]
-
-Refer to the following video to see how to load a model using LM Studio’s GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**.
-
-
-
-
-## (Optional) Collect logs using Elastic’s Custom Logs integration [_optional_collect_logs_using_elastics_custom_logs_integration]
-
-You can monitor the performance of the host running LM Studio using Elastic’s [Custom Logs integration](https://docs.elastic.co/en/integrations/log). This can also help with troubleshooting. Note that the default path for LM Studio logs is `/tmp/lmstudio-server-log.txt`, as in the following screenshot:
-
-:::{image} ../../../images/serverless-lms-custom-logs-config.png
-:alt: The configuration window for the custom logs integration
-:::
-
-
-## Configure the connector in your Elastic deployment [_configure_the_connector_in_your_elastic_deployment]
-
-Finally, configure the connector:
-
-1. Log in to your Elastic deployment.
-2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **OpenAI**. The OpenAI connector enables this use case because LM Studio uses the OpenAI SDK.
-3. Name your connector to help keep track of the model version you are using.
-4. Under **Select an OpenAI provider**, select **Other (OpenAI Compatible Service)**.
-5. Under **URL**, enter the domain name specified in your Nginx configuration file, followed by `/v1/chat/completions`.
-6. Under **Default model**, enter `local-model`.
-7. Under **API key**, enter the secret token specified in your Nginx configuration file.
-8. Click **Save**.
-
-:::{image} ../../../images/serverless-lms-edit-connector.png
-:alt: The Edit connector page in the {{security-app}}
-:::
-
-Setup is now complete. You can use the model you’ve loaded in LM Studio to power Elastic’s generative AI features. Because the connector always points to whichever model is currently loaded in LM Studio, you can test a variety of models as you interact with AI Assistant without having to update your connector.
-
-::::{note}
-While local models work well for [AI Assistant](../../../solutions/security/ai/ai-assistant.md), we recommend you use one of [these models](../../../solutions/security/ai/large-language-model-performance-matrix.md) for interacting with [Attack discovery](../../../solutions/security/ai/attack-discovery.md). As local models become more performant over time, this is likely to change.
-::::
diff --git a/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md b/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md
deleted file mode 100644
index 01bfe51013..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Generate, customize, and learn about {{esql}} queries [security-ai-assistant-esql-queries]
-
-Elastic AI Assistant can help you learn about and leverage the Elasticsearch Query Language ({{esql}}) in many ways, including:
-
-* **Education and training**: AI Assistant can serve as a powerful {{esql}} learning tool. Ask it for examples, explanations of complex queries, and best practices.
-* **Writing new queries**: Prompt AI Assistant to provide a query that accomplishes a particular task, and it will generate a query matching your description. For example: "Write a query to identify documents with `curl.exe` usage and calculate the sum of `destination.bytes`" or "What query would return all user logins to [a host] in the last six hours?" (A worked example appears at the end of this page.)
-* **Providing feedback to optimize existing queries**: Send AI Assistant a query you want to work on and ask it for improvements, refactoring, a general assessment, or to optimize the query’s performance with large data sets.
-* **Customizing queries for your environment**: Since each environment is unique, you may need to customize queries that you used in other contexts. AI Assistant can suggest necessary modifications based on contextual information you provide.
-* **Troubleshooting**: Having trouble with a query or getting unexpected results? Ask AI Assistant to help you troubleshoot.
-
-In these ways and others, AI Assistant can enable you to make use of {{esql}}'s advanced search capabilities to accomplish goals across {{elastic-sec}}.
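-
-As a worked example of the "Writing new queries" bullet above, the `curl.exe` prompt might yield a query like the one below. This sketch runs it through the {{es}} ES|QL `_query` API using Python’s `requests` package; the endpoint, API key, and index pattern are illustrative placeholders, not values from this guide:
-
-```python
-import requests
-
-ES_URL = "https://<your-deployment>.es.example.com"  # placeholder endpoint
-API_KEY = "<your-api-key>"  # placeholder API key
-
-# The kind of query AI Assistant might generate for the curl.exe prompt;
-# the logs-* index pattern is an assumption.
-esql = """
-FROM logs-*
-| WHERE process.name == "curl.exe"
-| STATS total_bytes = SUM(destination.bytes)
-"""
-
-resp = requests.post(
-    f"{ES_URL}/_query",
-    headers={"Authorization": f"ApiKey {API_KEY}", "Content-Type": "application/json"},
-    json={"query": esql},
-)
-resp.raise_for_status()
-print(resp.json())
-```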
diff --git a/raw-migrated-files/docs-content/serverless/security-ai-for-security.md b/raw-migrated-files/docs-content/serverless/security-ai-for-security.md
deleted file mode 100644
index f5d8af4380..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-ai-for-security.md
+++ /dev/null
@@ -1,8 +0,0 @@
-# AI for Security [security-ai-for-security]
-
-You can use {{elastic-sec}}’s built-in AI tools to speed up your work and augment your team’s capabilities. The pages in this section describe [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md), which answers questions and enhances your workflows throughout Elastic Security, and [Attack Discovery](../../../solutions/security/ai/attack-discovery.md), which speeds up the triage process by finding patterns and identifying attacks spanning multiple alerts.
-
-
-
-
-
diff --git a/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md b/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md
deleted file mode 100644
index 8cd24a7d5f..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-ai-use-cases.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Use cases [security-ai-use-cases]
-
-The guides in this section describe use cases for AI Assistant and Attack discovery. Refer to them for examples of each tool’s individual capabilities and what they can do together.
-
-* [Identify, investigate, and document threats](../../../solutions/security/ai/identify-investigate-document-threats.md)
-* [Triage alerts](../../../solutions/security/ai/triage-alerts.md)
-* [Generate, customize, and learn about {{esql}} queries](../../../solutions/security/ai/generate-customize-learn-about-esorql-queries.md)
-
-
-
-
diff --git a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md b/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md
deleted file mode 100644
index 7d51d50b88..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md
+++ /dev/null
@@ -1,69 +0,0 @@
-# Identify, investigate, and document threats [security-ai-usecase-incident-reporting]
-
-Together, [Elastic AI Assistant](../../../solutions/security/ai/ai-assistant.md) and [Attack Discovery](../../../solutions/security/ai/attack-discovery.md) can help you identify and mitigate threats, investigate incidents, and generate incident reports in various languages so you can monitor and protect your environment.
-
-In this guide, you’ll learn how to:
-
-* [Use Attack discovery to identify threats](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-use-attack-discovery-to-identify-threats)
-* [Use AI Assistant to analyze a threat](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-use-ai-assistant-to-analyze-a-threat)
-* [Create a case using AI Assistant](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-create-a-case-using-ai-assistant)
-* [Translate incident information to a different human language using AI Assistant](../../../solutions/security/ai/identify-investigate-document-threats.md#use-case-incident-reporting-translate)
-
-
-## Use Attack discovery to identify threats [use-case-incident-reporting-use-attack-discovery-to-identify-threats]
-
-Attack Discovery can detect a wide range of threats by finding relationships among alerts that may indicate a coordinated attack. This enables you to comprehend how threats move through and affect your systems. Attack Discovery generates a detailed summary of each potential threat, which can serve as the basis for further analysis. Learn how to [get started with Attack Discovery](../../../solutions/security/ai/attack-discovery.md).
-
-:::{image} ../../../images/serverless-attck-disc-11-alerts-disc.png
-:alt: An Attack discovery card showing an attack with 11 related alerts
-:screenshot:
-:::
-
-In the example above, Attack discovery found connections between eleven alerts and used them to identify and describe an attack chain.
-
-After Attack discovery outlines your threat landscape, use Elastic AI Assistant to quickly analyze a threat in detail.
-
-
-## Use AI Assistant to analyze a threat [use-case-incident-reporting-use-ai-assistant-to-analyze-a-threat]
-
-From a discovery on the Attack discovery page, click **View in AI Assistant** to start a chat that includes the discovery as context.
-
-:::{image} ../../../images/serverless-attck-disc-remediate-threat.gif
-:alt: A dialogue with AI Assistant that has the attack discovery as context
-:screenshot:
-:::
-
-AI Assistant can quickly compile essential data and provide suggestions to help you generate an incident report and plan an effective response. You can ask it to provide relevant data or answer questions, such as “How can I remediate this threat?” or “What {{esql}} query would isolate actions taken by this user?”
-
-:::{image} ../../../images/serverless-attck-disc-esql-query-gen-example.png
-:alt: An AI Assistant dialogue in which the user asks for a purpose-built ES|QL query
-:screenshot:
-:::
-
-The image above shows an {{esql}} query generated by AI Assistant in response to a user prompt. Learn more about [using AI Assistant for ES|QL](../../../solutions/security/ai/generate-customize-learn-about-esorql-queries.md).
-
-At any point in a conversation with AI Assistant, you can add data, narrative summaries, and other information from its responses to {{elastic-sec}}'s case management system to generate incident reports.
-
-
-## Generate reports [use-case-incident-reporting-create-a-case-using-ai-assistant]
-
-From the AI Assistant dialog window, click **Add to case** next to a message to add the information in that message to a [case](../../../solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders.
-
-If you add a message that contains a discovery to a case, AI Assistant automatically adds the attack summary and all associated alerts to the case. You can also add AI Assistant messages that contain remediation steps and relevant data to the case.
-
-
-## Translate incident information to a different human language using AI Assistant [use-case-incident-reporting-translate]
-
-:::{image} ../../../images/serverless-attck-disc-translate-japanese.png
-:alt: An AI Assistant dialogue in which the assistant translates from English to Japanese
-:screenshot:
-:::
-
-AI Assistant can translate its findings into other human languages, enabling collaboration among global security teams and making it easier to operate within multilingual organizations.
-
-After AI Assistant provides information in one language, you can ask it to translate its responses. For example, if it provides remediation steps for an incident, you can instruct it to “Translate these remediation steps into Japanese.” You can then add the translated output to a case. This helps ensure that team members receive the same information and insights regardless of their primary language.
-
-::::{note}
-In our internal testing, AI Assistant translations preserved the accuracy of the original content. However, all LLMs can make mistakes, so use caution.
-
-::::
diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md b/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md
deleted file mode 100644
index 295a6da198..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md
+++ /dev/null
@@ -1,99 +0,0 @@
-# Connect to Azure OpenAI [security-connect-to-azure-openai]
-
-This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure Azure, then configure the connector in {{kib}}.
-
-
-## Configure Azure [security-connect-to-azure-openai-configure-azure]
-
-
-### Configure a deployment [security-connect-to-azure-openai-configure-a-deployment]
-
-First, set up an Azure OpenAI deployment:
-
-1. Log in to the Azure console and search for Azure OpenAI.
-2. In **Azure AI services**, select **Create**.
-3. For the **Project Details**, select your subscription and resource group. If you don’t have a resource group, select **Create new** to make one.
-4. For **Instance Details**, select the desired region and specify a name, such as `example-deployment-openai`.
-5. Select the **Standard** pricing tier, then click **Next**.
-6. Configure your network settings, click **Next**, optionally add tags, then click **Next**.
-7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**.
-
-The following video demonstrates these steps.
-
-
-
-### Configure keys [security-connect-to-azure-openai-configure-keys]
-
-Next, create access keys for the deployment:
-
-1. From within your Azure OpenAI deployment, select **Click here to manage keys**.
-2. Store your keys in a secure location.
-
-The following video demonstrates these steps.
-
-
-
-### Configure a model [security-connect-to-azure-openai-configure-a-model]
-
-Now, set up the Azure OpenAI model:
-
-1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**.
-2. On the **Deployments** page, select **Create new deployment**.
-3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`.
-4. Set the model version to "Auto-update to default".
-5. Under **Deployment type**, select **Standard**.
-6. Name your deployment.
-7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits.
-8. Click **Create**.
-
-::::{important}
-The models available to you will depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md).
-
-::::
-
-
-The following video demonstrates these steps.
-
-
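-Before configuring the connector, you can optionally confirm that the endpoint, key, and deployment work together. Here is a minimal sketch using the official `openai` Python package; the endpoint, key, API version, and deployment name are placeholders for the values you created above:
-
-```python
-from openai import AzureOpenAI  # pip install openai
-
-client = AzureOpenAI(
-    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
-    api_key="<your-api-key>",  # one of the keys you stored earlier
-    api_version="2024-02-01",  # check Azure docs for current API versions
-)
-
-# "model" takes the *deployment* name you chose, not the base model name.
-response = client.chat.completions.create(
-    model="<your-deployment-name>",
-    messages=[{"role": "user", "content": "Say hello."}],
-)
-print(response.choices[0].message.content)
-```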
-
-## Configure Elastic AI Assistant [security-connect-to-azure-openai-configure-elastic-ai-assistant]
-
-Finally, configure the connector in {{kib}}:
-
-1. Log in to {{kib}}.
-2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **OpenAI**.
-3. Give your connector a name to help you keep track of different models, such as `Azure OpenAI (GPT-4 Turbo v. 0125)`.
-4. For **Select an OpenAI provider**, choose **Azure OpenAI**.
-5. Update the **URL** field. We recommend doing the following:
-
- * Navigate to your deployment in Azure AI Studio and select **Open in Playground**. The **Chat playground** screen displays.
- * Select **View code**, then from the drop-down, change the **Sample code** to `Curl`.
- * Highlight and copy the URL without the quotes, then paste it into the **URL** field in {{kib}}.
- * (Optional) Alternatively, refer to the [API documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference) to learn how to create the URL manually.
-
-6. Under **API key**, enter one of your API keys.
-7. Click **Save & test**, then click **Run**.
-
-The following video demonstrates these steps.
-
-
diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md b/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md
deleted file mode 100644
index 8f3b15d1a8..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md
+++ /dev/null
@@ -1,146 +0,0 @@
-# Connect to Amazon Bedrock [security-connect-to-bedrock]
-
-This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure AWS, then configure the connector in {{kib}}.
-
-::::{note}
-Only Amazon Bedrock’s `Anthropic` models are supported: `Claude` and `Claude Instant`.
-
-::::
-
-
-
-## Configure AWS [security-connect-to-bedrock-configure-aws]
-
-
-### Configure an IAM policy [security-connect-to-bedrock-configure-an-iam-policy]
-
-First, configure an IAM policy with the necessary permissions:
-
-1. Log into the AWS console and search for Identity and Access Management (IAM).
-2. From the **IAM** menu, select **Policies** → **Create policy**.
-3. To provide the necessary permissions, paste the following JSON into the **Specify permissions** menu.
-
-```json
-{
- "Version": "2012-10-17",
- "Statement": [
- {
- "Sid": "VisualEditor0",
- "Effect": "Allow",
- "Action": [
- "bedrock:InvokeModel",
- "bedrock:InvokeModelWithResponseStream"
- ],
- "Resource": "*"
- }
- ]
-}
-```
-
-::::{note}
-These are the minimum required permissions. IAM policies with additional permissions are also supported.
-
-::::
-
-
-4. Click **Next**. Name your policy.
-
-The following video demonstrates these steps.
-
-
-
-### Configure an IAM User [security-connect-to-bedrock-configure-an-iam-user]
-
-Next, assign the policy you just created to a new user:
-
-1. Return to the **IAM** menu. Select **Users** from the navigation menu, then click **Create User**.
-2. Name the user, then click **Next**.
-3. Select **Attach policies directly**.
-4. In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**.
-5. Review the configuration then click **Create user**.
-
-The following video demonstrates these steps.
-
-
-
-### Create an access key [security-connect-to-bedrock-create-an-access-key]
-
-Create the access keys that will authenticate your Elastic connector:
-
-1. Return to the **IAM** menu. Select **Users** from the navigation menu.
-2. Search for the user you just created, and click its name.
-3. Go to the **Security credentials** tab.
-4. Under **Access keys**, click **Create access key**.
-5. Select **Third-party service**, check the box under **Confirmation**, click **Next**, then click **Create access key**.
-6. Click **Download .csv file** to download the key. Store it securely.
-
-The following video demonstrates these steps.
-
-
-
-### Enable model access [security-connect-to-bedrock-enable-model-access]
-
-Make sure the supported Amazon Bedrock LLMs are enabled:
-
-1. Search the AWS console for Amazon Bedrock.
-2. From the Amazon Bedrock page, click **Get started**.
-3. Select **Model access** from the left navigation menu, then click **Manage model access**.
-4. Check the boxes for **Claude** and/or **Claude Instant**, depending on which model or models you plan to use.
-5. Click **Save changes**.
-
-The following video demonstrates these steps.
-
-
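-With model access enabled, you can optionally verify the IAM permissions from the command line before configuring the connector. Here is a minimal sketch using `boto3`; it assumes your AWS credentials are set to the access key pair you downloaded earlier, and the region and model ID are examples you may need to adjust:
-
-```python
-import json
-
-import boto3  # pip install boto3; reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
-
-client = boto3.client("bedrock-runtime", region_name="us-east-1")
-
-response = client.invoke_model(
-    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any enabled Claude model
-    body=json.dumps({
-        "anthropic_version": "bedrock-2023-05-31",
-        "max_tokens": 50,
-        "messages": [{"role": "user", "content": "Say hello."}],
-    }),
-)
-print(json.loads(response["body"].read())["content"][0]["text"])
-```
-
-An `AccessDeniedException` here usually means the IAM policy isn’t attached to your user or model access hasn’t been granted.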
-
-## Configure the Amazon Bedrock connector [security-connect-to-bedrock-configure-the-amazon-bedrock-connector]
-
-Finally, configure the connector in {{kib}}:
-
-1. Log in to {{kib}}.
-2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **Amazon Bedrock**.
-3. Name your connector.
-4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`.
-5. (Optional) Add one of the following strings if you want to use a model other than the default:
-
- * For Haiku: `anthropic.claude-3-haiku-20240307-v1:0`
- * For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0`
- * For Opus: `anthropic.claude-3-opus-20240229-v1:0`
-
-6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**.
-
-Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](https://docs.elastic.co/security/ai-assistant).
-
-::::{important}
-If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/latest/userguide/prov-throughput.html), your ARN becomes the model ID, and the connector settings **URL** value must be [encoded](https://www.urlencoder.org/) to work. For example, if the non-encoded ARN is `arn:aws:bedrock:us-east-2:123456789102:provisioned-model/3Ztr7hbzmkrqy1`, the encoded ARN would be `arn%3Aaws%3Abedrock%3Aus-east-2%3A123456789102%3Aprovisioned-model%2F3Ztr7hbzmkrqy1`.
-
-::::
-
-
-The following video demonstrates these steps.
-
-
diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md b/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md
deleted file mode 100644
index fd4af758ca..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md
+++ /dev/null
@@ -1,98 +0,0 @@
-# Connect to Google Vertex AI [security-connect-to-google-vertex]
-
-This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate an API key, and finally configure the connector in your {{elastic-sec}} project.
-
-::::{important}
-Before continuing, you should have an active project in one of Google Vertex AI’s [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
-
-::::
-
-
-
-## Enable the Vertex AI API [security-connect-to-google-vertex-enable-the-vertex-ai-api]
-
-1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**.
-2. If you’re new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**.
-
-The following video demonstrates these steps.
-
-
-
-::::{note}
-For more information about enabling the Vertex AI API, refer to [Google’s documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment).
-
-::::
-
-
-
-## Create a Vertex AI service account [security-connect-to-google-vertex-create-a-vertex-ai-service-account]
-
-1. In the GCP console, navigate to **APIs & Services → Library**.
-2. Search for **Vertex AI API**, select it, and click **MANAGE**.
-3. In the left menu, navigate to **Credentials** then click **+ CREATE CREDENTIALS** and select **Service account**.
-4. Name the new service account, then click **CREATE AND CONTINUE**.
-5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**.
-6. Click **Done**.
-
-The following video demonstrates these steps.
-
-
-
-
-## Generate an API key [security-connect-to-google-vertex-generate-an-api-key]
-
-1. Return to Vertex AI’s **Credentials** menu and click **Manage service accounts**.
-2. Search for the service account you just created, select it, then click the link that appears under **Email**.
-3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**.
-4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure.
-
-The following video demonstrates these steps.
-
-
-
-
-## Configure the Google Gemini connector [security-connect-to-google-vertex-configure-the-google-gemini-connector]
-
-Finally, configure the connector in {{kib}}:
-
-1. Log in to {{kib}}.
-2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **Google Gemini**.
-3. Name your connector to help keep track of the model version you are using.
-4. Under **URL**, enter the URL for your region.
-5. Enter your **GCP Region** and **GCP Project ID**.
-6. Under **Default model**, specify either `gemini-1.5-pro` or `gemini-1.5-flash`. [Learn more about the models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models).
-7. Under **Authentication**, enter your API key.
-8. Click **Save**.
-
-The following video demonstrates these steps.
-
-
-
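-If you want to exercise the service account key outside {{kib}}, you can call the Vertex AI endpoint directly. Here is a rough sketch using the `google-auth` and `requests` packages; the key-file path, project ID, region, and model name are placeholders to adjust for your setup:
-
-```python
-import google.auth.transport.requests
-import requests
-from google.oauth2 import service_account  # pip install google-auth requests
-
-creds = service_account.Credentials.from_service_account_file(
-    "service-account-key.json",  # the JSON key you downloaded
-    scopes=["https://www.googleapis.com/auth/cloud-platform"],
-)
-creds.refresh(google.auth.transport.requests.Request())
-
-PROJECT, REGION, MODEL = "<your-project-id>", "us-central1", "gemini-1.5-pro"
-url = (
-    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
-    f"/locations/{REGION}/publishers/google/models/{MODEL}:generateContent"
-)
-resp = requests.post(
-    url,
-    headers={"Authorization": f"Bearer {creds.token}"},
-    json={"contents": [{"role": "user", "parts": [{"text": "Say hello."}]}]},
-)
-resp.raise_for_status()
-print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
-```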
diff --git a/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md b/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md
deleted file mode 100644
index 67a31b9de2..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-connect-to-openai.md
+++ /dev/null
@@ -1,58 +0,0 @@
-# Connect to OpenAI [security-connect-to-openai]
-
-This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI’s large language models (LLMs) within {{kib}}. You’ll first need to create an OpenAI API key, then configure the connector in {{kib}}.
-
-
-## Configure OpenAI [security-connect-to-openai-configure-openai]
-
-
-### Select a model [security-connect-to-openai-select-a-model]
-
-Before creating an API key, you must choose a model. Refer to the [OpenAI docs](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) to select a model. Take note of the specific model name (for example `gpt-4-turbo`); you’ll need it when configuring {{kib}}.
-
-::::{note}
-`GPT-4o` offers increased performance over previous versions. For more information on how different models perform for different tasks, refer to the [LLM performance matrix](../../../solutions/security/ai/large-language-model-performance-matrix.md).
-
-::::
-
-
-
-### Create an API key [security-connect-to-openai-create-an-api-key]
-
-To generate an API key:
-
-1. Log in to the OpenAI platform and navigate to **API keys**.
-2. Select **Create new secret key**.
-3. Name your key, select an OpenAI project, and set the desired permissions.
-4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen.
-
-The following video demonstrates these steps.
-
-
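-Before configuring the connector, you can optionally confirm the key works. Here is a minimal sketch with the official `openai` Python package; the model name is an example, so use whichever model you selected above:
-
-```python
-from openai import OpenAI  # pip install openai
-
-client = OpenAI(api_key="<your-api-key>")  # the key you just created
-
-response = client.chat.completions.create(
-    model="gpt-4o",  # example; use the model you noted earlier
-    messages=[{"role": "user", "content": "Say hello."}],
-)
-print(response.choices[0].message.content)
-```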
-
-## Configure the OpenAI connector [security-connect-to-openai-configure-the-openai-connector]
-
-Finally, configure the connector in {{kib}}:
-
-1. Log in to {{kib}}.
-2. Find **Connectors** in the navigation menu or use the global search field. Then click **Create Connector**, and select **OpenAI**.
-3. Provide a name for your connector, such as `OpenAI (GPT-4 Turbo Preview)`, to help keep track of the model and version you are using.
-4. Under **Select an OpenAI provider**, choose **OpenAI**.
-5. The **URL** field can be left as default.
-6. Under **Default model**, specify which [model](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4) you want to use.
-7. Paste the API key that you created into the corresponding field.
-8. Click **Save**.
-
-The following video demonstrates these steps.
-
-
diff --git a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md b/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md
deleted file mode 100644
index 9041a6d1e9..0000000000
--- a/raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md
+++ /dev/null
@@ -1,49 +0,0 @@
-# Triage alerts [security-triage-alerts-with-elastic-ai-assistant]
-
-Elastic AI Assistant can help you enhance and streamline your alert triage workflows by assessing multiple recent alerts in your environment and helping you interpret an alert and its context.
-
-When you view an alert in {{elastic-sec}}, details such as related documents, hosts, and users appear alongside a synopsis of the events that triggered the alert. This data provides a starting point for understanding a potential threat. AI Assistant can answer questions about this data and offer insights and actionable recommendations to remediate the issue.
-
-To enable AI Assistant to answer questions about alerts, you need to provide alert data as context for your prompts. You can either provide multiple alerts using the [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md) feature, or provide individual alerts directly.
-
-
-## Use AI Assistant to triage multiple alerts [ai-assistant-triage-alerts-knowledge-base]
-
-Enable the [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md) **Alerts** setting to send AI Assistant data for up to 500 alerts as context for each of your prompts. Use the slider on the **Knowledge Base** tab of the Security AI settings to select the number of alerts to send to AI Assistant.
-
-For more information, refer to [Knowledge Base](../../../solutions/security/ai/ai-assistant-knowledge-base.md).
-
-
-## Use AI Assistant to triage a specific alert [use-ai-assistant-to-triage-an-alert]
-
-Once you have chosen an alert to investigate:
-
-1. Click its **View details** button from the Alerts table.
-2. In the alert details flyout, click **Chat** to launch AI Assistant. Data related to the selected alert is automatically added to the prompt.
-3. Click **Alert (from summary)** to view which alert fields will be shared with AI Assistant.
-
- ::::{note}
- For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](../../../solutions/security/ai/ai-assistant.md).
- ::::
-
-4. (Optional) Click a quick prompt to use it as a starting point for your query, for example **Alert summarization**. Improve the quality of AI Assistant’s response by customizing the prompt and adding detail.
-
- Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions.
-
-5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, or request clarification. Its responses are conversational, not a static report.
-
-
-## Generate triage reports [generate-triage-reports]
-
-Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as:
-
-* “Generate a detailed report about this incident, including timeline, impact analysis, and response actions. Also, include a diagram of events.”
-* “Generate a summary of this incident/alert and include diagrams of events.”
-* “Provide more details on the mitigation strategies used.”
-
-After you review the report, click **Add to existing case** at the top of AI Assistant’s response. This allows you to save a record of the report and make it available to your team.
-
-:::{image} ../../../images/serverless-ai-triage-add-to-case.png
-:alt: An AI Assistant dialogue with the add to existing case button highlighted
-:screenshot:
-:::
diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml
index e5863b2def..b1a1ca6398 100644
--- a/raw-migrated-files/toc.yml
+++ b/raw-migrated-files/toc.yml
@@ -145,8 +145,6 @@ toc:
children:
- file: docs-content/serverless/intro.md
- file: docs-content/serverless/ai-assistant-knowledge-base.md
- - file: docs-content/serverless/attack-discovery.md
- - file: docs-content/serverless/connect-to-byo-llm.md
- file: docs-content/serverless/elasticsearch-differences.md
- file: docs-content/serverless/elasticsearch-explore-your-data.md
- file: docs-content/serverless/elasticsearch-http-apis.md
@@ -164,18 +162,12 @@ toc:
- file: docs-content/serverless/project-setting-data.md
- file: docs-content/serverless/project-settings-alerts.md
- file: docs-content/serverless/project-settings-content.md
- - file: docs-content/serverless/security-ai-assistant-esql-queries.md
- - file: docs-content/serverless/security-ai-assistant.md
- - file: docs-content/serverless/security-ai-for-security.md
- - file: docs-content/serverless/security-ai-use-cases.md
- - file: docs-content/serverless/security-ai-usecase-incident-reporting.md
+ - file: docs-content/serverless/security-about-rules.md
+ - file: docs-content/serverless/security-add-manage-notes.md
+ - file: docs-content/serverless/security-advanced-settings.md
+ - file: docs-content/serverless/security-agent-tamper-protection.md
- file: docs-content/serverless/security-automatic-import.md
- - file: docs-content/serverless/security-connect-to-azure-openai.md
- - file: docs-content/serverless/security-connect-to-bedrock.md
- - file: docs-content/serverless/security-connect-to-google-vertex.md
- - file: docs-content/serverless/security-connect-to-openai.md
- file: docs-content/serverless/security-detection-engine-overview.md
- - file: docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md
- file: docs-content/serverless/security-vuln-management-faq.md
- file: docs-content/serverless/spaces.md
- file: docs-content/serverless/what-is-observability-serverless.md
diff --git a/solutions/security/ai/attack-discovery.md b/solutions/security/ai/attack-discovery.md
index 59d92420db..86736d6d1f 100644
--- a/solutions/security/ai/attack-discovery.md
+++ b/solutions/security/ai/attack-discovery.md
@@ -6,43 +6,15 @@ mapped_urls:
# Attack Discovery
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/attack-discovery.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/attack-discovery.md
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$attack-discovery-generate-discoveries$$$
-
-$$$attack-discovery-what-info$$$
-
-$$$attack-discovery-workflows$$$
-
::::{warning}
This feature is in technical preview. It may change in the future, and you should exercise caution when using it in production environments. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of GA features.
::::
-
Attack Discovery leverages large language models (LLMs) to analyze alerts in your environment and identify threats. Each "discovery" represents a potential attack and describes relationships among multiple alerts to tell you which users and hosts are involved, how alerts correspond to the MITRE ATT&CK matrix, and which threat actor might be responsible. This can help make the most of each security analyst’s time, fight alert fatigue, and reduce your mean time to respond.
-For a demo, refer to the following video.
-
-::::{admonition}
-
-
-
-::::
+For a demo, refer to the following video (click to view).
+[](https://videos.elastic.co/watch/eT92arEbpRddmSM4JeyzdX?)
This page describes:
@@ -55,34 +27,34 @@ This page describes:
The `Attack Discovery: All` privilege allows you to use Attack Discovery.
-:::{image} ../../../images/security-attck-disc-rbac.png
-:alt: Attack Discovery's RBAC settings
-:::
+
+
+
## Generate discoveries [attack-discovery-generate-discoveries]
-When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [*AI Assistant*](/solutions/security/ai/ai-assistant.md). To get started:
+When you access Attack Discovery for the first time, you’ll need to select an LLM connector before you can analyze alerts. Attack Discovery uses the same LLM connectors as [AI Assistant](/solutions/security/ai/ai-assistant.md). To get started:
1. Click the **Attack Discovery** page from {{elastic-sec}}'s navigation menu.
2. Select an existing connector from the dropdown menu, or add a new one.
- ::::{admonition} Recommended models
- While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best.
+ :::{admonition} Recommended models
+ While Attack Discovery is compatible with many different models, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md) to see which models perform best.
- ::::
+ :::
- :::{image} ../../../images/security-attck-disc-select-model-empty.png
- :alt: attck disc select model empty
- :::
+ :::{image} ../../../images/security-attck-disc-select-model-empty.png
+ :alt: attck disc select model empty
+ :::
3. Once you’ve selected a connector, click **Generate** to start the analysis.
It may take from a few seconds up to several minutes to generate discoveries, depending on the number of alerts and the model you selected.
::::{important}
-By default, Attack Discovery analyzes up to 100 alerts within this timeframe, but you can expand this up to 500 by clicking the settings icon () next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error.
+By default, Attack Discovery analyzes up to 100 alerts within this timeframe, but you can expand this up to 500 by clicking the settings icon next to the model selection menu and adjusting the **Alerts** slider. Note that sending more alerts than your chosen LLM can handle may result in an error.
::::
@@ -116,7 +88,7 @@ Each discovery includes the following information describing the potential threa
There are several ways you can incorporate discoveries into your {{elastic-sec}} workflows:
* Click an entity’s name to open the user or host details flyout and view more details that may be relevant to your investigation.
-* Hover over an entity’s name to either add the entity to Timeline () or copy its field name and value to the clipboard ().
+* Hover over an entity’s name to either add the entity to Timeline or copy its field name and value to the clipboard.
* Click **Take action**, then select **Add to new case** or **Add to existing case** to add a discovery to a [case](/solutions/security/investigate/cases.md). This makes it easy to share the information with your team and other stakeholders.
* Click **Investigate in timeline** to explore the discovery in [Timeline](/solutions/security/investigate/timeline.md).
* Click **View in AI Assistant** to attach the discovery to a conversation with AI Assistant. You can then ask follow-up questions about the discovery or associated alerts.
diff --git a/solutions/security/ai/connect-to-amazon-bedrock.md b/solutions/security/ai/connect-to-amazon-bedrock.md
index 543f3c0888..1432cbee14 100644
--- a/solutions/security/ai/connect-to-amazon-bedrock.md
+++ b/solutions/security/ai/connect-to-amazon-bedrock.md
@@ -6,13 +6,6 @@ mapped_urls:
# Connect to Amazon Bedrock
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-bedrock.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-bedrock.md
-
This page provides step-by-step instructions for setting up an Amazon Bedrock connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure AWS, then configure the connector in {{kib}}.
::::{note}
@@ -55,20 +48,9 @@ First, configure an IAM policy with the necessary permissions:
4. Click **Next**. Name your policy.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/ek6NpHaj6u4keZyEjPWXcT?)
@@ -82,22 +64,9 @@ Next, assign the policy you just created to a new user:
4. In the **Permissions policies** field, search for the policy you created earlier, select it, and click **Next**.
5. Review the configuration then click **Create user**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
-
+The following video demonstrates these steps (click to watch).
+[](https://videos.elastic.co/watch/5BQb2P818SMddRo6gA79hd?)
### Create an access key [_create_an_access_key]
@@ -107,24 +76,12 @@ Create the access keys that will authenticate your Elastic connector:
2. Search for the user you just created, and click its name.
3. Go to the **Security credentials** tab.
4. Under **Access keys**, click **Create access key**.
-5. Select **Third-party service**, check the box under **Confirmation***, click ***Next**, then click **Create access key**.
+5. Select **Third-party service**, check the box under **Confirmation**, click **Next**, then click **Create access key**.
6. Click **Download .csv file** to download the key. Store it securely.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+[](https://videos.elastic.co/watch/8oXgP1fbaQCqjWUgncF9at?)
### Enable model access [_enable_model_access]
@@ -137,21 +94,9 @@ Make sure the supported Amazon Bedrock LLMs are enabled:
4. Check the boxes for **Claude** and/or **Claude Instant**, depending on which model or models you plan to use.
5. Click **Save changes**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/Z7zpHq4N9uvUxegBUMbXDj?)
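+
+With model access enabled, you can optionally verify everything end to end before moving on to {{kib}}. A minimal sketch using boto3 and the Claude 3 Sonnet model ID (illustrative only; assumes your AWS credentials are configured locally and that your region is `us-east-1`):
+
+```python
+import json
+
+import boto3
+
+# Illustrative end-to-end check: invoke Claude through Bedrock. Adjust
+# region_name and modelId to match your setup.
+client = boto3.client("bedrock-runtime", region_name="us-east-1")
+response = client.invoke_model(
+    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
+    body=json.dumps(
+        {
+            "anthropic_version": "bedrock-2023-05-31",
+            "max_tokens": 50,
+            "messages": [{"role": "user", "content": "Hello"}],
+        }
+    ),
+)
+print(json.loads(response["body"].read()))
+```
+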
## Configure the Amazon Bedrock connector [_configure_the_amazon_bedrock_connector]
@@ -159,18 +104,18 @@ The following video demonstrates these steps.
Finally, configure the connector in {{kib}}:
1. Log in to {{kib}}.
-2. . Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**.
+2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**.
3. Name your connector.
4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`.
5. (Optional) Add one of the following strings if you want to use a model other than the default:
- 1. For Haiku: `anthropic.claude-3-haiku-20240307-v1:0`
- 2. For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0`
- 3. For Opus: `anthropic.claude-3-opus-20240229-v1:0`
+ * For Haiku: `anthropic.claude-3-haiku-20240307-v1:0`
+ * For Sonnet: `anthropic.claude-3-sonnet-20240229-v1:0`
+ * For Opus: `anthropic.claude-3-opus-20240229-v1:0`
6. Enter the **Access Key** and **Secret** that you generated earlier, then click **Save**.
- Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md).
+Your LLM connector is now configured. For more information on using Elastic AI Assistant, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md).
::::{important}
@@ -178,17 +123,6 @@ If you’re using [provisioned throughput](https://docs.aws.amazon.com/bedrock/l
::::
-The following video demonstrates these steps.
+The following video demonstrates these steps (click to watch).
-::::{admonition}
-
-
-
-::::
+
+[](https://videos.elastic.co/watch/QJe4RcTJbp6S6m9CkReEXs?)
\ No newline at end of file
diff --git a/solutions/security/ai/connect-to-azure-openai.md b/solutions/security/ai/connect-to-azure-openai.md
index 892ec43960..e68a6f6c91 100644
--- a/solutions/security/ai/connect-to-azure-openai.md
+++ b/solutions/security/ai/connect-to-azure-openai.md
@@ -6,13 +6,6 @@ mapped_urls:
# Connect to Azure OpenAI
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-azure-openai.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-azure-openai.md
-
This page provides step-by-step instructions for setting up an Azure OpenAI connector for the first time. This connector type enables you to leverage large language models (LLMs) within {{kib}}. You’ll first need to configure Azure, then configure the connector in {{kib}}.
@@ -31,22 +24,9 @@ First, set up an Azure OpenAI deployment:
6. Configure your network settings, click **Next**, optionally add tags, then click **Next**.
7. Review your deployment settings, then click **Create**. When complete, select **Go to resource**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
-
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/7NEa5VkVJ67RHWBuK8qMXA?)
### Configure keys [_configure_keys]
@@ -55,21 +35,9 @@ Next, create access keys for the deployment:
1. From within your Azure OpenAI deployment, select **Click here to manage keys**.
2. Store your keys in a secure location.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/cQXw96XjaeF4RiB3V4EyTT?)
### Configure a model [_configure_a_model]
@@ -81,29 +49,18 @@ Now, set up the Azure OpenAI model:
3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`.
4. Set the model version to "Auto-update to default".
- ::::{important}
- The models available to you depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md).
- ::::
+ :::{important}
+ The models available to you depend on [region availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability). For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the [Large language model performance matrix](/solutions/security/ai/large-language-model-performance-matrix.md).
+ :::
5. Under **Deployment type**, select **Standard**.
6. Name your deployment.
7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits.
8. Click **Create**.
-The following video demonstrates these steps.
+The following video demonstrates these steps (click to watch).
-::::{admonition}
-
-
-
-::::
+
+[](https://videos.elastic.co/watch/PdadFyV1p1DbWRyCr95whT?)
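+
+Before configuring the connector, you can optionally confirm the deployment responds. A short sketch with the `openai` Python package (illustrative only; the endpoint, key, API version, and deployment name are placeholders for the values you created above):
+
+```python
+from openai import AzureOpenAI
+
+# Illustrative check: call the Azure OpenAI deployment created above.
+client = AzureOpenAI(
+    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
+    api_key="YOUR_API_KEY",
+    api_version="2024-02-01",
+)
+response = client.chat.completions.create(
+    model="YOUR_DEPLOYMENT_NAME",  # The deployment name, not the model family
+    messages=[{"role": "user", "content": "Hello"}],
+)
+print(response.choices[0].message.content)
+```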
@@ -125,17 +82,6 @@ Finally, configure the connector in {{kib}}:
6. Under **API key**, enter one of your API keys.
7. Click **Save & test**, then click **Run**.
-Your LLM connector is now configured. The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+Your LLM connector is now configured. The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/RQZVcnXHokC3RcV6ZB2pmF?)
\ No newline at end of file
diff --git a/solutions/security/ai/connect-to-google-vertex.md b/solutions/security/ai/connect-to-google-vertex.md
index 3e34d0ddea..e8090bb0f5 100644
--- a/solutions/security/ai/connect-to-google-vertex.md
+++ b/solutions/security/ai/connect-to-google-vertex.md
@@ -6,14 +6,7 @@ mapped_urls:
# Connect to Google Vertex
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/connect-to-vertex.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md
-
-This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a key, and finally configure the connector in your {{elastic-sec}} project.
+This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a service account key, and finally configure the connector in your {{elastic-sec}} project.
::::{important}
Before continuing, you should have an active project in one of Google Vertex AI’s [supported regions](https://cloud.google.com/vertex-ai/docs/general/locations#feature-availability).
@@ -26,20 +19,9 @@ Before continuing, you should have an active project in one of Google Vertex AI
1. Log in to the GCP console and navigate to **Vertex AI → Vertex AI Studio → Overview**.
2. If you’re new to Vertex AI, the **Get started with Vertex AI Studio** popup appears. Click **Vertex AI API**, then click **ENABLE**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/vFhtbiCZiKhvdZGy2FjyeT?)
::::{note}
@@ -57,21 +39,9 @@ For more information about enabling the Vertex AI API, refer to [Google’s docu
5. Under **Select a role**, select **Vertex AI User**, then click **CONTINUE**.
6. Click **Done**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/tmresYYiags2w2nTv3Gac8?)
## Generate a key [_generate_an_api_key]
@@ -81,21 +51,9 @@ The following video demonstrates these steps.
3. Go to the **KEYS** tab, click **ADD KEY**, then select **Create new key**.
4. Select **JSON**, then click **CREATE** to download the key. Keep it somewhere secure.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/hrcy3F9AodwhJcV1i2yqbG?)
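+
+To confirm the service account and key work before configuring the connector, you could run a quick test with the `google-cloud-aiplatform` Python SDK (illustrative only; the key path, project ID, region, and model name are placeholders):
+
+```python
+import os
+
+import vertexai
+from vertexai.generative_models import GenerativeModel
+
+# Illustrative check: authenticate with the downloaded service account key
+# and send a test prompt.
+os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/your-key.json"
+vertexai.init(project="YOUR_PROJECT_ID", location="us-central1")
+model = GenerativeModel("gemini-1.5-pro")
+print(model.generate_content("Hello").text)
+```
+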
## Configure the Google Gemini connector [_configure_the_google_gemini_connector]
@@ -111,17 +69,6 @@ Finally, configure the connector in your Elastic deployment:
7. Under **Authentication**, enter your credentials JSON.
8. Click **Save**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/8L2WPm2HKN1cH872Gs5uvL?)
\ No newline at end of file
diff --git a/solutions/security/ai/connect-to-openai.md b/solutions/security/ai/connect-to-openai.md
index b8acafb1ba..2434b76867 100644
--- a/solutions/security/ai/connect-to-openai.md
+++ b/solutions/security/ai/connect-to-openai.md
@@ -6,13 +6,6 @@ mapped_urls:
# Connect to OpenAI
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-connect-to-openai.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-openai.md
-
This page provides step-by-step instructions for setting up an OpenAI connector for the first time. This connector type enables you to leverage OpenAI’s large language models (LLMs) within {{kib}}. You’ll first need to create an OpenAI API key, then configure the connector in {{kib}}.
@@ -38,21 +31,9 @@ To generate an API key:
3. Name your key, select an OpenAI project, and set the desired permissions.
4. Click **Create secret key** and then copy and securely store the key. It will not be accessible after you leave this screen.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/vbD7fGBGgyxK4TRbipeacL?)
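+
+Before configuring the connector, you can optionally confirm the key works with a short test using the `openai` Python package (illustrative only; the key and model name are placeholders):
+
+```python
+from openai import OpenAI
+
+# Illustrative check: send a test prompt with the key created above.
+client = OpenAI(api_key="sk-REPLACE_ME")
+response = client.chat.completions.create(
+    model="gpt-4o",
+    messages=[{"role": "user", "content": "Hello"}],
+)
+print(response.choices[0].message.content)
+```
+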
## Configure the OpenAI connector [_configure_the_openai_connector]
@@ -68,17 +49,6 @@ To integrate with {{kib}}:
7. Paste the API key that you created into the corresponding field.
8. Click **Save**.
-The following video demonstrates these steps.
-
-::::{admonition}
-
-
-
-::::
+The following video demonstrates these steps (click to watch).
+
+[](https://videos.elastic.co/watch/BGaQ73KBJCzeqWoxXkQvy9?)
\ No newline at end of file
diff --git a/solutions/security/ai/connect-to-own-local-llm.md b/solutions/security/ai/connect-to-own-local-llm.md
index a6196d835c..5f235b29ad 100644
--- a/solutions/security/ai/connect-to-own-local-llm.md
+++ b/solutions/security/ai/connect-to-own-local-llm.md
@@ -6,13 +6,6 @@ mapped_urls:
# Connect to your own local LLM
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/connect-to-byo-llm.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/connect-to-byo-llm.md
-
This page provides instructions for setting up a connector to a large language model (LLM) of your choice using LM Studio. This allows you to use your chosen model within {{elastic-sec}}. You’ll first need to set up a reverse proxy to communicate with {{elastic-sec}}, then set up LM Studio on a server, and finally configure the connector in your Elastic deployment. [Learn more about the benefits of using a local LLM](https://www.elastic.co/blog/ai-assistant-locally-hosted-models).
This example uses a single server hosted in GCP to run the following components:
@@ -46,7 +39,7 @@ You need to set up a reverse proxy to enable communication between LM Studio and
The following is an example Nginx configuration file:
-```txt
+```nginx
server {
listen 80;
listen [::]:80;
@@ -176,21 +169,9 @@ If your model uses NVIDIA drivers, you can check the GPU performance with the `s
### Option 2: load a model using the GUI [_option_2_load_a_model_using_the_gui]
-Refer to the following video to see how to load a model using LM Studio’s GUI. You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**.
-
-::::{admonition}
-
-
-
-::::
+Refer to the following video to see how to load a model using LM Studio’s GUI (click to watch). You can change the **port** setting, which is referenced in the Nginx configuration file. Note that the **GPU offload** was set to **Max**.
+
+[](https://videos.elastic.co/watch/c4AxH8d9tWMnwNp5J6bcfX?)
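+
+Once a model is loaded, you can check that it is reachable through the reverse proxy before configuring the connector. LM Studio serves an OpenAI-compatible API, so a sketch like the following should work (illustrative only; the hostname and model name are placeholders, and the path and port should match your Nginx configuration):
+
+```python
+import requests
+
+# Illustrative check: call LM Studio's OpenAI-compatible endpoint through
+# the reverse proxy. The "model" field is typically ignored in favor of the
+# model currently loaded in LM Studio.
+response = requests.post(
+    "http://your-server-hostname/v1/chat/completions",
+    json={
+        "model": "local-model",  # Placeholder
+        "messages": [{"role": "user", "content": "Hello"}],
+    },
+    timeout=120,
+)
+print(response.json()["choices"][0]["message"]["content"])
+```
+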
## (Optional) Collect logs using Elastic’s Custom Logs integration [_optional_collect_logs_using_elastics_custom_logs_integration]
diff --git a/solutions/security/ai/generate-customize-learn-about-esorql-queries.md b/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
index d291b563d4..cb106d4139 100644
--- a/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
+++ b/solutions/security/ai/generate-customize-learn-about-esorql-queries.md
@@ -6,13 +6,6 @@ mapped_urls:
# Generate, customize, and learn about ES|QL queries
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/esql-queries-assistant.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-assistant-esql-queries.md
-
Elastic AI Assistant can help you learn about and leverage the Elasticsearch Query Language ({{esql}}) in many ways, including:
* **Education and training**: AI Assistant can serve as a powerful {{esql}} learning tool. Ask it for examples, explanations of complex queries, and best practices.
diff --git a/solutions/security/ai/identify-investigate-document-threats.md b/solutions/security/ai/identify-investigate-document-threats.md
index 065faf29c4..85c0a4d8c9 100644
--- a/solutions/security/ai/identify-investigate-document-threats.md
+++ b/solutions/security/ai/identify-investigate-document-threats.md
@@ -6,13 +6,6 @@ mapped_urls:
# Identify, investigate, and document threats
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/attack-discovery-ai-assistant-incident-reporting.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-usecase-incident-reporting.md
-
% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
$$$use-case-incident-reporting-create-a-case-using-ai-assistant$$$
@@ -67,7 +60,7 @@ At any point in a conversation with AI Assistant, you can add data, narrative su
## Generate reports [use-case-incident-reporting-create-a-case-using-ai-assistant]
-From the AI Assistant dialog window, click **Add to case** () next to a message to add the information in that message to a [case](/solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders.
+From the AI Assistant dialog window, click **Add to case** () next to a message to add the information in that message to a [case](/solutions/security/investigate/cases.md). Cases help centralize relevant details in one place for easy sharing with stakeholders.
If you add a message that contains a discovery to a case, AI Assistant automatically adds the attack summary and all associated alerts to the case. You can also add AI Assistant messages that contain remediation steps and relevant data to the case.
diff --git a/solutions/security/ai/large-language-model-performance-matrix.md b/solutions/security/ai/large-language-model-performance-matrix.md
index 1ab04f1470..9e95b9860c 100644
--- a/solutions/security/ai/large-language-model-performance-matrix.md
+++ b/solutions/security/ai/large-language-model-performance-matrix.md
@@ -6,13 +6,6 @@ mapped_urls:
# Large language model performance matrix
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/llm-performance-matrix.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-llm-performance-matrix.md
-
This page describes the performance of various large language models (LLMs) for different use cases in {{elastic-sec}}, based on our internal testing. To learn more about these use cases, refer to [Attack discovery](/solutions/security/ai/attack-discovery.md) or [AI Assistant](/solutions/security/ai/ai-assistant.md).
::::{note}
diff --git a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
index d66d9fbb60..1805aa9eaa 100644
--- a/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
+++ b/solutions/security/ai/set-up-connectors-for-large-language-models-llm.md
@@ -6,13 +6,6 @@ mapped_urls:
# Set up connectors for large language models (LLM)
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/llm-connector-guides.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-llm-connector-guides.md
-
This section contains instructions for setting up connectors for LLMs so you can use [Elastic AI Assistant](/solutions/security/ai/ai-assistant.md) and [Attack discovery](/solutions/security/ai/attack-discovery.md).
Setup guides are available for the following LLM providers:
diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md
index c1e532aa8f..84cd757f6e 100644
--- a/solutions/security/ai/triage-alerts.md
+++ b/solutions/security/ai/triage-alerts.md
@@ -6,13 +6,6 @@ mapped_urls:
# Triage alerts
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-triage.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-triage-alerts-with-elastic-ai-assistant.md
-
Elastic AI Assistant can help you enhance and streamline your alert triage workflows by assessing multiple recent alerts in your environment, and helping you interpret an alert and its context.
When you view an alert in {{elastic-sec}}, details such as related documents, hosts, and users appear alongside a synopsis of the events that triggered the alert. This data provides a starting point for understanding a potential threat. AI Assistant can answer questions about this data and offer insights and actionable recommendations to remediate the issue.
@@ -35,20 +28,20 @@ Once you have chosen an alert to investigate:
2. In the alert details flyout, click **Chat** to launch the AI assistant. Data related to the selected alert is automatically added to the prompt.
3. Click **Alert (from summary)** to view which alert fields will be shared with AI Assistant.
- ::::{note}
- For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md).
- ::::
+ :::{note}
+ For more information about selecting which fields to send, and to learn about anonymizing your data, refer to [AI Assistant](/solutions/security/ai/ai-assistant.md).
+ :::
4. (Optional) Click a quick prompt to use it as a starting point for your query, for example **Alert summarization**. Improve the quality of AI Assistant’s response by customizing the prompt and adding detail.
- Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions.
+ Once you’ve submitted your query, AI Assistant will process the information and provide a detailed response. Depending on your prompt and the alert data that you included, its response can include a thorough analysis of the alert that highlights key elements such as the nature of the potential threat, potential impact, and suggested response actions.
5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report.
## Generate triage reports [ai-triage-reportgen]
-Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once the AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as:
+Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as:
* “Generate a detailed report about this incident including timeline, impact analysis, and response actions. Also, include a diagram of events.”
* “Generate a summary of this incident/alert and include diagrams of events.”
diff --git a/solutions/security/ai/use-cases.md b/solutions/security/ai/use-cases.md
index 8b21da9838..6198771029 100644
--- a/solutions/security/ai/use-cases.md
+++ b/solutions/security/ai/use-cases.md
@@ -6,13 +6,6 @@ mapped_urls:
# Use cases
-% What needs to be done: Lift-and-shift
-
-% Use migrated content from existing pages that map to this page:
-
-% - [x] ./raw-migrated-files/security-docs/security/assistant-use-cases.md
-% - [ ] ./raw-migrated-files/docs-content/serverless/security-ai-use-cases.md
-
The guides in this section describe use cases for AI Assistant and Attack discovery. Refer to them for examples of each tool’s individual capabilities and of what they can do together.
* [Triage alerts](/solutions/security/ai/triage-alerts.md)