From 2382ebb2b42bd2def14da8d4a469e5056e700563 Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein Date: Thu, 13 Nov 2025 16:27:58 -0800 Subject: [PATCH 1/3] AI-feedback updates to Security AI docs --- solutions/security/ai/ai-assistant.md | 15 ++++--- .../security/ai/connect-to-amazon-bedrock.md | 2 +- .../security/ai/connect-to-google-vertex.md | 7 ---- solutions/security/ai/triage-alerts.md | 41 ++++++++++++++++++- 4 files changed, 51 insertions(+), 14 deletions(-) diff --git a/solutions/security/ai/ai-assistant.md b/solutions/security/ai/ai-assistant.md index dbdd51f343..9605d3b992 100644 --- a/solutions/security/ai/ai-assistant.md +++ b/solutions/security/ai/ai-assistant.md @@ -395,7 +395,14 @@ To modify Anonymization settings, you need the **Elastic AI Assistant: All** pri :::: -The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated. +When you send alert data to AI Assistant, you may want to obfuscate sensitive information before it reaches the LLM provider. + +The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated (replaced by placeholders), so AI Assistant won't have access to their actual values. + +This can help with: +- **Compliance**: Avoid sending PII or sensitive data to third-party LLM providers. +- **Privacy**: Protect internal data while still enabling AI analysis. +- **Policy**: Meet your organization's data handling requirements. 
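To illustrate the idea behind the anonymization behavior described above, here is a minimal Python sketch. This is not Elastic's implementation (AI Assistant handles all of this internally); the field names and placeholder format are hypothetical, chosen only to show how an allow-list plus a placeholder mapping can keep real values away from the LLM while still letting responses be de-obfuscated for you:

```python
# Illustrative sketch only -- not Elastic's implementation. Field names and
# the placeholder format below are hypothetical assumptions.

def anonymize_event(event, allowed, anonymized):
    """Keep only allowed fields; replace anonymized values with placeholders."""
    replacements = {}  # placeholder -> original value, used to de-obfuscate replies
    output = {}
    for field, value in event.items():
        if field not in allowed:
            continue  # excluded fields never reach the LLM at all
        if field in anonymized:
            placeholder = f"{field}_{len(replacements) + 1}"
            replacements[placeholder] = value
            output[field] = placeholder  # value is obfuscated, field is still present
        else:
            output[field] = value
    return output, replacements

event = {"user.name": "jsmith", "event.action": "logon-failed", "host.ip": "10.0.0.5"}
safe, mapping = anonymize_event(
    event,
    allowed={"user.name", "event.action"},
    anonymized={"user.name"},
)
# `safe` contains a placeholder instead of "jsmith"; `host.ip` is dropped entirely.
```

The key property this models: an **Allowed** + **Anonymized** field still contributes context (the LLM can reason about "the same user" across events via the stable placeholder) without its actual value ever leaving your environment.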
::::{note} You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings (![Settings icon](/solutions/images/security-icon-settings.png "title =20x20")) button next to the model selection dropdown menu. @@ -406,9 +413,9 @@ You can access anonymization settings directly from the **Attack Discovery** pag :screenshot: ::: -The fields on this list are among those most likely to provide relevant context to AI Assistant. Fields with **Allowed** toggled on are included. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated. +These fields are among those most likely to provide relevant context to AI Assistant, and are included by default. -The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. It doesn’t control what gets obfuscated — that’s determined by the anonymization settings. It also doesn’t affect how event fields appear *before* being sent to AI Assistant. Instead, it controls how fields that were already sent and obfuscated appear to you. +The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. In other words, it controls how fields that were already sent and obfuscated appear to you. It doesn’t control what gets obfuscated — that’s determined by the anonymization settings. When you include a particular event as context, such as an alert from the Alerts page, you can adjust anonymization behavior for the specific event. Be sure the anonymization behavior meets your specifications before sending a message with the event attached. @@ -434,6 +441,4 @@ In addition to practical advice, AI Assistant can offer conceptual advice, tips, ## Learn more -- For more information about how AI Assistant works in Observability and Search, refer to [{{obs-ai-assistant}}](/solutions/observability/observability-ai-assistant.md). 
- The capabilities and ways to interact with AI Assistant can differ for each solution. For more information about how AI Assistant works in Observability and Search, refer to [{{obs-ai-assistant}}](/solutions/observability/observability-ai-assistant.md). diff --git a/solutions/security/ai/connect-to-amazon-bedrock.md b/solutions/security/ai/connect-to-amazon-bedrock.md index ab2fcc30b8..862fff26aa 100644 --- a/solutions/security/ai/connect-to-amazon-bedrock.md +++ b/solutions/security/ai/connect-to-amazon-bedrock.md @@ -102,7 +102,7 @@ Finally, configure the connector in {{kib}}: 2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**. 3. Name your connector. 4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`. -5. (Optional) Add one of the following strings if you want to use a model other than the default. Note that these URLs should have a prefix of `us.` or `eu.`, depending on your region, for example `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0`. +5. (Optional) Add one of the following strings if you want to use a model other than the default. Note that these model IDs should have a prefix of `us.` or `eu.`, depending on your region, for example `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0`. 
* Sonnet 3.5: `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0` * Sonnet 3.5 v2: `us.anthropic.claude-3-5-sonnet-20241022-v2:0` or `eu.anthropic.claude-3-5-sonnet-20241022-v2:0` diff --git a/solutions/security/ai/connect-to-google-vertex.md b/solutions/security/ai/connect-to-google-vertex.md index a72513119b..a46cfdfdf9 100644 --- a/solutions/security/ai/connect-to-google-vertex.md +++ b/solutions/security/ai/connect-to-google-vertex.md @@ -13,13 +13,6 @@ products: # Connect to Google Vertex -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [x] ./raw-migrated-files/security-docs/security/connect-to-vertex.md -% - [ ] ./raw-migrated-files/docs-content/serverless/security-connect-to-google-vertex.md - This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a key, and finally configure the connector in your {{elastic-sec}} project. ::::{important} diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md index a86e43cd43..44436b08d2 100644 --- a/solutions/security/ai/triage-alerts.md +++ b/solutions/security/ai/triage-alerts.md @@ -45,7 +45,6 @@ Once you have chosen an alert to investigate: 5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report. - ## Generate triage reports [ai-triage-reportgen] Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. 
You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as: @@ -60,3 +59,43 @@ After you review the report, click **Add to existing case** at the top of AI Ass :alt: An AI Assistant dialogue with the add to existing case button highlighted :screenshot: ::: + + +## Example alert triage workflow + +This section shows an example workflow for triaging a specific alert. + +**Scenario:** You are investigating an alert: "Multiple Failed Logins Followed by Success - user: jsmith" + +**Step 1: Open Alert and Generate Initial Analysis** +1. From the **Alerts** table, click **View details**. +2. Click **Chat** to open AI Assistant. The alert information is automatically attached. +3. Click the **Alert summarization** quick prompt. AI Assistant shares an initial alert assessment. + +**Step 2: Assess Criticality and Context** +Ask AI Assistant: +- "Is user jsmith typically logging in from [this IP/location]?" +- "Are there other suspicious activities from this user in the last 24 hours?" +- "What's the risk score for the source IP?" + +**Step 3: Investigate Related Activity** +If AI Assistant flags concerns, investigate further. Ask AI Assistant to: +- "Generate an {{esql}} query to find all recent activity from user jsmith." +- "Generate an {{esql}} query to find other users logging in from this IP." + +**Step 4: Make a Determination** +Based on your initial AI-assisted analysis, determine whether you're dealing with a potential threat: + +- **False Positive**: The user was traveling; this is expected behavior. + - Immediate action: Add note to alert, close as false positive. + - Future action: Add a rule exception to prevent similar alerts. + +- **True Positive**: Behavior indicates a potential attack.
+In response to a potential credential compromise, immediately: + - Escalate according to your organization's incident response plan. + - Create a case to track the investigation. + +**Step 5: Document Your Findings** +1. From AI Assistant, click **Add to case** on key messages. +2. Go to **Cases** and add your case notes. +3. Update alert status. \ No newline at end of file From fabef51d6f5fd7525d7a63d1f4bd57dead7b7984 Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein <91905639+benironside@users.noreply.github.com> Date: Fri, 14 Nov 2025 08:45:32 -0800 Subject: [PATCH 2/3] Update solutions/security/ai/triage-alerts.md Co-authored-by: Nastasha Solomon <79124755+nastasha-solomon@users.noreply.github.com> --- solutions/security/ai/triage-alerts.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md index 44436b08d2..c182ba681d 100644 --- a/solutions/security/ai/triage-alerts.md +++ b/solutions/security/ai/triage-alerts.md @@ -98,4 +98,4 @@ In response to a potential credential compromise, immediately: **Step 5: Document Your Findings** 1. From AI Assistant, click **Add to case** on key messages. 2. Go to **Cases** and add your case notes. -3. Update alert status. \ No newline at end of file +3. Go back to the alert and change its status to `Acknowledged`.
\ No newline at end of file From 4bfc80947c38401eb95cef12a159c2d8c14ccc96 Mon Sep 17 00:00:00 2001 From: Benjamin Ironside Goldstein Date: Mon, 17 Nov 2025 12:49:10 -0800 Subject: [PATCH 3/3] Incorporates Nastasha's review --- solutions/security/ai/triage-alerts.md | 28 +++++++++++++++++--------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/solutions/security/ai/triage-alerts.md b/solutions/security/ai/triage-alerts.md index c182ba681d..0546e5cf5c 100644 --- a/solutions/security/ai/triage-alerts.md +++ b/solutions/security/ai/triage-alerts.md @@ -67,35 +67,45 @@ This section shows an example workflow for triaging a specific alert. **Scenario:** You are investigating an alert: "Multiple Failed Logins Followed by Success - user: jsmith" -**Step 1: Open Alert and Generate Initial Analysis** +:::::{stepper} + +::::{step} Open Alert and Generate Initial Analysis 1. From the **Alerts** table, click **View details**. 2. Click **Chat** to open AI Assistant. The alert information is automatically attached. 3. Click the **Alert summarization** quick prompt. AI Assistant shares an initial alert assessment. +:::: -**Step 2: Assess Criticality and Context** +::::{step} Assess Criticality and Context Ask AI Assistant: + - "Is user jsmith typically logging in from [this IP/location]?" - "Are there other suspicious activities from this user in the last 24 hours?" - "What's the risk score for the source IP?" +:::: -**Step 3: Investigate Related Activity** +::::{step} Investigate Related Activity If AI Assistant flags concerns, investigate further. Ask AI Assistant to: + - "Generate an {{esql}} query to find all recent activity from user jsmith." - "Generate an {{esql}} query to find other users logging in from this IP." +:::: -**Step 4: Make a Determination** +::::{step} Make a Determination Based on your initial AI-assisted analysis, determine whether you're dealing with a potential threat: - **False Positive**: The user was traveling; this is expected behavior.
- Immediate action: Add note to alert, close as false positive. - Future action: Add a rule exception to prevent similar alerts. - -- **True Positive**: Behavior indicates a potential attack. -In response to a potential credential compromise, immediately: + +- **True Positive**: Behavior indicates a potential attack. In response: - Escalate according to your organization's incident response plan. - Create a case to track the investigation. +:::: -**Step 5: Document Your Findings** +::::{step} Document Your Findings 1. From AI Assistant, click **Add to case** on key messages. 2. Go to **Cases** and add your case notes. 3. Go back to the alert and change its status to `Acknowledged`. +:::: + +:::::
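The {{esql}} prompts in the "Investigate Related Activity" step above might yield queries along the following lines. This is a hedged sketch, not output from AI Assistant: the index pattern (`logs-*`) and time window are assumptions, and you should verify field names like `user.name` against your own mappings. A small Python helper is shown because hand-interpolating values into query strings invites quoting mistakes:

```python
# Hypothetical sketch: index pattern and field names are assumptions, not a
# guaranteed match for your data. AI Assistant generates similar queries when
# prompted to find recent activity for a user.

def recent_activity_query(user: str, hours: int = 24) -> str:
    """Build an ES|QL query for all recent activity by one user."""
    # Escape embedded double quotes instead of pasting the value in raw.
    escaped = user.replace('"', '\\"')
    return (
        "FROM logs-* "
        f'| WHERE user.name == "{escaped}" AND @timestamp > NOW() - {hours} hours '
        "| SORT @timestamp DESC "
        "| LIMIT 100"
    )

print(recent_activity_query("jsmith"))
```

You could run such a query from Discover or the {{esql}} query API; either way, treat an AI-generated query as a starting point and confirm the fields and index pattern before relying on the results.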