Merged
15 changes: 10 additions & 5 deletions solutions/security/ai/ai-assistant.md
@@ -395,7 +395,14 @@ To modify Anonymization settings, you need the **Elastic AI Assistant: All** pri

::::

When you send alert data to AI Assistant, you may want to obfuscate sensitive information before it reaches the LLM provider.

The **Anonymization** tab of the Security AI settings menu allows you to define default data anonymization behavior for events you send to AI Assistant. Fields with **Allowed** toggled on are included in events provided to AI Assistant. **Allowed** fields with **Anonymized** set to **Yes** are included, but with their values obfuscated (replaced by placeholders), so AI Assistant won't have access to their actual values.

This can help with:
- **Compliance**: Avoid sending PII or sensitive data to third-party LLM providers.
- **Privacy**: Protect internal data while still enabling AI analysis.
- **Policy**: Meet your organization's data handling requirements.
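
Conceptually, anonymization replaces the values of **Anonymized** fields with consistent placeholders before the event leaves your environment, while disallowed fields are dropped entirely. The following minimal Python sketch illustrates that idea only; it is not Elastic's actual implementation, and the field names and placeholder format are made up for illustration.

```python
# Illustrative sketch of placeholder-based anonymization (NOT Elastic's
# actual implementation). Allowed fields are forwarded; anonymized fields
# are forwarded with their values replaced by placeholders; all other
# fields are dropped before the event is sent.

def anonymize_event(event, allowed, anonymized):
    replacements = {}  # placeholder -> original value, kept locally
    out = {}
    for field, value in event.items():
        if field not in allowed:
            continue  # field is not sent at all
        if field in anonymized:
            placeholder = f"{{{{{field}_{len(replacements) + 1}}}}}"
            replacements[placeholder] = value
            out[field] = placeholder
        else:
            out[field] = value
    return out, replacements

event = {"user.name": "jsmith", "host.name": "web-01", "process.pid": 4321}
sent, mapping = anonymize_event(
    event,
    allowed={"user.name", "host.name"},
    anonymized={"user.name"},
)
print(sent)  # user.name obfuscated, host.name plaintext, process.pid dropped
```

The placeholder-to-value mapping stays local, which is what lets the UI later show you either the obfuscated or the plaintext version of what was sent.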

::::{note}
You can access anonymization settings directly from the **Attack Discovery** page by clicking the settings (![Settings icon](/solutions/images/security-icon-settings.png "title =20x20")) button next to the model selection dropdown menu.
@@ -406,9 +413,9 @@ You can access anonymization settings directly from the **Attack Discovery** pag
:screenshot:
:::

These fields are among those most likely to provide relevant context to AI Assistant, and are included by default.

The **Show anonymized** toggle controls whether you see the obfuscated or plaintext versions of the fields you sent to AI Assistant. In other words, it controls how fields that were already sent and obfuscated appear to you. It doesn’t control what gets obfuscated — that’s determined by the anonymization settings.

When you include a particular event as context, such as an alert from the Alerts page, you can adjust anonymization behavior for the specific event. Be sure the anonymization behavior meets your specifications before sending a message with the event attached.

@@ -434,6 +441,4 @@ In addition to practical advice, AI Assistant can offer conceptual advice, tips,

## Learn more

The capabilities and ways to interact with AI Assistant can differ for each solution. For more information about how AI Assistant works in Observability and Search, refer to [{{obs-ai-assistant}}](/solutions/observability/observability-ai-assistant.md).
2 changes: 1 addition & 1 deletion solutions/security/ai/connect-to-amazon-bedrock.md
@@ -102,7 +102,7 @@ Finally, configure the connector in {{kib}}:
2. Find the **Connectors** page in the navigation menu or use the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). Then click **Create Connector**, and select **Amazon Bedrock**.
3. Name your connector.
4. (Optional) Configure the Amazon Bedrock connector to use a different AWS region where Anthropic models are supported by editing the **URL** field, for example by changing `us-east-1` to `eu-central-1`.
5. (Optional) Add one of the following strings if you want to use a model other than the default. Note that these model IDs should have a prefix of `us.` or `eu.`, depending on your region, for example `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0`.

* Sonnet 3.5: `us.anthropic.claude-3-5-sonnet-20240620-v1:0` or `eu.anthropic.claude-3-5-sonnet-20240620-v1:0`
* Sonnet 3.5 v2: `us.anthropic.claude-3-5-sonnet-20241022-v2:0` or `eu.anthropic.claude-3-5-sonnet-20241022-v2:0`
7 changes: 0 additions & 7 deletions solutions/security/ai/connect-to-google-vertex.md
@@ -13,13 +13,6 @@ products:

# Connect to Google Vertex

This page provides step-by-step instructions for setting up a Google Vertex AI connector for the first time. This connector type enables you to leverage Vertex AI’s large language models (LLMs) within {{elastic-sec}}. You’ll first need to enable Vertex AI, then generate a key, and finally configure the connector in your {{elastic-sec}} project.

::::{important}
41 changes: 40 additions & 1 deletion solutions/security/ai/triage-alerts.md
@@ -45,7 +45,6 @@ Once you have chosen an alert to investigate:

5. (Optional) Ask AI Assistant follow-up questions, provide additional information for further analysis, and request clarification. The response is not a static report.


## Generate triage reports [ai-triage-reportgen]

Elastic AI Assistant can streamline the documentation and report generation process by providing clear records of security incidents, their scope and impact, and your remediation efforts. You can use AI Assistant to create summaries or reports for stakeholders that include key event details, findings, and diagrams. Once AI Assistant has finished analyzing one or more alerts, you can generate reports by using prompts such as:
@@ -60,3 +59,43 @@ After you review the report, click **Add to existing case** at the top of AI Ass
:alt: An AI Assistant dialogue with the add to existing case button highlighted
:screenshot:
:::


## Example alert triage workflow

This section shows an example workflow for triaging a specific alert.

**Scenario:** You are investigating an alert: "Multiple Failed Logins Followed by Success - user: jsmith"

> **Contributor:** This might be a good place to use the stepper component: https://elastic.github.io/docs-builder/syntax/stepper/

**Step 1: Open Alert and Generate Initial Analysis**
1. From the **Alerts** table, click **View details**.
2. Click **Chat** to open AI Assistant. The alert information is automatically attached.
3. Click the **Alert summarization** quick prompt. AI Assistant provides an initial alert assessment.

**Step 2: Assess Criticality and Context**
Ask AI Assistant:
> **Contributor:** Do users need to ask the assistant the following questions in any particular order? Also, do they need to ask all of these questions or just some?
>
> **Contributor Author:** No particular order, just examples of what users might consider asking to get more info.

- "Is user jsmith typically logging in from [this IP/location]?"
- "Are there other suspicious activities from this user in the last 24 hours?"
- "What's the risk score for the source IP?"

**Step 3: Investigate Related Activity**
If AI Assistant flags concerns, investigate further. Ask AI Assistant to:
- "Generate an {{esql}} query to find all recent activity from user jsmith."
- "Generate an {{esql}} query to find other users logging in from this IP."
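
For reference, a query generated for the first prompt might look similar to the following {{esql}} sketch. The index pattern and field names here are assumptions based on common ECS mappings; the query AI Assistant actually generates will vary with your data.

```esql
FROM logs-*
| WHERE user.name == "jsmith" AND @timestamp > NOW() - 24 hours
| KEEP @timestamp, event.action, source.ip, host.name
| SORT @timestamp DESC
| LIMIT 100
```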

**Step 4: Make a Determination**
Based on your initial AI-assisted analysis, determine whether you're dealing with a potential threat:

- **False Positive**: The user was traveling; this is expected behavior.
  - Immediate action: Add a note to the alert and close it as a false positive.
  - Future action: Add a rule exception to prevent similar alerts.

- **True Positive**: Behavior indicates a potential attack. In response to a potential credential compromise, immediately:
  - Escalate according to your organization's incident response plan.
  - Create a case to track the investigation.

**Step 5: Document Your Findings**
1. From AI Assistant, click **Add to case** on key messages.
2. Go to **Cases** and add your case notes.
3. Update the alert's status.