diff --git a/explore-analyze/machine-learning/aiops-labs.md b/explore-analyze/machine-learning/aiops-labs.md deleted file mode 100644 index 8f401599c5..0000000000 --- a/explore-analyze/machine-learning/aiops-labs.md +++ /dev/null @@ -1,22 +0,0 @@ ---- -mapped_urls: - - https://www.elastic.co/guide/en/kibana/current/xpack-ml-aiops.html - - https://www.elastic.co/guide/en/serverless/current/observability-machine-learning.html ---- - -# AIOps Labs - -% What needs to be done: Align serverless/stateful - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/kibana/kibana/xpack-ml-aiops.md -% - [ ] ./raw-migrated-files/docs-content/serverless/observability-machine-learning.md - -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): - -$$$change-point-detection$$$ - -$$$log-pattern-analysis$$$ - -$$$log-rate-analysis$$$ \ No newline at end of file diff --git a/explore-analyze/machine-learning/aiops-labs/observability-aiops-analyze-spikes.md b/explore-analyze/machine-learning/aiops-labs/observability-aiops-analyze-spikes.md deleted file mode 100644 index 83dfd1dedc..0000000000 --- a/explore-analyze/machine-learning/aiops-labs/observability-aiops-analyze-spikes.md +++ /dev/null @@ -1,58 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-aiops-analyze-spikes.html ---- - -# Analyze log spikes and drops [observability-aiops-analyze-spikes] - -{{obs-serverless}} provides built-in log rate analysis capabilities, based on advanced statistical methods, to help you find and investigate the causes of unusual spikes or drops in log rates. - -To analyze log spikes and drops: - -1. In your {{obs-serverless}} project, go to **Machine learning** → **Log rate analysis**. -2. Choose a data view or saved search to access the log data you want to analyze. -3. In the histogram chart, click a spike (or drop) and then run the analysis. 
- - :::{image} ../../../images/serverless-log-rate-histogram.png - :alt: Histogram showing log spikes and drops - :class: screenshot - ::: - - When the analysis runs, it identifies statistically significant field-value combinations that contribute to the spike or drop, and then displays them in a table: - - :::{image} ../../../images/serverless-log-rate-analysis-results.png - :alt: Histogram showing log spikes and drops - :class: screenshot - ::: - - Notice that you can optionally turn on **Smart grouping** to summarize the results into groups. You can also click **Filter fields** to remove fields that are not relevant. - - The table shows an indicator of the level of impact and a sparkline showing the shape of the impact in the chart. - -4. Select a row to display the impact of the field on the histogram chart. -5. From the **Actions** menu in the table, you can choose to view the field in **Discover**, view it in [Log Pattern Analysis](#log-pattern-analysis), or copy the table row information to the clipboard as a query filter. - -To pin a table row, click the row, then move the cursor to the histogram chart. It displays a tooltip with exact count values for the pinned field which enables closer investigation. - -Brushes in the chart show the baseline time range and the deviation in the analyzed data. You can move the brushes to redefine both the baseline and the deviation and rerun the analysis with the modified values. - - -## Log pattern analysis [log-pattern-analysis] - -Use log pattern analysis to find patterns in unstructured log messages and examine your data. When you run a log pattern analysis, it performs categorization analysis on a selected field, creates categories based on the data, and then displays them together in a chart. The chart shows the distribution of each category and an example document that matches the category. Log pattern analysis is useful when you want to examine how often different types of logs appear in your data set. 
It also helps you group logs in ways that go beyond what you can achieve with a terms aggregation.

To run log pattern analysis:

1. Follow the steps under [Analyze log spikes and drops](#observability-aiops-analyze-spikes) to run a log rate analysis.
2. From the **Actions** menu, choose **View in Log Pattern Analysis**.
3. Select a category field and optionally apply any filters that you want.
4. Click **Run pattern analysis**.

   The results of the analysis are shown in a table:

   :::{image} ../../../images/serverless-log-pattern-analysis.png
   :alt: Log pattern analysis of the message field
   :class: screenshot
   :::

5. From the **Actions** menu, click the plus (or minus) icon to open **Discover** and show (or filter out) the given category there, which helps you to further examine your log messages.

diff --git a/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md b/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md
deleted file mode 100644
index ce8899cf53..0000000000
--- a/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md
+++ /dev/null
@@ -1,216 +0,0 @@
---
mapped_pages:
  - https://www.elastic.co/guide/en/serverless/current/observability-aiops-detect-anomalies.html
---

# Detect anomalies [observability-aiops-detect-anomalies]

::::{admonition} Required role
:class: note

The **Editor** role or higher is required to create, run, and view {{anomaly-job}}s. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles).

::::

The anomaly detection feature in {{obs-serverless}} automatically models the normal behavior of your time series data — learning trends, periodicity, and more — in real time to identify anomalies, streamline root cause analysis, and reduce false positives.

To set up anomaly detection, you create and run anomaly detection jobs.
Anomaly detection jobs use proprietary {{ml}} algorithms to detect anomalous events or patterns, such as: - -* Anomalies related to temporal deviations in values, counts, or frequencies -* Anomalies related to unusual locations in geographic data -* Statistical rarity -* Unusual behaviors for a member of a population - -To learn more about anomaly detection algorithms, refer to the [{{ml}}](../anomaly-detection/ml-ad-algorithms.md) documentation. Note that the {{ml}} documentation may contain details that are not valid when using a serverless project. - -::::{admonition} Some terms you might need to know -:class: note - -A *datafeed* retrieves time series data from {{es}} and provides it to an anomaly detection job for analysis. - -The job uses *buckets* to divide the time series into batches for processing. For example, a job may use a bucket span of 1 hour. - -Each {{anomaly-job}} contains one or more *detectors*, which define the type of analysis that occurs (for example, `max`, `average`, or `rare` analytical functions) and the fields that are analyzed. Some of the analytical functions look for single anomalous data points. For example, `max` identifies the maximum value that is seen within a bucket. Others perform some aggregation over the length of the bucket. For example, `mean` calculates the mean of all the data points seen within the bucket. - -To learn more about anomaly detection, refer to the [{{ml}}](../anomaly-detection.md) documentation. - -:::: - - - -## Create and run an anomaly detection job [create-anomaly-detection-job] - -1. In your {{obs-serverless}} project, go to **Machine learning** → **Jobs**. -2. Click **Create anomaly detection job** (or **Create job** if other jobs exist). -3. Choose a data view or saved search to access the data you want to analyze. -4. Select the wizard for the type of job you want to create. The following wizards are available. You might also see specialized wizards based on the type of data you are analyzing. 
- - ::::{tip} - In general, it is a good idea to start with single metric anomaly detection jobs for your key performance indicators. After you examine these simple analysis results, you will have a better idea of what the influencers might be. Then you can create multi-metric jobs and split the data or create more complex analysis functions as necessary. - - :::: - - - Single metric - : Creates simple jobs that have a single detector. A *detector* applies an analytical function to specific fields in your data. In addition to limiting the number of detectors, the single metric wizard omits many of the more advanced configuration options. - - Multi-metric - : Creates jobs that can have more than one detector, which is more efficient than running multiple jobs against the same data. - - Population - : Creates jobs that detect activity that is unusual compared to the behavior of the population. - - Advanced - : Creates jobs that can have multiple detectors and enables you to configure all job settings. - - Categorization - : Creates jobs that group log messages into categories and use `count` or `rare` functions to detect anomalies within them. - - Rare - : Creates jobs that detect rare occurrences in time series data. Rare jobs use the `rare` or `freq_rare` functions and also detect rare occurrences in populations. - - Geo - : Creates jobs that detect unusual occurrences in the geographic locations of your data. Your data set must contain geo data. - - For more information about job types, refer to the [{{ml}}](../anomaly-detection/ml-anomaly-detection-job-types.md) documentation. - - ::::{admonition} Not sure what type of job to create? - :class: note - - Before selecting a wizard, click **Data Visualizer** to explore the fields and metrics in your data. To get the best results, you must understand your data, including its data types and the range and distribution of values. 
- - In the **Data Visualizer**, use the time filter to select a time period that you’re interested in exploring, or click **Use full data** to view the full time range of data. Expand the fields to see details about the range and distribution of values. When you’re done, go back to the first step and create your job. - - :::: - -5. Step through the instructions in the job creation wizard to configure your job. You can accept the default settings for most settings now and [tune the job](observability-aiops-tune-anomaly-detection-job.md) later. -6. If you want the job to start immediately when the job is created, make sure that option is selected on the summary page. -7. When you’re done, click **Create job**. When the job runs, the {{ml}} features analyze the input stream of data, model its behavior, and perform analysis based on the detectors in each job. When an event occurs outside of the baselines of normal behavior, that event is identified as an anomaly. -8. After the job is started, click **View results**. - - -## View the results [observability-aiops-detect-anomalies-view-the-results] - -After the anomaly detection job has processed some data, you can view the results in {{obs-serverless}}. - -::::{tip} -Depending on the capacity of your machine, you might need to wait a few seconds for the analysis to generate initial results. - -:::: - - -If you clicked **View results** after creating the job, the results open in either the **Single Metric Viewer** or **Anomaly Explorer**. To switch between these tools, click the icons in the upper-left corner of each tool. 
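
Conceptually, you can picture what a detector does per bucket with a small sketch. The following Python is a naive stand-in for the proprietary {{ml}} model: a per-bucket `mean` with a fixed 95% band, computed over made-up numbers. Real jobs also learn trends and periodicity, but the bucket-and-bounds mechanics look roughly like this:

```python
import statistics

def bucket_means(points, bucket_span):
    """Group (timestamp, value) points into fixed-width buckets and average each."""
    buckets = {}
    for ts, value in points:
        buckets.setdefault(ts // bucket_span * bucket_span, []).append(value)
    return {start: statistics.mean(vals) for start, vals in sorted(buckets.items())}

def flag_anomalies(bucket_values, z=1.96):
    """Flag buckets whose mean falls outside a naive 95% band (mean +/- 1.96 * stdev).

    Note: this global band is a toy; the real model adapts over time.
    """
    values = list(bucket_values.values())
    center = statistics.mean(values)
    spread = statistics.stdev(values)
    lower, upper = center - z * spread, center + z * spread
    return [start for start, v in bucket_values.items() if not lower <= v <= upper]

# One hour of fake per-minute latencies, with a spike in the final ten minutes.
points = [(60 * i, 100.0) for i in range(50)] + [(60 * i, 500.0) for i in range(50, 60)]
per_bucket = bucket_means(points, bucket_span=600)  # 10-minute buckets
print(flag_anomalies(per_bucket))  # prints [3000]: the spike bucket stands out
```

The same idea scales to the other analytical functions (`max`, `count`, `rare`, and so on): each bucket is reduced to one value per detector, and that value is compared against the modeled bounds.
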

Read the following sections to learn more about these tools:

* [View single metric job results](#view-single-metric)
* [View advanced or multi-metric job results](#anomaly-explorer)

## View single metric job results [view-single-metric]

The **Single Metric Viewer** contains a chart that represents the actual and expected values over time:

:::{image} ../../../images/serverless-anomaly-detection-single-metric-viewer.png
:alt: Single Metric Viewer showing analysis
:class: screenshot
:::

* The line in the chart represents the actual data values.
* The shaded area represents the bounds for the expected values.
* The area between the upper and lower bounds is the range of the most likely values for the model, at a 95% confidence level. That is to say, there is a 95% chance of the actual value falling within these bounds. If a value falls outside this area, it is usually identified as anomalous.

::::{tip}
Expected values are available only if **Enable model plot** was selected under Job Details when you created the job.
::::

To explore your data:

1. If the **Single Metric Viewer** is not already open, go to **Machine learning** → **Single metric viewer** and select the job you created.
2. In the time filter, specify a time range that covers the majority of the analyzed data points.
3. Notice that the model improves as it processes more data. At first, the expected range of values is fairly broad and the model does not capture the periodicity in the data, but it quickly learns and begins to reflect the patterns in your data. How long the learning process takes depends heavily on the characteristics and complexity of the input data.
4. Look for anomaly data points, depicted by colored dots or cross symbols, and hover over a data point to see more details about the anomaly. Note that anomalies with medium or high multi-bucket impact are depicted with a cross symbol instead of a dot.
- - ::::{admonition} How are anomaly scores calculated? - :class: note - - Any data points outside the range that was predicted by the model are marked as anomalies. In order to provide a sensible view of the results, an *anomaly score* is calculated for each bucket time interval. The anomaly score is a value from 0 to 100, which indicates the significance of the anomaly compared to previously seen anomalies. The highly anomalous values are shown in red and the low scored values are shown in blue. An interval with a high anomaly score is significant and requires investigation. For more information about anomaly scores, refer to the [{{ml}}](../anomaly-detection/ml-ad-explain.md) documentation. - - :::: - -5. (Optional) Annotate your job results by drag-selecting a period of time and entering annotation text. Annotations are notes that refer to events in a specific time period. They can be created by the user or generated automatically by the anomaly detection job to reflect model changes and noteworthy occurrences. -6. Under **Anomalies**, expand each anomaly to see key details, such as the time, the actual and expected ("typical") values, and their probability. The **Anomaly explanation** section gives you further insights about each anomaly, such as its type and impact, to make it easier to interpret the job results: - - :::{image} ../../../images/serverless-anomaly-detection-details.png - :alt: Single Metric Viewer showing anomaly details - :class: screenshot - ::: - - By default, the **Anomalies** table contains all anomalies that have a severity of "warning" or higher in the selected section of the timeline. If you are only interested in critical anomalies, for example, you can change the severity threshold for this table. - -7. (Optional) From the **Actions** menu in the **Anomalies** table, you can choose to view relevant documents in **Discover** or create a job rule. 
Job rules instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide. To learn more, refer to [Tune your anomaly detection job](observability-aiops-tune-anomaly-detection-job.md) - -After you have identified anomalies, often the next step is to try to determine the context of those situations. For example, are there other factors that are contributing to the problem? Are the anomalies confined to particular applications or servers? You can begin to troubleshoot these situations by layering additional jobs or creating multi-metric jobs. - - -## View advanced or multi-metric job results [anomaly-explorer] - -Conceptually, you can think of *multi-metric anomaly detection jobs* as running multiple independent single metric jobs. By bundling them together in a multi-metric job, however, you can see an overall score and shared influencers for all the metrics and all the entities in the job. Multi-metric jobs therefore scale better than having many independent single metric jobs. They also provide better results when you have influencers that are shared across the detectors. - -::::{admonition} What is an influencer? -:class: note - -When you create an anomaly detection job, you can identify fields as *influencers*. These are fields that you think contain information about someone or something that influences or contributes to anomalies. As a best practice, do not pick too many influencers. For example, you generally do not need more than three. If you pick many influencers, the results can be overwhelming, and there is some overhead to the analysis. - -To learn more about influencers, refer to the [{{ml}}](../anomaly-detection/ml-ad-run-jobs.md#ml-ad-influencers) documentation. - -:::: - - -You can also configure your anomaly detection jobs to split a single time series into multiple time series based on a categorical field. 
For example, you could create a job for analyzing response code rates that has a single detector that splits the data based on the `response.keyword`, and uses the `count` function to determine when the number of events is anomalous. You might use a job like this if you want to look at both high and low request rates partitioned by response code. - -To view advanced or multi-metric results in the **Anomaly Explorer**: - -1. If the **Anomaly Explorer** is not already open, go to **Machine learning** → **Anomaly explorer** and select the job you created. -2. In the time filter, specify a time range that covers the majority of the analyzed data points. -3. If you specified influencers during job creation, the view includes a list of the top influencers for all of the detected anomalies in that same time period. The list includes maximum anomaly scores, which in this case are aggregated for each influencer, for each bucket, across all detectors. There is also a total sum of the anomaly scores for each influencer. Use this list to help you narrow down the contributing factors and focus on the most anomalous entities. -4. Under **Anomaly timeline**, click a section in the swim lanes to obtain more information about the anomalies in that time period. - - :::{image} ../../../images/serverless-anomaly-explorer.png - :alt: Anomaly Explorer showing swim lanes with anomaly selected - :class: screenshot - ::: - - You can see exact times when anomalies occurred. If there are multiple detectors or metrics in the job, you can see which caught the anomaly. You can also switch to viewing this time series in the **Single Metric Viewer** by selecting **View series** in the **Actions** menu. - -5. 
Under **Anomalies** (in the **Anomaly Explorer**), expand an anomaly to see key details, such as the time, the actual and expected ("typical") values, and the influencers that contributed to the anomaly: - - :::{image} ../../../images/serverless-anomaly-detection-multi-metric-details.png - :alt: Anomaly Explorer showing anomaly details - :class: screenshot - ::: - - By default, the **Anomalies** table contains all anomalies that have a severity of "warning" or higher in the selected section of the timeline. If you are only interested in critical anomalies, for example, you can change the severity threshold for this table. - - If your job has multiple detectors, the table aggregates the anomalies to show the highest severity anomaly per detector and entity, which is the field value that is displayed in the **found for** column. - - To view all the anomalies without any aggregation, set the **Interval** to **Show all**. - - -::::{tip} -The anomaly scores that you see in each section of the **Anomaly Explorer** might differ slightly. This disparity occurs because for each job there are bucket results, influencer results, and record results. Anomaly scores are generated for each type of result. The anomaly timeline uses the bucket-level anomaly scores. The list of top influencers uses the influencer-level anomaly scores. The list of anomalies uses the record-level anomaly scores. 
- -:::: - - - -## Next steps [observability-aiops-detect-anomalies-next-steps] - -After setting up an anomaly detection job, you may want to: - -* [Tune your anomaly detection job](observability-aiops-tune-anomaly-detection-job.md) -* [Forecast future behavior](observability-aiops-forecast-anomalies.md) -* [Anomaly detection](../../../solutions/observability/incident-management/create-an-anomaly-detection-rule.md) diff --git a/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-change-points.md b/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-change-points.md deleted file mode 100644 index 427cbddaec..0000000000 --- a/explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-change-points.md +++ /dev/null @@ -1,69 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-aiops-detect-change-points.html ---- - -# Detect change points [observability-aiops-detect-change-points] - -The change point detection feature in {{obs-serverless}} detects distribution changes, trend changes, and other statistically significant change points in time series data. Unlike anomaly detection, change point detection does not require you to configure a job or generate a model. Instead you select a metric and immediately see a visual representation that splits the time series into two parts, before and after the change point. - -{{obs-serverless}} uses a [change point aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-change-point-aggregation.html) to detect change points. This aggregation can detect change points when: - -* a significant dip or spike occurs -* the overall distribution of values has changed significantly -* there was a statistically significant step up or down in value distribution -* an overall trend change occurs - -To detect change points: - -1. 
In your {{obs-serverless}} project, go to **Machine learning** → **Change point detection**. -2. Choose a data view or saved search to access the data you want to analyze. -3. Select a function: **avg**, **max**, **min**, or **sum**. -4. In the time filter, specify a time range over which you want to detect change points. -5. From the **Metric field** list, select a field you want to check for change points. -6. (Optional) From the **Split field** list, select a field to split the data by. If the cardinality of the split field exceeds 10,000, only the first 10,000 values, sorted by document count, are analyzed. Use this option when you want to investigate the change point across multiple instances, pods, clusters, and so on. For example, you may want to view CPU utilization split across multiple instances without having to jump across multiple dashboards and visualizations. - -::::{note} -You can configure a maximum of six combinations of a function applied to a metric field, partitioned by a split field, to identify change points. - -:::: - - -The change point detection feature automatically dissects the time series into multiple points within the given time window, tests whether the behavior is statistically different before and after each point in time, and then detects a change point if one exists: - -:::{image} ../../../images/serverless-change-point-detection.png -:alt: Change point detection UI showing change points split by process -:class: screenshot -::: - -The resulting view includes: - -* The timestamp of the change point -* A preview chart -* The type of change point and its p-value. The p-value indicates the magnitude of the change; lower values indicate more significant changes. -* The name and value of the split field, if used. - -If the analysis is split by a field, a separate chart is shown for every partition that has a detected change point. 
The chart displays the type of change point, its value, and the timestamp of the bucket where the change point has been detected. - -On the **Change point detection page**, you can also: - -* Select a subset of charts and click **View selected** to view only the selected charts. - - :::{image} ../../../images/serverless-change-point-detection-view-selected.png - :alt: View selected change point detection charts - :class: screenshot - ::: - -* Filter the results by specific types of change points by using the change point type selector: - - :::{image} ../../../images/serverless-change-point-detection-filter-by-type.png - :alt: Change point detection filter by type list - :class: screenshot - ::: - -* Attach change points to a chart or dashboard by using the context menu: - - :::{image} ../../../images/serverless-change-point-detection-attach-charts.png - :alt: Change point detection add to charts menu - :class: screenshot - ::: diff --git a/explore-analyze/machine-learning/aiops-labs/observability-aiops-forecast-anomalies.md b/explore-analyze/machine-learning/aiops-labs/observability-aiops-forecast-anomalies.md deleted file mode 100644 index 18a69ac81b..0000000000 --- a/explore-analyze/machine-learning/aiops-labs/observability-aiops-forecast-anomalies.md +++ /dev/null @@ -1,41 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-aiops-forecast-anomalies.html ---- - -# Forecast future behavior [observability-aiops-forecast-anomalies] - -::::{admonition} Required role -:class: note - -The **Editor** role or higher is required to create a forecast for an {{anomaly-job}}. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles). - -:::: - - -In addition to detecting anomalous behavior in your data, you can use the {{ml}} features to predict future behavior. 
- -You can use a forecast to estimate a time series value at a specific future date. For example, you might want to determine how much disk usage to expect next Sunday at 09:00. - -You can also use a forecast to estimate the probability of a time series value occurring at a future date. For example, you might want to determine how likely it is that your disk utilization will reach 100% before the end of next week. - -To create a forecast: - -1. [Create an anomaly detection job](observability-aiops-detect-anomalies.md) and view the results in the **Single Metric Viewer**. -2. Click **Forecast**. -3. Specify a duration for your forecast. This value indicates how far to extrapolate beyond the last record that was processed. You must use time units, for example 1w, 1d, 1h, and so on. -4. Click **Run**. -5. View the forecast in the **Single Metric Viewer**: - - :::{image} ../../../images/serverless-anomaly-detection-forecast.png - :alt: Single Metric Viewer showing forecast - :class: screenshot - ::: - - * The line in the chart represents the predicted data values. - * The shaded area represents the bounds for the predicted values, which also gives an indication of the confidence of the predictions. - * Note that the bounds generally increase with time (that is to say, the confidence levels decrease), since you are forecasting further into the future. Eventually if the confidence levels are too low, the forecast stops. - -6. (Optional) After the job has processed more data, click the **Forecast** button again to compare the forecast to actual data. - - The resulting chart will contain the actual data values, the bounds for the expected values, the anomalies, the forecast data values, and the bounds for the forecast. This combination of actual and forecast data gives you an indication of how well the {{ml}} features can extrapolate the future behavior of the data. 
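
If you prefer to script forecasts rather than use the UI, the same request can be expressed against the {{ml}} API. The job ID below is hypothetical, and the exact endpoint behavior may differ in a serverless project, so treat this as a sketch of the request rather than a definitive recipe:

```python
# Sketch of requesting a forecast through the ML API instead of the UI.
# "disk-usage-job" is a hypothetical job ID; POST this with your own client.
job_id = "disk-usage-job"
endpoint = f"_ml/anomaly_detectors/{job_id}/_forecast"

forecast_request = {
    "duration": "1w",     # how far to extrapolate beyond the last processed record
    "expires_in": "14d",  # optional: how long to retain the forecast results
}
print(endpoint)  # _ml/anomaly_detectors/disk-usage-job/_forecast
```

The `duration` value uses the same time units as the UI field (`1w`, `1d`, `1h`, and so on), and the forecast results then appear in the **Single Metric Viewer** exactly as in the steps above.
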
diff --git a/explore-analyze/machine-learning/aiops-labs/observability-aiops-tune-anomaly-detection-job.md b/explore-analyze/machine-learning/aiops-labs/observability-aiops-tune-anomaly-detection-job.md deleted file mode 100644 index 97dae9c925..0000000000 --- a/explore-analyze/machine-learning/aiops-labs/observability-aiops-tune-anomaly-detection-job.md +++ /dev/null @@ -1,153 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/serverless/current/observability-aiops-tune-anomaly-detection-job.html ---- - -# Tune your anomaly detection job [observability-aiops-tune-anomaly-detection-job] - -::::{admonition} Required role -:class: note - -The **Editor** role or higher is required to create calendars, add job rules, and define custom URLs. To learn more, refer to [Assign user roles and privileges](../../../deploy-manage/users-roles/cloud-organization/manage-users.md#general-assign-user-roles). - -:::: - - -After you run an anomaly detection job and view the results, you might find that you need to alter the job configuration or settings. - -To further tune your job, you can: - -* [Create calendars](#create-calendars) that contain a list of scheduled events for which you do not want to generate anomalies, such as planned system outages or public holidays. -* [Create job rules](#create-job-rules) that instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide. Your job rules can use filter lists, which contain values that you can use to include or exclude events from the {{ml}} analysis. -* [Define custom URLs](#define-custom-urls) to make dashboards and other resources readily available when viewing job results. - -For more information about tuning your job, refer to the how-to guides in the [{{ml}}](../anomaly-detection/anomaly-how-tos.md) documentation. Note that the {{ml}} documentation may contain details that are not valid when using a fully-managed Elastic project. 
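
The tuning options above ultimately translate into {{ml}} configuration objects. As a rough sketch of what a filter list and two job rules look like as API payloads (the IDs, field names, and threshold values are hypothetical, and details may vary in serverless projects):

```python
# Hypothetical filter list, created with: PUT _ml/filters/safe_ips
safe_ips_filter = {
    "description": "Trusted IPs allowed to move large amounts of data",
    "items": ["192.168.1.10", "192.168.1.11"],
}

# Rule 1: skip results for low CPU values (conditions only).
low_cpu_rule = {
    "actions": ["skip_result"],
    "conditions": [{"applies_to": "actual", "operator": "lt", "value": 0.20}],
}

# Rule 2: ignore anomalies from trusted IPs (scope referencing the filter list).
trusted_ip_rule = {
    "actions": ["skip_result", "skip_model_update"],
    "scope": {"client_ip": {"filter_id": "safe_ips", "filter_type": "include"}},
}

print(low_cpu_rule["actions"])  # ['skip_result']
```

Rules like these go into a detector's `custom_rules` array when you create or update a job; the UI steps below build the same objects for you.
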

::::{tip}
You can also create calendars and add URLs when configuring settings during job creation, but generally it's easier to start with a simple job and add complexity later.
::::

## Create calendars [create-calendars]

Sometimes there are periods when you expect unusual activity to take place, such as bank holidays, "Black Friday", or planned system outages. If you identify these events in advance, no anomalies are generated during that period, the {{ml}} model is not adversely affected, and you do not receive spurious results.

To create a calendar and add scheduled events:

1. In your {{obs-serverless}} project, go to **Machine learning** → **Settings**.
2. Under **Calendars**, click **Create**.
3. Enter an ID and description for the calendar.
4. Select the jobs you want to apply the calendar to, or turn on **Apply calendar to all jobs**.
5. Under **Events**, click **New event** or click **Import events** to import events from an iCalendar (ICS) file:

   :::{image} ../../../images/serverless-anomaly-detection-create-calendar.png
   :alt: Create new calendar page
   :class: screenshot
   :::

   A scheduled event must have a start time, end time, and calendar ID. In general, scheduled events are short in duration (typically lasting from a few hours to a day) and occur infrequently. If you have regularly occurring events, such as weekly maintenance periods, you do not need to create scheduled events for these circumstances; they are already handled by the {{ml}} analytics. If your ICS file contains recurring events, only the first occurrence is imported.

6. When you're done adding events, save your calendar.

You must identify scheduled events *before* your anomaly detection job analyzes the data for that time period. {{ml-cap}} results are not updated retroactively. Bucket results are generated during scheduled events, but they have an anomaly score of zero.
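
The same calendar and scheduled events can be created programmatically with the classic {{ml}} calendar APIs (`PUT _ml/calendars/<id>` followed by `POST _ml/calendars/<id>/events`). A sketch with hypothetical IDs; note that event times are epoch milliseconds:

```python
from datetime import datetime, timezone

def epoch_ms(*args):
    """UTC datetime components -> epoch milliseconds, the format the events API expects."""
    return int(datetime(*args, tzinfo=timezone.utc).timestamp() * 1000)

# Hypothetical calendar: PUT _ml/calendars/planned-outages
calendar = {"calendar_id": "planned-outages", "description": "Planned system outages"}

# Scheduled events: POST _ml/calendars/planned-outages/events
events = {
    "events": [
        {
            "description": "Network maintenance window",
            "start_time": epoch_ms(2025, 1, 18, 22, 0),
            "end_time": epoch_ms(2025, 1, 19, 2, 0),
        }
    ]
}
duration_hours = (events["events"][0]["end_time"] - events["events"][0]["start_time"]) // 3600000
print(duration_hours)  # prints 4
```

As with the UI flow, the calendar must exist and contain the events before the job analyzes that time period.
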
- -::::{tip} -If you use long or frequent scheduled events, it might take longer for the {{ml}} analytics to learn to model your data, and some anomalous behavior might be missed. - -:::: - - - -## Create job rules and filters [create-job-rules] - -By default, anomaly detection is unsupervised, and the {{ml}} models have no awareness of the domain of your data. As a result, anomaly detection jobs might identify events that are statistically significant but are uninteresting when you know the larger context. - -You can customize anomaly detection by creating custom job rules. *Job rules* instruct anomaly detectors to change their behavior based on domain-specific knowledge that you provide. When you create a rule, you can specify conditions, scope, and actions. When the conditions of a rule are satisfied, its actions are triggered. - -::::{admonition} Example use case for creating a job rule -:class: note - -If you have an anomaly detector that is analyzing CPU usage, you might decide you are only interested in anomalies where the CPU usage is greater than a certain threshold. You can define a rule with conditions and actions that instruct the detector to refrain from generating {{ml}} results when there are anomalous events related to low CPU usage. You might also decide to add a scope for the rule so that it applies only to certain machines. The scope is defined by using {{ml}} filters. - -:::: - - -*Filters* contain a list of values that you can use to include or exclude events from the {{ml}} analysis. You can use the same filter in multiple anomaly detection jobs. - -::::{admonition} Example use case for creating a filter list -:class: note - -If you are analyzing web traffic, you might create a filter that contains a list of IP addresses. The list could contain IP addresses that you trust to upload data to your website or to send large amounts of data from behind your firewall. 
You can define the rule’s scope so that the action triggers only when a specific field in your data matches (or doesn’t match) a value in the filter. This gives you much greater control over which anomalous events affect the {{ml}} model and appear in the {{ml}} results. - -:::: - - -To create a job rule, first create any filter lists you want to use in the rule, then configure the rule: - -1. In your {{obs-serverless}} project, go to **Machine learning** → **Settings**. -2. (Optional) Create one or more filter lists: - - 1. Under **Filter lists**, click **Create**. - 2. Enter the filter list ID. This is the ID you will select when you want to use the filter list in a job rule. - 3. Click **Add item** and enter one item per line. - 4. Click **Add** then save the filter list: - - :::{image} ../../../images/serverless-anomaly-detection-create-filter-list.png - :alt: Create filter list - :class: screenshot - ::: - -3. Open the job results in the **Single Metric Viewer** or **Anomaly Explorer**. -4. From the **Actions** menu in the **Anomalies** table, select **Configure job rules**. - - :::{image} ../../../images/serverless-anomaly-detection-configure-job-rules.png - :alt: Configure job rules menu selection - :class: screenshot - ::: - -5. Choose which actions to take when the job rule matches the anomaly: **Skip result**, **Skip model update**, or both. -6. Under **Conditions**, add one or more conditions that must be met for the action to be triggered. -7. Under **Scope** (if available), add one or more filter lists to limit where the job rule applies. -8. Save the job rule. Note that changes to job rules take effect for new results only. To apply these changes to existing results, you must clone and rerun the job. - - -## Define custom URLs [define-custom-urls] - -You can optionally attach one or more custom URLs to your anomaly detection jobs. 
Links for these URLs will appear in the **Actions** menu of the anomalies table when viewing job results in the **Single Metric Viewer** or **Anomaly Explorer**. Custom URLs can point to dashboards, the Discover app, or external websites. For example, you can define a custom URL that enables users to drill down to the source data from the results set. - -To add a custom URL to the **Actions** menu: - -1. In your {{obs-serverless}} project, go to **Machine learning** → **Jobs**. -2. From the **Actions** menu in the job list, select **Edit job**. -3. Select the **Custom URLs** tab, then click **Add custom URL**. -4. Enter the label to use for the link text. -5. Choose the type of resource you want to link to: - - | If you select…​ | Do this…​ | - | --- | --- | - | {{kib}} dashboard | Select the dashboard you want to link to. | - | Discover | Select the data view to use. | - | Other | Specify the URL for the external website. | - -6. Click **Test** to test your link. -7. Click **Add**, then save your changes. - -Now when you view job results in **Single Metric Viewer** or **Anomaly Explorer**, the **Actions** menu includes the custom link: - -:::{image} ../../../images/serverless-anomaly-detection-custom-url.png -:alt: Custom URL link in the Actions menu -:class: screenshot -::: - -::::{tip} -It is also possible to use string substitution in custom URLs. For example, you might have a **Raw data** URL defined as: - -`discover#/?_g=(time:(from:'$earliest$',mode:absolute,to:'$latest$'))&_a=(index:ff959d40-b880-11e8-a6d9-e546fe2bba5f,query:(language:kuery,query:'customer_full_name.keyword:"$customer_full_name.keyword$"'))`. - -The value of the `customer_full_name.keyword` field is passed to the target page when the link is clicked. - -For more information about using string substitution, refer to the [{{ml}}](../anomaly-detection/ml-configuring-url.md#ml-configuring-url-strings) documentation. 
Note that the {{ml}} documentation may contain details that are not valid when using a fully-managed Elastic project. - -:::: diff --git a/explore-analyze/toc.yml b/explore-analyze/toc.yml index f0da5de044..65ac32c70a 100644 --- a/explore-analyze/toc.yml +++ b/explore-analyze/toc.yml @@ -231,14 +231,6 @@ toc: - file: machine-learning/machine-learning-in-kibana/xpack-ml-dfanalytics.md - file: machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md - file: machine-learning/machine-learning-in-kibana/inference-processing.md - - file: machine-learning/aiops-labs.md - children: - - file: machine-learning/aiops-labs/observability-aiops-detect-anomalies.md - children: - - file: machine-learning/aiops-labs/observability-aiops-tune-anomaly-detection-job.md - - file: machine-learning/aiops-labs/observability-aiops-forecast-anomalies.md - - file: machine-learning/aiops-labs/observability-aiops-analyze-spikes.md - - file: machine-learning/aiops-labs/observability-aiops-detect-change-points.md - file: ai-assistant.md - file: discover.md children: diff --git a/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md b/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md index 49b1b33b41..9c0e415d36 100644 --- a/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md +++ b/raw-migrated-files/docs-content/serverless/observability-log-monitoring.md @@ -79,7 +79,7 @@ Use **Logs Explorer** to search, filter, and tail all your logs ingested into yo The following resources provide information on viewing and monitoring your logs: * [Discover and explore](../../../solutions/observability/logs/logs-explorer.md): Discover and explore all of the log events flowing in from your servers, virtual machines, and containers in a centralized view. -* [Detect log anomalies](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md): Use {{ml}} to detect log anomalies automatically. 
+* [Detect log anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md): Use {{ml}} to detect log anomalies automatically. ## Monitor data sets [observability-log-monitoring-monitor-data-sets] diff --git a/raw-migrated-files/docs-content/serverless/observability-machine-learning.md b/raw-migrated-files/docs-content/serverless/observability-machine-learning.md deleted file mode 100644 index 1c0270d1b0..0000000000 --- a/raw-migrated-files/docs-content/serverless/observability-machine-learning.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -navigation_title: "Machine learning" ---- - -# Machine learning and AIOps [observability-machine-learning] - - -The machine learning capabilities available in {{obs-serverless}} enable you to consume and process large observability data sets at scale, reducing the time and effort required to detect, understand, investigate, and resolve incidents. Built on predictive analytics and {{ml}}, our AIOps capabilities require no prior experience with {{ml}}. DevOps engineers, SREs, and security analysts can get started right away using these AIOps features with little or no advanced configuration: - -| Feature | Description | -| --- | --- | -| [Anomaly detection](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md) | Detect anomalies by comparing real-time and historical data from different sources to look for unusual, problematic patterns. | -| [Log rate analysis](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-analyze-spikes.md) | Find and investigate the causes of unusual spikes or drops in log rates. | -| [Change point detection](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-change-points.md) | Detect distribution changes, trend changes, and other statistically significant change points in a metric of your time series data. 
| - - - - diff --git a/raw-migrated-files/docs-content/serverless/observability-quickstarts-monitor-hosts-with-elastic-agent.md b/raw-migrated-files/docs-content/serverless/observability-quickstarts-monitor-hosts-with-elastic-agent.md index 2a9c408f78..c180b5ebcb 100644 --- a/raw-migrated-files/docs-content/serverless/observability-quickstarts-monitor-hosts-with-elastic-agent.md +++ b/raw-migrated-files/docs-content/serverless/observability-quickstarts-monitor-hosts-with-elastic-agent.md @@ -95,11 +95,11 @@ For host monitoring, the following capabilities and features are recommended: * [Run a pattern analysis](../../../solutions/observability/logs/run-pattern-analysis-on-log-data.md) to find patterns in unstructured log messages. * [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an Observability data type reaches or exceeds a given value. -* Use [machine learning and AIOps features](../../../explore-analyze/machine-learning/aiops-labs.md) to apply predictive analytics and machine learning to your data: +* Use [machine learning and AIOps features](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md) to apply predictive analytics and machine learning to your data: - * [Detect anomalies](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns. - * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-analyze-spikes.md). - * [Detect change points](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-change-points.md) in your time series data. + * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns. 
+ * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis). + * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data. Refer to [Observability overview](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features. diff --git a/raw-migrated-files/docs-content/serverless/observability-serverless-observability-overview.md b/raw-migrated-files/docs-content/serverless/observability-serverless-observability-overview.md index 9cd6244619..698b552175 100644 --- a/raw-migrated-files/docs-content/serverless/observability-serverless-observability-overview.md +++ b/raw-migrated-files/docs-content/serverless/observability-serverless-observability-overview.md @@ -126,4 +126,4 @@ Reduce the time and effort required to detect, understand, investigate, and reso :class: screenshot ::: -[Learn more about machine learning and AIOps →](../../../explore-analyze/machine-learning/aiops-labs.md) +[Learn more about machine learning and AIOps →](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md) diff --git a/raw-migrated-files/docs-content/serverless/observability-triage-threshold-breaches.md b/raw-migrated-files/docs-content/serverless/observability-triage-threshold-breaches.md index a3ca665bf2..60c18097d7 100644 --- a/raw-migrated-files/docs-content/serverless/observability-triage-threshold-breaches.md +++ b/raw-migrated-files/docs-content/serverless/observability-triage-threshold-breaches.md @@ -28,7 +28,7 @@ Explore charts on the page to learn more about the threshold breach: :::: -* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. 
Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](../../../explore-analyze/machine-learning/aiops-labs.md#log-rate-analysis) documentation. +* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis) documentation. :::{image} ../../../images/serverless-log-threshold-breach-log-rate-analysis.png :alt: Log rate analysis chart in alert details for log threshold breach diff --git a/raw-migrated-files/docs-content/serverless/quickstart-monitor-hosts-with-otel.md b/raw-migrated-files/docs-content/serverless/quickstart-monitor-hosts-with-otel.md index af12626a40..e9112b0e79 100644 --- a/raw-migrated-files/docs-content/serverless/quickstart-monitor-hosts-with-otel.md +++ b/raw-migrated-files/docs-content/serverless/quickstart-monitor-hosts-with-otel.md @@ -63,14 +63,14 @@ After using the Hosts page and Discover to confirm you’ve ingested all the hos * In the [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents. 
- * [Run a pattern analysis](../../../explore-analyze/machine-learning/aiops-labs.md#log-pattern-analysis) to find patterns in unstructured log messages. + * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages. * [Create alerts](../../../solutions/observability/incident-management/create-manage-rules.md) that notify you when an Observability data type reaches or exceeds a given value. * Use [machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data: * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns. - * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/aiops-labs.md#log-rate-analysis). - * [Detect change points](../../../explore-analyze/machine-learning/aiops-labs.md#change-point-detection) in your time series data. + * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis). + * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data. Refer to the [Elastic Observability](../../../solutions/observability.md) for a description of other useful features. diff --git a/raw-migrated-files/docs-content/serverless/what-is-observability-serverless.md b/raw-migrated-files/docs-content/serverless/what-is-observability-serverless.md index ea46a42218..d8ab2f6f9a 100644 --- a/raw-migrated-files/docs-content/serverless/what-is-observability-serverless.md +++ b/raw-migrated-files/docs-content/serverless/what-is-observability-serverless.md @@ -25,7 +25,7 @@ Not using serverless? 
Go to the [Elastic Observability docs](../../../solutions/ * [**Explore log data**](../../../solutions/observability/logs/logs-explorer.md): Use Discover to explore your log data. * [**Trigger alerts and triage problems**](../../../solutions/observability/incident-management/create-manage-rules.md): Create rules to detect complex conditions and trigger alerts. * [**Track and deliver on your SLOs**](../../../solutions/observability/incident-management/service-level-objectives-slos.md): Measure key metrics important to the business. -* [**Detect anomalies and spikes**](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md): Find unusual behavior in time series data. +* [**Detect anomalies and spikes**](../../../explore-analyze/machine-learning/anomaly-detection.md): Find unusual behavior in time series data. * [**Monitor application performance**](../../../solutions/observability/apps/application-performance-monitoring-apm.md): Monitor your software services and applications in real time. * [**Integrate with OpenTelemetry**](../../../solutions/observability/apps/use-opentelemetry-with-apm.md): Reuse existing APM instrumentation to capture logs, traces, and metrics. * [**Monitor your hosts and services**](../../../solutions/observability/infra-and-hosts/analyze-compare-hosts.md): Get a metrics-driven view of your hosts backed by an interface called Lens. diff --git a/raw-migrated-files/kibana/kibana/xpack-ml-aiops.md b/raw-migrated-files/kibana/kibana/xpack-ml-aiops.md deleted file mode 100644 index 7e2135d505..0000000000 --- a/raw-migrated-files/kibana/kibana/xpack-ml-aiops.md +++ /dev/null @@ -1,66 +0,0 @@ -# AIOps Labs [xpack-ml-aiops] - -AIOps Labs is a part of {{ml-app}} in {{kib}} which provides features that use advanced statistical methods to help you interpret your data and its behavior. 
- - -## Log rate analysis [log-rate-analysis] - -Log rate analysis uses advanced statistical methods to identify reasons for increases or decreases in log rates and displays the statistically significant data in a tabular format. It makes it easy to find and investigate causes of unusual spikes or drops by using the analysis workflow view. Examine the histogram chart of the log rates for a given {{data-source}}, and find the reason behind a particular change, possibly across millions of log events spanning multiple fields and values. - -You can find log rate analysis embedded in multiple applications. In {{kib}}, you can find it under **{{ml-app}}** > **AIOps Labs** or by using the [global search field](../../../get-started/the-stack.md#kibana-navigation-search). Here, you can select the {{data-source}} or saved Discover session that you want to analyze. - -:::{image} ../../../images/kibana-ml-log-rate-analysis-before.png -:alt: Log event histogram chart -:class: screenshot -::: - -Select a spike or drop in the log event histogram chart to start the analysis. It identifies statistically significant field-value combinations that contribute to the spike or drop and displays them in a table. You can optionally choose to summarize the results into groups. The table also shows an indicator of the level of impact and a sparkline showing the shape of the impact in the chart. Hovering over a row displays the impact on the histogram chart in more detail. You can inspect a field in **Discover**, further investigate in **Log pattern analysis**, or copy the table row information as a query filter to the clipboard by selecting the corresponding option under the **Actions** column. You can also pin a table row by clicking it, then move the cursor to the histogram chart. It displays a tooltip with exact count values for the pinned field, which enables closer investigation. - -Brushes in the chart show the baseline time range and the deviation in the analyzed data. 
You can move the brushes to redefine both the baseline and the deviation and rerun the analysis with the modified values. - -:::{image} ../../../images/kibana-ml-log-rate-analysis.png -:alt: Log rate spike explained -:class: screenshot -::: - - -## Log pattern analysis [log-pattern-analysis] - -Log pattern analysis helps you to find patterns in unstructured log messages and makes it easier to examine your data. It performs categorization analysis on a selected field of a {{data-source}}, creates categories based on the data, and displays them together with a chart that shows the distribution of each category and an example document that matches the category. - -You can find log pattern analysis under **{{ml-app}}** > **AIOps Labs** or by using the [global search field](../../../get-started/the-stack.md#kibana-navigation-search), where you can select the {{data-source}} or saved Discover session that you want to analyze. It is also available in **Discover** as an action for any text field. - -:::{image} ../../../images/kibana-ml-log-pattern-analysis.png -:alt: Log pattern analysis UI -:class: screenshot -::: - -Select a field for categorization and optionally apply any filters that you want, then start the analysis. The analysis uses the same algorithms as a {{ml}} categorization job. The results of the analysis are shown in a table that makes it possible to open **Discover** and show or filter out the given category there, which helps you to further examine your log messages. - - -## Change point detection [change-point-detection] - -::::{warning} -This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features. 
-:::: - - -Change point detection uses the [change point aggregation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-aggregations-change-point-aggregation.html) to detect distribution changes, trend changes, and other statistically significant change points in a metric of your time series data. - -You can find change point detection under **{{ml-app}}** > **AIOps Labs** or by using the [global search field](../../../get-started/the-stack.md#kibana-navigation-search). Here, you can select the {{data-source}} or saved Discover session that you want to analyze. - -:::{image} ../../../images/kibana-ml-change-point-detection.png -:alt: Change point detection UI -:class: screenshot -::: - -Select a function and a metric field, then pick a date range to start detecting change points in the defined range. Optionally, you can split the data by a field. If the cardinality of the split field exceeds 10,000, then only the first 10,000, sorted by document count, are analyzed. You can configure a maximum of 6 combinations of a function applied to a metric field, partitioned by a split field, to identify change points. - -When a change point is detected, a row displays basic information including the timestamp of the change point, a preview chart, the type of change point, its p-value, and the name and value of the split field. You can further examine the selected change point in a detailed view. A chart visualizes the identified change point within the analyzed time window, making the interpretation easier. If the analysis is split by a field, a separate chart is shown for every partition that has a detected change point. The chart displays the type of change point, its value, and the timestamp of the bucket where the change point was detected. The corresponding `p-value` indicates the magnitude of the change; lower values indicate more significant changes. 
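The same detection can also be run directly with the aggregation. The sketch below assumes a hypothetical index `my-metrics` with `@timestamp` and `latency_ms` fields; the index and field names are placeholders:

```console
GET my-metrics/_search
{
  "size": 0,
  "aggs": {
    "over_time": {
      "date_histogram": { "field": "@timestamp", "fixed_interval": "1h" },
      "aggs": {
        "avg_latency": { "avg": { "field": "latency_ms" } }
      }
    },
    "latency_change_point": {
      "change_point": {
        "buckets_path": "over_time>avg_latency"
      }
    }
  }
}
```

The response names the detected change type (for example, `step_change` or `trend_change`), the bucket where it occurred, and a p-value, mirroring what the UI displays.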
You can use the change point type selector to filter the results by specific types of change points. - -:::{image} ../../../images/kibana-ml-change-point-detection-selected.png -:alt: Selected change points -:class: screenshot -::: - -You can attach change point charts to a dashboard or a case by using the context menu. If the split field is selected, you can either select specific charts (partitions) or set the maximum number of top change points to plot. It’s possible to preserve the applied time range or use the time bound from the page date picker. You can also add or edit change point charts directly from the **Dashboard** app. diff --git a/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-elastic-agent.md b/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-elastic-agent.md index 28cd2c5491..a334d462d0 100644 --- a/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-elastic-agent.md +++ b/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-elastic-agent.md @@ -99,14 +99,14 @@ For host monitoring, the following capabilities and features are recommended: * In the [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. You can also: * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents. - * [Run a pattern analysis](../../../explore-analyze/machine-learning/aiops-labs.md#log-pattern-analysis) to find patterns in unstructured log messages. + * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages. 
* [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an Observability data type reaches or exceeds a given value. * Use [machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data: * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns. - * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/aiops-labs.md#log-rate-analysis). - * [Detect change points](../../../explore-analyze/machine-learning/aiops-labs.md#change-point-detection) in your time series data. + * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis). + * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data. Refer to the [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features. diff --git a/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-otel.md b/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-otel.md index 4f3acbdef7..487869560b 100644 --- a/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-otel.md +++ b/raw-migrated-files/observability-docs/observability/quickstart-monitor-hosts-with-otel.md @@ -68,14 +68,14 @@ After using the Hosts page and Discover to confirm you’ve ingested all the hos * In the [Logs Explorer](../../../solutions/observability/logs/logs-explorer.md), search and filter your log data, get information about the structure of log fields, and display your findings in a visualization. 
You can also: * [Monitor log data set quality](../../../solutions/observability/data-set-quality-monitoring.md) to find degraded documents. - * [Run a pattern analysis](../../../explore-analyze/machine-learning/aiops-labs.md#log-pattern-analysis) to find patterns in unstructured log messages. + * [Run a pattern analysis](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-pattern-analysis) to find patterns in unstructured log messages. * [Create alerts](../../../solutions/observability/incident-management/alerting.md) that notify you when an Observability data type reaches or exceeds a given value. * Use [machine learning](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md) to apply predictive analytics to your data: * [Detect anomalies](../../../explore-analyze/machine-learning/anomaly-detection.md) by comparing real-time and historical data from different sources to look for unusual, problematic patterns. - * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/aiops-labs.md#log-rate-analysis). - * [Detect change points](../../../explore-analyze/machine-learning/aiops-labs.md#change-point-detection) in your time series data. + * [Analyze log spikes and drops](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis). + * [Detect change points](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#change-point-detection) in your time series data. Refer to the [What is Elastic {{observability}}?](../../../solutions/observability/get-started/what-is-elastic-observability.md) for a description of other useful features. 
diff --git a/raw-migrated-files/observability-docs/observability/triage-threshold-breaches.md b/raw-migrated-files/observability-docs/observability/triage-threshold-breaches.md index 33fb3d1976..f97003f286 100644 --- a/raw-migrated-files/observability-docs/observability/triage-threshold-breaches.md +++ b/raw-migrated-files/observability-docs/observability/triage-threshold-breaches.md @@ -28,7 +28,7 @@ Explore charts on the page to learn more about the threshold breach: :::: -* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](../../../explore-analyze/machine-learning/aiops-labs.md#log-rate-analysis) documentation. +* **Log rate analysis chart**. If your rule is intended to detect log threshold breaches (that is, it has a single condition that uses a count aggregation), you can run a log rate analysis, assuming you have the required license. Running a log rate analysis is useful for detecting significant dips or spikes in the number of logs. Notice that you can adjust the baseline and deviation, and then run the analysis again. For more information about using the log rate analysis feature, refer to the [AIOps Labs](../../../explore-analyze/machine-learning/machine-learning-in-kibana/xpack-ml-aiops.md#log-rate-analysis) documentation. 
:::{image} ../../../images/observability-log-threshold-breach-log-rate-analysis.png :alt: Log rate analysis chart in alert details for log threshold breach diff --git a/raw-migrated-files/toc.yml b/raw-migrated-files/toc.yml index c3c34d2de4..0393e2ae7b 100644 --- a/raw-migrated-files/toc.yml +++ b/raw-migrated-files/toc.yml @@ -379,7 +379,6 @@ toc: - file: docs-content/serverless/observability-handle-no-results-found-message.md - file: docs-content/serverless/observability-infrastructure-monitoring.md - file: docs-content/serverless/observability-log-monitoring.md - - file: docs-content/serverless/observability-machine-learning.md - file: docs-content/serverless/observability-monitor-datasets.md - file: docs-content/serverless/observability-monitor-status-alert.md - file: docs-content/serverless/observability-monitor-synthetics.md @@ -714,7 +713,6 @@ toc: - file: kibana/kibana/upgrade.md - file: kibana/kibana/using-kibana-with-security.md - file: kibana/kibana/watcher-ui.md - - file: kibana/kibana/xpack-ml-aiops.md - file: kibana/kibana/xpack-security-authorization.md - file: kibana/kibana/xpack-security-fips-140-2.md - file: kibana/kibana/xpack-security.md diff --git a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md index aad262590b..30a105ca69 100644 --- a/solutions/observability/incident-management/create-an-anomaly-detection-rule.md +++ b/solutions/observability/incident-management/create-an-anomaly-detection-rule.md @@ -30,7 +30,7 @@ Create an anomaly detection rule to check for anomalies in one or more anomaly d To create an anomaly detection rule: 1. In your {{obs-serverless}} project, go to **Machine learning** → **Jobs**. -2. In the list of anomaly detection jobs, find the job you want to check for anomalies. Haven’t created a job yet? 
[Create one now](../../../explore-analyze/machine-learning/aiops-labs/observability-aiops-detect-anomalies.md). +2. In the list of anomaly detection jobs, find the job you want to check for anomalies. Haven’t created a job yet? [Create one now](../../../explore-analyze/machine-learning/anomaly-detection.md). 3. From the **Actions** menu next to the job, select **Create alert rule**. 4. Specify a name and optional tags for the rule. You can use these tags later to filter alerts. 5. Verify that the correct job is selected and configure the alert details: