60 changes: 30 additions & 30 deletions explore-analyze/index.md

Elastic is committed to ensuring digital accessibility for people with disabilities. We are continually improving the user experience and strive to ensure our tools are usable by everyone.


**Measures to support accessibility**
Elastic takes the following measures to ensure accessibility of Kibana:

* Maintains and incorporates an [accessible component library](https://elastic.github.io/eui/).
* Provides continual accessibility training for our staff.
* Employs a third-party audit.


**Conformance status**
Kibana aims to meet [WCAG 2.1 level AA](https://www.w3.org/WAI/WCAG21/quickref/?currentsidebar=%23col_customize&levels=aaa&technologies=server%2Csmil%2Cflash%2Csl) compliance. Currently, we can only claim to partially conform, meaning we do not fully meet all of the success criteria. However, we do try to take a broader view of accessibility, and go above and beyond the legal and regulatory standards to provide a good experience for all of our users.


**Feedback**
We welcome your feedback on the accessibility of Kibana. Please let us know if you encounter accessibility barriers on Kibana by either emailing us at `[email protected]` or opening [an issue on GitHub](https://github.com/elastic/kibana/issues/new?labels=Project%3AAccessibility&template=Accessibility.md&title=%28Accessibility%29).


**Technical specifications**
Accessibility of Kibana relies on the following technologies to work with your web browser and any assistive technologies or plugins installed on your computer:

* JavaScript
* WAI-ARIA


**Limitations and alternatives**
Despite our best efforts to ensure accessibility of Kibana, there are some limitations. Please [open an issue on GitHub](https://github.com/elastic/kibana/issues/new?labels=Project%3AAccessibility&template=Accessibility.md&title=%28Accessibility%29) if you observe an issue not in this list.

Known limitations are in the following areas:

To see individual tickets, view our [GitHub issues with label "`Project:Accessibility`"](https://github.com/elastic/kibana/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3AProject%3AAccessibility).


**Assessment approach**
Elastic assesses the accessibility of Kibana with the following approaches:

Manual testing largely focuses on screen reader support and is done on:
:::

## Querying and filtering

Elasticsearch’s robust query capabilities enable you to retrieve specific data from your datasets. Using the Query DSL (Domain Specific Language), you can build powerful, flexible queries that support:

* Full-text search
* Boolean logic
* Fuzzy matching
* Proximity searches
* Semantic search
* …and more.

These tools make it easier to refine searches and pinpoint relevant information in real time.
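For example, a minimal sketch of a search that combines several of these capabilities might look like the following, assuming a hypothetical `logs` index with `message` and `@timestamp` fields:

```console
# Fuzzy full-text match on "message", limited to the last hour
GET /logs/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "message": {
              "query": "conection timed out",
              "fuzziness": "AUTO"
            }
          }
        }
      ],
      "filter": [
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  }
}
```

The `fuzziness` parameter tolerates small typos in the search terms, while the `range` clause in the `filter` context narrows results without affecting relevance scoring.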


## Aggregations
Aggregations provide advanced data analysis, enabling you to extract actionable insights. With aggregations, you can calculate statistical metrics (for example, sums, averages, medians), group data into buckets (histograms, terms, and so on), or perform nested and multi-level analyses. Aggregations transform raw data into structured insights with ease.
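As a sketch, the following request buckets documents by a hypothetical `category` field and computes the average of a hypothetical `price` field per bucket:

```console
# Terms buckets per category, each with an average price
GET /sales/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category.keyword" },
      "aggs": {
        "avg_price": { "avg": { "field": "price" } }
      }
    }
  }
}
```

Setting `size` to `0` skips the document hits so the response contains only the aggregation results.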

## Geospatial analysis

Geospatial capabilities enable analysis of location-based data, including distance calculations, polygon and bounding box queries, and geohash grid aggregations. This functionality is essential in industries such as logistics, real estate, and IoT, where location matters.
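For instance, a `geo_distance` query finds documents within a given radius of a point; the sketch below assumes a hypothetical `stores` index with a `location` field mapped as `geo_point`:

```console
# Stores within 5 km of a coordinate
GET /stores/_search
{
  "query": {
    "geo_distance": {
      "distance": "5km",
      "location": {
        "lat": 40.7128,
        "lon": -74.006
      }
    }
  }
}
```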

## Machine learning

Elasticsearch integrates machine learning for proactive analytics, helping you to:

* Detect anomalies in time-series data
* Forecast future trends
* Analyze seasonal patterns
* Perform powerful NLP operations such as semantic search

Machine learning models simplify complex predictive tasks, unlocking new opportunities for optimization.
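As an illustrative sketch, an anomaly detection job can be created with the create anomaly detection jobs API; the job ID, metric field, and time field below are assumptions:

```console
# Minimal anomaly detection job that models the mean of a metric per 15-minute bucket
PUT _ml/anomaly_detectors/response_times
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "response_time" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```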

## Scripting

Scripting makes custom data manipulation and transformation possible during search and aggregation processes. Using scripting languages like Painless, you can calculate custom metrics, perform conditional logic, or adjust data dynamically at search time. This flexibility ensures you get insights tailored to your specific needs.
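For example, a runtime field defined in a search request can derive a new value with a short Painless script; the index and field names below are hypothetical:

```console
# Derive a per-document value at search time with a Painless runtime field
GET /my-index/_search
{
  "runtime_mappings": {
    "duration_s": {
      "type": "double",
      "script": {
        "source": "emit(doc['duration_ms'].value / 1000.0)"
      }
    }
  },
  "fields": [ "duration_s" ]
}
```

Because the field is computed at search time, you can adjust the script without reindexing your data.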

## Explore with Discover [explore-the-data]

[Discover](/explore-analyze/discover.md) lets you interact directly with raw data. Use Discover to:

* Browse documents in your indices
* Apply filters and search queries
* Visualize results in real time

It’s the starting point for exploratory analysis.

Create a variety of visualizations and add them to a dashboard.

### Dashboards
[Dashboards](/explore-analyze/dashboards.md) serve as centralized hubs for visualizing and monitoring data insights. With Dashboards, you can:

* Combine multiple visualizations into a single, unified view
* Display data from multiple indices or datasets for comprehensive analysis
* Customize layouts to suit specific workflows and preferences

Dashboards provide an interactive and cohesive environment with filtering capabilities and controls to explore trends and metrics at a glance.

### Panels and visualizations

[Panels and visualizations](/explore-analyze/visualize.md) are the core elements that populate your dashboards, enabling dynamic data representation. They support diverse chart types, interactive filtering, and drill-down capabilities to explore data further. These building blocks transform raw data into clear, actionable visuals, allowing users to analyze and interpret results effectively.

## Reporting and sharing

You can share your work and findings with colleagues and stakeholders or generate reports. Reports can be generated on a schedule or on demand, and you can choose from multiple formats (for example, PDF or CSV). These tools ensure that actionable insights reach the right people at the right time.

### Alerting

You can set up alerts to monitor your data continuously. Alerts notify you when specific conditions are met, ensuring timely action on critical issues.

## Bringing it all together

Elasticsearch's features integrate seamlessly, offering an end-to-end solution for exploring, analyzing, and acting on data. If you want to explore any of the listed features in greater depth, refer to their respective documentation pages and check the provided hands-on examples and tutorials.

If you'd like to explore some features but don't have data ready yet, some sample data sets are available in {{kib}} for you to install and play with.
Sample data sets come with sample visualizations, dashboards, and more.
4. Install the sample data sets that you want.

Once installed, you can access the sample data in the various {{kib}} apps available to you.




---

* If you are using the default value for the `model_memory_limit` and the {{ml}} nodes in the cluster have lots of memory, the best course of action might be to simply increase the job’s `model_memory_limit`, as shown in the sketch after this list. Before doing this, however, double-check that the chosen analysis makes sense. The default `model_memory_limit` is relatively low to avoid accidentally creating a job that uses a huge amount of memory.
* If the {{ml}} nodes in the cluster do not have sufficient memory to accommodate a job of the estimated size, the only options are:

* Add bigger {{ml}} nodes to the cluster, or
* Accept that the job will hit its memory limit and will not necessarily find all the anomalies it could otherwise find.
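A sketch of raising the limit with the update anomaly detection jobs API follows; the job ID and the new value are assumptions, and the job must be closed before its `analysis_limits` can be updated:

```console
# Raise the memory limit of a closed anomaly detection job
POST _ml/anomaly_detectors/my_job/_update
{
  "analysis_limits": {
    "model_memory_limit": "1gb"
  }
}
```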

---
If an {{anomaly-job}} has failed, do the following to recover from `failed` state:

1. *Force* stop the corresponding {{dfeed}} by using the [Stop {{dfeed}} API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-stop-datafeed.html) with the `force` parameter set to `true`. For example, the following request force stops the `my_datafeed` {{dfeed}}.

```console
POST _ml/datafeeds/my_datafeed/_stop
{
  "force": "true"
}
```

2. *Force* close the {{anomaly-job}} by using the [Close {{anomaly-job}} API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-close-job.html) with the `force` parameter set to `true`. For example, the following request force closes the `my_job` {{anomaly-job}}:

```console
POST _ml/anomaly_detectors/my_job/_close?force=true
```

3. Restart the {{anomaly-job}} on the **Job management** pane in {{kib}}.

---

Population analysis is resource-efficient and scales well, enabling the analysis of populations consisting of hundreds of thousands or even millions of entities with a lower resource footprint than analyzing each series individually.


## Recommendations [population-recommendations]

* Use population analysis when the behavior within a group is mostly homogeneous, as it helps identify anomalous patterns effectively.
* Leverage population analysis when dealing with large-scale datasets.
* Avoid using population analysis when members of the population exhibit vastly different behaviors, as it may not be effective.


## Creating population jobs [creating-population-jobs]

1. In {{kib}}, navigate to **Jobs**. To open **Jobs**, find **{{ml-app}} > Anomaly Detection** in the main menu, or use the [global search field](https://www.elastic.co/guide/en/kibana/current/kibana-concepts-analysts.html#_finding_your_apps_and_objects).
2. Click **Create job** and select the {{data-source}} you want to analyze.
3. Select the **Population** wizard from the list.
4. Choose a population field (the `clientip` field in this example) and the metric you want to use for the analysis (`Mean(bytes)` in this example).

:::{image} ../../../images/machine-learning-ml-population-wizard.png
:alt: Creating a population job in Kibana
:class: screenshot
:::

5. Click **Next**.
6. Provide a job ID and click **Next**.
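You can also create an equivalent job with the create anomaly detection jobs API. The following is a hedged sketch that reuses the `clientip` and `bytes` fields from the wizard example; the `@timestamp` time field and `15m` bucket span are assumptions:

```console
# Population job: each client's mean bytes is modeled relative to all clients
PUT _ml/anomaly_detectors/population
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      {
        "function": "mean",
        "field_name": "bytes",
        "over_field_name": "clientip"
      }
    ],
    "influencers": [ "clientip" ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```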

1. This `over_field_name` property indicates that the metrics for each client (as identified by their IP address) are analyzed relative to other clients in each bucket.


::::



### Viewing the job results [population-job-results]

Use the **Anomaly Explorer** in {{kib}} to view the analysis results: