**explore-analyze/index.md** (30 additions, 30 deletions)
@@ -27,23 +27,19 @@ The Elasticsearch platform and its UI, also known as Kibana, provide a comprehen

Elastic is committed to ensuring digital accessibility for people with disabilities. We are continually improving the user experience, and strive toward ensuring our tools are usable by everyone.

**Measures to support accessibility**

Elastic takes the following measures to ensure accessibility of Kibana:

* Maintains and incorporates an [accessible component library](https://elastic.github.io/eui/).
* Provides continual accessibility training for our staff.
* Employs a third-party audit.

**Conformance status**

Kibana aims to meet [WCAG 2.1 level AA](https://www.w3.org/WAI/WCAG21/quickref/?currentsidebar=%23col_customize&levels=aaa&technologies=server%2Csmil%2Cflash%2Csl) compliance. Currently, we can only claim to partially conform, meaning we do not fully meet all of the success criteria. However, we do try to take a broader view of accessibility, and go above and beyond the legal and regulatory standards to provide a good experience for all of our users.

**Feedback**

We welcome your feedback on the accessibility of Kibana. Please let us know if you encounter accessibility barriers in Kibana by either emailing us at `[email protected]` or opening [an issue on GitHub](https://github.com/elastic/kibana/issues/new?labels=Project%3AAccessibility&template=Accessibility.md&title=%28Accessibility%29).

**Technical specifications**

Accessibility of Kibana relies on the following technologies to work with your web browser and any assistive technologies or plugins installed on your computer:
@@ -52,7 +48,6 @@ Accessibility of Kibana relies on the following technologies to work with your w

* JavaScript
* WAI-ARIA

**Limitations and alternatives**

Despite our best efforts to ensure accessibility of Kibana, there are some limitations. Please [open an issue on GitHub](https://github.com/elastic/kibana/issues/new?labels=Project%3AAccessibility&template=Accessibility.md&title=%28Accessibility%29) if you observe an issue not in this list.
@@ -65,7 +60,6 @@ Known limitations are in the following areas:

To see individual tickets, view our [GitHub issues with label "`Project:Accessibility`"](https://github.com/elastic/kibana/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3AProject%3AAccessibility).

**Assessment approach**

Elastic assesses the accessibility of Kibana with the following approaches:
@@ -81,40 +75,46 @@ Manual testing largely focuses on screen reader support and is done on:

:::

## Querying and filtering

Elasticsearch’s robust query capabilities enable you to retrieve specific data from your datasets. Using the Query DSL (Domain Specific Language), you can build powerful, flexible queries that support:

* Full-text search
* Boolean logic
* Fuzzy matching
* Proximity searches
* Semantic search
* …and more.

These tools simplify refining searches and pinpointing relevant information in real time.
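
As an illustration, here is a minimal sketch of a `bool` query that combines a full-text `match` clause, a `fuzzy` clause, and a `range` filter. The index name `my-index` and the `message`, `hostname`, and `@timestamp` fields are assumptions for the sketch, not part of the original page.

```console
# Combine full-text, fuzzy, and date-range clauses in one request
GET my-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "server error" } }
      ],
      "should": [
        { "fuzzy": { "hostname": { "value": "web-01", "fuzziness": "AUTO" } } }
      ],
      "filter": [
        { "range": { "@timestamp": { "gte": "now-1d/d" } } }
      ]
    }
  }
}
```

Clauses in `filter` do not affect scoring, which keeps the request efficient when you only need to narrow the result set.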

### Aggregations

Aggregations provide advanced data analysis, enabling you to extract actionable insights. With aggregations, you can calculate statistical metrics (for example, sums, averages, medians), group data into buckets (histograms, terms, and so on), or perform nested and multi-level analyses. Aggregations transform raw data into structured insights with ease.
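
For example, a `terms` bucket aggregation with a nested `avg` metric groups documents and computes a statistic per group in a single request. This is only a sketch; the `my-index`, `category.keyword`, and `bytes` names are illustrative assumptions.

```console
# Group documents by category and compute the average bytes per bucket
GET my-index/_search
{
  "size": 0,
  "aggs": {
    "by_category": {
      "terms": { "field": "category.keyword" },
      "aggs": {
        "avg_bytes": { "avg": { "field": "bytes" } }
      }
    }
  }
}
```

Setting `"size": 0` suppresses the document hits so the response contains only the aggregation results.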
## Geospatial Analysis

The geospatial capabilities enable analysis of location-based data, including distance calculations, polygon and bounding box queries, and geohash grid aggregations. This functionality is essential for industries such as logistics, real estate, and IoT, where location matters.
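
A hedged sketch: assuming an index with a `geo_point` field named `location`, a `geo_distance` query can narrow results to a radius while a `geohash_grid` aggregation buckets the matches into grid cells.

```console
# Find documents within 10 km of a point and bucket them into geohash cells
GET my-index/_search
{
  "size": 0,
  "query": {
    "geo_distance": {
      "distance": "10km",
      "location": { "lat": 40.7128, "lon": -74.0060 }
    }
  },
  "aggs": {
    "grid": {
      "geohash_grid": { "field": "location", "precision": 5 }
    }
  }
}
```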
## Machine Learning

Elasticsearch integrates machine learning for proactive analytics, helping you to:

* Detect anomalies in time-series data
* Forecast future trends
* Analyze seasonal patterns
* Perform powerful NLP operations such as semantic search

Machine learning models simplify complex predictive tasks, unlocking new opportunities for optimization.
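
As a rough sketch of the anomaly detection piece, the following request creates a job that models the mean of a numeric field in 15-minute buckets. The job name and the `bytes` and `@timestamp` fields are assumptions, and a real setup also needs a datafeed pointing at your index.

```console
# Create a simple anomaly detection job on the mean of a numeric field
PUT _ml/anomaly_detectors/example-bytes-job
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "mean", "field_name": "bytes" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```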
## Scripting

Scripting makes custom data manipulation and transformation possible during search and aggregation processes. Using scripting languages like Painless, you can calculate custom metrics, perform conditional logic, or adjust data dynamically at search time. This flexibility ensures tailored insights specific to your needs.
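
For instance, a Painless script field can derive a value per hit at search time without reindexing. This is a minimal sketch; the index and `bytes` field names are assumptions.

```console
# Compute a derived value (kilobytes) per hit at search time
GET my-index/_search
{
  "script_fields": {
    "bytes_in_kb": {
      "script": {
        "lang": "painless",
        "source": "doc['bytes'].value / 1024.0"
      }
    }
  }
}
```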
## Explore with Discover [explore-the-data]

[Discover](/explore-analyze/discover.md) lets you interact directly with raw data. Use Discover to:

* Browse documents in your indices
* Apply filters and search queries
* Visualize results in real time

It’s the starting point for exploratory analysis.
@@ -124,21 +124,25 @@ Create a variety of visualizations and add them to a dashboard.

### Dashboards

[Dashboards](/explore-analyze/dashboards.md) serve as centralized hubs for visualizing and monitoring data insights. With Dashboards, you can:

* Combine multiple visualizations into a single, unified view
* Display data from multiple indices or datasets for comprehensive analysis
* Customize layouts to suit specific workflows and preferences

Dashboards provide an interactive and cohesive environment with filtering capabilities and controls to explore trends and metrics at a glance.

### Panels and visualizations

[Panels and visualizations](/explore-analyze/visualize.md) are the core elements that populate your dashboards, enabling dynamic data representation. They support diverse chart types, interactive filtering, and drill-down capabilities to explore data further. These building blocks transform raw data into clear, actionable visuals, allowing users to analyze and interpret results effectively.

## Reporting and sharing

You can share your work and findings with colleagues and stakeholders or generate reports. Report generation can be scheduled or run on demand, and you can choose from multiple formats (for example, PDF or CSV). These tools ensure that actionable insights reach the right people at the right time.

## Alerting

You can set up alerts to monitor your data continuously. Alerts notify you when specific conditions are met, ensuring timely action on critical issues.
## Bringing it all together

Elasticsearch's features integrate seamlessly, offering an end-to-end solution for exploring, analyzing, and acting on data. If you want to explore any of the listed features in greater depth, refer to their respective documentation pages and check the provided hands-on examples and tutorials.

If you'd like to explore some features but don't have data ready yet, some sample data sets are available in {{kib}} for you to install and play with.
@@ -153,7 +157,3 @@ Sample data sets come with sample visualizations, dashboards, and more to help y

4. Install the sample data sets that you want.

Once installed, you can access the sample data in the various {{kib}} apps available to you.

**explore-analyze/machine-learning/anomaly-detection/ml-ad-run-jobs.md** (0 additions, 1 deletion)
@@ -105,7 +105,6 @@ If the estimated model memory limit for an {{anomaly-job}} is greater than the m

* If you are using the default value for the `model_memory_limit` and the {{ml}} nodes in the cluster have lots of memory, the best course of action might be to simply increase the job’s `model_memory_limit` (see the sketch after this list). Before doing this, however, double-check that the chosen analysis makes sense. The default `model_memory_limit` is relatively low to avoid accidentally creating a job that uses a huge amount of memory.
* If the {{ml}} nodes in the cluster do not have sufficient memory to accommodate a job of the estimated size, the only options are:
  * Add bigger {{ml}} nodes to the cluster, or
  * Accept that the job will hit its memory limit and will not necessarily find all the anomalies it could otherwise find.
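
A minimal sketch of raising a job's memory limit with the update API, assuming a job named `my_job` (the job must be closed while you update it, and the value shown is illustrative):

```console
# Raise the memory limit of a closed job; the job id and value are examples
POST _ml/anomaly_detectors/my_job/_update
{
  "analysis_limits": {
    "model_memory_limit": "1gb"
  }
}
```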

**explore-analyze/machine-learning/anomaly-detection/ml-ad-troubleshooting.md** (6 additions, 8 deletions)
@@ -18,19 +18,17 @@ If an {{anomaly-job}} fails, try to restart the job by following the procedure d

If an {{anomaly-job}} has failed, do the following to recover from the `failed` state:

1. *Force* stop the corresponding {{dfeed}} by using the [Stop {{dfeed}} API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-stop-datafeed.html) with the `force` parameter set to `true`. For example, the following request force stops the `my_datafeed` {{dfeed}}:

   ```console
   POST _ml/datafeeds/my_datafeed/_stop
   {
     "force": "true"
   }
   ```

2. *Force* close the {{anomaly-job}} by using the [Close {{anomaly-job}} API](https://www.elastic.co/guide/en/elasticsearch/reference/current/ml-close-job.html) with the `force` parameter set to `true`. For example, the following request force closes the `my_job` {{anomaly-job}}:

   ```console
   POST _ml/anomaly_detectors/my_job/_close?force=true
   ```

3. Restart the {{anomaly-job}} on the **Job management** pane in {{kib}}.

**explore-analyze/machine-learning/anomaly-detection/ml-configuring-populations.md** (4 additions, 10 deletions)
@@ -14,25 +14,22 @@ This type of analysis is most effective when the behavior within a group is gene

Population analysis is resource-efficient and scales well, enabling the analysis of populations consisting of hundreds of thousands or even millions of entities with a lower resource footprint than analyzing each series individually.

## Recommendations [population-recommendations]

* Use population analysis when the behavior within a group is mostly homogeneous, as it helps identify anomalous patterns effectively.
* Leverage population analysis when dealing with large-scale datasets.
* Avoid using population analysis when members of the population exhibit vastly different behaviors, as it may not be effective.

## Creating population jobs [creating-population-jobs]

1. In {{kib}}, navigate to **Jobs**. To open **Jobs**, find **{{ml-app}} > Anomaly Detection** in the main menu, or use the [global search field](https://www.elastic.co/guide/en/kibana/current/kibana-concepts-analysts.html#_finding_your_apps_and_objects).
2. Click **Create job** and select the {{data-source}} you want to analyze.
3. Select the **Population** wizard from the list.
4. Choose a population field (the `clientip` field in this example) and the metric you want to use for the analysis (`Mean(bytes)` in this example).
@@ -68,11 +65,8 @@ PUT _ml/anomaly_detectors/population

1. This `over_field_name` property indicates that the metrics for each client (as identified by their IP address) are analyzed relative to other clients in each bucket.

::::

### Viewing the job results [population-job-results]

Use the **Anomaly Explorer** in {{kib}} to view the analysis results: