docs/reference/search/search-your-data/paginate-search-results.asciidoc
[[paginate-search-results]]
=== Paginate search results
Pagination provides a structured way for users to interact with search results by breaking them down into smaller, manageable pieces or "pages." Whether allowing users to select a specific range of pages or implementing an “infinite scroll” experience, pagination helps create a tailored and intuitive user experience.
The three commonly used pagination techniques are:
* <<from-and-size-pagination, From and size pagination>>: This method is ideal for creating a defined list of pages that users can select to navigate through the results.
* <<search-after, Search after>>: Designed for seamless navigation, this technique supports “infinite scroll” experiences or enables users to load additional results by pressing a “next” button.
* <<scroll-search-results, Scroll>>: Historically used to retrieve all matching documents, for example for display or export. We now recommend using the <<search-after, search after>> method in combination with the <<point-in-time-api, point in time API>> instead, for improved efficiency and reliability.
[discrete]
[[from-and-size-pagination]]
=== From and size pagination
By default, searches return the top 10 matching hits. To page through a larger
set of results, you can use the <<search-search,search API>>'s `from` and `size`
parameters. The `from` parameter defines the number of hits to skip, defaulting
once. Search requests usually span multiple shards. Each shard must load its
requested hits and the hits for any previous pages into memory. For deep pages
or large sets of results, these operations can significantly increase memory and
CPU usage, resulting in degraded performance or node failures. For this reason, pagination depth must be managed carefully.
By default, you cannot use `from` and `size` to page through more than 10,000
31
43
hits. This limit is a safeguard set by the
32
44
<<index-max-result-window,`index.max_result_window`>> index setting. If you need
33
45
to page through more than 10,000 hits, use the <<search-after,`search_after`>>
34
46
parameter instead.
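For example, the following request skips the first 10 hits and returns the next 10, that is, the second page of results. The index name and query shown here are illustrative:

[source,console]
----
GET /my-index/_search
{
  "from": 10,
  "size": 10,
  "query": {
    "match": {
      "user.id": "elkbee"
    }
  }
}
----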
`from` and `size` pagination is stateless, meaning there is no guarantee that the order of search results will remain consistent as users navigate back and forth between pages. If maintaining a consistent result order is required, the preferred approach is to use the <<point-in-time-api, point in time (PIT) API>> to implement a stateful pagination technique.
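As a sketch of that stateful approach (the index name and `keep_alive` value are illustrative), you first open a point in time, then reference its ID in subsequent search requests so that every page sees the same snapshot of the data:

[source,console]
----
POST /my-index/_pit?keep_alive=1m
----

[source,console]
----
GET /_search
{
  "size": 10,
  "query": { "match_all": {} },
  "pit": {
    "id": "<ID returned by the previous request>",
    "keep_alive": "1m"
  }
}
----

Note that a search using a PIT does not specify an index in the request path; the PIT ID already identifies the indices it was opened against.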
WARNING: {es} uses Lucene's internal doc IDs as tie-breakers. These internal doc
IDs can be completely different across replicas of the same data. When paging
search hits, you might occasionally see that documents with the same sort values
=== Search after
You can use the `search_after` parameter to retrieve the next page of hits
using a set of <<sort-search-results,sort values>> from the previous page. This approach is ideal for scenarios where users click a "next" or "load more" button, rather than selecting a specific page.
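To illustrate, a follow-up request of this kind might look like the following sketch. The index name, sort fields, and values are hypothetical; the `search_after` values are taken from the `sort` array of the last hit on the previous page, and a unique tiebreaker field keeps the ordering deterministic:

[source,console]
----
GET /my-index/_search
{
  "size": 10,
  "query": { "match_all": {} },
  "sort": [
    { "price": "asc" },
    { "id": "asc" }
  ],
  "search_after": [ 19.99, 1234 ]
}
----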
Using `search_after` requires multiple search requests with the same `query` and
`sort` values. The first step is to run an initial request. The following