
Commit 42d7c80

Author: Kofi
Proposed changes to match opster. Added a summary at the top to help distinguish between different methods
1 parent 7262fb3 commit 42d7c80

File tree

1 file changed (+16, -2 lines)


docs/reference/search/search-your-data/paginate-search-results.asciidoc

Lines changed: 16 additions & 2 deletions
@@ -1,6 +1,18 @@
 [[paginate-search-results]]
 === Paginate search results
 
+Pagination provides a structured way for users to interact with search results by breaking them down into smaller, manageable pieces or "pages." Whether allowing users to select a specific range of pages or implementing an "infinite scroll" experience, pagination helps create a tailored and intuitive user experience.
+
+The three commonly used pagination techniques are:
+
+* <<from-and-size-pagination, From and size pagination>>: This method is ideal for creating a defined list of pages that users can select to navigate through the results.
+* <<search-after, Search after>>: Designed for seamless navigation, this technique supports "infinite scroll" experiences or enables users to load additional results by pressing a "next" button.
+* <<scroll-search-results, Scroll>>: Historically used to retrieve all matching documents for display or export. However, we now recommend using the <<search-after, search after>> method in combination with the <<point-in-time-api, point in time API>> for improved efficiency and reliability.
+
+[discrete]
+[[from-and-size-pagination]]
+=== From and size pagination
+
 By default, searches return the top 10 matching hits. To page through a larger
 set of results, you can use the <<search-search,search API>>'s `from` and `size`
 parameters. The `from` parameter defines the number of hits to skip, defaulting
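
For context, a paged request using these parameters might look like the following minimal sketch (the index name `my-index-000001`, the `user.id` field, and the query value are placeholders, not taken from this commit):

[source,console]
----
GET /my-index-000001/_search
{
  "from": 10,
  "size": 10,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  }
}
----

Here `from: 10` skips the first ten hits and `size: 10` returns the next ten, i.e. the second page of a ten-results-per-page listing.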
@@ -25,14 +37,16 @@ Avoid using `from` and `size` to page too deeply or request too many results at
 once. Search requests usually span multiple shards. Each shard must load its
 requested hits and the hits for any previous pages into memory. For deep pages
 or large sets of results, these operations can significantly increase memory and
-CPU usage, resulting in degraded performance or node failures.
+CPU usage, resulting in degraded performance or node failures. Deep pagination can seriously degrade performance if it is not managed carefully.
 
 By default, you cannot use `from` and `size` to page through more than 10,000
 hits. This limit is a safeguard set by the
 <<index-max-result-window,`index.max_result_window`>> index setting. If you need
 to page through more than 10,000 hits, use the <<search-after,`search_after`>>
 parameter instead.
 
+Paging with `from` and `size` is stateless, meaning there is no guarantee that the order of search results will remain consistent when users navigate back and forth between pages. If you need a consistent result order, use the <<point-in-time-api, point in time (PIT) API>> to implement a stateful pagination technique.
+
 WARNING: {es} uses Lucene's internal doc IDs as tie-breakers. These internal doc
 IDs can be completely different across replicas of the same data. When paging
 search hits, you might occasionally see that documents with the same sort values
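
To sketch the stateful approach recommended above: open a point in time, then run each page's search against it (the index name, keep-alive, and PIT ID below are placeholders):

[source,console]
----
POST /my-index-000001/_pit?keep_alive=1m
----

[source,console]
----
GET /_search
{
  "size": 10,
  "query": {
    "match_all": {}
  },
  "pit": {
    "id": "<PIT ID returned by the previous request>",
    "keep_alive": "1m"
  },
  "sort": [
    { "@timestamp": { "order": "asc" } }
  ]
}
----

Because every page is resolved against the same point in time, results keep a consistent order while the user moves back and forth between pages.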
@@ -43,7 +57,7 @@ are not ordered consistently.
 === Search after
 
 You can use the `search_after` parameter to retrieve the next page of hits
-using a set of <<sort-search-results,sort values>> from the previous page.
+using a set of <<sort-search-results,sort values>> from the previous page. This approach is ideal for scenarios where users click a "next" or "load more" button, rather than selecting a specific page.
 
 Using `search_after` requires multiple search requests with the same `query` and
 `sort` values. The first step is to run an initial request. The following
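
As a rough sketch of that flow (the index, fields, and values below are illustrative assumptions, including a unique `id` keyword field used as a sort tiebreaker): run an initial sorted request, then feed the last hit's sort values back through `search_after` to fetch the next page.

[source,console]
----
GET /my-index-000001/_search
{
  "size": 10,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  },
  "sort": [
    { "@timestamp": { "order": "asc" } },
    { "id": { "order": "asc" } }
  ]
}
----

[source,console]
----
GET /my-index-000001/_search
{
  "size": 10,
  "query": {
    "match": {
      "user.id": "kimchy"
    }
  },
  "search_after": [ "2099-05-20T05:30:04.832Z", "6b24f2" ],
  "sort": [
    { "@timestamp": { "order": "asc" } },
    { "id": { "order": "asc" } }
  ]
}
----

The two values passed to `search_after` are the `sort` values returned with the last hit of the previous page; each subsequent request repeats the same `query` and `sort` and only swaps in the newest sort values.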
