[Feedback]: "tokenizer" parameter is missing in the _analyze documentation #2197

@sabi0

Description

The "Get tokens from text analysis" documentation does not mention the "tokenizer" body property.

There are parameters for character filters ("char_filter") and token filters ("filter"), but no documented way to specify a tokenizer.

The "tokenizer" property is in fact supported by the _analyze REST endpoint:

POST /{index}/_analyze

{
  "text": "Hello there!",
  "tokenizer": "keyword"
}
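For illustration, "tokenizer" can also be combined with the documented "char_filter" and "filter" properties. The components used below (html_strip, standard, lowercase) are standard built-ins; the specific combination is only a hypothetical example, not taken from the documentation in question:

POST /{index}/_analyze

{
  "text": "Hello <b>there</b>!",
  "char_filter": ["html_strip"],
  "tokenizer": "standard",
  "filter": ["lowercase"]
}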
