Commit 5607277

Merge branch '4.12' into merge-4.12-into-4.13.0
2 parents: a387346 + 6f5d68c


58 files changed, +27322 -1053 lines

CHANGELOG.md

Lines changed: 4 additions & 0 deletions
@@ -40,6 +40,9 @@ All notable changes to this project will be documented in this file.
 - **Post-release**: Added DNF package manager support for installation and configuration steps. ([#8689](https://github.com/wazuh/wazuh-documentation/pull/8689))
 - **Post-release**: Added security update for the `remove-threat.py` script and a warning to the Detecting and removing malware using VirusTotal integration POC guide. ([#8697](https://github.com/wazuh/wazuh-documentation/pull/8697))
 - **Post-release**: Added note about manual replication of `ossec.conf` between master and worker nodes. ([#8720](https://github.com/wazuh/wazuh-documentation/pull/8720))
+- **Post-release**: Added a table describing the possible environment statuses in the cloud service documentation. ([#8407](https://github.com/wazuh/wazuh-documentation/pull/8407))
+- **Post-release**: Added the Wazuh indexer API reference. ([#8756](https://github.com/wazuh/wazuh-documentation/pull/8756))
+- **Post-release**: Added examples of Wazuh tools to the user manual reference. ([#8763](https://github.com/wazuh/wazuh-documentation/pull/8763))

 ### Changed

@@ -64,6 +67,7 @@ All notable changes to this project will be documented in this file.
 - **Post-release**: Added steps to export and import dashboard customizations in the upgrade guide. ([#8618](https://github.com/wazuh/wazuh-documentation/pull/8618))
 - **Post-release**: Updated the vulnerability detection capability section. ([#8693](https://github.com/wazuh/wazuh-documentation/pull/8693))
 - **Post-release**: Changed the warning note on using the `$` and `&` characters when changing passwords in Docker deployments. ([#8694](https://github.com/wazuh/wazuh-documentation/pull/8694))
+- **Post-release**: Changed Windows commands in the backup guide to PowerShell. ([#8761](https://github.com/wazuh/wazuh-documentation/pull/8761))

 ### Fixed
Lines changed: 285 additions & 0 deletions
@@ -0,0 +1,285 @@
paths:
  /_analyze:
    get:
      tags: [Analyze API]
      summary: Text analysis (Global GET)
      description: |
        The analyze API endpoint enables text analysis by transforming unstructured text into individual tokens, typically words, optimized for search functionality. It processes a text string and returns the corresponding tokens as the output. It helps you debug and fine-tune text analysis settings for indexing and querying, providing insight into how text is broken into tokens and how filters are applied.
      operationId: analyzeGlobalGet
      parameters:
        - $ref: '#/components/parameters/analyzer'
        - $ref: '#/components/parameters/attributes'
        - $ref: '#/components/parameters/char_filter'
        - $ref: '#/components/parameters/filter'
        - $ref: '#/components/parameters/normalizer'
        - $ref: '#/components/parameters/tokenizer'
        - $ref: '#/components/parameters/explain'
        - $ref: 'spec.yml#/components/parameters/pretty'
      requestBody:
        $ref: '#/components/requestBodies/AnalyzeRequest'
      responses:
        '200':
          $ref: '#/components/responses/AnalyzeSuccess'
        '400':
          $ref: 'spec.yml#/components/responses/BadRequest'
        '401':
          $ref: 'spec.yml#/components/responses/Unauthorized'
        '403':
          $ref: 'spec.yml#/components/responses/Forbidden'
      security:
        - basicAuth: []
        - jwtAuth: []

    post:
      tags: [Analyze API]
      summary: Text analysis (Global POST)
      description: |
        The analyze API endpoint enables text analysis by transforming unstructured text into individual tokens, typically words, optimized for search functionality. It processes a text string and returns the corresponding tokens as the output. It helps you debug and fine-tune text analysis settings for indexing and querying, providing insight into how text is broken into tokens and how filters are applied.
      operationId: analyzeGlobalPost
      parameters:
        - $ref: '#/components/parameters/analyzer'
        - $ref: '#/components/parameters/attributes'
        - $ref: '#/components/parameters/char_filter'
        - $ref: '#/components/parameters/filter'
        - $ref: '#/components/parameters/normalizer'
        - $ref: '#/components/parameters/tokenizer'
        - $ref: '#/components/parameters/explain'
        - $ref: 'spec.yml#/components/parameters/pretty'
      requestBody:
        $ref: '#/components/requestBodies/AnalyzeRequest'
      responses:
        '200':
          $ref: '#/components/responses/AnalyzeSuccess'
        '400':
          $ref: 'spec.yml#/components/responses/BadRequest'
        '401':
          $ref: 'spec.yml#/components/responses/Unauthorized'
        '403':
          $ref: 'spec.yml#/components/responses/Forbidden'
      security:
        - basicAuth: []
        - jwtAuth: []

  /{index}/_analyze:
    parameters:
      - $ref: '#/components/parameters/index'
    get:
      tags: [Analyze API]
      summary: Text analysis (Index GET)
      description: |
        The analyze API endpoint enables text analysis by transforming unstructured text into individual tokens, typically words, optimized for search functionality. It processes a text string and returns the corresponding tokens as the output. It helps you debug and fine-tune text analysis settings for indexing and querying, providing insight into how text is broken into tokens and how filters are applied.
      operationId: analyzeIndexedGet
      parameters:
        - $ref: '#/components/parameters/analyzer'
        - $ref: '#/components/parameters/attributes'
        - $ref: '#/components/parameters/char_filter'
        - $ref: '#/components/parameters/field'
        - $ref: '#/components/parameters/filter'
        - $ref: '#/components/parameters/normalizer'
        - $ref: '#/components/parameters/tokenizer'
        - $ref: '#/components/parameters/explain'
        - $ref: 'spec.yml#/components/parameters/pretty'
      requestBody:
        $ref: '#/components/requestBodies/AnalyzeRequest'
      responses:
        '200':
          $ref: '#/components/responses/AnalyzeSuccess'
        '400':
          $ref: 'spec.yml#/components/responses/BadRequest'
        '401':
          $ref: 'spec.yml#/components/responses/Unauthorized'
        '403':
          $ref: 'spec.yml#/components/responses/Forbidden'
      security:
        - basicAuth: []
        - jwtAuth: []

    post:
      tags: [Analyze API]
      summary: Text analysis (Index POST)
      description: |
        The analyze API endpoint enables text analysis by transforming unstructured text into individual tokens, typically words, optimized for search functionality. It processes a text string and returns the corresponding tokens as the output. It helps you debug and fine-tune text analysis settings for indexing and querying, providing insight into how text is broken into tokens and how filters are applied.
      operationId: analyzeIndexedPost
      parameters:
        - $ref: '#/components/parameters/analyzer'
        - $ref: '#/components/parameters/attributes'
        - $ref: '#/components/parameters/char_filter'
        - $ref: '#/components/parameters/field'
        - $ref: '#/components/parameters/filter'
        - $ref: '#/components/parameters/normalizer'
        - $ref: '#/components/parameters/tokenizer'
        - $ref: '#/components/parameters/explain'
        - $ref: 'spec.yml#/components/parameters/pretty'
      requestBody:
        $ref: '#/components/requestBodies/AnalyzeRequest'
      responses:
        '200':
          $ref: '#/components/responses/AnalyzeSuccess'
        '400':
          $ref: 'spec.yml#/components/responses/BadRequest'
        '401':
          $ref: 'spec.yml#/components/responses/Unauthorized'
        '403':
          $ref: 'spec.yml#/components/responses/Forbidden'
      security:
        - basicAuth: []
        - jwtAuth: []

components:
  parameters:
    index:
      name: index
      in: path
      required: true
      description: Target index name for analyzer derivation
      schema: { type: string }

    analyzer:
      name: analyzer
      in: query
      description: |
        The name of the analyzer to apply to the `text` field. The analyzer can be built-in or configured in the index.

        If `analyzer` is not specified, the analyze API uses the analyzer defined in the mapping of the `field` field.
      schema: { type: string }

    attributes:
      name: attributes
      in: query
      description: Array of token attributes for filtering the output of the `explain` field.
      schema:
        type: array
        items: { type: string }
      style: form
      explode: true

    char_filter:
      name: char_filter
      in: query
      description: Array of character filters for preprocessing characters before the `tokenizer` field.
      schema:
        type: array
        items: { type: string }
      style: form
      explode: true

    field:
      name: field
      in: query
      description: |
        Field for deriving the analyzer.

        If you specify `field`, you must also specify the `index` path parameter.

        If you specify the `analyzer` field, it overrides the value of `field`.

        If you do not specify `field`, the analyze API uses the default analyzer for the index.

        If you do not specify the `index` field, or the index does not have a default analyzer, the analyze API uses the standard analyzer.
      schema: { type: string }

    filter:
      name: filter
      in: query
      description: Array of token filters to apply after the `tokenizer` field.
      schema:
        type: array
        items: { type: string }
      style: form
      explode: true

    normalizer:
      name: normalizer
      in: query
      description: Normalizer for converting text into a single token.
      schema: { type: string }

    tokenizer:
      name: tokenizer
      in: query
      description: Tokenizer for converting the `text` field into tokens.
      schema: { type: string }

    explain:
      name: explain
      in: query
      description: If `true`, causes the response to include token attributes and additional details.
      schema:
        type: boolean
        default: false

  schemas:
    AnalyzeRequest:
      type: object
      properties:
        text:
          oneOf:
            - type: string
            - type: array
              items:
                type: string
          description: Text to analyze. If you provide an array of strings, the text is analyzed as a multi-value field.

    Token:
      type: object
      properties:
        token:
          type: string
          description: The text fragment extracted during analysis. It is a word or part of a word.
        start_offset:
          type: integer
          description: The position in the input text where this token starts.
        end_offset:
          type: integer
          description: The position in the input text where this token ends.
        type:
          type: string
          description: The type of the token. This can be alphanumeric or custom, depending on the analyzer.
        position:
          type: integer
          description: The position of the token in the sequence of tokens, starting from 0. This can help determine word order.

    AnalyzeResponse:
      type: object
      properties:
        analyzer:
          type: string
          description: The name of the analyzer applied to the query.
        text:
          type: string
          description: The text supplied to the query for analysis.
        tokens:
          type: array
          items: { $ref: '#/components/schemas/Token' }

  requestBodies:
    AnalyzeRequest:
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/AnalyzeRequest'
          example:
            analyzer: "standard"
            text: ["first array element", "second array element"]

  responses:
    AnalyzeSuccess:
      description: Success
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/AnalyzeResponse'
          examples:
            standard:
              value:
                tokens:
                  - token: "test"
                    start_offset: 0
                    end_offset: 4
                    type: "<ALPHANUM>"
                    position: 0
                  - token: "word"
                    start_offset: 5
                    end_offset: 9
                    type: "<ALPHANUM>"
                    position: 1
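
A quick way to sanity-check the endpoints this spec describes is to call them directly. The following is a minimal sketch, not part of the commit: the indexer address (https://localhost:9200), the admin/admin credentials, the index name "my-index", and the disabled TLS verification are all assumptions for a local test deployment.

# Minimal sketch of exercising the Analyze API described above.
# Assumptions (not from the commit): Wazuh indexer at https://localhost:9200,
# "admin"/"admin" basic-auth credentials, an existing index named "my-index",
# and TLS verification disabled for a self-signed local certificate.
import requests
import urllib3

urllib3.disable_warnings()  # silence self-signed-certificate warnings

INDEXER = "https://localhost:9200"  # assumed address
AUTH = ("admin", "admin")           # assumed credentials

# Global endpoint: POST /_analyze with an explicit analyzer, mirroring the
# AnalyzeRequest example in the spec (a plain string instead of an array).
resp = requests.post(
    f"{INDEXER}/_analyze",
    json={"analyzer": "standard", "text": "test word"},
    auth=AUTH,
    verify=False,
)
resp.raise_for_status()
for tok in resp.json()["tokens"]:
    # Fields follow the Token schema: token, start_offset, end_offset,
    # type, position. For "test word" the standard analyzer should yield
    # the two tokens shown in the spec's AnalyzeSuccess example.
    print(tok["token"], tok["start_offset"], tok["end_offset"], tok["position"])

# Index-scoped endpoint: GET /{index}/_analyze, deriving the analyzer from a
# field mapping and requesting token details via explain=true. Note that the
# spec defines a request body even for GET, so the text goes in the body.
resp = requests.get(
    f"{INDEXER}/my-index/_analyze",  # "my-index" is a hypothetical index
    params={"field": "title", "explain": "true"},
    json={"text": "Quick Brown Foxes"},
    auth=AUTH,
    verify=False,
)
print(resp.json())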
