
Commit 9bd0d47

[DOCS] resource_name property for attachment ingest processor (#65974) (#69826)

1 parent fa5e6e5


docs/plugins/ingest-attachment.asciidoc

Lines changed: 6 additions & 5 deletions
@@ -28,6 +28,7 @@ include::install_remove.asciidoc[]
 | `indexed_chars_field` | no | `null` | Field name from which you can overwrite the number of chars being used for extraction. See `indexed_chars`.
 | `properties` | no | all properties | Array of properties to select to be stored. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language`
 | `ignore_missing` | no | `false` | If `true` and `field` does not exist, the processor quietly exits without modifying the document
+| `resource_name` | no | | Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable https://tika.apache.org/1.24.1/detection.html#Resource_Name_Based_Detection[Resource Name Based Detection].
 |======
 
 [discrete]
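To illustrate the `resource_name` option added in the hunk above, here is a minimal sketch of a pipeline definition that passes a file-name hint to Tika. It is not part of this commit: the host, the pipeline id, and the `data`/`filename` field names are assumptions; only the `resource_name` parameter itself comes from the documented table.

[source,python]
----
import requests

# Hypothetical sketch: an attachment pipeline that forwards the value of the
# `filename` field to Tika so it can apply Resource Name Based Detection.
# Host, pipeline id, and field names are assumptions, not from the commit.
pipeline = {
    "description": "Extract attachment info, hinting the media type via the file name",
    "processors": [
        {
            "attachment": {
                "field": "data",             # base64-encoded binary content
                "resource_name": "filename"  # field holding the original file name
            }
        }
    ]
}

response = requests.put(
    "http://localhost:9200/_ingest/pipeline/attachment",
    json=pipeline,
)
response.raise_for_status()
----

A document indexed through this pipeline would then carry both the binary `data` and a `filename` field for detection.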
@@ -115,7 +116,7 @@ PUT _ingest/pipeline/attachment
 NOTE: Extracting contents from binary data is a resource intensive operation and
 consumes a lot of resources. It is highly recommended to run pipelines
 using this processor in a dedicated ingest node.
-
+
 [[ingest-attachment-cbor]]
 ==== Use the attachment processor with CBOR
 
@@ -157,17 +158,17 @@ with open(file, 'rb') as f:
 'data': f.read()
 }
 requests.put(
-'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=cbor-attachment',
-data=cbor2.dumps(doc),
+'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=cbor-attachment',
+data=cbor2.dumps(doc),
 headers=headers
 )
 ----
 
 [[ingest-attachment-extracted-chars]]
 ==== Limit the number of extracted chars
 
-To prevent extracting too many chars and overload the node memory, the number of chars being used for extraction
-is limited by default to `100000`. You can change this value by setting `indexed_chars`. Use `-1` for no limit but
+To prevent extracting too many chars and overload the node memory, the number of chars being used for extraction
+is limited by default to `100000`. You can change this value by setting `indexed_chars`. Use `-1` for no limit but
 ensure when setting this that your node will have enough HEAP to extract the content of very big documents.
 
 You can also define this limit per document by extracting from a given field the limit to set. If the document
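The hunk above shows only the tail of the CBOR example, so for readability here is a self-contained sketch of the whole script. The lines not visible in the diff context (imports, the file path, the `headers` definition) are filled in as assumptions; the request itself matches the diff.

[source,python]
----
import cbor2
import requests

# Self-contained sketch of the CBOR example the hunk above is drawn from.
# The file name is an assumption; index, document id, pipeline name, and
# request body follow the diff context.
file = 'sample.pdf'
headers = {'content-type': 'application/cbor'}

with open(file, 'rb') as f:
    doc = {
        'data': f.read()
    }
    requests.put(
        'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=cbor-attachment',
        data=cbor2.dumps(doc),
        headers=headers
    )
----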
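The final part of the hunk describes the `indexed_chars` extraction limit. As a hedged sketch of how the two documented options fit together, the pipeline below raises the global limit and allows a per-document override through `indexed_chars_field`; the pipeline id and field names are illustrative assumptions.

[source,python]
----
import requests

# Sketch: raise the extraction limit to 500000 chars (default is 100000) and
# let individual documents override it via a `max_size` field. Pipeline id
# and field names are assumptions; the option names are from the docs table.
pipeline = {
    "processors": [
        {
            "attachment": {
                "field": "data",
                "indexed_chars": 500000,           # global limit; -1 disables it
                "indexed_chars_field": "max_size"  # optional per-document override
            }
        }
    ]
}

requests.put(
    "http://localhost:9200/_ingest/pipeline/attachment_limited",
    json=pipeline,
).raise_for_status()
----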
