docs/reference/connector/docs/connectors-salesforce.asciidoc (+20 −8)
@@ -200,7 +200,7 @@ Once the permissions are set, assign the Profiles, Permission Set or Permission
 Follow these steps in Salesforce:
 
 1. Navigate to `Administration` under the `Users` section.
-2. Select `Users` and choose the user to set the permissions to.
+2. Select `Users` and choose the user to assign the permissions to.
 3. Set the `Profile`, `Permission Set` or `Permission Set Groups` created in the earlier steps.
 
 [discrete#es-connectors-salesforce-sync-rules]
@@ -249,7 +249,7 @@ Allowed values are *SOQL* and *SOSL*.
 [
   {
     "query": "FIND {Salesforce} IN ALL FIELDS",
-    "language": "SOSL"
+    "language": "SOSL"
   }
 ]
 ----
@@ -381,7 +381,13 @@ See <<es-connectors-content-extraction,content extraction>> for more specifics o
 [discrete#es-connectors-salesforce-known-issues]
 ===== Known issues
 
-There are currently no known issues for this connector.
+* *DLS feature is "type-level" not "document-level"*
++
+Salesforce DLS, added in 8.13.0, does not accommodate specific access controls to specific Salesforce Objects.
+Instead, if a given user/group can have access to _any_ Objects of a given type (`Case`, `Lead`, `Opportunity`, etc.), that user/group will appear in the `\_allow_access_control` list for _all_ of the Objects of that type.
+See https://github.com/elastic/connectors/issues/3028 for more details.
+
 Refer to <<es-connectors-known-issues,connector known issues>> for a list of known issues for all connectors.
 
 [discrete#es-connectors-salesforce-security]
@@ -396,7 +402,7 @@ This connector is built with the {connectors-python}[Elastic connector framework
 
 View the {connectors-python}/connectors/sources/salesforce.py[source code for this connector^] (branch _{connectors-branch}_, compatible with Elastic _{minor-version}_).
 
-// Closing the collapsible section
+// Closing the collapsible section
 ===============
 
@@ -598,7 +604,7 @@ Once the permissions are set, assign the Profiles, Permission Set or Permission
 Follow these steps in Salesforce:
 
 1. Navigate to `Administration` under the `Users` section.
-2. Select `Users` and choose the user to set the permissions to.
+2. Select `Users` and choose the user to assign the permissions to.
 3. Set the `Profile`, `Permission Set` or `Permission Set Groups` created in the earlier steps.
-There are currently no known issues for this connector.
+* *DLS feature is "type-level" not "document-level"*
++
+Salesforce DLS, added in 8.13.0, does not accommodate specific access controls to specific Salesforce Objects.
+Instead, if a given user/group can have access to _any_ Objects of a given type (`Case`, `Lead`, `Opportunity`, etc.), that user/group will appear in the `\_allow_access_control` list for _all_ of the Objects of that type.
+See https://github.com/elastic/connectors/issues/3028 for more details.
+
 Refer to <<es-connectors-known-issues,connector known issues>> for a list of known issues for all connectors.
@@ -797,5 +809,5 @@ This connector is built with the {connectors-python}[Elastic connector framework
 
 View the {connectors-python}/connectors/sources/salesforce.py[source code for this connector^] (branch _{connectors-branch}_, compatible with Elastic _{minor-version}_).
-preview::[Logs data streams and the logsdb index mode are in tech preview and may be changed or removed in the future. Don't use logs data streams or logsdb index mode in production.]
+IMPORTANT: The {es} `logsdb` index mode is generally available in Elastic Cloud Hosted
+and self-managed Elasticsearch as of version 8.17, and is enabled by default for
+logs in https://www.elastic.co/elasticsearch/serverless[{serverless-full}].
 
 A logs data stream is a data stream type that stores log data more efficiently.
 
 In benchmarks, log data stored in a logs data stream used ~2.5 times less disk space than a regular data
-stream. The exact impact will vary depending on your data set.
-
-The following features are enabled in a logs data stream:
-
-* <<synthetic-source,Synthetic source>>, which omits storing the `_source` field. When the document source is requested, it is synthesized from document fields upon retrieval.
-
-* Index sorting. This yields a lower storage footprint. By default indices are sorted by `host.name` and `@timestamp` fields at index time.
-
-* More space efficient compression for fields with <<doc-values,`doc_values`>> enabled.
+stream. The exact impact varies by data set.
 
 [discrete]
 [[how-to-use-logsds]]
 === Create a logs data stream
 
-To create a logs data stream, set your indextemplate `index.mode` to `logsdb`:
+To create a logs data stream, set your <<index-templates,template>> `index.mode` to `logsdb`:
 
 [source,console]
 ----
@@ -39,14 +33,193 @@ PUT _index_template/my-index-template
 // TEST
 
 <1> The index mode setting.
-<2> The index template priority. By default, Elasticsearch ships with an index template with a `logs-*-*` pattern with a priority of 100. You need to define a priority higher than 100 to ensure that this index template gets selected over the default index template for the `logs-*-*` pattern. See the <<avoid-index-pattern-collisions,avoid index pattern collision section>> for more information.
+<2> The index template priority. By default, Elasticsearch ships with a `logs-*-*` index template with a priority of 100. To make sure your index template takes priority over the default `logs-*-*` template, set its `priority` to a number higher than 100. For more information, see <<avoid-index-pattern-collisions,Avoid index pattern collisions>>.
 
 After the index template is created, new indices that use the template will be configured as a logs data stream. You can start indexing data and <<use-a-data-stream,using the data stream>>.
+
+You can also set the index mode and adjust other template settings in <<index-mgmt,the Elastic UI>>.
 
 ////
 [source,console]
 ----
 DELETE _index_template/my-index-template
 ----
 // TEST[continued]
 ////
+
+[[logsdb-default-settings]]
+
+[discrete]
+[[logsdb-synthetic-source]]
+=== Synthetic source
+
+If you have the required https://www.elastic.co/subscriptions[subscription], `logsdb` index mode uses <<synthetic-source,synthetic `_source`>>, which omits storing the original `_source`
+field. Instead, the document source is synthesized from doc values or stored fields upon document retrieval.
+
+If you don't have the required https://www.elastic.co/subscriptions[subscription], `logsdb` mode uses the original `_source` field.
+
+Before using synthetic source, make sure to review the <<synthetic-source-restrictions,restrictions>>.
+
+When working with multi-value fields, the `index.mapping.synthetic_source_keep` setting controls how field values
+are preserved for <<synthetic-source,synthetic source>> reconstruction. In `logsdb`, the default value is `arrays`,
+which retains both duplicate values and the order of entries. However, the exact structure of
+array elements and objects is not necessarily retained. Preserving duplicates and ordering can be critical for some
+log fields, such as DNS A records, HTTP headers, and log entries that represent sequential or repeated events.
+
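To pin this behavior down for an individual field, the `synthetic_source_keep` parameter can also be set at the mapping level. A minimal sketch, assuming a hypothetical `logs-keep-*` data stream and an illustrative `http.request.headers` field (neither appears in the original text):

[source,console]
----
PUT _index_template/logs-keep-template
{
  "index_patterns": ["logs-keep-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb"
    },
    "mappings": {
      "properties": {
        "http.request.headers": {
          "type": "keyword",
          "synthetic_source_keep": "all"
        }
      }
    }
  }
}
----

Here `all` preserves the exact array and object structure for that one field at the cost of extra storage, while the rest of the index keeps the `logsdb` default of `arrays`.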
+[discrete]
+[[logsdb-sort-settings]]
+=== Index sort settings
+
+In `logsdb` index mode, the following sort settings are applied by default:
+
+`index.sort.field`: `["host.name", "@timestamp"]`::
+Indices are sorted by `host.name` and `@timestamp` by default. The `@timestamp` field is automatically injected if it is not present.
+
+`index.sort.order`: `["desc", "desc"]`::
+Both `host.name` and `@timestamp` are sorted in descending (`desc`) order, prioritizing the latest data.
+
+`index.sort.mode`: `["min", "min"]`::
+The `min` mode sorts indices by the minimum value of multi-value fields.
+
+`index.sort.missing`: `["_first", "_first"]`::
+Missing values are sorted to appear `_first`.
+
+You can override these default sort settings. For example, to sort on different fields
+and change the order, manually configure `index.sort.field` and `index.sort.order`. For more details, see
+<<index-modules-index-sorting>>.
+
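As a sketch of such an override (the template name and the `agent.id` sort field are illustrative, not from the original text), note that with custom sort settings any sort field other than `@timestamp` must be mapped explicitly:

[source,console]
----
PUT _index_template/logs-custom-sort-template
{
  "index_patterns": ["logs-custom-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb",
      "index.sort.field": ["agent.id", "@timestamp"],
      "index.sort.order": ["asc", "desc"]
    },
    "mappings": {
      "properties": {
        "agent.id": { "type": "keyword" }
      }
    }
  }
}
----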
+When using the default sort settings, the `host.name` field is automatically injected into the index mappings as a `keyword` field to ensure that sorting can be applied. This guarantees that logs are efficiently sorted and retrieved based on the `host.name` and `@timestamp` fields.
+
+NOTE: If `subobjects` is set to `true` (default), the `host` field is mapped as an object field
+named `host` with a `name` child field of type `keyword`. If `subobjects` is set to `false`,
+a single `host.name` field is mapped as a `keyword` field.
+
+To apply different sort settings to an existing data stream, update the data stream's component templates, and then
+perform or wait for a <<data-streams-rollover,rollover>>.
+
+NOTE: In `logsdb` mode, the `@timestamp` field is automatically injected if it's not already present. If you apply custom sort settings, the `@timestamp` field is injected into the mappings but is not
+automatically added to the list of sort fields.
+
+[discrete]
+[[logsdb-host-name]]
+==== Existing data streams
+
+If you're enabling `logsdb` index mode on a data stream that already exists, make sure to check mappings and sorting. The `logsdb` mode automatically maps `host.name` as a keyword if it's included in the sort settings. If a `host.name` field already exists but has a different type, mapping errors might occur, preventing `logsdb` mode from being fully applied.
+
+To avoid mapping conflicts, consider these options:
+
+* **Adjust mappings:** Check your existing mappings to ensure that `host.name` is mapped as a keyword.
+
+* **Change sorting:** If needed, you can remove `host.name` from the sort settings and use a different set of fields. Sorting by `@timestamp` can be a good fallback.
+
+* **Switch to a different <<index-mode-setting,index mode>>:** If resolving `host.name` mapping conflicts is not feasible, you can choose not to use `logsdb` mode.
+
+IMPORTANT: On existing data streams, `logsdb` mode is applied on <<data-streams-rollover,rollover>> (automatic or manual).
+
+[discrete]
+[[logsdb-specialized-codecs]]
+=== Specialized codecs
+
+By default, `logsdb` index mode uses the `best_compression` <<index-codec,codec>>, which applies {wikipedia}/Zstd[ZSTD]
+compression to stored fields. You can switch to the `default` codec for faster compression with a slightly larger storage footprint.
+
+The `logsdb` index mode also automatically applies specialized codecs for numeric doc values, in order to optimize storage usage. Numeric fields are
+encoded using the following sequence of codecs:
+
+* **Delta encoding**:
+Stores the difference between consecutive values instead of the actual values.
+
+* **Offset encoding**:
+Stores the difference from a base value rather than between consecutive values.
+
+* **Greatest Common Divisor (GCD) encoding**:
+Finds the greatest common divisor of a set of values and stores the differences as multiples of the GCD.
+
+* **Frame Of Reference (FOR) encoding**:
+Determines the smallest number of bits required to encode a block of values and uses
+bit-packing to fit such values into larger 64-bit blocks.
+
+Each encoding is evaluated according to heuristics determined by the data distribution.
+For example, the algorithm checks whether the data is monotonically non-decreasing or
+non-increasing. If so, delta encoding is applied; otherwise, the process
+continues with the next encoding method (offset).
+
+Encoding is specific to each Lucene segment and is reapplied when segments are merged. The merged Lucene segment
+might use a different encoding than the original segments, depending on the characteristics of the merged data.
+
+For keyword fields, **Run Length Encoding (RLE)** is applied to the ordinals, which represent positions in the Lucene
+segment-level keyword dictionary. This compression is used when multiple consecutive documents share the same keyword.
+
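For example, to trade some storage for faster compression as described above, a template could opt back into the `default` codec (the template name and index pattern below are illustrative only):

[source,console]
----
PUT _index_template/logs-default-codec-template
{
  "index_patterns": ["logs-fast-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb",
      "index.codec": "default"
    }
  }
}
----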
+[discrete]
+[[logsdb-ignored-settings]]
+=== `ignore` settings
+
+The `logsdb` index mode uses the following `ignore` settings. You can override these settings as needed.
+
+[discrete]
+[[logsdb-ignore-malformed]]
+==== `ignore_malformed`
+
+By default, `logsdb` index mode sets `ignore_malformed` to `true`. With this setting, documents with malformed fields
+can be indexed without causing ingestion failures.
+
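As a sketch of the effect, indexing a document whose value doesn't match the mapped field type succeeds rather than failing; the malformed field is skipped and recorded in the document's `_ignored` metadata field. The data stream name and field below are illustrative, assuming the field is mapped as a numeric type:

[source,console]
----
POST logs-myapp-default/_doc
{
  "@timestamp": "2025-01-01T00:00:00Z",
  "http.response.status_code": "not-a-number"
}
----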
+[discrete]
+[[logs-db-ignore-above]]
+==== `ignore_above`
+
+In `logsdb` index mode, the `index.mapping.ignore_above` setting is applied by default at the index level to ensure
+efficient storage and indexing of large keyword fields. The index-level default for `ignore_above` is 8191
+_characters._ With UTF-8 encoding, where a character can take up to 4 bytes, this corresponds to a limit of up to 32764 bytes.
+
+The mapping-level `ignore_above` setting takes precedence. If a specific field has an `ignore_above` value
+defined in its mapping, that value overrides the index-level `index.mapping.ignore_above` value. This default
+behavior helps to optimize indexing performance by preventing excessively large string values from being indexed.
+
+If you need to customize the limit, you can override it at the mapping level or change the index level default.
+
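A minimal sketch of a mapping-level override, assuming a hypothetical `trace.id` field whose values should be ignored beyond a much shorter length than the index-level default:

[source,console]
----
PUT _index_template/logs-ids-template
{
  "index_patterns": ["logs-ids-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb"
    },
    "mappings": {
      "properties": {
        "trace.id": {
          "type": "keyword",
          "ignore_above": 256
        }
      }
    }
  }
}
----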
+[discrete]
+[[logs-db-ignore-limit]]
+==== `ignore_dynamic_beyond_limit`
+
+In `logsdb` index mode, the setting `index.mapping.total_fields.ignore_dynamic_beyond_limit` is set to `true` by
+default. This setting allows dynamically mapped fields to be added on top of statically defined fields, even when the total number of fields exceeds the `index.mapping.total_fields.limit`. Instead of triggering an index failure, additional dynamically mapped fields are ignored so that ingestion can continue.
+
+NOTE: When automatically injected, `host.name` and `@timestamp` count toward the limit of mapped fields. If `host.name` is mapped with `subobjects: true`, it has two fields. When mapped with `subobjects: false`, `host.name` has only one field.
+
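As a sketch, both the field limit and the ignore behavior can be set explicitly in a template (the template name, pattern, and limit value are illustrative). Dynamic fields beyond the limit are then ignored rather than rejecting the document:

[source,console]
----
PUT _index_template/logs-wide-template
{
  "index_patterns": ["logs-wide-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.mode": "logsdb",
      "index.mapping.total_fields.limit": 2000,
      "index.mapping.total_fields.ignore_dynamic_beyond_limit": true
    }
  }
}
----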
+[discrete]
+[[logsdb-nodocvalue-fields]]
+=== Fields without `doc_values`
+
+When the `logsdb` index mode uses synthetic `_source` and `doc_values` are disabled for a field in the mapping,
+{es} might set the `store` setting to `true` for that field. This ensures that the field's
+data remains accessible for reconstructing the document's source when using
+<<synthetic-source,synthetic source>>.
+
+For example, this adjustment occurs with text fields when `store` is `false` and no suitable multi-field is available for
+reconstructing the original value.
+
+[discrete]
+[[logsdb-settings-summary]]
+=== Settings reference
+
+The `logsdb` index mode uses the following settings: