Commit d1e3c0a
ESQL: Union Types Support (#107545)

* Union Types Support

  The second prototype replaced MultiTypeField.Unresolved with MultiTypeField, but this clashed with existing behaviour around mapping unused MultiTypeFields to `unsupported` and `null`, so this new attempt simply adds new fields, resulting in more than one field with the same name. We still need to store this new field in EsRelation, so that the physical planner can insert it into FieldExtractExec, so this is quite similar to the second prototype. The following query works in this third prototype:

  ```
  multiIndexIpString
  FROM sample_data* METADATA _index
  | EVAL client_ip = TO_IP(client_ip)
  | KEEP _index, @timestamp, client_ip, event_duration, message
  | SORT _index ASC, @timestamp DESC
  ```

  As with the previous prototype, we no longer need an aggregation to force the conversion function onto the data node, as the 'real' conversion is now done at field extraction time using the converter function previously saved in the EsRelation and replanned into the EsQueryExec.

* Support row-stride-reader for LoadFromMany
* Add missing ESQL version after rebase on main
* Fixed missing block release
* Simplify UnresolvedUnionTypes
* Support other commands, notably WHERE
* Update docs/changelog/107545.yaml
* Fix changelog
* Removed unused code
* Slight code reduction in analyzer of union types
* Removed unused interface method
* Fix bug in copying blocks (array overrun)
* Convert MultiTypeEsField.UnresolvedField back to InvalidMappedField, to ensure older behaviour still works.
* Simplify InvalidMappedField support: rather than complex code to recreate InvalidMappedField from MultiTypeEsField.UnresolvedField, we rely on the fact that this is the parent class anyway, so we can resolve this during plan serialization/deserialization. Much simpler.
* Simplify InvalidMappedField support further: combining InvalidMappedField and MultiTypeEsField.UnresolvedField into one class simplifies plan serialization even further.
* InvalidMappedField is used slightly differently in QL: we need to separate the aggregatable flag used in the original really-invalid mapped field from the aggregatable flag used when the field can indeed be used as a union type in ES|QL.
* Updated version limitation after 8.14 branch
* Try to debug CI failures in multi-node clusters
* Support type conversion in row-stride reader on single leaf
* Disable union_types from CsvTests
* Keep track of per-shard converters for LoadFromMany
* Simplify block loader convert function
* Code cleanup
* Added unit test for ValuesSourceReaderOperator including field type conversions at block loading
* Added test for @timestamp and fixed related bug: it turns out that most, but not all, DataType values have the same esType as typeName, and @timestamp is one that does not, using `date` for esType and `datetime` for typeName. Our EsqlIndexResolver was recording multi-type fields with `esType`, while later the actual type conversion was using an evaluator that relied on DataTypes.typeFromName(typeName). So we fixed the EsqlIndexResolver to use typeName instead.
* Added more tests, with three indices combined and two type conversions
* Disable Lucene push-down on union-type fields: since the union-type rewriter replaced conversion functions with new FieldAttributes, these were passing the check for being possible to push down, which was incorrect. Now we prevent that.
* Set union-type aggregatable flag to false always: this simplifies the push-down check.
* Fixed tests after rebase on main
* Add unit tests for union types (same field, different type)
* Remove generic warnings
* Test code cleanup and clarifying comments
* Remove -IT_tests_only in favor of CsvTests assumeFalse
* Improved comment
* Code review updates
* Code review updates
* Remove changes to ql/EsRelation: it turned out the latest version of union types no longer needed these changes anyway, and was using the new EsRelation in the ESQL module without them.
* Port InvalidMappedField to ESQL. Note, this extends the QL version of InvalidMappedField, so is not a complete port. This is necessary because of the intertwining of the QL IndexResolver and EsqlIndexResolver. Once those classes are disentangled, we can completely break InvalidMappedField from QL and make it a forbidden type.
* Fix capabilities line after rebase on main
* Revert QL FieldAttribute and extend with ESQL FieldAttribute: so as to remove any edits to QL code, we extend FieldAttribute in the ESQL code with the changes required, since the change is simply to include the `field` in the hashCode and equals methods.
* Revert "Revert QL FieldAttribute and extend with ESQL FieldAttribute". This reverts commit 168c6c75436e26b83e083cd3de8e18062e116bc9.
* Switch UNION_TYPES from EsqlFeatures to EsqlCapabilities
* Make hashCode and equals aligned, and remove an unused method from earlier union-types work where we kept the NodeId during rewriting (which we no longer do).
* Replace required_feature with required_capability after rebase
* Switch union_types capability back to feature, because capabilities do not work in mixed clusters
* Revert "Switch union_types capability back to feature, because capabilities do not work in mixed clusters". This reverts commit 56d58bedf756dbad703c07bf4cdb991d4341c1ae.
* Added test for multiple columns from the same fields; both IP and Date are tested.
* Fix bug with incorrectly resolving invalid types, and added more tests
* Fixed bug with multiple fields of the same name: this fix simply removes the original field already at the EsRelation level, which covers all test cases but has the side effect that the final field is no longer unsupported/null when the alias does not overwrite the field with the same name. This is not exactly the correct semantic intent. The original field name should be unsupported/null unless the user explicitly overwrote the name with `field=TO_TYPE(field)`, which effectively deletes the old field anyway.
* Fixed bug with multiple conversions of the same field. This also fixes the issue with the previous fix that incorrectly reported the converted type for the original field.
* More tests with multiple fields and KEEP/DROP combinations
* Replace skip with capabilities in YML tests
* Fixed missing ql->esql import change after merging main
* Merged two InvalidMappedField classes: after the QL code was ported to esql.core, we can now make the edits directly in InvalidMappedField instead of having one extend the other.
* Move FieldAttribute edits from QL to ESQL
* ESQL: Prepare analyzer for LOOKUP (#109045): this extracts two fairly uncontroversial changes that were in the main LOOKUP PR into a smaller change that's easier to review.
* ESQL: Move serialization for EsField (#109222): this moves the serialization logic for `EsField` into the `EsField` subclasses to better align with the way the rest of Elasticsearch works. It also switches them from ESQL's home-grown `writeNamed` mechanism to `NamedWriteable`. These are wire compatible with one another.
* ESQL: Move serialization of `Attribute` (#109267): this moves the serialization of `Attribute` classes used in ESQL into the classes themselves to better line up with the rest of Elasticsearch.
* ES|QL: add MV_APPEND function (#107001): adding the `MV_APPEND(value1, value2)` function, which appends two values creating a single multi-value. If one or both of the inputs are multi-values, the result is the concatenation of all the values, e.g.

  ```
  MV_APPEND([a, b], [c, d]) -> [a, b, c, d]
  ```

  ~I think for this specific case it makes sense to consider `null` values as empty arrays, so that~ ~MV_APPEND(value, null) -> value~ ~It is pretty uncommon for ESQL (all the other functions, apart from `COALESCE`, short-circuit to `null` when one of the values is null), so let's discuss this behavior.~ [EDIT] Considering the feedback from Andrei, I changed this logic and made it consistent with the other functions: now if one of the parameters is null, the function returns null.
* [ES|QL] Convert string to datetime when the other side of an arithmetic operator is date_period or time_duration (#108455): convert string to datetime when the other side of a binary operator is a temporal amount.
* ESQL: Move `NamedExpression` serialization (#109380): this moves the serialization for the remaining `NamedExpression` subclass into the class itself, and switches all direct serialization of `NamedExpression`s to `readNamedWriteable` and friends. All other `NamedExpression` subclasses extend from `Attribute`, whose serialization was moved earlier. They are already registered under the "category class" for `Attribute`. This also registers them as `NamedExpression`s.
* ESQL: Implement LOOKUP, an "inline" enrich (#107987): this adds support for `LOOKUP`, a command that implements a sort of inline `ENRICH`, using data that is passed in the request:

  ```
  $ curl -uelastic:password -HContent-Type:application/json -XPOST \
      'localhost:9200/_query?error_trace&pretty&format=txt' \
      -d'{
          "query": "ROW a=1::LONG | LOOKUP t ON a",
          "tables": {
              "t": {
                  "a:long":     [    1,     4,     2],
                  "v1:integer": [   10,    11,    12],
                  "v2:keyword": ["cat", "dog", "wow"]
              }
          },
          "version": "2024.04.01"
      }'

        v1       |      v2       |       a
  ---------------+---------------+---------------
  10             |cat            |1
  ```

  This required these PRs:
  * #107624
  * #107634
  * #107701
  * #107762
  * Closes #107306

parent 32ac5ba755dd5c24364a210f1097ae093fdcbd75
author Craig Taverner <[email protected]> 1717779549 +0200
committer Craig Taverner <[email protected]> 1718115775 +0200

* Fixed compile error after merging in main
* Fixed strange merge issues from main
* Remove version from ES|QL test queries after merging main
* Fixed union-types on nested fields
* Switch to Luigi's solution, and expand nested tests
* Cleanup after rebase
* Added more tests from code review. Note that one test, `multiIndexIpStringStatsInline`, is muted due to failing with the error: UnresolvedException: Invalid call to dataType on an unresolved object ?client_ip
* Make CsvTests consistent with integration tests for capabilities: the integration tests do not fail if the capability does not even exist on cluster nodes; instead the tests are ignored. The same behaviour should happen with CsvTests for consistency.
* Return assumeThat to assertThat, but change order. This way we don't have to add more features to the test framework in this PR, but we would probably want a mute feature (like a `skip` line).
* Move serialization of MultiTypeEsField to the NamedWriteable approach: since the sub-fields are AbstractConvertFunction expressions, and Expression is not yet fully supported as a category class for NamedWriteable, we need a few slight tweaks to this, notably registering this explicitly in the EsqlPlugin, as well as calling PlanStreamInput.readExpression() instead of StreamInput.readNamedWriteable(Expression.class). These can be removed later once Expression is fully supported as a category class.
* Remove attempt to mute two failed tests: we used required_capability to mute the tests, but this caused issues with CsvTests, which also uses this mechanism to catch misspelled capability names. We then tried to use muted-tests.yml, but that only mutes tests in specific run configurations (i.e. we would need to mute each and every IT class separately). So now we just remove the tests entirely. We left a comment in the muted-tests.yml file for future reference about how to mute csv-spec tests.
* Fix rather massive issue with the performance of testConcurrentSerialization: recreating the config on every test was very expensive.
* Code review by Nik

---------

Co-authored-by: Elastic Machine <[email protected]>
1 parent 379c02b commit d1e3c0a

File tree

27 files changed: +4362 −77 lines changed


docs/changelog/107545.yaml

Lines changed: 6 additions & 0 deletions
```diff
@@ -0,0 +1,6 @@
+pr: 107545
+summary: "ESQL: Union Types Support"
+area: ES|QL
+type: enhancement
+issues:
+ - 100603
```

muted-tests.yml

Lines changed: 7 additions & 0 deletions
```diff
@@ -103,3 +103,10 @@ tests:
 # - class: org.elasticsearch.xpack.esql.expression.function.scalar.convert.ToIPTests
 #   method: testCrankyEvaluateBlockWithoutNulls
 #   issue: https://github.com/elastic/elasticsearch/...
+#
+# Mute a single test in an ES|QL csv-spec test file:
+# - class: "org.elasticsearch.xpack.esql.CsvTests"
+#   method: "test {union_types.MultiIndexIpStringStatsInline}"
+#   issue: "https://github.com/elastic/elasticsearch/..."
+# Note that this mutes for the unit-test-like CsvTests only.
+# Muting for the integration tests needs to be done for each IT class individually.
```

server/src/main/java/org/elasticsearch/index/mapper/BlockLoader.java

Lines changed: 10 additions & 0 deletions
```diff
@@ -92,6 +92,16 @@ interface StoredFields {
      */
     SortedSetDocValues ordinals(LeafReaderContext context) throws IOException;

+    /**
+     * In support of 'Union Types', we sometimes desire that Blocks loaded from source are immediately
+     * converted in some way. Typically, this would be a type conversion, or an encoding conversion.
+     * @param block original block loaded from source
+     * @return converted block (or original if no conversion required)
+     */
+    default Block convert(Block block) {
+        return block;
+    }
+
     /**
      * Load blocks with only null.
      */
```
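The new `convert` hook above is a no-op by default; a loader whose mapped type differs from the query's desired type can override it to coerce the loaded block after reading. A minimal standalone sketch of the same pattern follows — the `Loader` interface and `StringToLongLoader` class here are simplified stand-ins invented for illustration, not the real Elasticsearch `BlockLoader`/`Block` classes:

```java
import java.util.List;
import java.util.stream.Collectors;

public class ConvertHookSketch {
    // Simplified stand-in for a block loader; a "block" here is just a List of values.
    interface Loader {
        List<Object> load();

        // Mirrors the default convert(Block) hook in the diff: return the block unchanged.
        default List<Object> convert(List<Object> block) {
            return block;
        }
    }

    // A loader whose mapped type is keyword but whose desired type is long:
    // it overrides convert() to parse the strings after loading.
    static class StringToLongLoader implements Loader {
        @Override
        public List<Object> load() {
            return List.of("1", "2", "3");
        }

        @Override
        public List<Object> convert(List<Object> block) {
            return block.stream().map(v -> Long.parseLong((String) v)).collect(Collectors.toList());
        }
    }

    public static void main(String[] args) {
        Loader loader = new StringToLongLoader();
        List<Object> converted = loader.convert(loader.load());
        System.out.println(converted); // prints [1, 2, 3], now as longs
    }
}
```

The key design point is that callers always invoke `convert` on whatever they loaded; loaders that need no conversion pay nothing because the default returns the block as-is.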

x-pack/plugin/esql-core/src/main/java/org/elasticsearch/xpack/esql/core/expression/FieldAttribute.java

Lines changed: 4 additions & 2 deletions
```diff
@@ -168,12 +168,14 @@ protected Attribute clone(
     @Override
     public int hashCode() {
-        return Objects.hash(super.hashCode(), path);
+        return Objects.hash(super.hashCode(), path, field);
     }

     @Override
     public boolean equals(Object obj) {
-        return super.equals(obj) && Objects.equals(path, ((FieldAttribute) obj).path);
+        return super.equals(obj)
+            && Objects.equals(path, ((FieldAttribute) obj).path)
+            && Objects.equals(field, ((FieldAttribute) obj).field);
     }

    @Override
```
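Including `field` in `hashCode`/`equals` matters for union types: with more than one field of the same name (one per mapped type), two attributes that agree on name and path but wrap different `EsField`s must no longer compare equal. A toy illustration of the before/after behaviour — `Attr`, `oldEquals`, and `newEquals` are invented names for this sketch, not the real `FieldAttribute` API:

```java
import java.util.Objects;

public class FieldEqualsSketch {
    // Simplified stand-in for FieldAttribute: just a name, a path, and the field's mapped type.
    record Attr(String name, String path, String fieldType) {
        // Pre-PR behaviour: the wrapped field was not compared.
        boolean oldEquals(Attr o) {
            return name.equals(o.name) && Objects.equals(path, o.path);
        }

        // Post-PR behaviour: the wrapped field participates in equality.
        boolean newEquals(Attr o) {
            return oldEquals(o) && Objects.equals(fieldType, o.fieldType);
        }
    }

    public static void main(String[] args) {
        // The same field name from two indices, mapped as different types (a union type).
        Attr asIp = new Attr("client_ip", "", "ip");
        Attr asKeyword = new Attr("client_ip", "", "keyword");
        System.out.println(asIp.oldEquals(asKeyword)); // true  - the two attributes were conflated
        System.out.println(asIp.newEquals(asKeyword)); // false - now they stay distinct
    }
}
```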

x-pack/plugin/esql-core/src/main/java/org/elasticsearch/xpack/esql/core/type/InvalidMappedField.java

Lines changed: 43 additions & 2 deletions
```diff
@@ -15,11 +15,15 @@
 import java.io.IOException;
 import java.util.Map;
 import java.util.Objects;
+import java.util.Set;
 import java.util.TreeMap;

 /**
  * Representation of field mapped differently across indices.
  * Used during mapping discovery only.
+ * Note that the field <code>typesToIndices</code> is not serialized because that information is
+ * not required through the cluster, only surviving as long as the Analyser phase of query planning.
+ * It is used specifically for the 'union types' feature in ES|QL.
  */
 public class InvalidMappedField extends EsField {
     static final NamedWriteableRegistry.Entry ENTRY = new NamedWriteableRegistry.Entry(
@@ -29,10 +33,10 @@ public class InvalidMappedField extends EsField {
     );

     private final String errorMessage;
+    private final Map<String, Set<String>> typesToIndices;

     public InvalidMappedField(String name, String errorMessage, Map<String, EsField> properties) {
-        super(name, DataType.UNSUPPORTED, properties, false);
-        this.errorMessage = errorMessage;
+        this(name, errorMessage, properties, Map.of());
     }

     public InvalidMappedField(String name, String errorMessage) {
@@ -43,6 +47,19 @@ public InvalidMappedField(String name) {
         this(name, StringUtils.EMPTY, new TreeMap<>());
     }

+    /**
+     * Constructor supporting union types, used in ES|QL.
+     */
+    public InvalidMappedField(String name, Map<String, Set<String>> typesToIndices) {
+        this(name, makeErrorMessage(typesToIndices), new TreeMap<>(), typesToIndices);
+    }
+
+    private InvalidMappedField(String name, String errorMessage, Map<String, EsField> properties, Map<String, Set<String>> typesToIndices) {
+        super(name, DataType.UNSUPPORTED, properties, false);
+        this.errorMessage = errorMessage;
+        this.typesToIndices = typesToIndices;
+    }
+
     private InvalidMappedField(StreamInput in) throws IOException {
         this(in.readString(), in.readString(), in.readImmutableMap(StreamInput::readString, i -> i.readNamedWriteable(EsField.class)));
     }
@@ -88,4 +105,28 @@ public EsField getExactField() {
     public Exact getExactInfo() {
         return new Exact(false, "Field [" + getName() + "] is invalid, cannot access it");
     }
+
+    public Map<String, Set<String>> getTypesToIndices() {
+        return typesToIndices;
+    }
+
+    private static String makeErrorMessage(Map<String, Set<String>> typesToIndices) {
+        StringBuilder errorMessage = new StringBuilder();
+        errorMessage.append("mapped as [");
+        errorMessage.append(typesToIndices.size());
+        errorMessage.append("] incompatible types: ");
+        boolean first = true;
+        for (Map.Entry<String, Set<String>> e : typesToIndices.entrySet()) {
+            if (first) {
+                first = false;
+            } else {
+                errorMessage.append(", ");
+            }
+            errorMessage.append("[");
+            errorMessage.append(e.getKey());
+            errorMessage.append("] in ");
+            errorMessage.append(e.getValue());
+        }
+        return errorMessage.toString();
+    }
 }
```
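To see what `makeErrorMessage` produces, the snippet below copies that loop into a standalone class and feeds it a made-up `typesToIndices` map (the index names `sample_data`/`sample_data_str` are illustrative only; `TreeMap`/`TreeSet` keep the output order deterministic):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

public class ErrorMessageDemo {
    // Same logic as InvalidMappedField.makeErrorMessage in the diff above.
    static String makeErrorMessage(Map<String, Set<String>> typesToIndices) {
        StringBuilder errorMessage = new StringBuilder();
        errorMessage.append("mapped as [");
        errorMessage.append(typesToIndices.size());
        errorMessage.append("] incompatible types: ");
        boolean first = true;
        for (Map.Entry<String, Set<String>> e : typesToIndices.entrySet()) {
            if (first) {
                first = false;
            } else {
                errorMessage.append(", ");
            }
            errorMessage.append("[").append(e.getKey()).append("] in ").append(e.getValue());
        }
        return errorMessage.toString();
    }

    public static void main(String[] args) {
        // Hypothetical union-typed field: mapped as ip in one index, keyword in another.
        Map<String, Set<String>> typesToIndices = new TreeMap<>();
        typesToIndices.put("ip", new TreeSet<>(Set.of("sample_data")));
        typesToIndices.put("keyword", new TreeSet<>(Set.of("sample_data_str")));
        System.out.println(makeErrorMessage(typesToIndices));
        // mapped as [2] incompatible types: [ip] in [sample_data], [keyword] in [sample_data_str]
    }
}
```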

x-pack/plugin/esql/compute/src/main/java/org/elasticsearch/compute/data/Page.java

Lines changed: 3 additions & 1 deletion
```diff
@@ -83,7 +83,9 @@ private Page(boolean copyBlocks, int positionCount, Block[] blocks) {
     private Page(Page prev, Block[] toAdd) {
         for (Block block : toAdd) {
             if (prev.positionCount != block.getPositionCount()) {
-                throw new IllegalArgumentException("Block [" + block + "] does not have same position count");
+                throw new IllegalArgumentException(
+                    "Block [" + block + "] does not have same position count: " + block.getPositionCount() + " != " + prev.positionCount
+                );
             }
         }
         this.positionCount = prev.positionCount;
```

x-pack/plugin/esql/compute/src/main/java/org/elasticsearch/compute/lucene/ValuesSourceReaderOperator.java

Lines changed: 62 additions & 27 deletions
```diff
@@ -165,14 +165,14 @@ public int get(int i) {
             }
         }
         success = true;
+        return page.appendBlocks(blocks);
     } catch (IOException e) {
         throw new UncheckedIOException(e);
     } finally {
         if (success == false) {
             Releasables.closeExpectNoException(blocks);
         }
     }
-    return page.appendBlocks(blocks);
 }

 private void positionFieldWork(int shard, int segment, int firstDoc) {
@@ -233,6 +233,7 @@ private void loadFromSingleLeaf(Block[] blocks, int shard, int segment, BlockLoa
                 new RowStrideReaderWork(
                     field.rowStride(ctx),
                     (Block.Builder) field.loader.builder(loaderBlockFactory, docs.count()),
+                    field.loader,
                     f
                 )
             );
@@ -262,17 +263,13 @@ private void loadFromSingleLeaf(Block[] blocks, int shard, int segment, BlockLoa
             );
             for (int p = 0; p < docs.count(); p++) {
                 int doc = docs.get(p);
-                if (storedFields != null) {
-                    storedFields.advanceTo(doc);
-                }
-                for (int r = 0; r < rowStrideReaders.size(); r++) {
-                    RowStrideReaderWork work = rowStrideReaders.get(r);
-                    work.reader.read(doc, storedFields, work.builder);
+                storedFields.advanceTo(doc);
+                for (RowStrideReaderWork work : rowStrideReaders) {
+                    work.read(doc, storedFields);
                 }
             }
-            for (int r = 0; r < rowStrideReaders.size(); r++) {
-                RowStrideReaderWork work = rowStrideReaders.get(r);
-                blocks[work.offset] = work.builder.build();
+            for (RowStrideReaderWork work : rowStrideReaders) {
+                blocks[work.offset] = work.build();
             }
         } finally {
             Releasables.close(rowStrideReaders);
@@ -310,7 +307,9 @@ private class LoadFromMany implements Releasable {
         private final IntVector docs;
         private final int[] forwards;
         private final int[] backwards;
-        private final Block.Builder[] builders;
+        private final Block.Builder[][] builders;
+        private final BlockLoader[][] converters;
+        private final Block.Builder[] fieldTypeBuilders;
         private final BlockLoader.RowStrideReader[] rowStride;

         BlockLoaderStoredFieldsFromLeafLoader storedFields;
@@ -322,29 +321,34 @@ private class LoadFromMany implements Releasable {
             docs = docVector.docs();
             forwards = docVector.shardSegmentDocMapForwards();
             backwards = docVector.shardSegmentDocMapBackwards();
-            builders = new Block.Builder[target.length];
+            fieldTypeBuilders = new Block.Builder[target.length];
+            builders = new Block.Builder[target.length][shardContexts.size()];
+            converters = new BlockLoader[target.length][shardContexts.size()];
             rowStride = new BlockLoader.RowStrideReader[target.length];
         }

         void run() throws IOException {
             for (int f = 0; f < fields.length; f++) {
                 /*
-                 * Important note: each block loader has a method to build an
-                 * optimized block loader, but we have *many* fields and some
-                 * of those block loaders may not be compatible with each other.
-                 * So! We take the least common denominator which is the loader
-                 * from the element expected element type.
+                 * Important note: each field has a desired type, which might not match the mapped type (in the case of union-types).
+                 * We create the final block builders using the desired type, one for each field, but then also use inner builders
+                 * (one for each field and shard), and converters (again one for each field and shard) to actually perform the field
+                 * loading in a way that is correct for the mapped field type, and then convert between that type and the desired type.
                  */
-                builders[f] = fields[f].info.type.newBlockBuilder(docs.getPositionCount(), blockFactory);
+                fieldTypeBuilders[f] = fields[f].info.type.newBlockBuilder(docs.getPositionCount(), blockFactory);
+                builders[f] = new Block.Builder[shardContexts.size()];
+                converters[f] = new BlockLoader[shardContexts.size()];
             }
+            ComputeBlockLoaderFactory loaderBlockFactory = new ComputeBlockLoaderFactory(blockFactory, docs.getPositionCount());
             int p = forwards[0];
             int shard = shards.getInt(p);
             int segment = segments.getInt(p);
             int firstDoc = docs.getInt(p);
             positionFieldWork(shard, segment, firstDoc);
             LeafReaderContext ctx = ctx(shard, segment);
             fieldsMoved(ctx, shard);
-            read(firstDoc);
+            verifyBuilders(loaderBlockFactory, shard);
+            read(firstDoc, shard);
             for (int i = 1; i < forwards.length; i++) {
                 p = forwards[i];
                 shard = shards.getInt(p);
@@ -354,11 +358,19 @@ void run() throws IOException {
                     ctx = ctx(shard, segment);
                     fieldsMoved(ctx, shard);
                 }
-                read(docs.getInt(p));
+                verifyBuilders(loaderBlockFactory, shard);
+                read(docs.getInt(p), shard);
             }
-            for (int f = 0; f < builders.length; f++) {
-                try (Block orig = builders[f].build()) {
-                    target[f] = orig.filter(backwards);
+            for (int f = 0; f < target.length; f++) {
+                for (int s = 0; s < shardContexts.size(); s++) {
+                    if (builders[f][s] != null) {
+                        try (Block orig = (Block) converters[f][s].convert(builders[f][s].build())) {
+                            fieldTypeBuilders[f].copyFrom(orig, 0, orig.getPositionCount());
+                        }
+                    }
+                }
+                try (Block targetBlock = fieldTypeBuilders[f].build()) {
+                    target[f] = targetBlock.filter(backwards);
                 }
             }
         }
@@ -379,16 +391,29 @@ private void fieldsMoved(LeafReaderContext ctx, int shard) throws IOException {
             }
         }

-        private void read(int doc) throws IOException {
+        private void verifyBuilders(ComputeBlockLoaderFactory loaderBlockFactory, int shard) {
+            for (int f = 0; f < fields.length; f++) {
+                if (builders[f][shard] == null) {
+                    // Note that this relies on field.newShard() to set the loader and converter correctly for the current shard
+                    builders[f][shard] = (Block.Builder) fields[f].loader.builder(loaderBlockFactory, docs.getPositionCount());
+                    converters[f][shard] = fields[f].loader;
+                }
+            }
+        }
+
+        private void read(int doc, int shard) throws IOException {
             storedFields.advanceTo(doc);
             for (int f = 0; f < builders.length; f++) {
-                rowStride[f].read(doc, storedFields, builders[f]);
+                rowStride[f].read(doc, storedFields, builders[f][shard]);
             }
         }

         @Override
         public void close() {
-            Releasables.closeExpectNoException(builders);
+            Releasables.closeExpectNoException(fieldTypeBuilders);
+            for (int f = 0; f < fields.length; f++) {
+                Releasables.closeExpectNoException(builders[f]);
+            }
         }
     }

@@ -468,7 +493,17 @@ private void trackReader(String type, BlockLoader.Reader reader) {
         }
     }

-    private record RowStrideReaderWork(BlockLoader.RowStrideReader reader, Block.Builder builder, int offset) implements Releasable {
+    private record RowStrideReaderWork(BlockLoader.RowStrideReader reader, Block.Builder builder, BlockLoader loader, int offset)
+        implements
+            Releasable {
+        void read(int doc, BlockLoaderStoredFieldsFromLeafLoader storedFields) throws IOException {
+            reader.read(doc, storedFields, builder);
+        }
+
+        Block build() {
+            return (Block) loader.convert(builder.build());
+        }
+
         @Override
         public void close() {
             builder.close();
```