<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

# Comet Parquet Scan Implementations

Comet currently has three distinct implementations of the Parquet scan operator. The configuration property
`spark.comet.scan.impl` is used to select an implementation. The default setting is `spark.comet.scan.impl=auto`,
which lets Comet choose the most appropriate implementation based on the Parquet schema and other Comet
configuration settings. Most users should not need to change this setting. However, it is possible to force Comet
to use a particular implementation for all scan operations by setting this property to one of the following values.

| Implementation          | Description                                                                                                                                                                          |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `native_comet`          | This implementation provides strong compatibility with Spark but does not support complex types. This is the original scan implementation in Comet and may eventually be removed.   |
| `native_iceberg_compat` | This implementation delegates to DataFusion's `DataSourceExec` but uses a hybrid approach of JVM and native code. This scan is designed to be integrated with Iceberg in the future. |
| `native_datafusion`     | This experimental implementation delegates to DataFusion's `DataSourceExec` for full native execution. There are known compatibility issues when using this scan.                   |

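For example, the following abbreviated `spark-shell` invocation (elided settings follow the same pattern as the
S3 examples later in this document) forces the `native_iceberg_compat` scan for all Parquet reads in the session:

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.comet.scan.impl=native_iceberg_compat \
...
```
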
The `native_datafusion` and `native_iceberg_compat` scans provide the following benefits over the `native_comet`
implementation:

- Leverages the DataFusion community's ongoing improvements to `DataSourceExec`
- Provides support for reading complex types (structs, arrays, and maps)
- Removes Comet's use of reusable mutable buffers, which are complex to maintain
- Improves performance

The `native_datafusion` and `native_iceberg_compat` scans share the following limitations:

- When reading Parquet files written by systems other than Spark that contain columns with the logical types `UINT_8`
  or `UINT_16`, Comet will produce different results than Spark because Spark does not preserve or understand these
  logical types. Arrow-based readers, such as DataFusion and Comet, respect these types and read the data as unsigned
  rather than signed. By default, Comet will fall back to `native_comet` when scanning Parquet files containing `byte`
  or `short` types (regardless of the logical type). This behavior can be disabled by setting
  `spark.comet.scan.allowIncompatible=true`, as shown in the example after this list.
- No support for default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.

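As a minimal sketch (only the relevant settings are shown), opting in to the unsigned integer behavior instead of
falling back to `native_comet` looks like this:

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.comet.scan.impl=native_datafusion \
--conf spark.comet.scan.allowIncompatible=true \
...
```
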
The `native_datafusion` scan has some additional limitations:

- Bucketed scans are not supported
- No support for row indexes
- `PARQUET_FIELD_ID_READ_ENABLED` is not respected [#1758]
- There are failures in the Spark SQL test suite [#1545]
- Setting the Spark configurations `ignoreMissingFiles` or `ignoreCorruptFiles` to `true` results in behavior that is
  not compatible with Spark

## S3 Support

There are some differences in S3 support between the scan implementations, as described below.

### `native_comet`

The default `native_comet` Parquet scan implementation reads data from S3 using the [Hadoop-AWS module](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html), which
is identical to the approach commonly used with vanilla Spark. AWS credential configuration and other Hadoop S3A
configurations work the same way as in vanilla Spark.

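As a minimal sketch (the bucket name, keys, and path are placeholders), a standard S3A read works unchanged under
this scan implementation:

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.hadoop.fs.s3a.access.key=my-access-key \
--conf spark.hadoop.fs.s3a.secret.key=my-secret-key \
...
# then, inside the shell:
# spark.read.parquet("s3a://my-bucket/path/to/data").show()
```
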
### `native_datafusion` and `native_iceberg_compat`

The `native_datafusion` and `native_iceberg_compat` Parquet scan implementations completely offload data loading
to native code. They use the [`object_store` crate](https://crates.io/crates/object_store) to read data from S3 and
support configuring S3 access using standard [Hadoop S3A configurations](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#General_S3A_Client_configuration) by translating them to
the `object_store` crate's format.

These implementations maintain compatibility with existing Hadoop S3A configurations, so existing code will
continue to work as long as the configurations are supported and can be translated without loss of functionality.

#### Additional S3 Configuration Options

Beyond credential providers, the `native_datafusion` implementation supports additional S3 configuration options:

| Option                          | Description                                                                                        |
| ------------------------------- | -------------------------------------------------------------------------------------------------- |
| `fs.s3a.endpoint`               | The endpoint of the S3 service                                                                     |
| `fs.s3a.endpoint.region`        | The AWS region for the S3 service. If not specified, the region will be auto-detected.             |
| `fs.s3a.path.style.access`      | Whether to use path-style access for the S3 service (true/false, defaults to virtual-hosted style) |
| `fs.s3a.requester.pays.enabled` | Whether to enable requester-pays for S3 requests (true/false)                                      |

All configuration options support bucket-specific overrides using the pattern `fs.s3a.bucket.{bucket-name}.{option}`.

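For example (illustrative only; the bucket name and endpoint are placeholders), the following abbreviated invocation
points a single bucket at an S3-compatible endpoint using path-style access, leaving other buckets on the defaults:

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.comet.scan.impl=native_datafusion \
--conf spark.hadoop.fs.s3a.bucket.my-bucket.endpoint=http://localhost:9000 \
--conf spark.hadoop.fs.s3a.bucket.my-bucket.path.style.access=true \
...
```
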
#### Examples

The following examples demonstrate how to configure S3 access with the `native_datafusion` Parquet scan implementation using different authentication methods.

**Example 1: Simple Credentials**

This example shows how to access a private S3 bucket using an access key and secret key. The `fs.s3a.aws.credentials.provider` configuration can be omitted since `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider` is included in Hadoop S3A's default credential provider chain.

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.comet.scan.impl=native_datafusion \
--conf spark.hadoop.fs.s3a.access.key=my-access-key \
--conf spark.hadoop.fs.s3a.secret.key=my-secret-key
...
```

**Example 2: Assume Role with Web Identity Token**

This example demonstrates using an assumed-role credential to access a private S3 bucket, where the base credential for assuming the role is provided by a web identity token credentials provider.

```shell
$SPARK_HOME/bin/spark-shell \
...
--conf spark.comet.scan.impl=native_datafusion \
--conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider \
--conf spark.hadoop.fs.s3a.assumed.role.arn=arn:aws:iam::123456789012:role/my-role \
--conf spark.hadoop.fs.s3a.assumed.role.session.name=my-session \
--conf spark.hadoop.fs.s3a.assumed.role.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider
...
```

#### Limitations

The S3 support of `native_datafusion` has the following limitations:

1. **Partial Hadoop S3A configuration support**: Not all Hadoop S3A configurations are currently supported. Only the configurations listed in the tables above are translated and applied to the underlying `object_store` crate.

2. **Custom credential providers**: Custom implementations of AWS credential providers are not supported. The implementation only supports the standard credential providers listed in the table above. We are planning to add support for custom credential providers through a JNI-based adapter that will allow calling Java credential providers from native code. See [issue #1829](https://github.com/apache/datafusion-comet/issues/1829) for more details.

[#1545]: https://github.com/apache/datafusion-comet/issues/1545
[#1758]: https://github.com/apache/datafusion-comet/issues/1758