[SPARK-22267][SQL][TEST] Spark SQL incorrectly reads ORC files when column order is different
## What changes were proposed in this pull request?
Up to 2.2.1, with the default configuration, Apache Spark returns incorrect results when the ORC file schema differs from the metastore schema in column order. This is caused by the Hive 1.2.1 library and some issues with the `convertMetastoreOrc` option.
```scala
scala> Seq(1 -> 2).toDF("c1", "c2").write.format("orc").mode("overwrite").save("/tmp/o")
scala> sql("CREATE EXTERNAL TABLE o(c2 INT, c1 INT) STORED AS orc LOCATION '/tmp/o'")
scala> spark.table("o").show // This is wrong.
+---+---+
| c2| c1|
+---+---+
| 1| 2|
+---+---+
scala> spark.read.orc("/tmp/o").show // This is correct.
+---+---+
| c1| c2|
+---+---+
| 1| 2|
+---+---+
```
After [SPARK-22279](#19499), the default configuration no longer has this bug. Although the Hive 1.2.1 library code path still has the problem, we should add test coverage for the current behavior in order to prevent future regressions.
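For reference, a minimal sketch of the kind of regression test this adds, assuming Spark's SQL test harness (a `QueryTest` suite mixing in `SQLTestUtils`, which provides `withTempDir`, `withTable`, and `checkAnswer`); the actual test in this PR may differ in name and details:
```scala
// Sketch only: a regression test for SPARK-22267 under the assumptions above.
test("SPARK-22267 Spark SQL incorrectly reads ORC files when column order is different") {
  withTempDir { dir =>
    val path = dir.getCanonicalPath
    // Write an ORC file whose physical column order is (c1, c2).
    Seq(1 -> 2).toDF("c1", "c2").write.format("orc").mode("overwrite").save(path)
    withTable("o") {
      // Declare the Hive table with the columns in the opposite order.
      sql(s"CREATE EXTERNAL TABLE o(c2 INT, c1 INT) STORED AS orc LOCATION '$path'")
      // With the default configuration, values must be resolved by column name,
      // so the row should read c2 = 2 and c1 = 1, not the file's physical order.
      checkAnswer(spark.table("o"), Row(2, 1))
    }
  }
}
```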
## How was this patch tested?
Pass the Jenkins with a newly added test.
Author: Dongjoon Hyun <[email protected]>
Closes #19928 from dongjoon-hyun/SPARK-22267.