This repository was archived by the owner on Jan 9, 2020. It is now read-only.
[SPARK-21085][SQL] Failed to read the partitioned table created by Spark 2.1
### What changes were proposed in this pull request?
Before this PR, Spark was unable to read a partitioned table created by Spark 2.1 when the table schema does not put the partitioning columns at the end of the schema, which causes the following assertion to fail:
[assert(partitionFields.map(_.name) == partitionColumnNames)](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala#L234-L236)
When reading the table metadata from the metastore, we therefore also need to reorder the columns so that the partitioning columns come last.
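The reordering described above can be sketched as follows. This is a minimal illustration, not the actual Spark implementation; `Field` and `reorderSchema` are hypothetical names. It moves partition columns to the end of the schema, in the declared partition-column order, while preserving the relative order of the data columns:

```scala
// Hypothetical simplified stand-in for a schema field (not Spark's StructField).
case class Field(name: String, dataType: String)

// Reorder a schema so partition columns appear last, matching the order
// given in partitionColumnNames; data columns keep their original order.
def reorderSchema(schema: Seq[Field], partitionColumnNames: Seq[String]): Seq[Field] = {
  val (partitionFields, dataFields) =
    schema.partition(f => partitionColumnNames.contains(f.name))
  // Align partition fields with the declared partition-column order.
  val orderedPartitionFields =
    partitionColumnNames.flatMap(name => partitionFields.find(_.name == name))
  dataFields ++ orderedPartitionFields
}
```

With this reordering in place, the assertion that the trailing schema fields match `partitionColumnNames` holds even for tables whose stored schema interleaves partition and data columns.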
### How was this patch tested?
Added test cases to check both Hive-serde and data source tables.
Author: gatorsmile <[email protected]>
Closes apache#18295 from gatorsmile/reorderReadSchema.
(cherry picked from commit 0c88e8d)
Signed-off-by: Wenchen Fan <[email protected]>