[SPARK-55350][PYTHON][CONNECT] Fix row count loss when creating DataFrame from pandas with 0 columns #54144
Closed
Yicong-Huang wants to merge 4 commits into apache:master from Yicong-Huang/SPARK-55350/fix/arrow-zero-columns-row-count
Conversation
JIRA Issue Information: === Bug SPARK-55350 ===
This comment was automatically generated by GitHub Actions.
Contributor
Author
cc @ueshin
ueshin (Member) approved these changes on Feb 4, 2026 and left a comment:
Otherwise LGTM, pending tests.
```python
# Handle the 0-column case separately to preserve row count.
if len(data.columns) == 0:
    # For 0 rows, need explicit struct type; otherwise pa.array infers null type
    if len(data) == 0:
```
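For context, a quick illustration of the type-inference issue this inner branch guards against (a sketch, not code from the PR): with zero elements and no explicit type, `pa.array` falls back to the null type, so the empty struct type has to be spelled out.

```python
import pyarrow as pa

# With no elements and no type hint, pyarrow infers the null type.
print(pa.array([]).type)                      # null
# An explicit empty struct type keeps the intended 0-column schema.
print(pa.array([], type=pa.struct([])).type)  # struct<>
```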
Member
nit: just wondering if we need to branch with len(data) == 0 here?
Contributor
Author
oh, I can combine them to

```python
>>> pa.array([{}] * 0, type=pa.struct([]))
<pyarrow.lib.StructArray object at 0x1115cf700>
-- is_valid: all not null
```
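Expanding on that suggestion, a sketch of the combined path (assuming `data` is the incoming pandas DataFrame): with the explicit empty struct type, the same expression handles both zero and non-zero row counts, and the resulting struct array yields a 0-column Arrow table that keeps its length.

```python
import pandas as pd
import pyarrow as pa

data = pd.DataFrame(index=range(5))  # 5 rows, 0 columns

# One expression covers both the 0-row and n-row cases.
struct_array = pa.array([{}] * len(data), type=pa.struct([]))
table = pa.Table.from_struct_array(struct_array)
print(table.num_columns, table.num_rows)  # 0 5
```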
Removed handling for 0-column case and simplified table creation.
Handle the case where the input DataFrame has no columns by creating an empty Arrow table with preserved row count.
HyukjinKwon approved these changes on Feb 4, 2026.
ueshin reviewed on Feb 5, 2026:
```python
self.assertEqual(cdf.schema, sdf.schema)
self.assertEqual(cdf.schema, schema)
self.assertEqual(cdf.count(), 5)
self.assertEqual(sdf.count(), 5)
```
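For context, a sketch of how the added test might be structured (the fixture attributes `self.connect` and `self.spark` and the surrounding setup are assumptions; only the four assertions come from the diff):

```python
import pandas as pd
from pyspark.sql.types import StructType

def test_from_pandas_dataframe_with_zero_columns(self):
    schema = StructType([])
    pdf = pd.DataFrame(index=range(5))  # 5 rows, 0 columns
    cdf = self.connect.createDataFrame(pdf, schema=schema)  # Spark Connect session (assumed fixture)
    sdf = self.spark.createDataFrame(pdf, schema=schema)    # classic session (assumed fixture)
    self.assertEqual(cdf.schema, sdf.schema)
    self.assertEqual(cdf.schema, schema)
    self.assertEqual(cdf.count(), 5)
    self.assertEqual(sdf.count(), 5)
```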
Member
Thanks! merging to master.
rpnkv pushed a commit to rpnkv/spark that referenced this pull request on Feb 18, 2026
[SPARK-55350][PYTHON][CONNECT] Fix row count loss when creating DataFrame from pandas with 0 columns

Closes apache#54144 from Yicong-Huang/SPARK-55350/fix/arrow-zero-columns-row-count.
Lead-authored-by: Yicong-Huang <17627829+Yicong-Huang@users.noreply.github.com>
Co-authored-by: Yicong Huang <17627829+Yicong-Huang@users.noreply.github.com>
Signed-off-by: Takuya Ueshin <ueshin@databricks.com>

### What changes were proposed in this pull request?
This PR fixes the row count loss issue when creating a Spark DataFrame from a pandas DataFrame with 0 columns in Spark Connect.
The issue occurs due to two PyArrow limitations, demonstrated in the sketch after this list:
1. `pa.RecordBatch.from_arrays([], [])` loses row count information
2. `pa.Table.cast()` on a 0-column table resets the row count to 0
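A minimal PyArrow sketch of both limitations (the second relies on the behavior reported in this PR, and `Table.from_struct_array` requires a reasonably recent PyArrow):

```python
import pyarrow as pa

# Limitation 1: with no arrays there is no way to convey a row count,
# so the intended number of rows is lost.
batch = pa.RecordBatch.from_arrays([], [])
print(batch.num_rows)  # 0

# Limitation 2 (as reported in this PR): casting a 0-column table
# resets its row count.
empty_struct = pa.array([{}] * 10, type=pa.struct([]))
table = pa.Table.from_struct_array(empty_struct)  # 0 columns, 10 rows
print(table.num_rows)                      # 10
print(table.cast(pa.schema([])).num_rows)  # 0, per the reported behavior
```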
**Changes:**

1. Handle 0-column pandas DataFrames separately using `pa.Table.from_struct_array()` to preserve row count
2. Skip the `cast()` operation for 0-column tables as it loses row count

### Why are the changes needed?
Before this fix:
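```python
import pandas as pd
from pyspark.sql.types import StructType

# `spark` is assumed to be an active Spark Connect session.
pdf = pd.DataFrame(index=range(10))  # 10 rows, 0 columns
df = spark.createDataFrame(pdf, schema=StructType([]))
df.count()  # Returns 0 (wrong!)
```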
After this fix:
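```python
df.count()  # Returns 10 (correct!)
```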
### Does this PR introduce _any_ user-facing change?
Yes. Creating a DataFrame from a pandas DataFrame with 0 columns now correctly preserves the row count in Spark Connect.
### How was this patch tested?
Added unit test `test_from_pandas_dataframe_with_zero_columns` in `test_connect_creation.py`.

### Was this patch authored or co-authored using generative AI tooling?
No