Commit 2a5d03a
[SPARK-53854][PYTHON][TESTS] Skip `test_collect_time` test if pandas or pyarrow are unavailable
### What changes were proposed in this pull request?
This PR aims to skip the `test_collect_time` test if pandas or pyarrow are unavailable.
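The diff itself is not reproduced on this page. As an illustrative sketch only (not the actual patch), this kind of guard is typically written with `unittest.skipIf` and the availability flags that `pyspark.testing.sqlutils` exposes; the class name below is hypothetical:
```python
# Illustrative sketch, not the actual patch: skip a pandas/pyarrow-dependent
# test when either dependency is missing, reusing the flags and requirement
# messages that pyspark.testing.sqlutils already provides.
import unittest

from pyspark.testing.sqlutils import (
    have_pandas,
    have_pyarrow,
    pandas_requirement_message,
    pyarrow_requirement_message,
)


class ExampleCollectionTests(unittest.TestCase):  # hypothetical class name
    @unittest.skipIf(
        not have_pandas or not have_pyarrow,
        pandas_requirement_message or pyarrow_requirement_message,
    )
    def test_collect_time(self):
        # The real test exercises collection of time values through the
        # pandas/pyarrow conversion path; the body is omitted here.
        ...
```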
### Why are the changes needed?
According to the `Python 3.14` CI, this appears to be the last remaining error in the `pyspark-sql` module caused by the missing `pyarrow`.
- https://github.com/apache/spark/actions/workflows/build_python_3.14.yml
- https://github.com/apache/spark/actions/runs/18363201896/job/52310847550
```
======================================================================
ERROR [0.990s]: test_collect_time (pyspark.sql.tests.test_collection.DataFrameCollectionTests.test_collect_time)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/__w/spark/spark/python/pyspark/sql/pandas/utils.py", line 69, in require_minimum_pyarrow_version
import pyarrow
ModuleNotFoundError: No module named 'pyarrow'
```
### Does this PR introduce _any_ user-facing change?
No, this is a test case change.
### How was this patch tested?
Manual review.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes #52555 from dongjoon-hyun/SPARK-53854.
Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
1 file changed: 4 insertions(+), 0 deletions(-)