Commit 49a3c13

[SPARK-53632][PYTHON][DOCS][TESTS] Reenable doctest for DataFrame.pandas_api
### What changes were proposed in this pull request?

Reenable the doctest for `DataFrame.pandas_api`.

### Why are the changes needed?

For test coverage; the doctest will be run when pandas and pyarrow are installed.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

CI.

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes #52383 from zhengruifeng/doc_test_pandas_api.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
1 parent 4b93d4c commit 49a3c13
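The commit message says the doctest runs only "when pandas and pyarrow are installed". PySpark's actual dependency checks are not part of this diff; the sketch below is a generic, stdlib-only way to gate tests on an optional dependency (the names `has_module` and `run_pandas_doctests` are illustrative, not PySpark API):

```python
import importlib.util


def has_module(name: str) -> bool:
    # find_spec locates a top-level module without importing it, so the
    # check is cheap and has no import side effects.
    return importlib.util.find_spec(name) is not None


# Run the pandas-dependent doctests only when both optional deps are present.
run_pandas_doctests = has_module("pandas") and has_module("pyarrow")
```

A check like this is what lets the same test module pass both on machines with the optional dependencies and on minimal installs.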

File tree: 3 files changed, +4 −2 lines
python/pyspark/sql/classic/dataframe.py (1 addition, 0 deletions)

@@ -1949,6 +1949,7 @@ def _test() -> None:
     del pyspark.sql.dataframe.DataFrame.toPandas.__doc__
     del pyspark.sql.dataframe.DataFrame.mapInArrow.__doc__
     del pyspark.sql.dataframe.DataFrame.mapInPandas.__doc__
+    del pyspark.sql.dataframe.DataFrame.pandas_api.__doc__

     spark = (
         SparkSession.builder.master("local[4]").appName("sql.classic.dataframe tests").getOrCreate()
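The lines above previously deleted these docstrings unconditionally, which disabled their doctests even when the dependencies were available; this patch moves `pandas_api` out of that unconditional block (per the dataframe.py change below, its `+SKIP` directives are dropped instead). A minimal, self-contained sketch of the `del __doc__` skip pattern itself, using a hypothetical function `fn`:

```python
import doctest


def fn():
    """
    >>> fn()
    'ok'
    """
    return "ok"


# Stand-in for an availability check such as "are pandas and pyarrow importable?".
have_optional_dependency = False

if not have_optional_dependency:
    # With the docstring gone, DocTestFinder collects no examples from fn;
    # deleting __doc__ is how _test() hides dependency-specific doctests.
    del fn.__doc__

tests = doctest.DocTestFinder().find(fn, "fn", globs={"fn": fn})
print(len(tests))  # 0: nothing left to run
```

Deleting the docstring removes the examples entirely, whereas `# doctest: +SKIP` keeps them visible in rendered docs but never executes them.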

python/pyspark/sql/connect/dataframe.py (1 addition, 0 deletions)

@@ -2321,6 +2321,7 @@ def _test() -> None:
     del pyspark.sql.dataframe.DataFrame.toPandas.__doc__
     del pyspark.sql.dataframe.DataFrame.mapInArrow.__doc__
     del pyspark.sql.dataframe.DataFrame.mapInPandas.__doc__
+    del pyspark.sql.dataframe.DataFrame.pandas_api.__doc__

     globs["spark"] = (
         PySparkSession.builder.appName("sql.connect.dataframe tests")

python/pyspark/sql/dataframe.py (2 additions, 2 deletions)

@@ -6295,15 +6295,15 @@ def pandas_api(
     >>> df = spark.createDataFrame(
     ...     [(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
-    >>> df.pandas_api()  # doctest: +SKIP
+    >>> df.pandas_api()
        age   name
     0   14    Tom
     1   23  Alice
     2   16    Bob

     We can specify the index columns.

-    >>> df.pandas_api(index_col="age")  # doctest: +SKIP
+    >>> df.pandas_api(index_col="age")
           name
     age
     14     Tom
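This hunk drops the `# doctest: +SKIP` directives, so the two examples now actually execute (and their expected output is checked) whenever the doctest suite runs. A stdlib-only sketch of what `+SKIP` does, using a hypothetical function `add`:

```python
import doctest


def add(a, b):
    """
    >>> add(1, 2)
    3
    >>> add(0, 0)  # doctest: +SKIP
    999
    """
    return a + b


finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(add, "add", globs={"add": add}):
    runner.run(test)

# The +SKIP example is neither attempted nor failed: only the first
# example runs, even though its skipped sibling "expects" a wrong value.
print(runner.failures, runner.tries)
```

Removing `+SKIP` (as this patch does) turns such dormant examples back into real assertions, which is exactly the test-coverage gain the commit message describes.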
