
chore: prepare v0.5.2 (#1447)

Merged

linhr merged 4 commits into main from release-0-5-2 on Mar 2, 2026
Conversation

linhr (Contributor) commented Feb 28, 2026

No description provided.

@github-actions

Spark 3.5.7 Test Report

Commit Information

| Commit | Revision | Branch |
| --- | --- | --- |
| After | f431d55 | refs/pull/1447/merge |
| Before | 399317b | refs/heads/main |

Test Summary

| Suite | Commit | Failed | Passed | Skipped | Warnings | Time (s) |
| --- | --- | --- | --- | --- | --- | --- |
| doctest-catalog | After | 12 | 13 | 6 | | 5.10 |
| doctest-catalog | Before | 12 | 13 | 6 | | 5.60 |
| doctest-column | After | | 33 | 2 | | 5.52 |
| doctest-column | Before | | 33 | 2 | | 5.72 |
| doctest-dataframe | After | 25 | 80 | 2 | 3 | 8.56 |
| doctest-dataframe | Before | 25 | 80 | 2 | 3 | 8.66 |
| doctest-functions | After | 71 | 331 | 7 | 7 | 15.70 |
| doctest-functions | Before | 71 | 331 | 7 | 7 | 16.82 |
| test-connect | After | 171 | 831 | 169 | 690 | 134.05 |
| test-connect | Before | 171 | 831 | 169 | 688 | 135.20 |

Test Details

Error Counts
          279 Total
          139 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
           26 DocTestFailure
           17 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
           15 AssertionError: AnalysisException not raised
           15 UnsupportedOperationException: lambda function
           10 handle add artifacts
            6 AssertionError: False is not true
            6 UnsupportedOperationException: PlanNode::CacheTable
            6 UnsupportedOperationException: function: window
            4 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
            4 AssertionError: Attributes of DataFrame.iloc[:, 7] (column name="8_timestamp_t") are different
            4 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd() is not implemented.
            4 UnsupportedOperationException: function: input_file_name
            4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
            3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.7/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
            3 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="time") are different
            3 UnsupportedOperationException: handle analyze input files
            3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
            2 AnalysisException: Could not find config namespace "spark"
            2 AnalysisException: No table format found for: orc
            2 AnalysisException: not supported: function exists
            2 AnalysisException: not supported: list functions
            2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
            2 AssertionError
            2 AssertionError: 0 not greater than or equal to 1
            2 AssertionError: AnalysisException not raised by <lambda>
            2 AssertionError: Lists differ: [Row([22 chars](key=1, value='1'), Row(key=10, value='10'), R[2402 chars]99')] != [Row([22 chars](key=0, value='0'), Row(key=1, value='1'), Row[4882 chars]99')]
            2 IllegalArgumentException: expected value at line 1 column 1
            2 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
            2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
            2 UnsupportedOperationException: approx quantile
            2 UnsupportedOperationException: collect metrics
            2 UnsupportedOperationException: freq items
            2 UnsupportedOperationException: function: from_json
            2 UnsupportedOperationException: function: schema_of_csv
            2 UnsupportedOperationException: handle analyze is local
            2 UnsupportedOperationException: handle analyze same semantics
            2 UnsupportedOperationException: pivot
            2 UnsupportedOperationException: user defined data type should only exist in a field
            2 UnsupportedOperationException: with watermark
            2 handle artifact statuses
            2 received metadata size exceeds hard limit (19158 vs. 16384);  :status:42B content-type:60B grpc-status:45B grpc-message:8800B grpc-status-details-bin:7672B ValueError: Code in Status proto (StatusCode...
            1 AnalysisException: Cannot cast string 'abc' to value of Float64 type
            1 AnalysisException: Cannot cast value 'abc' to value of Boolean type
            1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
            1 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
            1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-3.5.7/lib/python3.11/site-packages/pyspark/sql/functions.py
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/tmp8k1gdv13/text-0.text, file:///tmp/tmp8k1gdv13/text-1.text, file:///tmp/tmp8k1gdv13/text-2.text
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/tmpdsjdace1/
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/tmpq_7hmneu/
(+1)        1 AnalysisException: UNION queries have different number of columns: left has 3 columns whereas right has 2 columns
            1 AnalysisException: table already exists: tbl1
            1 AnalysisException: temporary view not found: tab2
            1 AssertionError: "2000000" does not match "raise_error expects a single UTF-8 string argument"
(+1)        1 AssertionError: "Database 'memory:20a286c8-3f50-4428-95db-010f4726a893' dropped." does not match "No table format found for: jdbc"
(+1)        1 AssertionError: "Database 'memory:383cb264-4b8b-4916-90a5-2af2b4c40d87' dropped." does not match "No table format found for: jdbc"
            1 AssertionError: "attribute.*missing" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
            1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
            1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
            1 AssertionError: '+---[17 chars]-----+\n|                        x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
            1 AssertionError: '4.1.1' != '3.5.7'
            1 AssertionError: 1 != 0
            1 AssertionError: 2 != 6
            1 AssertionError: ArrayIndexOutOfBoundsException not raised
            1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="a") are different
            1 AssertionError: Attributes of DataFrame.iloc[:, 0] (column name="ts") are different
            1 AssertionError: Exception not raised
            1 AssertionError: Exception not raised by <lambda>
            1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
            1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
            1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
            1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
            1 AssertionError: Lists differ: [Row(ln(id)=0.0, ln(id)=0.0, struct(id, name)=Row(id=[1232 chars]0'))] != [Row(ln(id)=4.31748811353631, ln(id)=4.31748811353631[1312 chars]4'))]
            1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
            1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
            1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
            1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
            1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
            1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
            1 AssertionError: Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), False)]) != Struc[64 chars]Type(), True), StructField('i', StringType(), True)]), True)])
            1 AssertionError: Struc[69 chars]e(), True), StructField('name', StringType(), True)]), True)]) != Struc[69 chars]e(), True), StructField('name', StringType(), True)]), False)])
            1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
            1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
            1 AssertionError: dtype('<M8[us]') != 'datetime64[ns]'
            1 AttributeError: 'DataFrame' object has no attribute '_ipython_key_completions_'
            1 AttributeError: 'DataFrame' object has no attribute '_joinAsOf'
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreach() is not implemented.
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] foreachPartition() is not implemented.
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] localCheckpoint() is not implemented.
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] sparkContext() is not implemented.
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
            1 PythonException:  AttributeError: 'NoneType' object has no attribute 'partitionId'
            1 PythonException:  AttributeError: 'list' object has no attribute 'x'
            1 PythonException:  AttributeError: 'list' object has no attribute 'y'
            1 SparkRuntimeException: Invalid argument error: 83.140 is too large to store in a Decimal128 of precision 4. Max is 9.999
            1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Int64) at column index 1
            1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
            1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
            1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
            1 SparkRuntimeException: Parser error: Error parsing timestamp from '1997/02/28 10:30:00': error parsing date
            1 SparkRuntimeException: Parser error: Error while parsing value '0
            1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
            1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@dpwaizb7u45fv8mrw1upwqebz(plus_one@5u7wzy1x1zf4loxm9n1r277lw(#9)) PART...
            1 UnsupportedOperationException: PlanNode::ClearCache
            1 UnsupportedOperationException: PlanNode::IsCached
            1 UnsupportedOperationException: PlanNode::RecoverPartitions
            1 UnsupportedOperationException: SHOW FUNCTIONS
            1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
            1 UnsupportedOperationException: bucketing for writing listing table format
            1 UnsupportedOperationException: deduplicate within watermark
            1 UnsupportedOperationException: function: java_method
            1 UnsupportedOperationException: function: json_tuple
            1 UnsupportedOperationException: function: reflect
            1 UnsupportedOperationException: function: regexp_extract_all
            1 UnsupportedOperationException: function: schema_of_json
            1 UnsupportedOperationException: function: sentences
            1 UnsupportedOperationException: function: session_window
            1 UnsupportedOperationException: function: spark_partition_id
            1 UnsupportedOperationException: function: to_char
            1 UnsupportedOperationException: function: to_csv
            1 UnsupportedOperationException: function: to_varchar
            1 UnsupportedOperationException: function: xpath
            1 UnsupportedOperationException: function: xpath_boolean
            1 UnsupportedOperationException: function: xpath_double
            1 UnsupportedOperationException: function: xpath_float
            1 UnsupportedOperationException: function: xpath_int
            1 UnsupportedOperationException: function: xpath_long
            1 UnsupportedOperationException: function: xpath_number
            1 UnsupportedOperationException: function: xpath_short
            1 UnsupportedOperationException: function: xpath_string
            1 UnsupportedOperationException: handle analyze semantic hash
            1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
            1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
            1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
            1 UnsupportedOperationException: unknown aggregate function: grouping_id
            1 UnsupportedOperationException: unknown function: distributed_sequence_id
            1 UnsupportedOperationException: unknown function: product
            1 ValueError: Code in Status proto (StatusCode.INTERNAL) doesn't match status code (StatusCode.RESOURCE_EXHAUSTED)
            1 ValueError: The column label 'id' is not unique.
            1 ValueError: The column label 'struct' is not unique.
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/tmp7vslktn_/text-0.text, file:///tmp/tmp7vslktn_/text-1.text, file:///tmp/tmp7vslktn_/text-2.text
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/tmpbyudxrv_/
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/tmpk3m2hr3d/
(-1)        0 AnalysisException: UNION queries have different number of columns: left has 2 columns whereas right has 3 columns
(-1)        0 AssertionError: "Database 'memory:28e85cf6-c301-4595-b8dd-16a916c7d98e' dropped." does not match "No table format found for: jdbc"
(-1)        0 AssertionError: "Database 'memory:74194789-c827-456c-8e40-a36bd15c01fb' dropped." does not match "No table format found for: jdbc"
Passed Tests Diff

(empty)

Failed Tests
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.cacheTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.clearCache
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.createTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.functionExists
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.getFunction
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.isCached
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.listCatalogs
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.listFunctions
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.recoverPartitions
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.refreshByPath
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.refreshTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.uncacheTable
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame._ipython_key_completions_
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame._joinAsOf
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.checkpoint
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.coalesce
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.colRegex
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.dropDuplicatesWithinWatermark
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.explain
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.foreach
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.foreachPartition
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.hint
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.inputFiles
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.isLocal
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.isStreaming
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.localCheckpoint
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.observe
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.randomSplit
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.rdd
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.repartition
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.repartitionByRange
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.sameSemantics
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.sampleBy
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.storageLevel
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.toJSON
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.withWatermark
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrameStatFunctions.sampleBy
pyspark/sql/functions.py::pyspark.sql.functions.aggregate
pyspark/sql/functions.py::pyspark.sql.functions.approx_percentile
pyspark/sql/functions.py::pyspark.sql.functions.array_position
pyspark/sql/functions.py::pyspark.sql.functions.array_sort
pyspark/sql/functions.py::pyspark.sql.functions.array_union
pyspark/sql/functions.py::pyspark.sql.functions.arrays_zip
pyspark/sql/functions.py::pyspark.sql.functions.bitmap_construct_agg
pyspark/sql/functions.py::pyspark.sql.functions.bitmap_or_agg
pyspark/sql/functions.py::pyspark.sql.functions.count_min_sketch
pyspark/sql/functions.py::pyspark.sql.functions.exists
pyspark/sql/functions.py::pyspark.sql.functions.filter
pyspark/sql/functions.py::pyspark.sql.functions.first
pyspark/sql/functions.py::pyspark.sql.functions.forall
pyspark/sql/functions.py::pyspark.sql.functions.from_csv
pyspark/sql/functions.py::pyspark.sql.functions.from_json
pyspark/sql/functions.py::pyspark.sql.functions.grouping_id
pyspark/sql/functions.py::pyspark.sql.functions.hll_sketch_agg
pyspark/sql/functions.py::pyspark.sql.functions.hll_sketch_estimate
pyspark/sql/functions.py::pyspark.sql.functions.hll_union
pyspark/sql/functions.py::pyspark.sql.functions.hll_union_agg
pyspark/sql/functions.py::pyspark.sql.functions.ilike
pyspark/sql/functions.py::pyspark.sql.functions.input_file_block_length
pyspark/sql/functions.py::pyspark.sql.functions.input_file_block_start
pyspark/sql/functions.py::pyspark.sql.functions.input_file_name
pyspark/sql/functions.py::pyspark.sql.functions.java_method
pyspark/sql/functions.py::pyspark.sql.functions.json_tuple
pyspark/sql/functions.py::pyspark.sql.functions.kurtosis
pyspark/sql/functions.py::pyspark.sql.functions.like
pyspark/sql/functions.py::pyspark.sql.functions.map_entries
pyspark/sql/functions.py::pyspark.sql.functions.map_filter
pyspark/sql/functions.py::pyspark.sql.functions.map_zip_with
pyspark/sql/functions.py::pyspark.sql.functions.mode
pyspark/sql/functions.py::pyspark.sql.functions.monotonically_increasing_id
pyspark/sql/functions.py::pyspark.sql.functions.percentile
pyspark/sql/functions.py::pyspark.sql.functions.percentile_approx
pyspark/sql/functions.py::pyspark.sql.functions.product
pyspark/sql/functions.py::pyspark.sql.functions.randn
pyspark/sql/functions.py::pyspark.sql.functions.reduce
pyspark/sql/functions.py::pyspark.sql.functions.reflect
pyspark/sql/functions.py::pyspark.sql.functions.regexp_extract
pyspark/sql/functions.py::pyspark.sql.functions.regexp_extract_all
pyspark/sql/functions.py::pyspark.sql.functions.regexp_instr
pyspark/sql/functions.py::pyspark.sql.functions.regr_avgy
pyspark/sql/functions.py::pyspark.sql.functions.regr_intercept
pyspark/sql/functions.py::pyspark.sql.functions.regr_r2
pyspark/sql/functions.py::pyspark.sql.functions.regr_slope
pyspark/sql/functions.py::pyspark.sql.functions.regr_sxy
pyspark/sql/functions.py::pyspark.sql.functions.regr_syy
pyspark/sql/functions.py::pyspark.sql.functions.schema_of_csv
pyspark/sql/functions.py::pyspark.sql.functions.schema_of_json
pyspark/sql/functions.py::pyspark.sql.functions.sentences
pyspark/sql/functions.py::pyspark.sql.functions.session_window
pyspark/sql/functions.py::pyspark.sql.functions.spark_partition_id
pyspark/sql/functions.py::pyspark.sql.functions.to_char
pyspark/sql/functions.py::pyspark.sql.functions.to_csv
pyspark/sql/functions.py::pyspark.sql.functions.to_varchar
pyspark/sql/functions.py::pyspark.sql.functions.transform
pyspark/sql/functions.py::pyspark.sql.functions.transform_keys
pyspark/sql/functions.py::pyspark.sql.functions.transform_values
pyspark/sql/functions.py::pyspark.sql.functions.window
pyspark/sql/functions.py::pyspark.sql.functions.window_time
pyspark/sql/functions.py::pyspark.sql.functions.xpath
pyspark/sql/functions.py::pyspark.sql.functions.xpath_boolean
pyspark/sql/functions.py::pyspark.sql.functions.xpath_double
pyspark/sql/functions.py::pyspark.sql.functions.xpath_float
pyspark/sql/functions.py::pyspark.sql.functions.xpath_int
pyspark/sql/functions.py::pyspark.sql.functions.xpath_long
pyspark/sql/functions.py::pyspark.sql.functions.xpath_number
pyspark/sql/functions.py::pyspark.sql.functions.xpath_short
pyspark/sql/functions.py::pyspark.sql.functions.xpath_string
pyspark/sql/functions.py::pyspark.sql.functions.zip_with
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_add_archive
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_add_file
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_add_pyfile
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_add_zipped_package
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_basic_requests
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_cache_artifact
pyspark/sql/tests/connect/client/test_artifact.py::ArtifactTests::test_copy_from_local_to_fs
pyspark/sql/tests/connect/client/test_artifact.py::LocalClusterArtifactTests::test_add_archive
pyspark/sql/tests/connect/client/test_artifact.py::LocalClusterArtifactTests::test_add_file
pyspark/sql/tests/connect/client/test_artifact.py::LocalClusterArtifactTests::test_add_pyfile
pyspark/sql/tests/connect/client/test_artifact.py::LocalClusterArtifactTests::test_add_zipped_package
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_collect
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_collect_timestamp
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_column_regexp
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_create_global_temp_view
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_deduplicate_within_watermark_in_batch
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_describe
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_explain_string
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_grouped_data
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_hint
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_input_files
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_invalid_column
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_is_local
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_join_hint
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_json
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_multi_paths
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_namedargs_with_global_limit
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_numeric_aggregation
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_observe
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_orc
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_random_split
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_same_semantics
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_schema
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_semantic_hash
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_simple_read_without_schema
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_simple_udt_from_read
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_sql_with_command
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_sql_with_pos_args
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_stat_approx_quantile
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_stat_freq_items
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_stat_sample_by
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_streaming_local_relation
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_tail
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_to
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_version
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_with_local_list
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_with_local_ndarray
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectBasicTests::test_write_operations
pyspark/sql/tests/connect/test_connect_basic.py::SparkConnectSessionTests::test_error_stack_trace
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_column_accessor
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_column_arithmetic_ops
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_column_field_ops
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_columns
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_decimal
pyspark/sql/tests/connect/test_connect_column.py::SparkConnectColumnTests::test_distributed_sequence_id
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_aggregation_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_collection_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_csv_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_date_ts_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_generator_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_json_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_lambda_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_map_collection_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_math_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_nested_lambda_function
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_normal_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_string_functions_multi_args
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_string_functions_one_arg
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_time_window_functions
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_udf
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_udtf
pyspark/sql/tests/connect/test_connect_function.py::SparkConnectFunctionTests::test_window_functions
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_createDataFrame_duplicate_field_names
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_createDataFrame_with_schema
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_pandas_round_trip
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_pandas_self_destruct
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_timestamp_dst
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_timestamp_nat
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_toPandas_arrow_toggle
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_toPandas_duplicate_field_names
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_toPandas_nested_timestamp
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_toPandas_respect_session_timezone
pyspark/sql/tests/connect/test_parity_arrow.py::ArrowParityTests::test_toPandas_timestmap_tzinfo
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_join_condition
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::UDFParityTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::UDFParityTests::test_udf_in_join_condition
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::UDFParityTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/test_parity_arrow_python_udf.py::UDFParityTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/test_parity_catalog.py::CatalogParityTests::test_function_exists
pyspark/sql/tests/connect/test_parity_catalog.py::CatalogParityTests::test_get_function
pyspark/sql/tests/connect/test_parity_catalog.py::CatalogParityTests::test_list_functions
pyspark/sql/tests/connect/test_parity_catalog.py::CatalogParityTests::test_refresh_table
pyspark/sql/tests/connect/test_parity_catalog.py::CatalogParityTests::test_table_cache
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_cache_dataframe
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_cache_table
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_create_dataframe_from_pandas_with_dst
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_duplicate_field_names
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_extended_hint_types
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_freqItems
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_generic_hints
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_input_files
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_join_without_on
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_require_cross
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_to
pyspark/sql/tests/connect/test_parity_dataframe.py::DataFrameParityTests::test_to_pandas
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_checking_csv_header
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_encoding_json
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_ignore_column_of_all_nulls
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_ignorewhitespace_csv
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_jdbc
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_jdbc_format
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_linesep_json
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_linesep_text
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_multiline_csv
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_multiline_json
pyspark/sql/tests/connect/test_parity_datasources.py::DataSourcesParityTests::test_read_multiple_orc_file
pyspark/sql/tests/connect/test_parity_errors.py::ErrorsParityTests::test_array_index_out_of_bounds_exception
pyspark/sql/tests/connect/test_parity_errors.py::ErrorsParityTests::test_date_time_exception
pyspark/sql/tests/connect/test_parity_errors.py::ErrorsParityTests::test_number_format_exception
pyspark/sql/tests/connect/test_parity_errors.py::ErrorsParityTests::test_spark_runtime_exception
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_approxQuantile
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_assert_true
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_functions_broadcast
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_inline
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_input_file_name_udf
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_nested_higher_order_function
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_raise_error
pyspark/sql/tests/connect/test_parity_functions.py::FunctionsParityTests::test_window_time
pyspark/sql/tests/connect/test_parity_pandas_grouped_map.py::GroupedApplyInPandasTests::test_grouped_over_window
pyspark/sql/tests/connect/test_parity_pandas_grouped_map.py::GroupedApplyInPandasTests::test_grouped_over_window_with_key
pyspark/sql/tests/connect/test_parity_pandas_grouped_map_with_state.py::GroupedApplyInPandasWithStateTests::test_apply_in_pandas_with_state_python_worker_random_failure
pyspark/sql/tests/connect/test_parity_pandas_map.py::MapInPandasParityTests::test_large_variable_types
pyspark/sql/tests/connect/test_parity_pandas_udf_grouped_agg.py::PandasUDFGroupedAggParityTests::test_invalid_args
pyspark/sql/tests/connect/test_parity_pandas_udf_scalar.py::PandasUDFScalarParityTests::test_nondeterministic_vectorized_udf_in_aggregate
pyspark/sql/tests/connect/test_parity_pandas_udf_scalar.py::PandasUDFScalarParityTests::test_scalar_iter_udf_init
pyspark/sql/tests/connect/test_parity_pandas_udf_scalar.py::PandasUDFScalarParityTests::test_vectorized_udf_check_config
pyspark/sql/tests/connect/test_parity_pandas_udf_scalar.py::PandasUDFScalarParityTests::test_vectorized_udf_invalid_length
pyspark/sql/tests/connect/test_parity_pandas_udf_window.py::PandasUDFWindowParityTests::test_bounded_mixed
pyspark/sql/tests/connect/test_parity_pandas_udf_window.py::PandasUDFWindowParityTests::test_bounded_simple
pyspark/sql/tests/connect/test_parity_pandas_udf_window.py::PandasUDFWindowParityTests::test_shrinking_window
pyspark/sql/tests/connect/test_parity_pandas_udf_window.py::PandasUDFWindowParityTests::test_sliding_window
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterParityTests::test_bucketed_write
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterParityTests::test_insert_into
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterParityTests::test_save_and_load
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterParityTests::test_save_and_load_builder
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterV2ParityTests::test_create_without_provider
pyspark/sql/tests/connect/test_parity_readwriter.py::ReadwriterV2ParityTests::test_table_overwrite
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_cast_to_string_with_udt
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_cast_to_udt_with_udt
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_complex_nested_udt_in_df
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_negative_decimal
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_parquet_with_udt
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_udf_with_udt
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_udt_with_none
pyspark/sql/tests/connect/test_parity_types.py::TypesParityTests::test_yearmonth_interval_type
pyspark/sql/tests/connect/test_parity_udf.py::UDFParityTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/test_parity_udf.py::UDFParityTests::test_udf_in_join_condition
pyspark/sql/tests/connect/test_parity_udf.py::UDFParityTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/test_parity_udf.py::UDFParityTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/test_parity_udtf.py::ArrowUDTFParityTests::test_udtf_arrow_sql_conf
pyspark/sql/tests/connect/test_parity_udtf.py::ArrowUDTFParityTests::test_udtf_with_table_argument_malformed_query
pyspark/sql/tests/connect/test_parity_udtf.py::ArrowUDTFParityTests::test_udtf_with_table_argument_multiple
pyspark/sql/tests/connect/test_parity_udtf.py::ArrowUDTFParityTests::test_udtf_with_table_argument_unknown_identifier
pyspark/sql/tests/connect/test_parity_udtf.py::UDTFParityTests::test_udtf_with_table_argument_malformed_query
pyspark/sql/tests/connect/test_parity_udtf.py::UDTFParityTests::test_udtf_with_table_argument_multiple
pyspark/sql/tests/connect/test_parity_udtf.py::UDTFParityTests::test_udtf_with_table_argument_unknown_identifier
pyspark/sql/tests/connect/test_utils.py::ConnectUtilsTests::test_assert_approx_equal_decimaltype_custom_rtol_pass
pyspark/sql/tests/connect/test_utils.py::ConnectUtilsTests::test_assert_equal_nested_struct_str_duplicate

@github-actions

Spark 4.1.1 Test Report

Commit Information

| Commit | Revision | Branch |
|--------|----------|--------|
| After  | f431d55  | refs/pull/1447/merge |
| Before | 399317b  | refs/heads/main |

Test Summary

Suite Commit Failed Passed Skipped Warnings Time (s)
doctest-catalog After 12 13 6.35
Before 12 13 5.65
doctest-column After 36 6.86
Before 36 6.69
doctest-dataframe After 34 86 2 3 11.08
Before 34 86 2 3 10.36
doctest-functions After 148 337 10 5 27.50
Before 148 337 10 5 25.90
test-connect After 1228 1281 291 485 233.53
Before 1228 1281 291 484 223.04

Test Details

Error Counts
         1422 Total
(+1)      327 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
          211 PythonException:  JSONDecodeError: Expecting value: line 1 column 1 (char 0)
           73 AssertionError: False is not true
           64 IllegalArgumentException: missing argument: Python UDTF return type
           60 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 250
           55 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 252
           38 DocTestFailure
           36 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 302
           31 UnsupportedOperationException: function: parse_json
           31 UnsupportedOperationException: unresolved table valued function
           27 UnsupportedOperationException: lateral join
           24 UnsupportedOperationException: named argument expression
           23 IllegalArgumentException: expected value at line 1 column 1
           22 UnsupportedOperationException: handle add artifacts
           20 UnsupportedOperationException: variant data type
           15 UnsupportedOperationException: lambda function
           14 AssertionError: 1 != 0 : dict_keys([])
           14 PySparkAssertionError: [DIFFERENT_PANDAS_DATAFRAME] DataFrames are not almost equal:
           14 UnsupportedOperationException: unknown function: kll_sketch_agg_bigint
           13 UnsupportedOperationException: time literal
           12 PythonException:  TypeError: object of type 'generator' has no len()
           11 IllegalArgumentException: invalid argument: expected function for lateral table factor
           11 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 216
           10 AssertionError: AnalysisException not raised
           10 SparkRuntimeException: Python error: [test::partitions] NotImplementedError: 
            9 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 251
            9 UnsupportedOperationException: collect metrics
            9 UnsupportedOperationException: unsupported subquery type
            8 AssertionError
            8 AssertionError: "UDTF_ARROW_TYPE_CONVERSION_ERROR" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            8 AssertionError: "UDTF_RETURN_SCHEMA_MISMATCH" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            8 UnsupportedOperationException: unknown function: kll_sketch_agg_double
            8 UnsupportedOperationException: unknown function: kll_sketch_agg_float
            7 AssertionError: 3 != 0 : []
            7 UnsupportedOperationException: function: spark_partition_id
            7 UnsupportedOperationException: named function arguments
            7 UnsupportedOperationException: unknown function: st_geogfromwkb
            7 UnsupportedOperationException: unknown function: theta_sketch_agg
            7 UnsupportedOperationException: user defined data type should only exist in a field
            6 AssertionError: "TABLE_OR_VIEW_NOT_FOUND" does not match "view not found: v"
            6 AssertionError: "UDTF_ARROW_TYPE_CAST_ERROR" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            6 AssertionError: "UDTF_RETURN_NOT_ITERABLE" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            6 AssertionError: 1 != 0
            6 AssertionError: Exception not raised
            6 IllegalArgumentException: invalid argument: found range at 40:45 expected '->', '.', '(', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|'...
            6 PySparkAssertionError: [DIFFERENT_ROWS] Results do not match: ( 100.00000 % )
            6 UnsupportedOperationException: PlanNode::CacheTable
            6 UnsupportedOperationException: direct shuffle partition ID expression
            6 UnsupportedOperationException: function: input_file_name
            6 UnsupportedOperationException: function: window
            5 AssertionError: "Python worker process terminated due to idle timeout \(timeout: 1 seconds\)" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            5 AssertionError: `query_context_type` is required when QueryContext exists. QueryContext: [].
            4 AnalysisException: temporary view not found: t2
            4 AssertionError: AnalysisException not raised by <lambda>
            4 AssertionError: unexpectedly None
            4 UnsupportedOperationException: approx quantile
            4 UnsupportedOperationException: freq items
            4 UnsupportedOperationException: transpose
            4 UnsupportedOperationException: unknown aggregate function: hll_sketch_agg
            4 clone session
            3 AnalysisException: Failed to parse placeholder id: cannot parse integer from empty string
            3 AnalysisException: Invalid Python user-defined table function return type. Expect a struct type, but got Int32.
            3 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/python/test_support/sql/ages_newlines.cs...
            3 AssertionError: "(Please use a different output data type for your UDF or DataFrame|Invalid return type with Arrow-optimized Python UDF)" does not match " JSONDecodeError: Expecting value: line 1 colu...
            3 AssertionError: 0 not greater than or equal to 1
            3 AssertionError: DayTimeIntervalType(0, 3) != DayTimeIntervalType(1, 3)
            3 AssertionError: Struc[49 chars]valType(0, 3), True), StructField('name', StringType(), True)]) != Struc[49 chars]valType(1, 3), True), StructField('name', StringType(), True)])
            3 IllegalArgumentException: data did not match any variant of untagged enum JsonDataType
            3 IllegalArgumentException: invalid argument: found PARTITION at 281:290 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=...
            3 IllegalArgumentException: invalid argument: found PARTITION at 295:304 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=...
            3 IllegalArgumentException: invalid argument: found PARTITION at 59:68 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=',...
            3 IllegalArgumentException: invalid argument: found WITH at 171:175 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=', '!...
            3 IllegalArgumentException: invalid argument: found WITH at 279:283 expected '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|', '!=', '!...
            3 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 211
            3 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 215
            3 UnsupportedOperationException: cached remote relation
            3 UnsupportedOperationException: function: from_json
            3 UnsupportedOperationException: handle analyze input files
            3 UnsupportedOperationException: pivot
            3 UnsupportedOperationException: table argument options in subquery expression
            3 UnsupportedOperationException: unknown function: distributed_sequence_id
            3 ValueError: Converting to Python dictionary is not supported when duplicate field names are present
            2 AnalysisException: No table format found for: orc
            2 AnalysisException: ambiguous attribute: ObjectName([Identifier("id")])
            2 AnalysisException: not supported: function exists
            2 AnalysisException: not supported: list functions
            2 AnalysisException: temporary view not found: variant_table
            2 AnalysisException: two values expected: [Column(Column { relation: None, name: "#2" }), Column(Column { relation: None, name: "#3" }), Literal(Utf8("/"), None)]
            2 AssertionError: ".*constructor has more than one argument.*" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "ARROW_TYPE_MISMATCH.*SQL_MAP_ARROW_ITER_UDF" does not match "Invalid argument error: column types must match schema types, expected Int32 but found Int64 at column index 0"
            2 AssertionError: "AttributeError: 'int' object has no attribute 'corr'" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "NO_ACTIVE_SESSION" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the '__init__' method: error" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'eval' method: error" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "\[UDTF_EXEC_ERROR\] User defined table function encountered an error in the 'terminate' method: terminate error" does not match " JSONDecodeError: Expecting value: line 1 column 1 (ch...
            2 AssertionError: "eval error" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "missing a required argument" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "terminate error" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: "terminate\(\) missing 1 required positional argument: 'a'" does not match " JSONDecodeError: Expecting value: line 1 column 1 (char 0)
            2 AssertionError: 3 != 0 : dict_keys([])
            2 AssertionError: {'foo': 'bar'} != {}
            2 IllegalArgumentException: invalid argument: found FUNCTION at 7:15 expected 'DATABASE', 'SCHEMA', 'OR', 'TEMP', 'TEMPORARY', 'EXTERNAL', 'TABLE', 'GLOBAL', or 'VIEW'
            2 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 213
            2 IllegalArgumentException: missing argument: Python UDF output type
            2 PySparkAssertionError: [DIFFERENT_ROWS] Results do not match: ( 99.50000 % )
            2 PythonException:  AssertionError: assert None is not None
            2 PythonException:  AttributeError: 'NoneType' object has no attribute 'cpus'
            2 PythonException:  PySparkRuntimeError: [UDTF_EVAL_METHOD_ARGUMENTS_DO_NOT_MATCH_SIGNATURE] Failed to evaluate the user-defined table function '' because the function arguments did not match the expect...
            2 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (int64) with name 'decimal_result' to Arrow Array (decimal128(10, 2)).
            2 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (int8) with name 'None' to Arrow Array (decimal128(10, 0)).
            2 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (object) with name 'None' to Arrow Array (int32).
            2 SparkRuntimeException: Error during planning: Correlated scalar subquery must be aggregated to return at most one row
            2 SparkRuntimeException: Python error: [TestDataSource::partitions] NotImplementedError: 
            2 SparkRuntimeException: Python error: [my-json::partitions] AttributeError: 'pyarrow.lib.Schema' object has no attribute 'fieldNames'
            2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
            2 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: mean_udf@4jfgqif5e847nsiw7pwudti7o(#3) PARTITION BY [#2] ORDER BY [#3 ASC ...
            2 UnsupportedOperationException: CLUSTER BY for write
            2 UnsupportedOperationException: LATERAL JOIN with criteria
            2 UnsupportedOperationException: Physical plan does not support logical expression ScalarSubquery(<subquery>)
            2 UnsupportedOperationException: Physical plan does not support logical expression Wildcard { qualifier: None, options: WildcardOptions { ilike: None, exclude: None, except: None, replace: None, rename:...
            2 UnsupportedOperationException: cast Time64(Nanosecond) to Spark data type
            2 UnsupportedOperationException: create resource profile command
            2 UnsupportedOperationException: function: from_xml
            2 UnsupportedOperationException: function: to_variant_object
            2 UnsupportedOperationException: function: try_make_interval
            2 UnsupportedOperationException: function: try_parse_json
            2 UnsupportedOperationException: function: uniform
            2 UnsupportedOperationException: handle analyze is local
            2 UnsupportedOperationException: handle analyze same semantics
            2 UnsupportedOperationException: unknown function: st_geomfromwkb
            2 UnsupportedOperationException: unknown function: try_to_date
            2 UnsupportedOperationException: unknown function: try_to_time
            2 UnsupportedOperationException: wildcard with plan ID
            2 UnsupportedOperationException: with watermark
            2 handle artifact statuses
            2 received metadata size exceeds hard limit (19158 vs. 16384);  :status:42B content-type:60B grpc-status:45B grpc-message:8800B grpc-status-details-bin:7672B ValueError: Code in Status proto (StatusCode...
            1 AnalysisException: Cannot cast string 'abc' to value of Float64 type
            1 AnalysisException: Cannot cast value 'abc' to value of Boolean type
            1 AnalysisException: Could not find config namespace "mapred"
            1 AnalysisException: Could not find config namespace "spark"
            1 AnalysisException: Error parsing timestamp from '082017' using format '%m%Y': input is not enough for unique date and time
            1 AnalysisException: Error parsing timestamp from '2014-31-12' using format '%Y-%d-%pa': input contains invalid characters
            1 AnalysisException: Error parsing timestamp from '2023-01-01' using format '%d-%m-%Y': input contains invalid characters
            1 AnalysisException: Internal error: Expect TypeSignatureClass::Native(LogicalType(Native(Int32), Int32)) but received NativeType::Float64, DataType: Float64.
(+1)        1 AnalysisException: Invalid partition id 1 in write result (expected < 1)
            1 AnalysisException: No files found in the specified paths: file:///home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/sql/functions/builtin.py
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths15zz3rv7u/text-0.text, file:///tmp/test_multi_paths15zz3rv7u/text-1.text, file:///tmp/test_multi_paths15zz3rv7u/te...
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/tmp85g7xleg/
(+1)        1 AnalysisException: No files found in the specified paths: file:///tmp/tmpo1iqj95g/
            1 AnalysisException: No table format found for: xml
            1 AnalysisException: UNION queries have different number of columns: left has 3 columns whereas right has 2 columns
            1 AnalysisException: Write failed for partition 0: External error: Python error: [TestArrowWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
(-1)        1 AnalysisException: Write failed for partition 0: External error: Python error: [TestJsonWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
(+1)        1 AnalysisException: Write failed for partition 2: External error: Python error: [TestJsonWriter::write] AttributeError: 'NoneType' object has no attribute 'partitionId'
            1 AnalysisException: ambiguous attribute: ObjectName([Identifier("b")])
            1 AnalysisException: ambiguous attribute: ObjectName([Identifier("i")])
            1 AnalysisException: cannot resolve attribute: ObjectName([Identifier("x")])
            1 AnalysisException: element_at expects List or Map type as first argument, got Null
            1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#0" }), Literal(Int32(123), None)]
(+1)        1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(7826199983141352080), None)]
(+1)        1 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(8857419673477861356), None)]
            1 AnalysisException: table already exists: tbl1
            1 AnalysisException: temporary view not found: tab2
            1 AnalysisException: to_time format argument 2 must be a scalar, not an array
            1 AnalysisException: too big
            1 AnalysisException: zero values expected: [Literal(Int32(123), None)]
(+1)        1 AssertionError: "'path' is not specified." does not match "Generic LocalFileSystem error: Unable to open file /F4gin9G4k4yYsfLM_0.zst.parquet#1: Permission denied (os error 13)"
            1 AssertionError: "ARROW_TYPE_MISMATCH.*SQL_MAP_ARROW_ITER_UDF" does not match "Invalid argument error: column types must match schema types, expected Struct("b": Int32) but found Struct("a": Int64, "b"...
            1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m. TypeError: object of type 'generator' has no len()
            1 AssertionError: "Column names of the returned pyarrow.Table do not match specified schema. Missing: m. Unexpected: v, v2. TypeError: object of type 'generator' has no len()
            1 AssertionError: "Columns do not match in their data type: column 'a' \(expected int32, actual int64\)" does not match " TypeError: object of type 'generator' has no len()
            1 AssertionError: "Columns do not match in their data type: column 'id' \(expected int32, actual int64\)" does not match " TypeError: object of type 'generator' has no len()
            1 AssertionError: "DATA_SOURCE_EXTRANEOUS_FILTERS" does not match "Python error: [test::partitions] AssertionError: assert False
            1 AssertionError: "DATA_SOURCE_PUSHDOWN_DISABLED" does not match "Python error: [<reader>::read] AssertionError: assert False
(+1)        1 AssertionError: "Database 'memory:340a963c-e62b-46bb-870d-0c7d6c814c94' dropped." does not match "No table format found for: jdbc"
(+1)        1 AssertionError: "Database 'memory:ffd73f4b-76dd-428e-a39f-2df15433f08d' dropped." does not match "No table format found for: jdbc"
            1 AssertionError: "Invalid return type" does not match " AttributeError: 'Series' object has no attribute 'columns'
            1 AssertionError: "PARTITION_TRANSFORM_EXPRESSION_NOT_IN_PARTITIONED_BY" does not match "unknown function: years"
            1 AssertionError: "Python worker process terminated due to idle timeout \(timeout: 1 seconds\)" does not match " PySparkRuntimeError: [UDTF_INVALID_OUTPUT_ROW_TYPE] The type of an individual output row ...
            1 AssertionError: "Return type of the user-defined function should be pyarrow.Table, but is tuple" does not match " TypeError: object of type 'generator' has no len()
            1 AssertionError: "UNRESOLVED_COLUMN.WITH_SUGGESTION" does not match "cannot resolve attribute: ObjectName([Identifier("b")])"
            1 AssertionError: "foobar" does not match "raise_error expects a single UTF-8 string argument"
            1 AssertionError: "is null" does not match " ArrowException: Invalid argument error: Column 'a' is declared as non-nullable but contains null values
            1 AssertionError: "requirement failed: Cogroup keys must have same size: 2 != 1" does not match "invalid argument: child plan grouping expressions must have the same length"
            1 AssertionError: '+--------------------------------+-------------------[411 chars]-+\n' != '+-----------+-----------+\n|from_csv(a)|from_csv(b)|\[105 chars]-+\n'
            1 AssertionError: '+---[17 chars]-----+\n|                        x|\n+--------[132 chars]-+\n' != '+---[17 chars]----------+\n|update_fields(x, WithField(e))|\[167 chars]-+\n'
            1 AssertionError: '+---[23 chars]---+-----+\n|  1|    1|\n+---+-----+\nonly showing top 1 row' != '+---[23 chars]---+-----+\n|  1|    1|\n+---+-----+\nonly showing top 1 row\n'
(+1)        1 AssertionError: 'INVALID_CLONE_SESSION_REQUEST.TARGET_SESSION_ID_FORMAT' not found in '<_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNIMPLEMENTED\n\tdetails = "clone session"...
            1 AssertionError: 'ST_INVALID_SRID_VALUE' != None : Expected error class was 'ST_INVALID_SRID_VALUE', got 'None'.
            1 AssertionError: 'UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY.UNSUPPORTED_IN_EXISTS_SUBQUERY' != None : Expected error class was 'UNSUPPORTED_SUBQUERY_EXPRESSION_CATEGORY.UNSUPPORTED_IN_EXISTS_SUBQUERY', ...
            1 AssertionError: 'a NULL, b BOOLEAN, c BINARY' != 'a VOID,b BOOLEAN,c BINARY'
            1 AssertionError: 'bytearray' != 'bytes'
            1 AssertionError: 0 not greater than 0
            1 AssertionError: 0.6363787615254752 != 0.9531453492357947 : Column<'rand(1)'>
            1 AssertionError: 2 != 6
            1 AssertionError: 6 != 0 : []
            1 AssertionError: ArrayIndexOutOfBoundsException not raised
            1 AssertionError: Exception not raised by <lambda>
            1 AssertionError: Lists differ: [(1, 2), (3, 4), (None, 5), (0, 0)] != [(1, 2), (3, 4), (None, 5), (None, None)]
            1 AssertionError: Lists differ: [Row([14 chars] _c1=25, _c2='I am Hyukjin\n\nI love Spark!'),[86 chars]om')] != [Row([14 chars] _c1='25', _c2='I am Hyukjin\n\nI love Spark!'[92 chars]om')]
            1 AssertionError: Lists differ: [Row([259 chars]681098, ln(id)=1.0986122886681098, struct(id, [975 chars]0'))] != [Row([259 chars]681096, ln(id)=1.0986122886681096, struct(id, [975 chars]0'))]
            1 AssertionError: Lists differ: [Row(id=90, name='90'), Row(id=91, name='91'), Ro[176 chars]99')] != [Row(id=15, name='15'), Row(id=16, name='16'), Ro[176 chars]24')]
            1 AssertionError: Lists differ: [Row(key='0'), Row(key='1'), Row(key='10'), Row(ke[1435 chars]99')] != [Row(key=0), Row(key=1), Row(key=10), Row(key=11),[1235 chars]=99)]
            1 AssertionError: Lists differ: [Row(name='Andy', age=30), Row(name='Justin', [34 chars]one)] != [Row(_corrupt_record=' "age":19}\n', name=None[104 chars]el')]
            1 AssertionError: Row(point='[1.0, 2.0]', pypoint='[3.0, 4.0]') != Row(point='(1.0, 2.0)', pypoint='[3.0, 4.0]')
            1 AssertionError: SparkConnectGrpcException not raised
            1 AssertionError: StorageLevel(False, True, True, False, 1) != StorageLevel(False, False, False, False, 1)
            1 AssertionError: Struc[30 chars]estampType(), True), StructField('val', IntegerType(), True)]) != Struc[30 chars]estampType(), True), StructField('val', IntegerType(), False)])
            1 AssertionError: Struc[32 chars]e(), False), StructField('b', DoubleType(), Fa[158 chars]ue)]) != Struc[32 chars]e(), True), StructField('b', DoubleType(), Tru[154 chars]ue)])
            1 AssertionError: Struc[40 chars]ue), StructField('val', ArrayType(DoubleType(), False), True)]) != Struc[40 chars]ue), StructField('val', PythonOnlyUDT(), True)])
            1 AssertionError: True is not false : Default URL is not secure
            1 AssertionError: YearMonthIntervalType(0, 1) != YearMonthIntervalType(0, 0)
            1 AssertionError: [1.0, 2.0] != ExamplePoint(1.0,2.0)
            1 AssertionError: datetime.datetime(2022, 12, 25, 18, 30) != datetime.datetime(2022, 12, 25, 10, 30)
            1 AttributeError: 'NoneType' object has no attribute 'extract_graph'
            1 AttributeError: 'NoneType' object has no attribute 'toText'
            1 FileNotFoundError: [Errno 2] No such file or directory: '/home/runner/work/sail/sail/.venvs/test-spark.spark-4.1.1/lib/python3.11/site-packages/pyspark/data/artifact-tests/junitLargeJar.jar'
(+1)        1 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpbigwo78g'
            1 IllegalArgumentException: invalid argument: empty data type
            1 IllegalArgumentException: invalid argument: expecting column to drop
            1 IllegalArgumentException: invalid argument: field not found in input schema: col1
            1 IllegalArgumentException: invalid argument: found ( at 114:115 expected ':', data type, ',', or ')'
            1 IllegalArgumentException: invalid argument: found abc at 0:3 expected something else, ';', statement, or end of input
            1 IllegalArgumentException: invalid argument: found collate at 13:20 expected string, '.', '[', '::', 'ESCAPE', 'IS', 'NOT', 'IN', '*', '/', '%', 'DIV', '+', '-', '||', '>>>', '>>', '<<', '&', '^', '|',...
            1 IllegalArgumentException: invalid argument: grouping sets with grouping expressions
            1 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 212
            1 IllegalArgumentException: invalid argument: invalid PySpark UDF type: 214
            1 IllegalArgumentException: invalid argument: invalid user-defined window function type
            1 IllegalArgumentException: invalid argument: table does not exist: ObjectName([Identifier("test_table")])
(+1)        1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (46879e63-b577-42d2-b079-ebda4e265071 != fb74bd72-a5bb-4080-ba28-3...
(+1)        1 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (589da4bd-141c-4ad2-a596-8b7c412160b6 != be14488d-9078-4b69-b1aa-d...
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] rdd is not implemented.
            1 PySparkNotImplementedError: [NOT_IMPLEMENTED] toJSON() is not implemented.
            1 PySparkTypeError: [UNSUPPORTED_DATA_TYPE_FOR_ARROW_CONVERSION] binary_view is not supported in conversion to Arrow.
            1 PySparkTypeError: [UNSUPPORTED_DATA_TYPE_FOR_ARROW_CONVERSION] time32[s] is not supported in conversion to Arrow.
            1 PythonException: 
            1 PythonException:  AttributeError: 'NoneType' object has no attribute 'partitionId'
            1 PythonException:  AttributeError: 'list' object has no attribute 'x'
            1 PythonException:  AttributeError: 'list' object has no attribute 'y'
            1 PythonException:  KeyError: 'a'
            1 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (int64) with name 'None' to Arrow Array (decimal128(10, 2)).
            1 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (int8) with name 'int8' to Arrow Array (decimal128(10, 0)).
            1 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (int8) with name 'value' to Arrow Array (decimal128(10, 0)).
            1 PythonException:  PySparkValueError: Exception thrown when converting pandas.Series (object) with name 'value' to Arrow Array (int32).
            1 PythonException:  TypeError: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for pyspark.sql.types._create_row).
            1 SparkRuntimeException: Assertion failed: !args.is_empty(): args should not be empty
            1 SparkRuntimeException: Assertion failed: compatible: Failed due to a difference in schemas: original schema: DFSchema { inner: Schema { fields: [Field { name: "#0", data_type: Int64, nullable: true },...
            1 SparkRuntimeException: Assertion failed: result_data_type == *expected_type: Function 'array_sort' returned value of type 'List(Field { data_type: Struct([Field { name: "key", data_type: Int32 }, Fiel...
            1 SparkRuntimeException: Compute error: Cannot perform a binary operation on arrays of different length
            1 SparkRuntimeException: Error during planning: expr type Struct("col1": Struct("a": Int64, "b": Float64)) can't cast to Struct("a": Int64, "b": Float64) in InSubquery
            1 SparkRuntimeException: Exception: path is not specified
            1 SparkRuntimeException: Execution error: Schema field count mismatch: expected 1 fields, got 2
            1 SparkRuntimeException: Internal error: Cannot run range queries on datatype: Time64(µs).
            1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected Int64 but found List(Int64) at column index 1
            1 SparkRuntimeException: Invalid argument error: column types must match schema types, expected LargeUtf8 but found Utf8 at column index 0
            1 SparkRuntimeException: Invalid argument error: must either specify a row count or at least one column
            1 SparkRuntimeException: Invalid argument error: number of columns(3) must match number of fields(2) in schema
            1 SparkRuntimeException: Json error: Not valid JSON: EOF while parsing a list at line 1 column 1
            1 SparkRuntimeException: Json error: Not valid JSON: expected value at line 1 column 2
            1 SparkRuntimeException: Parser error: Error while parsing value '0
            1 SparkRuntimeException: Python error: [TestArrowStreamWriter::writer] PySparkNotImplementedError: [NOT_IMPLEMENTED] writer is not implemented.
            1 SparkRuntimeException: Python error: [TestDataSource::writer] PySparkNotImplementedError: [NOT_IMPLEMENTED] writer is not implemented.
            1 SparkRuntimeException: Python error: [my-json::writer] AttributeError: 'pyarrow.lib.Schema' object has no attribute 'fieldNames'
            1 SparkRuntimeException: Python error: [test::partitions] AssertionError: assert False
            1 SparkRuntimeException: Python error: [testdatasourcepyarrow::partitions] PySparkNotImplementedError: [NOT_IMPLEMENTED] reader is not implemented.
            1 SparkRuntimeException: Schema error: Failed to parse DDL schema 'a INT, b INT, c VARIANT, d STRUCT<v VARIANT>, e ARRAY<VARIANT>,f MAP<STRING, VARIANT>': error in SQL parser: found VARIANT at 23:30 exp...
            1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: List { data_type: Int32, nullable: true }. Use PyArrow Schema for complex types.
            1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: Struct { fields: Fields([Field { name: "a", data_type: Int32, nullable: true, metadata: [] }, Field { name: "b", data_type: Int32, ...
            1 SparkRuntimeException: Schema error: Unsupported type in DDL schema: Struct { fields: Fields([Field { name: "y", data_type: Int32, nullable: true, metadata: [] }]) }. Use PyArrow Schema for complex ty...
            1 SparkRuntimeException: This feature is not implemented: Data type Decimal128(38, 18) not supported in row-based write path. Use DataSourceArrowWriter for full type support.
            1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(#9) PARTITION BY [#8] ORDER BY [#9 ASC NULLS...
            1 UnsupportedOperationException: Aggregate can not be used as a sliding accumulator because `retract_batch` is not implemented: avg@8jjv77o85l4r1u661eucb9ylm(plus_one@8f9fwaevnfdj031wzemmraerh(#9)) PART...
            1 UnsupportedOperationException: PlanNode::ClearCache
            1 UnsupportedOperationException: PlanNode::IsCached
            1 UnsupportedOperationException: PlanNode::RecoverPartitions
            1 UnsupportedOperationException: SHOW FUNCTIONS
            1 UnsupportedOperationException: Support for 'approx_distinct' for data type Float64 is not implemented
            1 UnsupportedOperationException: Support for 'approx_distinct' for data type Struct("name": Utf8, "value": Int64) is not implemented
            1 UnsupportedOperationException: as of join
            1 UnsupportedOperationException: bucketing for writing listing table format
            1 UnsupportedOperationException: deduplicate within watermark
            1 UnsupportedOperationException: function: collate
            1 UnsupportedOperationException: function: collation
            1 UnsupportedOperationException: function: java_method
            1 UnsupportedOperationException: function: json_tuple
            1 UnsupportedOperationException: function: reflect
            1 UnsupportedOperationException: function: regexp_extract_all
            1 UnsupportedOperationException: function: schema_of_csv
            1 UnsupportedOperationException: function: schema_of_json
            1 UnsupportedOperationException: function: schema_of_xml
            1 UnsupportedOperationException: function: sentences
            1 UnsupportedOperationException: function: session_window
            1 UnsupportedOperationException: function: to_char
            1 UnsupportedOperationException: function: to_csv
            1 UnsupportedOperationException: function: to_varchar
            1 UnsupportedOperationException: function: to_xml
            1 UnsupportedOperationException: function: try_reflect
            1 UnsupportedOperationException: function: xpath
            1 UnsupportedOperationException: function: xpath_boolean
            1 UnsupportedOperationException: function: xpath_double
            1 UnsupportedOperationException: function: xpath_float
            1 UnsupportedOperationException: function: xpath_int
            1 UnsupportedOperationException: function: xpath_long
            1 UnsupportedOperationException: function: xpath_number
            1 UnsupportedOperationException: function: xpath_short
            1 UnsupportedOperationException: function: xpath_string
            1 UnsupportedOperationException: handle analyze semantic hash
            1 UnsupportedOperationException: named window function arguments
            1 UnsupportedOperationException: unknown aggregate function: bitmap_construct_agg
            1 UnsupportedOperationException: unknown aggregate function: bitmap_or_agg
            1 UnsupportedOperationException: unknown aggregate function: count_min_sketch
            1 UnsupportedOperationException: unknown aggregate function: grouping_id
            1 UnsupportedOperationException: unknown function: ST_GeomFromWKB
            1 UnsupportedOperationException: unknown function: bitmap_and_agg
            1 UnsupportedOperationException: unknown function: product
            1 UnsupportedOperationException: unknown function: quote
            1 UnsupportedOperationException: unknown function: time_diff
            1 UnsupportedOperationException: unknown function: time_trunc
            1 UnsupportedOperationException: unknown function: timestampadd
            1 UnsupportedOperationException: unknown function: timestampdiff
            1 UnsupportedOperationException: unknown function: unwrap_udt
            1 UnsupportedOperationException: unknown window function: pd_win_max
            1 ValueError: Code in Status proto (StatusCode.INTERNAL) doesn't match status code (StatusCode.RESOURCE_EXHAUSTED)
            1 ValueError: The column label 'id' is not unique.
            1 ValueError: The column label 'struct' is not unique.
            1 failed to decode Protobuf message: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.input: Relation.rel_type: WithColumns.i...
            1 handle add artifacts
            1 received metadata size exceeds hard limit (value length 25049 vs. 16384)
(-1)        0 AnalysisException: Invalid partition id 3 in write result (expected < 3)
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/test_multi_paths1k7mju69g/text-0.text, file:///tmp/test_multi_paths1k7mju69g/text-1.text, file:///tmp/test_multi_paths1k7mju69g/te...
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/tmp4yo6dkve/
(-1)        0 AnalysisException: No files found in the specified paths: file:///tmp/tmpt7oyhrcs/
(-1)        0 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(2353396029958249200), None)]
(-1)        0 AnalysisException: one value expected: [Column(Column { relation: None, name: "#1" }), Literal(Int64(828378826545855002), None)]
(-1)        0 AssertionError: "'path' is not specified." does not match "Generic LocalFileSystem error: Unable to open file /6bgQTA6peyKj9oN2_0.zst.parquet#1: Permission denied (os error 13)"
(-1)        0 AssertionError: "Database 'memory:83eb40b8-8c09-4d7f-87c7-066df4d9f96d' dropped." does not match "No table format found for: jdbc"
(-1)        0 AssertionError: "Database 'memory:abf19e84-5151-431e-9318-4dded0331278' dropped." does not match "No table format found for: jdbc"
(-1)        0 AssertionError: 'INVALID_CLONE_SESSION_REQUEST.TARGET_SESSION_ID_FORMAT' not found in '<_InactiveRpcError of RPC that terminated with:\n\tstatus = StatusCode.UNIMPLEMENTED\n\tdetails = "clone session"...
(-1)        0 FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp123on312'
(-1)        0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (04cf1e6d-9772-4b78-aaf7-9b79f10989e5 != 040073e7-8978-494f-af80-5...
(-1)        0 PySparkAssertionError: Received incorrect server side session identifier for request. Please create a new Spark Session to reconnect. (bb4fe3f8-d538-496a-bd20-3894785ff0b9 != 673a9e7f-0ac4-4bb9-8d0f-d...
Passed Tests Diff

(empty)

Failed Tests
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.cacheTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.clearCache
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.createTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.functionExists
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.getFunction
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.isCached
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.listCatalogs
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.listFunctions
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.recoverPartitions
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.refreshByPath
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.refreshTable
pyspark/sql/catalog.py::pyspark.sql.catalog.Catalog.uncacheTable
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame._joinAsOf
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.approxQuantile
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.asTable
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.cache
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.coalesce
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.colRegex
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.dropDuplicatesWithinWatermark
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.explain
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.freqItems
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.groupingSets
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.hint
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.inputFiles
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.isLocal
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.isStreaming
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.lateralJoin
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.localCheckpoint
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.mapInArrow
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.observe
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.pandas_api
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.persist
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.randomSplit
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.rdd
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.repartition
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.repartitionById
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.repartitionByRange
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.sameSemantics
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.sampleBy
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.storageLevel
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.toJSON
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.transpose
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrame.withWatermark
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrameStatFunctions.approxQuantile
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrameStatFunctions.freqItems
pyspark/sql/dataframe.py::pyspark.sql.dataframe.DataFrameStatFunctions.sampleBy
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.aggregate
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.approx_count_distinct
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.approx_percentile
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.array_sort
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.assert_true
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.bitmap_and_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.bitmap_construct_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.bitmap_or_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.collation
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.corr
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.cosh
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.count_if
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.count_min_sketch
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.degrees
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.exists
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.exp
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.filter
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.first
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.forall
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.from_csv
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.from_json
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.from_xml
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.get_json_object
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.grouping_id
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hash
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.histogram_numeric
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hll_sketch_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hll_sketch_estimate
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hll_union
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hll_union_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.hour
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.ilike
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.inline_outer
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.input_file_block_length
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.input_file_block_start
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.input_file_name
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.is_variant_null
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.java_method
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.json_tuple
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_agg_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_agg_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_agg_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_n_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_n_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_n_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_quantile_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_quantile_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_quantile_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_rank_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_rank_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_get_rank_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_merge_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_merge_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_merge_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_to_string_bigint
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_to_string_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kll_sketch_to_string_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.kurtosis
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.like
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.log2
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.make_time
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.make_timestamp
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.make_timestamp_ntz
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.map_entries
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.map_filter
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.map_zip_with
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.minute
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.mode
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.monotonically_increasing_id
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.parse_json
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.percentile
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.percentile_approx
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.product
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.quote
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.randn
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.randstr
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.reduce
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.reflect
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.regexp_extract
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.regexp_extract_all
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.rlike
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.schema_of_csv
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.schema_of_json
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.schema_of_variant
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.schema_of_variant_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.schema_of_xml
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.second
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.sentences
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.session_window
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.shuffle
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.sin
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.spark_partition_id
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.st_asbinary
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.st_geogfromwkb
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.st_geomfromwkb
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.st_setsrid
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.st_srid
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.tan
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_difference
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_intersection
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_intersection_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_sketch_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_sketch_estimate
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_union
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.theta_union_agg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.time_diff
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.time_trunc
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.timestamp_add
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.timestamp_diff
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_char
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_csv
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_json
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_time
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_varchar
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_variant_object
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.to_xml
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.transform
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.transform_keys
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.transform_values
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_add
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_avg
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_make_interval
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_make_timestamp
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_parse_json
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_reflect
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_subtract
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_to_date
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_to_time
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.try_variant_get
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.udf
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.udtf
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.uniform
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.unwrap_udt
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.uuid
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.variant_get
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.window
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.window_time
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_boolean
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_double
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_float
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_int
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_long
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_number
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_short
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xpath_string
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.xxhash64
pyspark/sql/functions/builtin.py::pyspark.sql.functions.builtin.zip_with
pyspark/sql/tests/connect/arrow/test_parity_arrow.py::ArrowParityTests::test_createDataFrame_pandas_duplicate_field_names
pyspark/sql/tests/connect/arrow/test_parity_arrow.py::ArrowParityTests::test_pandas_self_destruct
pyspark/sql/tests/connect/arrow/test_parity_arrow.py::ArrowParityTests::test_toPandas_duplicate_field_names
pyspark/sql/tests/connect/arrow/test_parity_arrow_cogrouped_map.py::CogroupedMapInArrowParityTests::test_cogroup_apply_in_arrow_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_batching
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_column_order
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_empty_groupby
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_iter_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_not_returning_arrow_table
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_partial_iteration
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_returning_empty_dataframe
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_returning_empty_dataframe_and_wrong_column_names
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_returning_wrong_column_names
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_returning_wrong_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_returning_wrong_types_positional_assignment
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_with_key
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_apply_in_arrow_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_arrow_batch_slicing
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_negative_and_zero_batch_size
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_positional_assignment_conf
pyspark/sql/tests/connect/arrow/test_parity_arrow_grouped_map.py::ApplyInArrowParityTests::test_self_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_map.py::ArrowMapParityTests::test_map_in_arrow_with_barrier_mode
pyspark/sql/tests/connect/arrow/test_parity_arrow_map.py::ArrowMapParityTests::test_map_in_arrow_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_map.py::ArrowMapParityTests::test_nested_extraneous_field
pyspark/sql/tests/connect/arrow/test_parity_arrow_map.py::ArrowMapParityTests::test_nullability_widen
pyspark/sql/tests/connect/arrow/test_parity_arrow_map.py::ArrowMapParityTests::test_top_level_wrong_order
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_arrow_udf_int_to_decimal_coercion
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_chained_udfs_with_variant
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_complex_input_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_complex_return_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_day_time_interval_in_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_day_time_interval_type_casting
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_decimal_round
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_file_dsv2_with_udf_filter
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_kwargs
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_multiple_udfs_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_named_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_named_arguments_and_defaults
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_named_arguments_negative
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nested_array
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nested_array_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nested_map
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nested_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nondeterministic_udf
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nondeterministic_udf2
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_nonparam_udf_with_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_num_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_raise_stop_iteration
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_register
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_type_coercion_string_to_numeric
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_and_common_filter_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_as_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_cache
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_daytime_interval
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_globals_not_overwritten
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_filter_on_top_of_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_filter_on_top_of_outer_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_generate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_left_outer_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_in_subquery
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_input_serialization_valuecompare_disabled
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_kill_on_timeout
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_timestamp_ntz
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_256_args
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_aggregate_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_char_varchar_return_type
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_collated_string_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_column_vector
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_complex_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_complex_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_decorator
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_filter_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_order_by_and_limit
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_pyspark_logger
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_rand
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_udt
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_udf_with_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityLegacyTests::test_use_arrow
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_arrow_udf_int_to_decimal_coercion
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_chained_udfs_with_variant
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_complex_input_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_complex_return_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_day_time_interval_in_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_day_time_interval_type_casting
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_decimal_round
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_file_dsv2_with_udf_filter
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_kwargs
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_multiple_udfs_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_named_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_named_arguments_and_defaults
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_named_arguments_negative
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nested_array
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nested_array_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nested_map
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nested_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nondeterministic_udf
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nondeterministic_udf2
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_nonparam_udf_with_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_num_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_raise_stop_iteration
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_register
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_type_coercion_string_to_numeric
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_and_common_filter_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_as_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_cache
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_daytime_interval
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_globals_not_overwritten
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_filter_on_top_of_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_filter_on_top_of_outer_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_generate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_left_outer_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_in_subquery
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_input_serialization_valuecompare_disabled
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_kill_on_timeout
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_timestamp_ntz
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_256_args
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_aggregate_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_char_varchar_return_type
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_collated_string_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_column_vector
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_complex_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_complex_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_decorator
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_filter_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_order_by_and_limit
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_pyspark_logger
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_rand
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_udt
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_udf_with_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityNonLegacyTests::test_use_arrow
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_arrow_udf_int_to_decimal_coercion
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_chained_udfs_with_variant
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_complex_input_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_complex_return_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_day_time_interval_in_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_day_time_interval_type_casting
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_decimal_round
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_file_dsv2_with_udf_filter
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_kwargs
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_multiple_udfs_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_named_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_named_arguments_and_defaults
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_named_arguments_negative
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nested_array
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nested_array_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nested_map
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nested_struct
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nondeterministic_udf
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nondeterministic_udf2
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nondeterministic_udf_in_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_nonparam_udf_with_aggregate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_num_arguments
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_raise_stop_iteration
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_register
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_type_coercion_string_to_numeric
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_and_common_filter_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_as_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_binary_type
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_binary_type_in_nested_structures
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_cache
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_daytime_interval
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_globals_not_overwritten
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_filter_on_top_of_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_filter_on_top_of_outer_join
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_generate
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_left_outer_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_in_subquery
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_input_serialization_valuecompare_disabled
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_kill_on_timeout
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_not_supported_in_join_condition
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_timestamp_ntz
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_256_args
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_aggregate_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_char_varchar_return_type
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_collated_string_types
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_column_vector
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_complex_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_complex_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_decorator
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_filter_function
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_input_file_name
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_logging
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_order_by_and_limit
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_pyspark_logger
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_rand
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_udt
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_variant_input
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPythonUDFParityTests::test_udf_with_variant_output
pyspark/sql/tests/connect/arrow/test_parity_arrow_python_udf.py::ArrowPy

(truncated)

@github-actions

Ibis Test Report

Commit Information

Commit Revision Branch
After f431d55 refs/pull/1447/merge
Before 399317b refs/heads/main

Test Summary

Suite Commit Failed Passed Skipped Warnings Time (s)
test-ibis After 95 1448 156 4247 235.74
Before 96 1448 156 4247 236.27

Test Details

Error Counts
(-1)      152 Total
(-1)       46 Total Unique
-------- ---- ----------------------------------------------------------------------------------------------------------
           57 IllegalArgumentException: invalid argument: found NOT at 19:22 expected '.', 'IF', 'COMMENT', 'LOCATION', 'WITH', ';', or end of input
           31 UnsupportedOperationException: lambda function
            8 SparkRuntimeException: MERGE planning expects a pre-expanded logical plan (MergeIntoWriteNode). Ensure expand_merge is enabled; MERGE is currently only supported for Delta tables.
            7 AssertionError: Series are different
            3 IllegalArgumentException: invalid argument: found TRUNCATE at 0:8 expected something else, ';', statement, or end of input
            3 ValueError: Code in Status proto (StatusCode.INTERNAL) doesn't match status code (StatusCode.RESOURCE_EXHAUSTED)
            2 AssertionError
            2 AssertionError: DataFrame.iloc[:, 0] (column name="result_col") are different
            2 assert ibis.Schema {... float64\n} == ibis.Schema {... float64\n} Full diff: ibis.Schema { carat float64 cut string color string clarity string depth float64 table float64 - price int32 ? ^^ + price i...
            1 AnalysisException: error in SQL parser: found : at 2:3 expected '0', '1', '2', '3', '4', '5', '6', '7', '8', or '9'
            1 AssertionError: DataFrame are different
            1 AssertionError: DataFrame.iloc[:, 0] (column name="playerID") are different
            1 AssertionError: DataFrame.iloc[:, 1] (column name="collect_udf") are different
            1 AssertionError: Series NA mask are different
(+1)        1 AssertionError: assert 'ibis_cached_tqroh4whofdg7lvtws26a4rayu' not in ['array_types', 'astronauts', 'awards_players', 'basic_table', 'batting', 'complicated', ...]
            1 IllegalArgumentException: invalid argument: input schema for INSERT has 1 fields, but table schema has 2 fields
            1 IllegalArgumentException: invalid argument: window boundary must be a literal
            1 SparkRuntimeException: Cast error: Casting from Date32 to Float64 not supported
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=Left, on=[(#15@0, #13@0)], projection=[#16@1, #13@2]", "  ProjectionExec: expr=[#13@0 as #15, avg(t2.#14...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftAnti, on=[(#14@0, #29@0), (#30@1, #31@1)], projection=[#14@0], NullsEqual: true", "  BoundedWindowAg...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftAnti, on=[(#14@0, #29@0)], NullsEqual: true", "  AggregateExec: mode=SinglePartitioned, gby=[#14@0 a...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftAnti, on=[(#26@0, #65@0), (#27@1, #66@1), (#28@2, #67@2), (#29@3, #68@3), (#30@4, #69@4), (#31@5, #7...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftAnti, on=[(#9@9, #26@0)]", "  RepartitionExec: partitioning=Hash([#0@0, #1@1, #2@2, #3@3, #4@4, #5@5...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftAnti, on=[(t0.#5 + Int64(1)@13, #26@0)], projection=[#0@0, #1@1, #2@2, #3@3, #4@4, #5@5, #6@6, #7@7,...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(#14@0, #29@0)], NullsEqual: true", "  AggregateExec: mode=SinglePartitioned, gby=[#14@0 a...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(#14@0, #29@0)], NullsEqual: true", "  ProjectionExec: expr=[#13@0 as #14]", "    Aggregat...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(#65@0, #104@0), (#66@1, #105@1), (#67@2, #106@2), (#68@3, #107@3), (#69@4, #108@4), (#70@...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(#65@0, #104@0), (#66@1, #105@1), (#67@2, #106@2), (#68@3, #107@3), (#69@4, #108@4), (#70@...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(#9@9, #26@0)]", "  RepartitionExec: partitioning=Hash([#0@0, #1@1, #2@2, #3@3, #4@4, #5@5...
            1 SparkRuntimeException: Error during planning: Plan: ["HashJoinExec: mode=CollectLeft, join_type=LeftSemi, on=[(t0.#5 + Int64(1)@13, #26@0)], projection=[#0@0, #1@1, #2@2, #3@3, #4@4, #5@5, #6@6, #7@7,...
            1 SparkRuntimeException: Error during planning: expr type Struct("StructColumn({'x': xs, 'y': ys})": Struct("x": Int32, "y": Int32)) can't cast to Struct("x": Int64, "y": Int64) in InSubquery
            1 TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
            1 UnsupportedOperationException: CommandNode::AlterTable
            1 UnsupportedOperationException: Physical plan does not support logical expression AggregateFunction(AggregateFunction { func: AggregateUDF { inner: ArrayAgg { signature: Signature { type_signature: Any...
            1 UnsupportedOperationException: Physical plan does not support logical expression InSubquery(InSubquery { expr: Column(Column { relation: Some(Bare { table: "t0" }), name: "#0" }), subquery: <subquery>...
            1 UnsupportedOperationException: Physical plan does not support logical expression InSubquery(InSubquery { expr: Column(Column { relation: Some(Bare { table: "t0" }), name: "#1" }), subquery: <subquery>...
            1 UnsupportedOperationException: cast Duration(Second) to Spark data type
(+1)        1 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7feb6c97b74
(+1)        1 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7feb6cd32de
(+1)        1 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7feb6cde00e
(+1)        1 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7feb6cf477e
(+1)        1 UnsupportedOperationException: unknown window function: ibis_udf_quantiles_7feb8101890
            1 assert 1 == 10 + where 1 = int(1) + and 10 = int(10)
            1 assert 3 == 2 + where 3 = int(3) + and 2 = int(2)
            1 assert frozenset({None}) == frozenset({None, 47}) Extra items in the right set: 47 Full diff: frozenset({ None, - 47, })
            1 assert {0.0, 1.0, 2.0, 3.0} == {1, 2, 3} Extra items in the left set: 0.0 Full diff: { + 0.0, - 1, + 1.0, ? ++ - 2, + 2.0, ? ++ - 3, + 3.0, ? ++ }
(-1)        0 AssertionError: assert 'ibis_cached_ynep6cf7yrc5zk5jfqeegt4y5e' not in ['array_types', 'astronauts', 'awards_players', 'basic_table', 'batting', 'complicated', ...]
(-1)        0 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7f36680e468
(-1)        0 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7f36684c818
(-1)        0 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7f3669bb472
(-1)        0 UnsupportedOperationException: unknown window function: ibis_udf_mean_udf_7f3669d5422
(-1)        0 UnsupportedOperationException: unknown window function: ibis_udf_quantiles_7f3668107ce
(-1)        0 assert 45.45000000000001 == 45.45
Passed Tests Diff
--- before.txt	2026-02-28 15:13:16.568783758 +0000
+++ after.txt	2026-02-28 15:13:16.797782497 +0000
@@ -1450,0 +1451 @@
+ibis/backends/tests/test_vectorized_udf.py::test_reduction_udf[pyspark]
Failed Tests
ibis/backends/tests/test_aggregation.py::test_aggregate_list_like[pyspark-list]
ibis/backends/tests/test_aggregation.py::test_aggregate_list_like[pyspark-ndarray]
ibis/backends/tests/test_aggregation.py::test_aggregate_mixed_udf[pyspark]
ibis/backends/tests/test_aggregation.py::test_approx_quantile[pyspark-True-False]
ibis/backends/tests/test_aggregation.py::test_approx_quantile[pyspark-True-True]
ibis/backends/tests/test_aggregation.py::test_count_distinct_star[pyspark-cond]
ibis/backends/tests/test_aggregation.py::test_count_distinct_star[pyspark-no_cond]
ibis/backends/tests/test_aggregation.py::test_date_quantile[pyspark]
ibis/backends/tests/test_aggregation.py::test_group_concat_over_window[pyspark]
ibis/backends/tests/test_array.py::test_array_agg_numeric[pyspark-no-nulls-means]
ibis/backends/tests/test_array.py::test_array_agg_numeric[pyspark-no-nulls-sums]
ibis/backends/tests/test_array.py::test_array_agg_numeric[pyspark-nulls-means]
ibis/backends/tests/test_array.py::test_array_agg_numeric[pyspark-nulls-sums]
ibis/backends/tests/test_array.py::test_array_column[pyspark]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-deferred-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-deferred-nulls]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-lambda-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-lambda-nulls]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-partial-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter[pyspark-partial-nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index[pyspark-lambda-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index[pyspark-lambda-nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index[pyspark-partial-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index[pyspark-partial-nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index_lambda[pyspark-lambda-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index_lambda[pyspark-lambda-nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index_lambda[pyspark-partial-no_nulls]
ibis/backends/tests/test_array.py::test_array_filter_with_index_lambda[pyspark-partial-nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-deferred-no_nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-deferred-nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-lambda-no_nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-lambda-nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-partial-no_nulls]
ibis/backends/tests/test_array.py::test_array_map[pyspark-partial-nulls]
ibis/backends/tests/test_array.py::test_array_map_with_conflicting_names[pyspark]
ibis/backends/tests/test_array.py::test_array_map_with_index[pyspark-lambda-no_nulls]
ibis/backends/tests/test_array.py::test_array_map_with_index[pyspark-lambda-nulls]
ibis/backends/tests/test_array.py::test_array_map_with_index[pyspark-partial-no_nulls]
ibis/backends/tests/test_array.py::test_array_map_with_index[pyspark-partial-nulls]
ibis/backends/tests/test_array.py::test_complex_array_map[pyspark]
ibis/backends/tests/test_array.py::test_table_unnest_column_expr[pyspark]
ibis/backends/tests/test_client.py::test_create_table_overwrite_temp[pyspark-no temp, overwrite]
ibis/backends/tests/test_client.py::test_insert_into_table_missing_columns[pyspark]
ibis/backends/tests/test_client.py::test_insert_overwrite_from_dataframe[pyspark]
ibis/backends/tests/test_client.py::test_insert_overwrite_from_expr[pyspark]
ibis/backends/tests/test_client.py::test_insert_overwrite_from_list[pyspark]
ibis/backends/tests/test_client.py::test_rename_table[pyspark]
ibis/backends/tests/test_client.py::test_upsert_from_dataframe[pyspark]
ibis/backends/tests/test_client.py::test_upsert_from_expr[pyspark-False]
ibis/backends/tests/test_client.py::test_upsert_from_expr[pyspark-True]
ibis/backends/tests/test_client.py::test_upsert_from_memtable[pyspark-sch0-expectation0]
ibis/backends/tests/test_client.py::test_upsert_from_memtable[pyspark-sch1-expectation1]
ibis/backends/tests/test_client.py::test_upsert_from_memtable[pyspark-sch2-expectation2]
ibis/backends/tests/test_client.py::test_upsert_from_memtable[pyspark-sch3-expectation3]
ibis/backends/tests/test_client.py::test_upsert_from_memtable[pyspark-sch4-expectation4]
ibis/backends/tests/test_dot_sql.py::test_table_dot_sql_with_join[pyspark]
ibis/backends/tests/test_export.py::test_table_to_csv[pyspark]
ibis/backends/tests/test_expr_caching.py::test_persist_expression_contextmanager[pyspark]
ibis/backends/tests/test_expr_caching.py::test_persist_expression_release[pyspark]
ibis/backends/tests/test_expr_caching.py::test_persist_expression_repeated_cache[pyspark]
ibis/backends/tests/test_generic.py::test_isin_notin_column_expr[pyspark-isin_col]
ibis/backends/tests/test_generic.py::test_isin_notin_column_expr[pyspark-isin_expr]
ibis/backends/tests/test_generic.py::test_isin_notin_column_expr[pyspark-notin_col]
ibis/backends/tests/test_generic.py::test_isin_notin_column_expr[pyspark-notin_expr]
ibis/backends/tests/test_generic.py::test_isin_uncorrelated[pyspark]
ibis/backends/tests/test_generic.py::test_isin_uncorrelated_simple[pyspark]
ibis/backends/tests/test_io.py::test_read_csv[pyspark-default]
ibis/backends/tests/test_io.py::test_read_csv[pyspark-file_name]
ibis/backends/tests/test_join.py::test_join_with_pandas[pyspark]
ibis/backends/tests/test_json.py::test_json_getitem_array[pyspark]
ibis/backends/tests/test_set_ops.py::test_difference[pyspark-all]
ibis/backends/tests/test_set_ops.py::test_difference[pyspark-distinct]
ibis/backends/tests/test_set_ops.py::test_intersect[pyspark-all]
ibis/backends/tests/test_set_ops.py::test_intersect[pyspark-distinct]
ibis/backends/tests/test_set_ops.py::test_top_level_intersect_difference[pyspark-False-difference-False]
ibis/backends/tests/test_set_ops.py::test_top_level_intersect_difference[pyspark-False-difference-True]
ibis/backends/tests/test_set_ops.py::test_top_level_intersect_difference[pyspark-False-intersect-False]
ibis/backends/tests/test_set_ops.py::test_top_level_intersect_difference[pyspark-False-intersect-True]
ibis/backends/tests/test_struct.py::test_field_overwrite_always_prefers_unpacked[pyspark]
ibis/backends/tests/test_struct.py::test_isin_struct[pyspark]
ibis/backends/tests/test_struct.py::test_single_field[pyspark-a]
ibis/backends/tests/test_struct.py::test_single_field[pyspark-b]
ibis/backends/tests/test_struct.py::test_single_field[pyspark-c]
ibis/backends/tests/test_temporal.py::test_delta[pyspark-time]
ibis/backends/tests/test_temporal.py::test_interval_add_cast_column[pyspark]
ibis/backends/tests/test_temporal.py::test_interval_add_cast_scalar[pyspark]
ibis/backends/tests/test_temporal.py::test_temporal_binop[pyspark-date-subtract-date]
ibis/backends/tests/test_vectorized_udf.py::test_reduction_udf_array_return_type[pyspark]
ibis/backends/tests/test_window.py::test_grouped_unbounded_window[pyspark-mean_udf]
ibis/backends/tests/test_window.py::test_range_expression_bounds[pyspark]
ibis/backends/tests/test_window.py::test_ungrouped_bounded_expanding_window[pyspark-mean_udf]
ibis/backends/tests/test_window.py::test_ungrouped_unbounded_window[pyspark-ordered-mean_udf]
ibis/backends/tests/test_window.py::test_ungrouped_unbounded_window[pyspark-unordered-lag]
ibis/backends/tests/test_window.py::test_ungrouped_unbounded_window[pyspark-unordered-lead]
ibis/backends/tests/test_window.py::test_ungrouped_unbounded_window[pyspark-unordered-mean_udf]

@codecov

codecov bot commented Feb 28, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main    #1447      +/-   ##
==========================================
- Coverage   73.82%   72.65%   -1.18%     
==========================================
  Files         844      844              
  Lines      106846   108513    +1667     
==========================================
- Hits        78882    78841      -41     
- Misses      27964    29672    +1708     
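As a sanity check on the table above, the percentages follow from the hit/line counts if Codecov floors to two decimal places (an assumption inferred from these numbers, not documented behavior):

```python
import math

def cov_pct(hits, lines):
    # Assumed Codecov convention: floor to two decimals rather than round.
    return math.floor(100 * hits / lines * 100) / 100

before = cov_pct(78882, 106846)  # 73.82, matching the "main" column
after = cov_pct(78841, 108513)   # 72.65, matching the "#1447" column

# The delta appears to be floored from the untruncated difference.
delta = math.floor((100 * 78841 / 108513 - 100 * 78882 / 106846) * 100) / 100  # -1.18
print(before, after, delta)
```

Flooring (rather than rounding) is the only convention consistent with all three reported figures, since the raw difference is about -1.172.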
Flag               Coverage  Δ                *Carryforward flag
ibis-tests         19.71%    <ø> (+0.30%) ⬆️  Carriedforward from f62e48a
python-unit-tests  54.27%    <ø> (ø)
rust-slow-tests    39.28%    <ø> (-1.37%) ⬇️  Carriedforward from f62e48a
rust-unit-tests    35.52%    <ø> (-0.56%) ⬇️
spark-tests        34.37%    <ø> (+0.13%) ⬆️  Carriedforward from f62e48a

*This pull request uses carry forward flags.
See 52 files with indirect coverage changes.


@linhr linhr marked this pull request as ready for review March 2, 2026 11:52
@linhr linhr merged commit 0003a53 into main Mar 2, 2026
13 checks passed
@linhr linhr deleted the release-0-5-2 branch March 2, 2026 11:58