Add a functionality in apply_in_pandas to support spark api #3162
Merged
sfc-gh-aalam merged 12 commits into main on Mar 20, 2025
Conversation
🎉 Snyk checks have passed. No issues have been found so far.
✅ security/snyk check is complete. No issues have been found.
✅ license/snyk check is complete. No issues have been found.
sfc-gh-aalam
approved these changes
Mar 17, 2025
sfc-gh-skumbham
approved these changes
Mar 19, 2025
Which Jira issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
Fixes SNOW-1800723
If the dataframe comes from Spark, its column names are not the same as the original Spark column names, but the user's function assumes the Spark names and operates on them; this change resolves that issue.
This also adds support for functions of the form (key, dataframe), matching the function signature that Spark supports.
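To illustrate the (key, dataframe) signature being added, here is a minimal sketch that emulates Spark's two-argument `applyInPandas` semantics using plain pandas; the function and column names are illustrative, not the actual Snowpark API:

```python
import pandas as pd

# The user's function receives the grouping key as a tuple and the
# group's rows as a pandas DataFrame, mirroring Spark's (key, pdf) form.
def normalize(key, pdf):
    v = pdf["v"]
    # Standardize values within the group; key[0] carries the group id.
    return pd.DataFrame({"id": key[0], "v": (v - v.mean()) / v.std()})

df = pd.DataFrame({"id": [1, 1, 2, 2], "v": [1.0, 3.0, 5.0, 7.0]})

# pandas groupby yields a scalar key for a single grouping column,
# so we wrap it in a tuple to match the Spark-style signature.
parts = [normalize(key if isinstance(key, tuple) else (key,), group)
         for key, group in df.groupby("id")]
result = pd.concat(parts, ignore_index=True)
```

The one-argument form receives only the sub-DataFrame, so any per-group identifier must be re-derived from the data; the two-argument form hands the grouping values to the function directly.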
Fill out the following pre-review checklist:
The test for this will be added in https://github.com/snowflakedb/sas/pull/725/files, a fork introduced for the non-public use case of the Snowpark library.
Please describe how your code solves the related issue.
I extract the Spark names from the column_map, which is only present when the dataframe comes from the accelerated Spark layer.
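A rough sketch of what that remapping could look like; `column_map` and the helper name are assumptions based on this description, not the real Snowpark internals:

```python
import pandas as pd

# Hypothetical helper: column_map maps original Spark column names to
# the Snowflake-side names. A missing/empty map means the dataframe did
# not come from the accelerated Spark layer, so names are left alone.
def to_spark_names(pdf, column_map=None):
    if not column_map:
        return pdf
    # Invert the map so the pandas columns can be renamed back to the
    # Spark names the user's function expects to operate on.
    rename = {snowflake: spark for spark, snowflake in column_map.items()}
    return pdf.rename(columns=rename)

pdf = pd.DataFrame({"COL_A": [1, 2]})
mapped = to_spark_names(pdf, {"a": "COL_A"})
```

With this in place, the user's function sees the Spark column names it was written against, regardless of how the columns are named on the Snowflake side.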