[SNOW-1705797]: Use cached metadata to make repr faster on simple DataFrames #2760
sfc-gh-rdurrani wants to merge 10 commits into main
Conversation
With this PR and the following benchmarking code, on a 73,137,675 rows × 8 columns DataFrame (timed using VS Code execution of Jupyter notebooks), we see a median 6x improvement. New codepath times: [1.3s, 1.1s, 1.3s].

```python
df = pd.read_snowflake("FINANCIAL__ECONOMIC_ESSENTIALS.CYBERSYN.stock_price_timeseries")
df
```
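For anyone reproducing this, a minimal standalone timing harness along these lines should work (a sketch: the original timings came from VS Code's Jupyter cell execution, and this assumes an active Snowpark session is already configured):

```python
# Sketch of a standalone timing harness for the simple-repr benchmark above.
# Assumes a Snowpark session has already been created/configured.
from time import perf_counter

import modin.pandas as pd
import snowflake.snowpark.modin.plugin  # noqa: F401 -- registers the Snowflake backend

df = pd.read_snowflake("FINANCIAL__ECONOMIC_ESSENTIALS.CYBERSYN.stock_price_timeseries")
start = perf_counter()
repr(df)
print(f"repr took {perf_counter() - start:.2f}s")
```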
For more complex operations, we're a tiny bit slower than before. Benchmark code:

```python
from time import perf_counter

df = pd.read_snowflake("FINANCIAL__ECONOMIC_ESSENTIALS.CYBERSYN.stock_price_timeseries")
start = perf_counter()
df = df[(df['TICKER'] == 'GOOG') | (df['TICKER'] == 'MSFT') | (df['TICKER'] == 'SNOW')]
df = df.pivot_table(index=["TICKER", "DATE"], columns="VARIABLE_NAME", values="VALUE")
df = df["All-Day Low"]["GOOG"].resample("91D").min()
repr(df)
end = perf_counter()
print(end - start)
```

Old codepath times: [4.256317083010799, 4.085870499999146, 4.003666083997814]. This is a 0.857749436690795x slowdown.
What about nested projections? This seems to only handle a single level of projection, right? That should still be fine. We can address nested projections in a follow-up step.
Before putting in this PR, I tested it locally and found that this will handle nested projections - e.g. I tried the following:

```python
df = pd.read_snowflake(...)
df = df[df.columns[:5:-1]]
df = df.select_dtypes()
```

and

```python
df = pd.DataFrame(...)
df = df[df.columns[:5:-1]]
df = df.select_dtypes()
```

and after each of those lines of code + after the entire block of code, the format of the api_calls method remained the same - i.e. this check will work for nested projections, and the metadata caching of count is passed on for nested projections of that type.
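To make the shape of the check concrete for other reviewers, here is a rough sketch (the attribute path and call names are illustrative assumptions on my part, not necessarily what this PR uses):

```python
# Hypothetical sketch of the projection check; attribute/call names are assumptions.
SIMPLE_CALLS = {"Session.table", "DataFrame.select", "DataFrame.to_df"}

def is_simple_projection_of_table(snowpark_df) -> bool:
    # api_calls is the telemetry list recorded on the underlying Snowpark plan.
    # If every recorded call is a table read or a projection, the metadata
    # row count can safely be propagated -- including through nested projections,
    # since each projection just appends another entry of the same shape.
    api_calls = getattr(snowpark_df._plan, "api_calls", None) or []
    return all(call.get("name") in SIMPLE_CALLS for call in api_calls)
```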
This is interesting. It seems to be mostly the cost of performing the check - because otherwise the code used for complex dataframes is exactly the same as before. @sfc-gh-rdurrani can you try to measure the impact of data size on this regression, i.e., by using input data of different sizes - both smaller and larger? I imagine that as the data size gets bigger, the overhead of performing the check becomes less and less significant in the e2e time.
These numbers are not for the code in this PR. These numbers are for if I were to remove the if-else and just always use the new codepath, and were meant to justify having the if-else, rather than having all DataFrames go through the new codepath unconditionally. I haven't measured the latency of benchmarks like this through the new codepath (with the if-else), but in practice the latency should be about the same (i.e. there shouldn't be a measurable impact), since getting api_calls from the plan is inexpensive.
These gains almost exactly match those reported in our previous Slack thread here.
This is much better. Thanks for the clarification. In fact, it was hard to believe that the check would take this much time. |
I stand corrected - @sfc-gh-helmeleegy I tried this benchmark with the new code (including the if-else) and got these times: |
Posting a tl;dr, since I realize the multiple messages may be a little confusing to other reviewers. To clarify - there are 3 scenarios being tested for the complex benchmark:

- Scenario A: old codepath (before this PR).
- Scenario B: always use the new codepath (eagerly materialize the count, no if-else check).
- Scenario C: new codepath gated by the if-else check.

In Scenario A and Scenario C, this benchmark takes the old codepath, where we add a window count to the query as our cached row count column. The only difference between Scenario A and Scenario C is that Scenario C requires an additional if-else to check that the current dataframe is not a projection of a table. In Scenario B, we eagerly materialize the count.

Scenario A numbers: [4.256317083010799, 4.085870499999146, 4.003666083997814]

As expected, Scenario C numbers are lower than Scenario B numbers. What's a little surprising is that Scenario C numbers are higher than Scenario A numbers, since the if-else should be cheap (getting the snowpark dataframe plan shouldn't be expensive) - but this is a small number of runs, and the last number from Scenario C is in line with what you would expect from Scenario A, so this could just be random variation + bad runs.
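For concreteness, a schematic of the two row-counting strategies discussed above (a sketch, not the PR's actual code; `session` is an active Snowpark session and the column name is a placeholder):

```python
# Scenarios A/C ("window count"): the row count rides along with the data query
# as a window-function column, roughly:
#   SELECT *, COUNT(*) OVER () AS "__row_count__" FROM <source>
# Scenario B ("eager count"): run a separate COUNT(*) up front and cache it.
def eager_row_count(session, source: str) -> int:
    # For a plain table scan, Snowflake can typically answer this from
    # table metadata, which is why it is cheap for simple projections.
    return session.sql(f"SELECT COUNT(*) FROM {source}").collect()[0][0]
```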
@sfc-gh-joshi are you going to pick this up (maybe in the future)? |
@sfc-gh-jjiao Rehan wanted to wait until |
Force-pushed from 9c5a3a6 to 384c45f
Force-pushed from 0eb0693 to b37a7da
Based on the data from the new benchmarking framework (which calls repr on simple frames without any intermediate operations), it looks like this produces a very significant improvement for dataframes with many columns or many rows, but a slight regression for small/moderate numbers of columns. For example:

We may want to restrict this optimization based on the number of columns. I'm also wary of performance implications for the other APIs that now have increased query counts, so further investigation is needed before this is safe to merge.
Force-pushed from b37a7da to 3337f40
After discussion with @sfc-gh-helmeleegy, we've decided to reduce the scope of this PR a bit. The original changes modified …

As expected, DFs with a small number of columns see a slight performance regression, but the scale at which this occurs is rather negligible:

The performance gains for frames with many columns are very significant.

(I can provide more detailed benchmark information if requested.)
sfc-gh-nkumar left a comment
It seems to be a regression for frames with <2k columns and an improvement for frames with >2k columns. Are we confident this is the right trade-off to make?
Intuitively, I would expect frames with fewer columns to be the more common use case.
@sfc-gh-joshi I think we should indeed collect more data points. How about completing the different combinations of [10, 1K, 1M] rows x [10, 100, 2K] columns? Also, given that it's cheap to know the number of columns in the current dataframe, we can have a cutoff point below which we use one implementation and otherwise use the other. This way, we can also avoid regressions.
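Something along these lines, where the cutoff would be chosen from the benchmark grid (a sketch; `COLUMN_CUTOFF` and the strategy names are placeholders):

```python
# Hypothetical gating sketch; COLUMN_CUTOFF is a placeholder, not a tuned value.
COLUMN_CUTOFF = 2_000  # assumption: roughly where the regression/improvement knee sits

def pick_row_count_strategy(num_columns: int) -> str:
    # Column count is cheap to obtain, so gating on it avoids regressing the
    # common narrow-frame case while keeping the win for very wide frames.
    return "eager_metadata_count" if num_columns >= COLUMN_CUTOFF else "window_function_count"
```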
Taking the average of three trials for each dimension:
In the sheet, "df_repr" represents just doing read_snowflake + repr, while "df_repr_complex" adds 1 to each element in the frame before calling repr. The changes being negligible for "df_repr_complex" makes sense, because we wouldn't take the eager row count code path in those scenarios: the underlying query isn't just a simple projection. The performance knee might well occur at a smaller number of columns, and I can do some more testing to find the appropriate value.
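Roughly, the two benchmark bodies look like this (reconstructed from the description above; the table name is a placeholder):

```python
# Reconstructed shapes of the two benchmarks; "TEST_TABLE" is a placeholder.
import modin.pandas as pd
import snowflake.snowpark.modin.plugin  # noqa: F401 -- registers the Snowflake backend

def df_repr() -> None:
    df = pd.read_snowflake("TEST_TABLE")
    repr(df)  # simple projection: eligible for the eager metadata row count

def df_repr_complex() -> None:
    df = pd.read_snowflake("TEST_TABLE") + 1  # add 1 to each element
    repr(df)  # no longer a simple projection, so the old codepath is taken
```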
@sfc-gh-nkumar Even though the percentage time increase looks high for small frames, the actual magnitude of the change is <0.5s, which I think should be acceptable. We could gate this change by column count to be safe, but there may also be meaningful improvements to be had at high row counts (@sfc-gh-rdurrani's initial benchmark indicated a 7M row x 8 col dataset improved from about 7s -> 1s, but I haven't verified whether this is still the case on the current version of the repository).
I'm closing this in favor of SNOW-2195991 (#3358), which has substantial performance improvements while avoiding the need for an extra query altogether.
Which Jira issue is this PR addressing? Make sure that there is an accompanying issue to your PR.
Fixes SNOW-1705797
Fill out the following pre-review checklist:
Please describe how your code solves the related issue.
This PR is to help alleviate the long repr time on DataFrames. For "simple" DataFrames, i.e. DataFrames that are simple selects/projects off of a table, it's faster to rely on the metadata count to give us the row count, rather than using a window function. This PR adds support for doing so on those sorts of DataFrames.
(copied from comments below, by @sfc-gh-joshi)