Created with `brew bump-formula-pr`.

Release notes
When the AUTO_INCREMENT field is unset, the next AUTO_INCREMENT value defaults to 1. Thus, setting it to 0 or to 1 produces identical behavior.
By normalizing the value as it's written, we ensure that both operations produce the same table hash as if the value were unset.
The migration to subcontexts of the Queryist sql.Context meant that we lost the default setting of the query start time. Restore it in the sql command implementation itself.
This makes *NomsBlockStore check the incoming Context object to ensure that it has been involved in the appropriate GC lifecycle callbacks.
It fixes a problem with statspro.AnalyzeTable, where the GC lifecycle callbacks happened more than once for a single session.
It fixes some callsites to appropriately make the GC lifecycle callbacks, including LateBindingQueryist opening one session command for the whole lifetime of the returned sql.Context.
Our old heuristic was to cut a file and start uploading it when it reached a fixed number of chunks. We chose that number of chunks as just (1<<30 / 4096), where 4096 is the average chunk size the chunker targets. But the chunker targets pre-compressed sizes, so file uploads which were hitting the target could vary a lot in size. It's better to just track how many bytes we've written so far and cut the file when (approximately) appropriate. This is still best effort and only approximate.

Also improves error handling and structured concurrency a bit. Moves a number of goroutines up to the top-level errgroup in `Pull()`. Avoids creating, and thus potentially leaking, goroutines in `NewPuller()`, deferring them until `Pull` actually gets called. Gets rid of an unnecessary request-response thread structure in the implementation of `PullChunkFetcher`.

Pass the SqlEngine along and reuse it in the operations where we dump table contents, parse import schemas, etc.
Counterpart to Doltgres PR:
go-mysql-server
fixes: Incorrect optimization of OR operation in expression in WHERE clause dolthub/dolt#9052
In Go, negating the minimum int8, int16, int32, or int64 value overflows and wraps back to the same minimum value.
fix: Incorrect negation of minimum signed integer dolthub/dolt#9053
We can't apply the `NOT(NOT(expr)) -> expr` optimization if there will be a type conversion involved. This PR also fixes a bad test involving current timestamps.
fixes: Double negation is treated as original value in WHERE clause dolthub/dolt#9054
`unix_timestamp` changes:
fixes: unix_timestamp's precision should keep with parameter dolthub/dolt#9025

`replacePkSort`: Not sure how to reproduce this, but this should make the code safer.
fixes: Panic when calling min/max (without a Primary Key?) dolthub/go-mysql-server#2915
Fixes Unexpected invalid type error on boolean dolthub/dolt#9036
This PR adds an error check when attempting to group by a column index that is out of range.
fixes: Unexpected crash when using GROUP BY with non-column position dolthub/dolt#9037
Closed Issues
unix_timestamp's precision should keep with parameter