---
title: TiDB 2.1 GA Release Notes
summary: TiDB 2.1 GA was released on November 30, 2018, with significant improvements in stability, performance, compatibility, and usability. The release includes optimizations in the SQL optimizer, SQL executor, statistics, expressions, server, DDL, compatibility, Placement Driver (PD), TiKV, and tools. It also introduces TiDB Lightning for fast full data import. However, TiDB 2.1 does not support downgrading to v2.0.x or earlier due to the adoption of the new storage engine. Additionally, parallel DDL is enabled in TiDB 2.1, so clusters with a TiDB version earlier than 2.0.1 cannot upgrade to 2.1 using rolling update. If upgrading from TiDB 2.0.6 or earlier to TiDB 2.1, ongoing DDL operations may slow down the upgrading process.
---
On November 30, 2018, TiDB 2.1 GA was released. Compared with TiDB 2.0, this release has great improvements in stability, performance, compatibility, and usability. See the following updates in this release.
- SQL Optimizer
    - Optimize the selection range of `Index Join` to improve the execution performance
    - Optimize the selection of the outer table for `Index Join` and use the table with the smaller estimated Row Count as the outer table
    - Optimize the `TIDB_SMJ` Join Hint so that Merge Join can be used even without a proper index available
    - Optimize the `TIDB_INLJ` Join Hint to specify the inner table to join
    - Optimize correlated subqueries, push down Filter, and extend the index selection range, to improve the efficiency of some queries by orders of magnitude
    - Support using Index Hints and Join Hints in the `UPDATE` and `DELETE` statements
    - Support pushing down more functions: `ABS`/`CEIL`/`FLOOR`/`IS TRUE`/`IS FALSE`
    - Optimize the constant folding algorithm for the `IF` and `IFNULL` built-in functions
    - Optimize the output of the `EXPLAIN` statement and use a hierarchical structure to show the relationship between operators
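As an illustration of the Join Hints above, the hint comment is placed right after the `SELECT` keyword (the tables `t1`, `t2` and the join column `id` here are hypothetical):

```sql
-- Force Merge Join between t1 and t2, even without a suitable index
SELECT /*+ TIDB_SMJ(t1, t2) */ * FROM t1 JOIN t2 ON t1.id = t2.id;

-- Force Index Join and designate t2 as the inner table
SELECT /*+ TIDB_INLJ(t2) */ * FROM t1 JOIN t2 ON t1.id = t2.id;
```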
- SQL Executor
    - Refactor all the aggregation functions and improve the execution efficiency of the `Stream` and `Hash` aggregation operators
    - Implement the parallel `Hash Aggregate` operator and improve the computing performance by 350% in some scenarios
    - Implement the parallel `Project` operator and improve the performance by 74% in some scenarios
    - Read the data of the inner table and outer table of `Hash Join` concurrently to improve the execution performance
    - Optimize the execution speed of the `REPLACE INTO` statement and increase the performance by nearly 10 times
    - Optimize the memory usage of the time data type and decrease its memory usage by 50%
    - Optimize the point-select performance and improve the point-select efficiency result of Sysbench by 60%
    - Improve the performance of TiDB on inserting or updating wide tables by 20 times
    - Support configuring the memory upper limit of a single statement in the configuration file
    - Optimize the execution of Hash Join: if the Join type is Inner Join or Semi Join and the inner table is empty, return the result without reading data from the outer table
    - Support using the `EXPLAIN ANALYZE` statement to check runtime statistics, including the execution time and the number of returned rows of each operator
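For example, `EXPLAIN ANALYZE` runs the statement and reports per-operator runtime statistics alongside the plan (table `t` and column `a` here are hypothetical):

```sql
-- Execute the query and show execution time and returned row counts per operator
EXPLAIN ANALYZE SELECT COUNT(*) FROM t WHERE a > 10;
```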
- Statistics
    - Support enabling automatic `ANALYZE` statistics collection only during certain periods of the day
    - Support updating the table statistics automatically according to the feedback of the queries
    - Support configuring the number of buckets in the histogram using the `ANALYZE TABLE WITH BUCKETS` statement
    - Optimize the Row Count estimation algorithm using histograms for mixed equality and range queries
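The bucket count can be set per `ANALYZE` run; a sketch with a hypothetical table `t` and an arbitrarily chosen bucket count:

```sql
-- Build column histograms with 128 buckets instead of the default
ANALYZE TABLE t WITH 128 BUCKETS;
```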
- Expressions
    - Support the following built-in functions:
        - `json_contains`
        - `json_contains_path`
        - `encode`/`decode`
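The new JSON functions follow MySQL semantics; for instance:

```sql
-- Is the candidate value 2 contained in the JSON array?
SELECT JSON_CONTAINS('[1, 2, 3]', '2');                   -- returns 1

-- Does at least one of the given paths exist in the document?
SELECT JSON_CONTAINS_PATH('{"a": 1, "b": 2}', 'one', '$.a'); -- returns 1
```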
- Server
    - Support queuing locally conflicting transactions within a tidb-server instance to optimize the performance of conflicting transactions
    - Support the Server Side Cursor
    - Add the HTTP API
        - Scatter the distribution of table Regions in the TiKV cluster
        - Control whether to open the `general log`
        - Support modifying the log level online
        - Check the TiDB cluster information
    - Add the `auto_analyze_ratio` system variable to control the ratio of automatic Analyze
    - Add the `tidb_retry_limit` system variable to control the number of automatic transaction retries
    - Support using the `admin show slow` statement to obtain the slow queries
    - Add the `tidb_slow_log_threshold` environment variable to set the threshold of the slow log automatically
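System variables such as `tidb_retry_limit` can be adjusted with a standard `SET` statement; a minimal sketch (the value 20 is arbitrary):

```sql
-- Allow up to 20 automatic retries for conflicting transactions in this session
SET @@session.tidb_retry_limit = 20;
```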
- DDL
    - Support the parallel execution of the `ADD INDEX` statement and other statements to avoid the time-consuming `ADD INDEX` operation blocking other operations
    - Optimize the execution speed of `ADD INDEX` and improve it greatly in some scenarios
    - Support the `select tidb_is_ddl_owner()` statement to facilitate deciding whether TiDB is the `DDL Owner`
    - Support the `ALTER TABLE FORCE` syntax
    - Support the `ALTER TABLE RENAME KEY TO` syntax
    - Add the table name and database name in the output information of `admin show ddl jobs`
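For example, connecting to an instance and running the following shows whether that instance currently holds the DDL owner role:

```sql
-- Returns 1 if the connected tidb-server is the DDL Owner, 0 otherwise
SELECT tidb_is_ddl_owner();
```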
- Compatibility
    - Support more MySQL syntaxes
    - Make the `BIT` aggregate function support the `ALL` parameter
    - Support the `SHOW PRIVILEGES` statement
    - Support the `CHARACTER SET` syntax in the `LOAD DATA` statement
    - Support the `IDENTIFIED WITH` syntax in the `CREATE USER` statement
    - Support the `LOAD DATA IGNORE LINES` statement
    - The `Show ProcessList` statement returns more accurate information
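The new `CHARACTER SET` and `IGNORE ... LINES` clauses combine in the usual MySQL order; a sketch with a hypothetical file and table:

```sql
-- Load a CSV file, declaring its encoding and skipping the header row
LOAD DATA LOCAL INFILE '/tmp/users.csv' INTO TABLE users
    CHARACTER SET utf8mb4
    FIELDS TERMINATED BY ','
    IGNORE 1 LINES;
```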
- Optimize availability
    - Introduce the version control mechanism and support rolling update of the cluster compatibly
    - Enable `Raft PreVote` among PD nodes to avoid leader re-election when the network recovers after network isolation
    - Enable `Raft Learner` by default to lower the risk of unavailable data caused by machine failure during scheduling
    - TSO allocation is no longer affected by the system clock going backwards
    - Support the `Region merge` feature to reduce the overhead brought by metadata
- Optimize the scheduler
    - Optimize the processing of Down Store to speed up making up replicas
    - Optimize the hotspot scheduler to improve its adaptability when traffic statistics information jitters
    - Optimize the start of Coordinator to reduce the unnecessary scheduling caused by restarting PD
    - Optimize the issue that Balance Scheduler schedules small Regions frequently
    - Optimize Region merge to consider the number of rows within the Region
    - Improve the PD simulator to simulate the scheduling scenarios
- API and operation tools
    - Add the `GetPrevRegion` interface to support the `TiDB reverse scan` feature
    - Add the `BatchSplitRegion` interface to speed up TiKV Region splitting
    - Add the `GCSafePoint` interface to support distributed GC in TiDB
    - Add the `GetAllStores` interface to support distributed GC in TiDB
    - pd-ctl supports:
    - pd-recover doesn't need to provide the `max-replica` parameter
- Metrics
    - Add related metrics for `Filter`
    - Add metrics about the etcd Raft state machine
- Performance
    - Optimize the performance of Region heartbeats to reduce the memory overhead brought by heartbeats
    - Optimize the Region tree performance
    - Optimize the performance of computing hotspot statistics
- Coprocessor
    - Add more built-in functions
    - Add the Coprocessor `ReadPool` to improve the concurrency in processing requests
    - Fix the time function parsing issue and the time zone related issues
    - Optimize the memory usage of pushdown aggregation computing
- Transaction
    - Optimize the read logic and memory usage of MVCC to improve the performance of the scan operation; the performance of full table scan is 1 time better than that in TiDB 2.0
    - Fold continuous Rollback records to ensure the read performance
    - Add the `UnsafeDestroyRange` API to support collecting space for dropped tables/indexes
    - Separate the GC module to reduce the impact on write operations
    - Add the `upper bound` support in the `kv_scan` command
- Raftstore
    - Improve the snapshot writing process to avoid RocksDB stall
    - Add the `LocalReader` thread to process read requests and reduce the delay of read requests
    - Support `BatchSplit` to avoid large Regions brought by large amounts of writes
    - Support `Region Split` according to statistics to reduce the I/O overhead
    - Support `Region Split` according to the number of keys to improve the concurrency of index scans
    - Improve the Raft message process to avoid unnecessary delay brought by `Region Split`
    - Enable the `PreVote` feature by default to reduce the impact of network isolation on services
- Storage Engine
    - Fix the `CompactFiles` bug in RocksDB and reduce the impact on importing data using Lightning
    - Upgrade RocksDB to v5.15 to fix the possible issue of snapshot file corruption
    - Improve `IngestExternalFile` to avoid the issue that flush could block write
- tikv-ctl
    - The `compact` command supports specifying whether to compact data in the bottommost level
- Tools
    - Fast full import of large amounts of data: TiDB Lightning
    - Support the new TiDB Binlog
- Upgrade caveat
    - TiDB 2.1 does not support downgrading to v2.0.x or earlier due to the adoption of the new storage engine
    - Parallel DDL is enabled in TiDB 2.1, so clusters with a TiDB version earlier than 2.0.1 cannot upgrade to 2.1 using rolling update. You can choose either of the following two options:
        - Stop the cluster and upgrade to 2.1 directly
        - Roll update to 2.0.1 or later 2.0.x versions, and then roll update to the 2.1 version
    - If you upgrade from TiDB 2.0.6 or earlier to TiDB 2.1, check whether there is any ongoing DDL operation, especially the time-consuming `Add Index` operation, because the DDL operations slow down the upgrading process. If there is an ongoing DDL operation, wait for it to finish and then roll update.