diff --git a/docs/blogs/tech/failure-by-collation.md b/docs/blogs/tech/failure-by-collation.md new file mode 100644 index 000000000..9ed1bd71e --- /dev/null +++ b/docs/blogs/tech/failure-by-collation.md @@ -0,0 +1,165 @@ +--- +slug: failure-by-collation +title: 'SQL Tuning Practices - Index Failure Caused by Collations' +--- +

# SQL Tuning Practices - Index Failure Caused by Collations

> On the OceanBase community forum, the engineers on duty can provide prompt technical support for simple issues that arise during installation and deployment.
>
> However, if you encounter performance optimization issues, you may need to wait until R&D engineers are free to provide troubleshooting suggestions.
>
> To enhance our SQL tuning capabilities, I intend to record and summarize key issues identified in the OceanBase community forum and share them with everyone for mutual progress.



Background
======

This section introduces two terms: character set and collation, as well as two common hints: `LEADING` and `USE_NL`. If you are already familiar with these terms and hints, you can skip this section.


Character sets
-------

To put it simply, character sets define how characters are encoded and stored. Here is an example:

* If the character set is `utf8`, the uppercase letter "A" is encoded as the byte 0100 0001, which is represented as 0x41 in hexadecimal.
* If the character set is `utf16`, the uppercase letter "A" is encoded as two bytes 0000 0000 0100 0001, which is represented as 0x0041 in hexadecimal.

Different character sets support the storage of different types and ranges of characters. For example, the `utf8mb4` character set can store all Unicode characters, whereas the `latin1` character set supports the storage of only characters from Western European languages.



Collations
---------

A collation is an attribute of character sets. It defines a set of rules for comparing and sorting characters. 
For example, the `utf8mb4` character set supports collations such as `utf8mb4_general_ci`, `utf8mb4_bin`, and `utf8mb4_unicode_ci`.

* `utf8mb4_general_ci`: the case-insensitive general collation of `utf8mb4`.
* `utf8mb4_bin`: the case-sensitive binary collation of `utf8mb4`.
* `utf8mb4_unicode_ci`: the Unicode-based case-insensitive collation of `utf8mb4`.
* `utf8mb4` also supports collations for different languages, such as `utf8mb4_zh_pinyin_ci`, which sorts data by Pinyin.

A character set can have multiple collations, but a collation belongs to only one character set. For example, if you define a column as `c3 varchar(200) COLLATE utf8mb4_bin`, the character set of the column is automatically set to `utf8mb4`.



Common hints
-------

Like other mainstream databases, OceanBase Database uses a cost-based optimizer that enumerates the possible execution paths and picks the one it estimates to be optimal. Hints explicitly constrain the optimizer's behavior, so that SQL queries are executed along the paths you specify.

This section introduces the following two common hints: `LEADING` and `USE_NL`.

* The `LEADING` hint specifies the order in which tables are joined. The syntax is as follows: `/*+ LEADING(table_name_list)*/`. In `table_name_list`, you can use parentheses `()` to group tables and express the join priorities of right-side tables in a complex join. It is more flexible than the `ORDERED` hint.

Here is an example:

![2](/img/blogs/tech/failure-by-collation/2.png)

![3](/img/blogs/tech/failure-by-collation/3.png)


* The `USE_NL` hint instructs the optimizer to use the nested loop join algorithm when the specified table is the right-side table of a join. 
The syntax is as follows: `/*+ USE_NL(table_name_list)*/`

Here is an example:

![4](/img/blogs/tech/failure-by-collation/4.png)

> **Note**
>
> **The `USE_NL`, `USE_HASH`, and `USE_MERGE` hints are usually used with the `LEADING` hint because the optimizer generates a plan based on the hint semantics only when the right-side table in the join matches `table_name_list`**.
>
> Here is an example: Assume that you want to modify the join method for the `t1` and `t2` tables in the plan for the `SELECT * FROM t1, t2 WHERE t1.c1 = t2.c1;` statement.
> Without hints, six plans are possible:
> • t1 nest loop join t2
>
> • t1 hash join t2
>
> • t1 merge join t2
>
> • t2 nest loop join t1
>
> • t2 hash join t1
>
> • t2 merge join t1
>
> If you specify the hint `/*+ USE_NL(t1)*/`, four plans are available:
>
> • t1 nest loop join t2
>
> • t1 hash join t2
>
> • t1 merge join t2
>
> • t2 nest loop join t1
>
> The `t2 nest loop join t1` plan is generated according to the hint only when the `t1` table is the right-side table of the join. When the `t1` table is the left-side table of the join, the hint does not take effect.
>
> If you specify the hint `/*+ LEADING(t2 t1) USE_NL(t1)*/`, only one plan is available: `t2 nest loop join t1`.

Description
====

Let’s now turn to a specific issue. You can view the detailed information about the issue in the [SQL statement execution order and time-consuming SQL query](https://ask.oceanbase.com/t/topic/35613707) post. The specifications of the `t1` and `t2` tables are as follows:

* The `c3` column in the `t1` table is of type VARCHAR, with the `utf8mb4` character set and `utf8mb4_bin` collation.
* The `c3` column in the `t2` table is also of type VARCHAR, with the `utf8mb4` character set, but uses `utf8mb4_general_ci`, the default collation automatically chosen for that character set. 
+

![5](/img/blogs/tech/failure-by-collation/5.png)



After the two `c3` columns from the `t1` and `t2` tables are joined, the `idx` index on the `t2` table is not used during the join. To reproduce the plan, the `/*+leading(t1 t2) use_nl(t2)*/` hint is specified, which forces a nested loop join with `t1` as the left-side table and `t2` as the right-side table.

![6](/img/blogs/tech/failure-by-collation/6.png)



The query result shows that during the execution of the nested loop join, the `t2` table had to perform a full table scan instead of using the `idx` index to quickly locate data for each row from the `t1` left-side table, resulting in poor SQL execution performance.



Analysis
====

Although both `c3` columns are of type VARCHAR and share the same character set, their different collations prevent the index from being used. For example, when you insert the same four rows, `A`, `a`, `B`, and `b`, into the `t1` and `t2` tables, the results are sorted differently due to the collation settings.

![1](/img/blogs/tech/failure-by-collation/1.png)

This discrepancy means that the values in the `idx` index on the `t2` table are stored in the following order: `A`, `a`, `B`, `b`. If you use the value `B` in the `t1` table to probe the `idx` index on the `t2` table, the optimizer first compares the value `B` with the value `A` in the `idx` index. Since the value `B` is greater than the value `A`, the optimizer continues to compare the value `B` with the next value `a` in the index. According to the collation of the value `B` in the `t1` table, the value `B` is smaller than the value `a`, so the lookup would wrongly report that no value `B` is matched in the `t2` table.

![1729240959](/img/blogs/tech/failure-by-collation/1729240959155.png)

As a result, the optimizer cannot use the index on the `t2` table to quickly locate data from the `t1` table and resorts to a full table scan on the `t2` table. 
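The effect of the two orderings, and why a probe that compares with `utf8mb4_bin` semantics goes wrong in an index ordered by `utf8mb4_general_ci`, can be reproduced with a small Python model. The two key functions below are simplified stand-ins for the real collations, not OceanBase's implementation:

```python
# Toy model of the two collations: a binary collation compares raw byte
# values, while a case-insensitive collation compares uppercased characters.
def bin_key(s: str) -> bytes:
    return s.encode("utf-8")       # utf8mb4_bin: raw bytes

def ci_key(s: str) -> str:
    return s.upper()               # utf8mb4_general_ci (simplified)

rows = ["A", "a", "B", "b"]

print(sorted(rows, key=bin_key))   # ['A', 'B', 'a', 'b'] - binary order
print(sorted(rows, key=ci_key))    # ['A', 'a', 'B', 'b'] - ci order (stable sort keeps ties)

# Walk the ci-ordered index with binary comparison semantics, the way the
# article describes: stop as soon as the probe value compares smaller than
# the current index entry.
index = sorted(rows, key=ci_key)   # idx on t2: A, a, B, b
probe = "B"
found = False
for entry in index:
    if bin_key(probe) == bin_key(entry):
        found = True
        break
    if bin_key(probe) < bin_key(entry):
        break                      # 'B' (0x42) < 'a' (0x61): gives up too early
print(found)                       # False: the later real match is never reached
```

Under the binary rules, `B` sorts before `a`, so a binary-style walk through the case-insensitively ordered index gives up before ever reaching the real match, which is exactly why the `idx` index cannot be trusted here.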
+

Solutions
====



Solution 1
---

SQL tuning expert Xuyu suggested using the `CONVERT` function to change the collation of the `c3` column in the `t1` table to match the `utf8mb4_general_ci` collation of the `c3` column in the `t2` table before performing the join. This unifies the collation across both tables and allows the optimizer to use the index.

![7](/img/blogs/tech/failure-by-collation/7.png)

The plan shows that the index on the `t2` table is used, replacing the previous full table scan.



Solution 2
---

While the first solution works in this specific case because the default collation of the `utf8mb4` character set is `utf8mb4_general_ci`, it is not a universal solution. The `CONVERT` function only changes the character set and uses the default collation of the target character set. If you need a different collation, you can use the `COLLATE` keyword in the join condition to explicitly specify the collation of the `c3` column in the `t1` table as `utf8mb4_general_ci`.

![8](/img/blogs/tech/failure-by-collation/8.png)

References
----

[OceanBase Quick Starts for DBAs —— Diagnostics and tuning —— Read and manage SQL execution plans in OceanBase Database](https://oceanbase.github.io/docs/user_manual/quick_starts/en-US/chapter_07_diagnosis_and_tuning/management_execution_plan) \ No newline at end of file diff --git a/docs/blogs/users/Zhongyuan-Bank.md b/docs/blogs/users/Zhongyuan-Bank.md new file mode 100644 index 000000000..894eba526 --- /dev/null +++ b/docs/blogs/users/Zhongyuan-Bank.md @@ -0,0 +1,121 @@ +--- +slug: Zhongyuan-Bank +title: 'Zhongyuan Bank’s Practice: 70+ Tests Performed to Verify OceanBase Database, 30+ Systems Upgraded to New Database' +tags: + - User Case +--- +

This article is a summary of Episode 15 of the DB Gurus Talks series. 
Lyu Chunlei is a senior database expert who worked in traditional manufacturing, the IT industry, and Oracle Corporation before joining Zhongyuan Bank. His extensive experience in database O&M and management has played a crucial role as he led Zhongyuan Bank’s database team in upgrading from conventional databases to native distributed databases.

As a highly skilled database administrator (DBA), Mr. Lyu believes that one cannot build a successful DBA career without three decisive factors. First, a deep interest in database technology, which is the inexhaustible driving force that keeps DBAs moving forward. Second, a commitment to constant learning, which is much like the hiking stick of a mountaineer, helping DBAs break through limitations. Finally, a deep sense of awe towards the production environment, which encourages a strong sense of responsibility.

These three factors have played an essential role in Lyu's work. Lyu was the chief technical expert who **led the project of upgrading Zhongyuan Bank's conventional database solution based on midrange servers to an OceanBase Database-powered architecture based on general servers. To date, over 30 systems have been deployed using the new architecture**. In Episode 15 of the DB Gurus Talks, Lyu shared how Zhongyuan Bank successfully upgraded its database system from a closed architecture to an open one.

![1735026880](/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.png)

Zhongyuan, literally the central plain, is the birthplace of the Chinese nation and the economic hub of ancient China, boasting the glorious history of one-time capitals such as Kaifeng, Luoyang, and Anyang, and holding a transcendent status in the country's civilization landscape.

Zhongyuan Bank was born on this land. 
**Founded in December 2014, this urban commercial bank headquartered in Henan Province was listed on the main board of the Hong Kong Stock Exchange in July 2017, and now has 18 branches, with total assets exceeding CNY 1.3 trillion, and a staff of nearly 20,000**.

At the end of 2021, Zhongyuan Bank officially launched its database upgrade project. So far, more than 30 application systems on MySQL and Oracle databases have been migrated to OceanBase Database. This project has played a key role in helping the bank gain insights into its customers, markets, and business to support daily operational and management decision-making.



**1. Pressing Need for Database Upgrade Amid Business Challenges**
--------------------

Zhongyuan Bank initially deployed most of its business systems on conventional centralized MySQL and Oracle databases, which had reliably supported the bank’s operations in the past. However, as the bank’s business grew, the transaction concurrency and data volumes of its systems increased, and its original databases could not be scaled out quickly to meet the performance and capacity demands, making it urgent for the bank to adopt a highly scalable, high-performance database solution to sustain business growth.

Moreover, in line with the bank’s digital transformation strategy, the bank adopted microservices and distributed architectures and deployed infrastructure software such as distributed middleware and analytical distributed databases. The centralized databases for transactional systems had become the bottleneck of this transformation.

Cost considerations were another driving force for the database upgrade. The bank's previous information systems adopted midrange servers, centralized storage, and Oracle databases, leading to constantly high operational costs. In recent years, the bank needed to restructure some critical information systems to improve the overall processing capacity at reasonable infrastructure costs. 
+ +Additionally, databases are crucial pieces of infrastructure software. Zhongyuan Bank also needed to accelerate its transition to domestic databases while ensuring stable system operations, thereby increasing its control over IT infrastructure. + +**Given the reasons above, Zhongyuan Bank initiated the selection of its next-generation database system in December 2020. Mr. Lyu, as the database team leader, was in charge of this task**. + +Having years of experience in database operations, Lyu was clear about the challenges. Over the years, the bank has deployed a large number of systems with a wide variety of applications, including self-developed, third-party sourced, and customized systems. Many were provided by different vendors, which means significant variations in development practices and code quality, making the upgrade process highly complex. + +"The bank has its development standards and requirements. However, the varying capabilities and code quality of the vendors result in quite complex system transformation and database upgrades. In particular, systems that depend heavily on Oracle database features may require the rewriting of many complex SQL statements. This is where strong support from the database vendors is needed," Lyu Chunlei explained. + +To achieve sustainable business development, Zhongyuan Bank also needed to explore cost-effective solutions for those Oracle-dependent systems while ensuring their stability. + + + +**2. 
Choice of OceanBase Database after Comprehensive Testing**
-------------------------------

In line with the Financial Application Specification of Distributed Database Technology—Technical Architecture, a standard released by the People’s Bank of China, and based on its own needs and peer experiences, Zhongyuan Bank determined its core requirements for database selection: high stability, high availability, scalability, maintainability, high performance, and compatibility, while considering overall costs, tools, platforms, and ecosystem development.

**"Stability and high availability are the two most important requirements for us. Stability is fundamental for financial services, especially in the case of failures, such as IDC or server failures. The system must provide a self-healing mechanism to minimize the impact on applications," Lyu noted.**

Scalability is another crucial feature. The database system must support node addition or removal while it is running to enhance its performance and capacity. Lyu explained that scalability was one of the primary reasons for considering distributed databases. Zhongyuan Bank's original Oracle Real Application Clusters (RAC) could be scaled out to increase storage and compute resources, but their "shared-everything" architecture made it impossible to effectively improve the I/O capacity.

Adhering to the database selection requirements, the bank conducted a comprehensive proof of concept (POC) assessment of leading domestic database products to compare their basic capabilities, performance, high availability, maintainability, compatibility, and security, covering 79 test items. In this competitive assessment, OceanBase Database stood out for its excellent performance, high availability, and O&M efficiency, and eventually got the nod.

"OceanBase Database not only ticked all our boxes in terms of performance, high availability, and maintainability, it would also bring us a lower total cost of ownership. 
We decided to go with it based on an all-around evaluation," said Lyu. + +Lyu also mentioned several other key features required by Zhongyuan Bank. For example, OceanBase Database is highly compatible with Oracle and MySQL and comes with an automatic migration tool that supports migration assessment and reverse synchronization. These features ensure data migration security and can support the upgrade of the bank's core business systems. Furthermore, OceanBase Database can be horizontally scaled out without affecting business applications, and its quasi-memory transaction processing architecture helps maintain high performance, allowing the bank to create a cluster with thousands of nodes and store trillions of rows in a single table. + + + +**3. Successful Upgrade Guaranteed by Advanced Technologies** +------------------------- + +Database selection was the first step in the upgrade project. The real challenges for the project team lay in data migration and business relaunch. Zhongyuan Bank formulated a detailed database migration plan, covering system selection, modification evaluation, code modification, testing, business relaunch, and post-migration review. + +When it came to system selection, Lyu explained, "The more important an information system is, the higher the security risk it faces, and the more urgent the need for an upgrade to domestic technologies. **Therefore, we picked key business systems that handle highly concurrent requests, such as those involving online services and channel management**." + +Then, the bank evaluated those systems meticulously, focusing on identifying and adapting Oracle-specific syntax. Using OceanBase Migration Assessment (OMA), the bank comprehensively analyzed and diagnosed the systems, scanning SQL syntax, table schemas, and database objects to accurately identify necessary modifications and streamline the modification process. 
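As a rough illustration of what such a compatibility scan does, the toy scanner below flags a few well-known Oracle-specific constructs in SQL text. The patterns and names here are illustrative only; they are not OMA's actual rules or interface:

```python
import re

# A few well-known Oracle-specific constructs that typically need rewriting
# when moving to a MySQL-compatible mode. Illustrative, not exhaustive.
ORACLE_PATTERNS = {
    "ROWNUM pseudo-column": re.compile(r"\bROWNUM\b", re.IGNORECASE),
    "(+) outer-join syntax": re.compile(r"\(\+\)"),
    "NVL function": re.compile(r"\bNVL\s*\(", re.IGNORECASE),
    "CONNECT BY hierarchy": re.compile(r"\bCONNECT\s+BY\b", re.IGNORECASE),
}

def scan_sql(sql: str) -> list:
    """Return the names of Oracle-specific constructs found in a statement."""
    return [name for name, pat in ORACLE_PATTERNS.items() if pat.search(sql)]

stmt = "SELECT NVL(c1, 0) FROM t1, t2 WHERE t1.id = t2.id(+) AND ROWNUM < 10"
print(scan_sql(stmt))
```

A real assessment tool also inspects table schemas, stored procedures, and other database objects, but the principle is the same: flag everything that needs rewriting before the migration starts, instead of discovering it at relaunch time.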
+ +After the code modifications were completed, various tests and relaunch drills were performed repeatedly until the systems met requirements. + +**The relaunch process was split into two stages: data migration and data verification**, which were completed using OceanBase Migration Service (OMS). This tool supports full and incremental migration, batch verification, and reverse writing, ensuring a smooth migration process. + +"Every change in a financial system must be reversible, so the reverse writing feature is essential to us. OMS automatically converts data types and performs reverse writes, making the entire migration process seamless," said Lyu. + +After the database system was upgraded, Lyu's team monitored and tracked the system performance to rule out possible unforeseen issues, such as performance fluctuations, despite extensive testing. + +According to Lyu, OceanBase Cloud Platform (OCP), the official database management tool, made those tasks much easier. OCP throttled poorly performing SQL queries and allowed the team to adjust execution plans using hints, ensuring rapid recovery of information systems before they could drill down to the root cause of performance issues. + +After an information system was relaunched based on OceanBase Database, Lyu's team would review the whole process, focusing on performance and resource usage before and after the relaunch, issues encountered during the process, how they were resolved, and whether to incorporate corresponding notes into standard operating procedures. The review was quite necessary. Lyu noted that unforeseen issues were likely despite meticulous preparations. With the strong support from the OceanBase delivery team, however, those issues were efficiently resolved. + +Nonetheless, Lyu expressed his expectation that the bank’s database team could handle issues independently. "Our principle is to do our jobs with minimal assistance from outside and develop in-house O&M capabilities as soon as possible. 
This will allow us to truly have full control," he stressed.



**4. Significant Benefits from Two Years of Stable Operation**
----------------------

As of November 2022, the full stack of the OceanBase cluster created by the bank went live, and mobile banking services were relaunched in the new architecture. A variety of core systems such as credit and payment systems followed suit. Back then, the bank also organized a failover drill on a production cluster that had two IDCs in the same region.

So far, OceanBase, with its exceptional performance and stability, has supported over 30 of the bank's information systems. More than 80% of them are key business systems. This achievement not only showcased the bank's technical strength but also stood as a strong testament to its commitment to digital transformation.

With the migration of more key business systems to OceanBase Database, the benefits have started to come in.

**First, performance**: The information systems have maintained their performance after the migration to OceanBase Database. "Given that the previous database systems ran on proprietary and expensive hardware while the new architecture uses general servers, maintaining the same performance is itself a significant improvement," Lyu explained.

**Second, costs**: Compared with the previous architecture comprising midrange servers, centralized storage, and Oracle databases, the new solution based on general servers offers great cost advantages. Optimal resource utilization has further reduced the costs. OceanBase Database supports quick scaling by flexibly reallocating resources from different pools, so the bank does not have to configure redundant hardware resources for critical systems.

**Third, operational efficiency**: The rich set of features provided by OCP has greatly improved the bank's O&M efficiency. 
For example, a failover between the primary and standby clusters in the same region can now be completed with a few clicks in just 6 seconds, which contrasts sharply with conventional centralized databases. OceanBase Database supports multitenancy, allowing tenants in MySQL and Oracle mode to coexist in a single cluster, and to be monitored and managed under unified standards.

**Fourth, stability**: OceanBase Database has operated smoothly since its launch. It quickly recovers from server failures, ensuring uninterrupted business continuity.

Looking back on the past few years, Lyu noted that his experience in database upgrades and O&M is invaluable. Amid the trend of IT infrastructure localization, Zhongyuan Bank will speed up its system upgrade significantly, and Lyu's valuable experience will be a great asset in this process.

Looking ahead, Lyu said that Zhongyuan Bank plans to migrate more systems to OceanBase Database. They are also exploring how to better utilize new features of OceanBase Database, such as hybrid transactional and analytical processing (HTAP), to extract more business value, including supporting lightweight AP tasks.



**5. Summary**
----------

We are now in an era of explosive growth of AI technologies, where data processing capabilities have become the core strength of enterprises. A new round of competition in the financial industry has already begun. Zhongyuan Bank, striving to become a top-tier urban commercial bank, has been advocating for a "data-driven culture". With the high-performance, high-stability OceanBase Database and its rich suite of ecosystem tools, Zhongyuan Bank is better equipped to pursue its business objectives, fulfill social responsibilities, and drive further high-quality business development.



* * *

Special thanks to Mr. Lin Chun for his support for this episode of DB Gurus Talks. 
Lin Chun is the chief database expert at the China Pacific Insurance (CPIC) Digital Intelligence Research Institute. He has extensive experience in upgrading and replacing core financial system databases. He was also the keynote speaker of Episode 3 of DB Gurus Talks. To learn more about CPIC’s database upgrade practice, see [Review on Core System Database Upgrade by CPIC Chief Database Expert](https://open.oceanbase.com/blog/8761673744). \ No newline at end of file diff --git a/docs/blogs/users/trading-system.md b/docs/blogs/users/trading-system.md new file mode 100644 index 000000000..5edc752db --- /dev/null +++ b/docs/blogs/users/trading-system.md @@ -0,0 +1,118 @@ +--- +slug: trading-system +title: 'Practice of Applying OceanBase Database in a Futures Company’s Production Environment' +tags: + - User Case +--- +

# Practice of Applying OceanBase Database in a Futures Company's Production Environment

**Introduction**

As the financial industry evolves and markets grow increasingly complex, futures trading systems face challenges such as handling massive transaction volumes, ensuring low latency, and maintaining system reliability. Conventional standalone and distributed databases are struggling to keep up. Futures companies and exchanges have been looking for next-generation distributed database technologies to boost the scalability, reliability, and performance of their systems. OceanBase Database, a distributed database system originally developed at Alibaba Group, is becoming a key part of the infrastructure in the financial sector, particularly for futures trading systems, thanks to its high availability, performance, and scalability.

In this article, we'll explore how a futures company runs its core trading system on OceanBase Database, covering everything from database architecture design to performance optimization, transaction management, and fault recovery.

**1\. 
Background and Requirements** + +**1.1 Unique demands of futures trading systems** + +As irreplaceable components of financial markets, futures trading systems must provide some key features: + +High concurrency: Futures markets are where highly frequent transactions take place, especially during periods of high volatility when transaction requests can reach millions per second. + +Low latency: Speed is everything in futures trading. Even the slightest delay can have a significant impact on trade execution. + +High availability and disaster recovery: Futures trading systems must be highly available and capable of recovering quickly from disasters, as a short downtime can result in substantial financial losses. + +Strong consistency: With large amounts of funds and real-time market data updates involved, futures trading systems must ensure strong data consistency and the atomicity of transactions. + +**1.2 Challenges for conventional databases** + +Conventional relational databases, with their single-node architecture and vertical scaling mode, struggle to cope with highly concurrent transactions and massive amounts of data. Here are some of the issues they face: + +Performance bottlenecks: As the data volume and concurrency increase, conventional databases experience poorer query and write performance, leading to response lags of databases and delays in transaction processing. + +Scalability issues: When traffic soars, conventional standalone databases can hardly be scaled horizontally. + +Poor fault recovery: Conventional databases often rely on a primary/standby model. If the primary node fails, manual intervention is required to switch services to the standby nodes, which may result in prolonged downtime and reduced availability. 
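The fault-recovery gap called out above — manual primary/standby switchover versus automatic promotion of a surviving replica — can be sketched with a toy model. The names and behavior here are hypothetical, not OceanBase internals:

```python
# Toy replica group: one leader plus followers. When the leader fails, the
# group promotes a surviving replica automatically, instead of waiting for a
# human to switch over as in a conventional primary/standby setup.
class ReplicaGroup:
    def __init__(self, replicas):
        self.replicas = list(replicas)
        self.leader = self.replicas[0]

    def report_failure(self, node):
        self.replicas.remove(node)
        if node == self.leader:
            # Automatic failover: promote a surviving replica.
            self.leader = self.replicas[0]

group = ReplicaGroup(["zone1", "zone2", "zone3"])
group.report_failure("zone1")   # the leader fails
print(group.leader)             # -> zone2: a surviving replica was promoted
```

A real system also has to agree on the new leader across nodes (typically via a consensus protocol) and replay any unsynchronized log, but the contrast with a manual switchover is the point of the sketch.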
**1.3 Why OceanBase Database**

OceanBase Database, as a distributed relational database system, offers several benefits that address the shortcomings of conventional databases:

High concurrency support: The distributed architecture of OceanBase Database can be scaled horizontally on demand, making it ideal for high-concurrency trading scenarios.

Low latency: Through table sharding and optimized query paths, OceanBase Database keeps response times at the millisecond level.

High availability and disaster recovery: With its multi-replica mechanism and automatic failover capabilities, OceanBase Database ensures high availability even in the event of node failures.

Strong consistency: OceanBase Database supports distributed transactions, ensuring data consistency across nodes, which is critical for the accuracy of futures trading systems.

Given these benefits, the futures company decided to migrate its core trading system to OceanBase Database, aiming to solve issues related to scalability, performance, and fault recovery.

**2\. OceanBase Database's Role in the Architecture of the Futures Trading System**

**2.1 Overview**

In the architecture of the futures trading system, OceanBase Database is deployed at the database layer to store and manage all trading data, market data, fund data, and user account information. The system architecture consists of the following layers:

(1) Frontend trading layer: This layer includes user trading terminals and order processing modules. It receives trading requests from users and passes them through message queues to the core trading engine.

(2) Core trading engine: This layer handles the actual trading logic, such as order matching, trade execution, and risk control. It interacts with OceanBase Database through database interfaces.

(3) Database layer: In this layer, OceanBase Database stores and manages all trading-related data, such as order information, trade records, and position data. 
+ +(4) Data synchronization and monitoring layer: This layer handles data backup, real-time data synchronization, monitoring, and alerting, ensuring high availability and data integrity of the system. + +In this architecture, the primary role of OceanBase Database is to provide efficient and stable data storage and processing capabilities. + +**2.2 Table sharding and distributed architecture design** + +OceanBase Database divides a data table into shards based on specific rules and distributes them across multiple nodes. To better fit the needs of the futures trading system, the following sharding and distribution strategies are implemented: + +Order table: The order table is sharded by order ID. Shards of different orders are distributed across different physical nodes to prevent any node from being overloaded. + +Trade record table: This table is sharded by trade time and trading pair, ensuring efficient data locating during queries. + +Fund flow table: This table is sharded by user ID, allowing for quick access to a user's fund changes. + +These sharding strategies allow OceanBase Database to deliver excellent query performance while maintaining high availability and scalability. + +**3\. Application of OceanBase Database in Futures Trading** + +**3.1 Order processing and transaction management** + +In the futures trading system, order processing requires strong data consistency and the atomicity of transactions. OceanBase Database ensures this through its distributed transaction protocol while handling order operations as follows: + +Order creation: When a user submits an order, OceanBase Database ensures the order data is successfully written to the database and immediately fed back to the trading system. + +Order matching and execution: During order matching, OceanBase Database performs joint queries on multiple tables such as the order table and trade record table, while maintaining data consistency to avoid matching failures or inconsistencies. 
+ +Fund deduction and settlement: The execution of each order involves fund deductions and position updates. OceanBase Database supports cross-table and distributed transactions, ensuring accurate fund settlement and preventing issues like insufficient funds or settlement errors. + +The transaction management capabilities of OceanBase Database play a crucial role in futures trading, especially in high-concurrency scenarios, where it ensures the consistency of every order and the atomicity of trading operations. + +**3.2 Optimizations for high-concurrency trading scenarios** + +Futures markets experience considerable fluctuations in trading volume and data flow. During periods of high volatility, in particular, the trading system faces rocketing concurrent requests. In response, OceanBase Database implements the following optimizations: + +Table sharding: By dividing tables into shards and distributing them across multiple nodes, OceanBase Database parallelizes request processing, avoiding performance bottlenecks on any single node. + +Multi-level caching: OceanBase Database adopts a multi-level caching mechanism that can cache data in memory and disks. Frequently accessed data, such as real-time market data for certain trading products, is cached to reduce direct database access and improve performance. + +Query execution: OceanBase Database provides a highly optimized query execution engine. It minimizes the query latency by selecting the optimal execution plan, especially for complex queries and large datasets. + +These optimizations enable OceanBase Database to maintain a low latency and a high throughput in high-concurrency trading scenarios, ensuring the futures trading system remains stable even under tremendous load. + +**3.3 High availability and disaster recovery design** + +The futures trading system must operate around the clock, making high availability a top priority. 
OceanBase Database guarantees high availability and disaster recovery by leveraging the following mechanisms: + +Multi-replica mechanism: Each data node stores multiple replicas of the data, distributed across different physical servers. If a node fails, OceanBase Database automatically switches to another replica. + +Automatic fault recovery: Relying on its monitoring and fault detection mechanisms, OceanBase Database quickly identifies node failures and automatically initiates failover and recovery processes. + +Cross-IDC disaster recovery: To prevent system crashes from single points of failure, OceanBase Database can be deployed in multiple active IDCs across different regions. + +This high availability and disaster recovery design ensures that the futures trading system remains stable and operational, even in the face of hardware or network failures. \ No newline at end of file diff --git a/static/img/blogs/tech/failure-by-collation/1.png b/static/img/blogs/tech/failure-by-collation/1.png new file mode 100644 index 000000000..ee1f7edbc Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/1.png differ diff --git a/static/img/blogs/tech/failure-by-collation/1729240959155.png b/static/img/blogs/tech/failure-by-collation/1729240959155.png new file mode 100644 index 000000000..9239a53dc Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/1729240959155.png differ diff --git a/static/img/blogs/tech/failure-by-collation/1729240959155.psd b/static/img/blogs/tech/failure-by-collation/1729240959155.psd new file mode 100644 index 000000000..ff71ded03 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/1729240959155.psd differ diff --git a/static/img/blogs/tech/failure-by-collation/2.png b/static/img/blogs/tech/failure-by-collation/2.png new file mode 100644 index 000000000..cda59fc6f Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/2.png differ diff --git 
a/static/img/blogs/tech/failure-by-collation/3.png b/static/img/blogs/tech/failure-by-collation/3.png new file mode 100644 index 000000000..aaa8aadef Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/3.png differ diff --git a/static/img/blogs/tech/failure-by-collation/4.png b/static/img/blogs/tech/failure-by-collation/4.png new file mode 100644 index 000000000..467f516c3 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/4.png differ diff --git a/static/img/blogs/tech/failure-by-collation/5.png b/static/img/blogs/tech/failure-by-collation/5.png new file mode 100644 index 000000000..f93169346 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/5.png differ diff --git a/static/img/blogs/tech/failure-by-collation/6.png b/static/img/blogs/tech/failure-by-collation/6.png new file mode 100644 index 000000000..84c422e54 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/6.png differ diff --git a/static/img/blogs/tech/failure-by-collation/7.png b/static/img/blogs/tech/failure-by-collation/7.png new file mode 100644 index 000000000..84604e6b3 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/7.png differ diff --git a/static/img/blogs/tech/failure-by-collation/8.png b/static/img/blogs/tech/failure-by-collation/8.png new file mode 100644 index 000000000..03d424bc5 Binary files /dev/null and b/static/img/blogs/tech/failure-by-collation/8.png differ diff --git a/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.png b/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.png new file mode 100644 index 000000000..7bf99e475 Binary files /dev/null and b/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.png differ diff --git a/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.psd b/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.psd new file mode 100644 
index 000000000..5a4df633f Binary files /dev/null and b/static/img/blogs/users/Zhongyuan-Bank/75a0cc1e-01fd-48de-8748-003138ff6498.psd differ