diff --git a/docs/blogs/tech/alter-table.md b/docs/blogs/tech/alter-table.md index d9b87a72c..1315c76b7 100644 --- a/docs/blogs/tech/alter-table.md +++ b/docs/blogs/tech/alter-table.md @@ -3,6 +3,8 @@ slug: alter-table title: 'Principles of ALTER TABLE in OceanBase Database' --- +# Principles of ALTER TABLE in OceanBase Database + Foreword ==== diff --git a/docs/blogs/tech/query-engines.md b/docs/blogs/tech/query-engines.md index cd4f797fb..7012b432b 100644 --- a/docs/blogs/tech/query-engines.md +++ b/docs/blogs/tech/query-engines.md @@ -3,6 +3,8 @@ slug: query-engines title: "Evolution of Database Query Engines" --- +# Evolution of Database Query Engines + > In relational databases, the query scheduler and plan executor are as crucial as the query optimizer, and their importance is increasing with advancements in computer hardware. In this article, _**Yuming**_, a technical expert from the OceanBase team who was born in the 1990s, will walk you through the milestones in the evolution of plan executors. About the author: Wei Yuchen, a technical expert from the OceanBase team of Ant Group, has been working on SQL parsing, execution, and optimization since joining the OceanBase team in 2013. @@ -39,10 +41,8 @@ However, the nested operator model has its drawbacks: The Volcano model was first introduced by Goetz Graefe in 1990 in his paper *Volcano—An Extensible and Parallel Query Evaluation System*. In the early 1990s, memory was expensive, and I/O was a significant bottleneck compared to CPU execution efficiency. This I/O bottleneck, the so-called "I/O wall" problem, between operators and storage was the primary limiting factor for query efficiency. The Volcano model allocated more memory resources to I/O caching than to CPU execution efficiency, which was a natural trade-off given the hardware constraints at the time. As hardware advances brought larger memory capacities, more data can be stored in memory. However, the relatively stagnant performance of single-core CPUs became a bottleneck. This spurred numerous optimizations aimed at improving CPU execution efficiency. -Operator Fusion -============================================================================================================== -The simplest and most effective way to optimize the execution efficiency of operators is to reduce their function calls during execution. The Project and Filter operators are the most common operators in plan trees. In OceanBase V1.0, we fuse these operators into other specific algebraic operators. This significantly reduces the number of operators in a plan tree and minimizes the number of nested next() calls between operators. Integrating the Project and Filter operators as internal capabilities of each operator also enhances code locality and CPU branch prediction. +The simplest and most effective way to integrate and optimize the execution efficiency of operators is to reduce the function calls of operators during the execution process. The Project and Filter operators are the most common operators in plan trees. In OceanBase V1.0, we fuse these operators into other specific algebraic operators. This significantly reduces the number of operators in a plan tree and minimizes the number of nested next() calls between operators. Integrating the Project and Filter operators as internal capabilities of each operator also enhances code locality and CPU branch prediction. 
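To make the cost of nested next() calls concrete, here is a minimal, illustrative Java sketch of a Volcano-style plan (TableScan → Filter → Project) next to a variant in which the Filter and Project logic is fused into the scan. It is a toy model for explanation only, not OceanBase's executor code; all class and method names are made up.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

// Volcano model: every operator exposes next(), and each row climbs the plan tree
// through one virtual call per operator.
interface Operator {
    Object[] next();   // returns the next row, or null when exhausted
}

class TableScan implements Operator {
    private final Iterator<Object[]> rows;
    TableScan(List<Object[]> table) { this.rows = table.iterator(); }
    public Object[] next() { return rows.hasNext() ? rows.next() : null; }
}

class Filter implements Operator {
    private final Operator child;
    private final Predicate<Object[]> pred;
    Filter(Operator child, Predicate<Object[]> pred) { this.child = child; this.pred = pred; }
    public Object[] next() {
        Object[] row;
        while ((row = child.next()) != null) {     // one extra virtual call per input row
            if (pred.test(row)) return row;
        }
        return null;
    }
}

class Project implements Operator {
    private final Operator child;
    private final int[] cols;
    Project(Operator child, int[] cols) { this.child = child; this.cols = cols; }
    public Object[] next() {
        Object[] row = child.next();               // and another one here
        if (row == null) return null;
        Object[] out = new Object[cols.length];
        for (int i = 0; i < cols.length; i++) out[i] = row[cols[i]];
        return out;
    }
}

// Fused variant: filtering and projection are folded into the scan, so the plan tree has
// one operator instead of three and each output row costs a single next() call.
class FusedScan implements Operator {
    private final Iterator<Object[]> rows;
    private final Predicate<Object[]> pred;
    private final int[] cols;
    FusedScan(List<Object[]> table, Predicate<Object[]> pred, int[] cols) {
        this.rows = table.iterator(); this.pred = pred; this.cols = cols;
    }
    public Object[] next() {
        while (rows.hasNext()) {
            Object[] row = rows.next();
            if (!pred.test(row)) continue;
            Object[] out = new Object[cols.length];
            for (int i = 0; i < cols.length; i++) out[i] = row[cols[i]];
            return out;
        }
        return null;
    }
}
```

In the nested form, producing one output row costs one virtual next() call per operator; in the fused form the same work happens inside a single call, which is what shrinking the plan tree buys in terms of call overhead, code locality, and branch prediction.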
![1679571568](https://obcommunityprod.oss-cn-shanghai.aliyuncs.com/pord/blog/2023-04/1679571568453.png)

diff --git a/docs/blogs/users/Beike-Dict-service.md b/docs/blogs/users/Beike-Dict-service.md new file mode 100644 index 000000000..2c614c418 --- /dev/null +++ b/docs/blogs/users/Beike-Dict-service.md @@ -0,0 +1,299 @@ +--- +slug: Beike-Dict-service +title: 'Beike: Practice of Cost Reduction and Efficiency Improvement Based on the Real-time Dictionary Service of OceanBase Database' +tags: + - User Case +---

Beike, operated by KE Holdings Inc., is an industry-leading digital housing service platform in China. It is committed to promoting the digitalization and intelligentization of China's housing service industry, aiming at pooling and empowering resources to provide Chinese families with all-in-one, premium, and efficient services, from new home and resale transactions to leasing, decoration, local life, and handyman services.

Beike needed a real-time dictionary service to precisely deduplicate a great number of its business metrics in real time. This posed high requirements on the storage service, which must handle read/write operations of over 100,000 records per second, ensure data persistence, and guarantee data uniqueness. Considering the characteristics of the adopted storage system and Beike's business needs, Beike shortlisted candidates and chose OceanBase Database over HBase. After deploying OceanBase Database, Beike achieved higher query performance and stability, while cutting down on both hardware and O&M costs.

**Build a Real-time Dictionary Service to Solve the Bottleneck of Precise Deduplication**
-------------------------

When it comes to data analytics, the COUNT DISTINCT function is often used to get the exact count of unique values for precise deduplication. Many business metrics of Beike, such as the number of accompanied visits, number of clients, daily active users (DAU), and monthly active users (MAU), rely on the precise deduplication service. For any online analytical processing (OLAP) engine worth its salt, supporting unique value counting for precise deduplication is a must-have feature.

A conventional database performs exact counting flexibly on the raw data, keeping all the details. This method, however, is a real resource hog due to multiple data shuffles during a query. When dealing with high-cardinality data, its performance can go through the floor. To tackle this issue, big data folks often turn to approximate methods, such as HyperLogLog and Count-Min Sketch. These methods consume fewer computing resources, but they introduce approximation errors and cannot produce exact counts. Bitmap is another popular trick for achieving precise deduplication. The idea is simple. Each element of the details is mapped to a bit in a bitmap. The bit value is 1 if the element exists, and 0 if it doesn't. When you need the distinct count, just count the number of bits set to 1 in the bitmap. But here's the catch - this bitmap-based method only works with integer fields. If you want to apply bitmap-based counting to other data types, such as strings, you'll need to build a global dictionary to map non-integer data to integers.

To support fast counting for non-integer data in real time, a real-time dictionary service is required, so that it converts non-integer data into unique integers in Flink jobs, and stores them in a downstream OLAP engine, such as StarRocks. Then, the OLAP engine, which supports bitmap-based counting, can do the job.
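As a toy illustration of the idea, the Java snippet below maps string values to integers through a dictionary and counts distinct values with a bitmap. The class name and sample data are made up for demonstration; production engines use compressed bitmaps (for example, Roaring bitmaps) and a persistent dictionary service rather than an in-memory map.

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

public class BitmapDedupDemo {
    public static void main(String[] args) {
        // Global dictionary: maps each non-integer value (e.g. a user id string) to a unique int.
        Map<String, Integer> dict = new HashMap<>();
        BitSet visitors = new BitSet();

        String[] events = {"user_a", "user_b", "user_a", "user_c", "user_b"};
        for (String user : events) {
            int id = dict.computeIfAbsent(user, k -> dict.size()); // assign the next integer on first sight
            visitors.set(id);                                      // duplicate events set the same bit
        }
        // COUNT(DISTINCT user) equals the number of bits set to 1.
        System.out.println("distinct visitors = " + visitors.cardinality()); // prints 3
    }
}
```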
In a nutshell, the real-time dictionary service is basically a translator that maps non-integer data to integers in real-time data streams. Here's what it does:

* Receives non-integer data (key) from the caller and returns corresponding integers (key:value)
* Ensures that the same key always gets the same value, keeping data persistent and unique

Since it's used in real-time data streams, the service needs to be lightning-fast with minimal latency in response to the caller.

The following figure shows the role of the real-time dictionary service in a data processing flow. During the real-time extract-transform-load (ETL) process, Flink calls the dictionary service, feeds in the raw data, gets the corresponding dictionary values, makes the swap, and writes the mapped values to the OLAP engine.

![1686713915](/img/blogs/users/Beike-Dict-service/image/1686713915588.png)

The dictionary service consists of computing and storage layers. The computing layer handles dictionary registration, data queries, and data processing, and interacts with the caller and the storage layer. The storage layer stores dictionary data and provides query services.

 **1. Computing layer**

This layer handles dictionary registration and generation.

**Dictionary registration**: String fields must be registered with the dictionary service, and each field gets its own dictionary table in the storage service. Dictionary data is stored and queried based on dictionary tables.

**Dictionary generation**: The caller gets dictionary values corresponding to their raw values using the dictionary ID and raw value list. The following figure shows the three steps of a query: 1) query the dictionary table based on the raw value list to get the mapped values, 2) generate new dictionary values for any nonexistent raw values, and 3) return results obtained in steps 1 and 2.

![1686713929](/img/blogs/users/Beike-Dict-service/image/1686713929132.png)

 **2. Storage layer**

This layer handles dictionary data storage and queries and plays a fundamental role in enabling the dictionary service. The storage service needs rock-solid reliability to prevent data loss and duplication, plus the muscle to read/write over 100,000 data rows per second with low latency. Picking the right storage service is arguably a make-or-break factor.

Select a Real-Time Dictionary Storage Service to Prevent Data Loss or Duplication
-------------------

To meet the storage requirements of the real-time dictionary service, the storage service must be able to read/write over 100,000 data rows per second, support data persistence, and guarantee data uniqueness. Data persistence and data uniqueness matter the most because the storage service must ensure zero data loss and zero data duplication (one key corresponding to multiple values, or the other way around). Given the characteristics of Beike's legacy storage system and its business needs, HBase and OceanBase Database were tested and compared.

### 1. Prepare the environment

The OceanBase and HBase clusters for the test were each deployed on three Dell EMC PowerEdge R640 servers, each with 48 CPU cores, 128 GB of memory, and a 1.5 TB NVMe SSD. All test tasks were executed in the same real-time Hadoop cluster. HBase 1.4.9 was used, and the HBase cluster was deployed and configured by the HBase database administrator.
OceanBase Database V3.1.2 was used, with all parameters set to the default values.

### 2. Test data

In the test, Spark Streaming real-time tasks consumed the starrocks-prometheus-metrics topic, which involved 40,000 to 80,000 data rows per second, generated a UUID for each data row, and then called the dictionary service in batches to generate dictionaries, with the batchDuration parameter set to 1 second. The amount of data and thus the stress on the storage service was increased by initiating more real-time tasks, and the throughput capacity of the storage service was evaluated by the latency of the real-time tasks.

The following table describes the three levels of the stress test, and each level lasted 10 minutes in the test.

![1686713967](/img/blogs/users/Beike-Dict-service/image/1686713967140.png)

### 3. Test process and results analysis

#### 1) Test on HBase

 HBase itself supports data persistence, which ensures zero data loss. In addition, HBase calls the get(List) operation to execute batch queries, the incrementColumnValue operation to generate unique auto-increment values, and the checkAndPut operation to guarantee key uniqueness.

 HBase provides a dictionary service through the following procedure:

* Calls the get(List) operation to query dictionary tables in batches.
* Calls the incrementColumnValue operation to batch generate auto-increment unique IDs for data that does not exist in dictionary tables. This ensures that the dictionary values are unique.
* Calls the checkAndPut operation to write the key:value data into the dictionary tables. A successful write means that the corresponding dictionary value is generated, whereas a failed write means that the corresponding dictionary value already exists. This way, the same key will not be written twice.
* Calls the get(List) operation again, using the data that failed to be written in the previous step, to query the dictionary values.

![1686714090](/img/blogs/users/Beike-Dict-service/image/1686714090841.png)

 To improve the data read/write performance, the dictionary tables were pre-split into multiple regions using the HexStringSplit method as recommended by HBase database administrators, so that the data was evenly distributed across different region servers. The batch size of read/write operations was set to 100, an optimal value considering the response times of different batch sizes.

The following table shows the details of batch read/write operations.

![1686714102](/img/blogs/users/Beike-Dict-service/image/1686714102511.png)

The following table shows the test results.

![1686714112](/img/blogs/users/Beike-Dict-service/image/1686714112864.png)

#### 2) Test on OceanBase Database

OceanBase Database stores dictionary data in tables and ensures data uniqueness by using keys as the primary key and setting values to be auto-increment. The following sample statement shows how to create such a table:

    CREATE TABLE `t_olap_realtime_cd_measure_duid_dict` (
      `dict_key` VARCHAR(256) NOT NULL,
      `dict_val` BIGINT(20) NOT NULL AUTO_INCREMENT,
      PRIMARY KEY (`dict_key`)
    ) DEFAULT CHARSET = utf8mb4 PARTITION BY KEY(dict_key) PARTITIONS 10


Compared to HBase, this method simplifies data processing and can do the same job just by executing SQL statements. Here is an example:

* Query existing dictionary values: select dict\_key,dict\_val from t\_olap\_realtime\_cd\_measure\_duid\_dict where dict\_key in (...)
* Insert the nonexistent dict\_key values in the result of the previous step into the database: insert ignore into t\_olap\_realtime\_cd\_measure\_duid\_dict (dict\_key) values (...)
* Query the database again for the data inserted in the previous step: select dict\_key,dict\_val from t\_olap\_realtime\_cd\_measure\_duid\_dict where dict\_key in (...)

Using OceanBase Database, Beike does not need to pay attention to key uniqueness or auto-increment values at the code level, because both are handled by built-in features of the database system. OceanBase Database not only simplifies the data processing flow, but also writes data in batches, which is more efficient compared with writing one data row at a time. The batch size of read/write operations was set to 500 for OceanBase Database.
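For illustration, the three steps above can be wrapped in a small batched JDBC routine like the sketch below. It is a simplified example, not Beike's production code: the table and column names follow the sample DDL, while the class name, connection handling, and error handling are assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DictLookup {
    // Returns the dictionary value for every key, creating values for unseen keys.
    public Map<String, Long> getOrCreate(Connection conn, List<String> keys) throws SQLException {
        Map<String, Long> dict = new HashMap<>();
        // Step 1: query existing dictionary values for the incoming keys.
        query(conn, keys, dict);
        // Step 2: INSERT IGNORE the keys that were not found; dict_val is auto-generated.
        List<String> missing = new ArrayList<>();
        for (String k : keys) if (!dict.containsKey(k)) missing.add(k);
        if (!missing.isEmpty()) {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT IGNORE INTO t_olap_realtime_cd_measure_duid_dict (dict_key) VALUES (?)")) {
                for (String k : missing) { ps.setString(1, k); ps.addBatch(); }
                ps.executeBatch();
            }
            // Step 3: query again for the keys just inserted (or inserted concurrently by others).
            query(conn, missing, dict);
        }
        return dict;
    }

    private void query(Connection conn, List<String> keys, Map<String, Long> dict) throws SQLException {
        if (keys.isEmpty()) return;
        String placeholders = String.join(",", Collections.nCopies(keys.size(), "?"));
        String sql = "SELECT dict_key, dict_val FROM t_olap_realtime_cd_measure_duid_dict"
                + " WHERE dict_key IN (" + placeholders + ")";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int i = 0; i < keys.size(); i++) ps.setString(i + 1, keys.get(i));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) dict.put(rs.getString(1), rs.getLong(2));
            }
        }
    }
}
```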
The following table shows the details of batch read/write operations.

![1686714163](/img/blogs/users/Beike-Dict-service/image/1686714163064.png)

The following table shows the test results.

![1686714172](/img/blogs/users/Beike-Dict-service/image/1686714172781.png)

#### 3) Data analysis and comparison

First, let's compare the batch read throughput (unit: row/s).

| Stress | HBase | OceanBase Database |
| --- | --- | --- |
| Level I | 83109.45 | 158579.1 |
| Level II | 84355.54 | 264192.8 |
| Level III | 76857.87 | 329107.3 |

![1686714194](/img/blogs/users/Beike-Dict-service/image/1686714194863.png)

As mentioned above, the batch size was set to 100 for HBase and 500 for OceanBase Database based on their respective characteristics. The preceding figure shows that the query throughput of OceanBase Database was significantly higher than that of HBase at all three stress levels in the test, with the data volume ranging from 40,000 to 240,000 rows.

**Now, let's compare the batch write throughput (unit: row/s)**.

| Stress | HBase | OceanBase Database |
| --- | --- | --- |
| Level I | 43256.6 | 249612.5 |
| Level II | 64339.58 | 326436.7 |
| Level III | 77805.46 | 358716.2 |

![1686714222](/img/blogs/users/Beike-Dict-service/image/1686714221976.png)

To ensure the uniqueness of keys, HBase uses the checkAndPut method to write one data row at a time, while OceanBase Database takes keys as the primary key, and writes data in batches, specifically, 500 rows at a time. This way, the batch write throughput of OceanBase Database was much higher than that of HBase in the test.

**Now, let's look at the average time, in milliseconds, that each database system took to finish a complete processing cycle**.

| Stress | HBase | OceanBase Database |
| --- | --- | --- |
| Level I | 657.52 | 307.45 |
| Level II | 1000.85 | 386.42 |
| Level III | 1279.63 | 474.59 |

![1686714234](/img/blogs/users/Beike-Dict-service/image/1686714234502.png)

The comparison indicates that:

* OceanBase Database spent 50% less time than HBase in finishing a complete data processing cycle.
* Both HBase and OceanBase Database completed a real-time task involving 40,000 to 80,000 data rows within 1 second.
* HBase took more than 1 second to handle two real-time tasks, involving 80,000 to 160,000 data rows. However, HBase did not show a significant latency because the data volume was uneven.
* HBase took 1.27 seconds on average to complete three real-time tasks, involving 120,000 to 240,000 data rows, showing an increasing latency of the real-time tasks.
* OceanBase Database completed the data processing cycle within 0.5 seconds despite the increasing stress.

**Finally, let's compare the average throughput (unit: row/s)**.
| Stress | HBase | OceanBase Database |
| --- | --- | --- |
| Level I | 25033.94 | 57429.03 |
| Level II | 33161.58 | 91582.48 |
| Level III | 35500.47 | 112002.3 |

![1686714264](/img/blogs/users/Beike-Dict-service/image/1686714264396.png)

The throughput of OceanBase Database was 2 to 3 times that of HBase in the test, and its advantage grew as the data volume increased.

Summary
------------------------------

After the dictionary service is deployed, it writes a great amount of data to the storage service and handles frequent read requests in the early stage. As more dictionaries are created, along with their growing sizes, the dictionary service involves more read requests and fewer write requests. In this test, randomly generated UUID data was used, so all data rows were fully written and read during the entire data processing cycle. This means that the test was more stressful for the storage system than the real online environment.

The performance of HBase and OceanBase Database in handling tasks at three stress levels of the test is described as follows:

* At level I, which involved 40,000 to 80,000 rows, both HBase and OceanBase Database completed data processing within 1 second. In this scenario, both HBase and OceanBase Database met the requirements.
* At level II, which involved 80,000 to 160,000 rows, HBase took a bit more than 1 second to complete data processing, showing a slight latency. In this scenario, both HBase and OceanBase Database met the requirements.
* At level III, HBase took 1.27 seconds to complete data processing, showing an increasing latency. In this scenario, only OceanBase Database met the requirements.
* OceanBase Database did not show a considerable latency until it handled the 280,000 to 560,000 data rows of seven real-time tasks, which took 1.1 seconds to complete.

Given the test statistics, OceanBase Database has obvious advantages in batch reads, batch writes, and throughput. To ensure unique keys and auto-increment values, HBase only writes one data row at a time, making data writes a processing bottleneck. By contrast, OceanBase Database inherently ensures unique keys and auto-increment values, and writes data in batches by executing SQL statements, supporting a higher write throughput.

Considering the data processing capability, resource usage, and data processing complexity, Beike chose OceanBase Database as the storage system for the real-time dictionary service. In the production environment, the deployment of OceanBase Database is simpler, and it has achieved higher query performance and stability, and lower hardware and O&M costs. Beike will apply OceanBase Database in more suitable scenarios.

\ No newline at end of file diff --git a/docs/blogs/users/Beike-Flink-OB.md b/docs/blogs/users/Beike-Flink-OB.md new file mode 100644 index 000000000..2d440c987 --- /dev/null +++ b/docs/blogs/users/Beike-Flink-OB.md @@ -0,0 +1,102 @@ +--- +slug: Beike-Flink-OB +title: 'Performance of Beike’s Real-time Dimension Table Service Improved by 3-4 Times Based on a Flink + OceanBase Solution' +tags: + - User Case +---

Beike, operated by KE Holdings Inc., is an industry-leading digital housing service platform in China.
It is committed to promoting the digitalization and intelligentization of China's housing service industry, aiming at pooling and empowering resources to provide Chinese families with all-in-one, premium, and efficient services, from new home and resale transactions to leasing, decoration, local life, and handyman services.

OceanBase Database is a distributed relational database system developed fully in-house by Ant Group and Alibaba Group since 2010. It provides a native distributed architecture that handles enterprises' complex data processing needs with high performance, reliability, and scalability. As one of Alibaba Group's independent innovations in the database industry, OceanBase Database has been deployed group-wide to support Alipay and other core business lines.

Beike has deployed OceanBase Database to support its real-time dimension table service, among other things. Replacing HBase, OceanBase Database has improved the performance of the real-time dimension table service by 3-4 times, halved the hardware costs, and greatly reduced the O&M costs.

**Drawbacks of an HBase-based Dimension Table Solution and Alternative Solution Selection**
--------------------------

In a typical real-time warehouse or real-time business scenario, Flink often associates a fact table with an external dimension table during real-time stream processing, so as to query the dimension table and supplement the information in the fact table. For example, Beike needs to associate order information with the corresponding product information in a dimension table in real time. Given the fact that a conventional database, such as a MySQL database, can hardly cope with the large data volume of a dimension table and the high real-time QPS of Flink, Beike used to host dimension tables on HBase, which features a distributed, column-oriented NoSQL architecture. HBase delivers pretty good query performance, but has some drawbacks.

**Drawback 1: No support for secondary indexes**

In many scenarios, Flink associates dimension tables not only by their primary keys, but also by some other columns. However, HBase supports only a single index based on row keys, and it supports secondary indexes only with the help of extra features provided by, for example, Apache Phoenix, leading to higher development and maintenance costs.

**Drawback 2: Multiple dependencies, complex deployment, and high costs**

Built on top of the Hadoop ecosystem, HBase relies on Hadoop Distributed File System (HDFS) for persistent data storage, and ZooKeeper for jobs like election, node management, and cluster metadata maintenance. Users must deploy and configure Hadoop, ZooKeeper, and several other components before deploying HBase in the production environment, leading to higher O&M and hardware costs. In some special circumstances, these components even require separate HBase clusters.

![1688630129](/img/blogs/users/Beike-Flink-OB/image/1688630129100.png)

For the above reasons, Beike turned to distributed databases and was attracted by OceanBase Database for its open source architecture, high performance, high reliability, and scalability. Besides, OceanBase Database is a great solution to Beike's business challenges. First, OceanBase Database natively supports secondary indexes. Users can directly create additional indexes on a dimension table to improve its query performance. Second, OceanBase Database relies only on OBServer instead of any external components. It is inherently highly available and is quite easy to deploy.
Third, users can quickly install supporting tools for convenient O&M. For example, users can perform O&M operations on the GUI-based pages of OceanBase Cloud Platform (OCP) or deploy a cluster by using the command-line interface (CLI) of OceanBase Deployer (OBD).

The hardware cost of the HBase-based solution is about 2 times that of a solution based on OceanBase Database. HBase requires two HRegionServer nodes to ensure high availability, and stores three data replicas in Hadoop storage, which means that an HBase cluster usually maintains six data replicas. If the cluster is small, the use of ZooKeeper and Hadoop will leave many redundant servers. By contrast, OceanBase Database stores three data replicas, reducing the hardware cost by half.

![1688630138](/img/blogs/users/Beike-Flink-OB/image/1688630138927.png)

Therefore, Beike intended to substitute OceanBase Database for HBase to support its real-time computing platform. Before hammering out a decision, however, Beike ran a test to comprehensively compare the performance of OceanBase Database and HBase in 1-to-1 and 1-to-N association of real-time dimension tables.

**Performance Comparison between OceanBase Database and HBase**
--------------------------

### 1. Prepare the environment

The OceanBase and HBase clusters for the test were each deployed on three Dell EMC PowerEdge R740 servers, each with 80 CPU cores, 188 GB of memory, and a 2.9 TB NVMe SSD. All test tasks were executed in the same real-time Hadoop cluster. HBase 1.4.9 was used, and the HBase cluster was deployed and configured by the HBase database administrator. OceanBase Database V3.1.2 was used, with all parameters set to the default values.

### 2. Test plan

To verify the impact of dimension table size on query performance, datasets of 100 million, 20 million, and 100 thousand rows were prepared and inserted into both the OceanBase and HBase clusters. The values of the primary key of the table in the OceanBase cluster (rowkeys in the HBase cluster) were sequential values from 1 up to the number of rows in the dataset.
The following example shows the `CREATE TABLE` statement and sample data:

    show create table tb_dim_benchmark_range_partitioned;
    create table `tb_dim_benchmark_range_partitioned`
    (
      t1 bigint(20) NOT NULL,
      t2 varchar(200) DEFAULT NULL,
      ……
      t30 varchar(200) DEFAULT NULL,
      PRIMARY KEY (`t1`)
    ) DEFAULT CHARSET = utf8mb4
      ROW_FORMAT = COMPACT
      COMPRESSION = 'zstd_1.3.8' REPLICA_NUM = 3 BLOCK_SIZE = 16384 USE_BLOOM_FILTER = FALSE TABLET_SIZE =
      134217728 PCTFREE = 0
      partition by range(t1)
      (partition PT1 values less than (10000000),
      partition PT2 values less than (20000000),
      partition PT3 values less than (30000000),
      partition PT4 values less than (40000000),
      partition PT5 values less than (50000000),
      partition PT6 values less than (60000000),
      partition PT7 values less than (70000000),
      partition PT8 values less than (80000000),
      partition PT9 values less than (90000000),
      partition PT10 values less than (100000000));

    select * from tb_dim_benchmark_range_partitioned limit 1;

![1](/img/blogs/users/Beike-Flink-OB/image/1.png)

To prevent the impact of dependent components (such as physical sources and sinks) on the performance of dimension table association during the test, the DataGen SQL connector (supporting the generation of random or sequential records in memory) and BlackHole SQL connector (taking in all input data for performance testing) were used as the data source and sink for SQL testing.

    CREATE TABLE `data_gen_source` (`t1` BIGINT, `t2` VARCHAR, `proctime` AS PROCTIME()) WITH (
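The statement above is cut off in the source. For reference, a complete test job of this kind might look roughly like the following Java sketch using Flink's Table API; the connector options, host name, port, credentials, and generation rate are illustrative assumptions, not the exact settings used in Beike's test.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DimJoinBenchmark {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // Source: DataGen produces random join keys plus a processing-time attribute.
        tEnv.executeSql(
            "CREATE TABLE data_gen_source (t1 BIGINT, t2 STRING, proctime AS PROCTIME()) WITH (" +
            " 'connector' = 'datagen', 'rows-per-second' = '10000'," +          // illustrative rate
            " 'fields.t1.min' = '1', 'fields.t1.max' = '100000000')");

        // Dimension table in OceanBase, accessed through the MySQL-compatible JDBC connector
        // (the MySQL JDBC driver must be on the classpath; the URL and credentials are placeholders).
        tEnv.executeSql(
            "CREATE TABLE dim_table (t1 BIGINT, t2 STRING) WITH (" +
            " 'connector' = 'jdbc', 'url' = 'jdbc:mysql://obproxy-host:2883/test'," +
            " 'table-name' = 'tb_dim_benchmark_range_partitioned'," +
            " 'username' = 'user', 'password' = 'pass')");

        // Sink: BlackHole swallows the output, so only the source and lookup cost is measured.
        tEnv.executeSql(
            "CREATE TABLE blackhole_sink (t1 BIGINT, d2 STRING) WITH ('connector' = 'blackhole')");

        // Processing-time lookup join: each source row probes the dimension table by key.
        tEnv.executeSql(
            "INSERT INTO blackhole_sink " +
            "SELECT s.t1, d.t2 FROM data_gen_source AS s " +
            "JOIN dim_table FOR SYSTEM_TIME AS OF s.proctime AS d ON s.t1 = d.t1");
    }
}
```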
### 3. Test results

1) The following table shows the test data of 1-to-1 dimension table association, where the random values generated by DataGen were associated with index columns in the OceanBase cluster and rowkeys in the HBase cluster.

![1688630156](/img/blogs/users/Beike-Flink-OB/image/1688630156052.png)

2) The following table shows the test data of 1-to-N dimension table association, where the random values generated by DataGen were associated with secondary index columns in the OceanBase cluster.

![1688630163](/img/blogs/users/Beike-Flink-OB/image/1688630163678.png)

Four conclusions can be drawn from the test results:

* For the dimension table with 20 million or 100 million rows (a large data volume), OceanBase Database outperforms HBase in terms of QPS if the task parallelism is low, and the performance of OceanBase Database is 3-4 times higher than that of HBase given a high task parallelism, which is a significant improvement.
* For the dimension table with 100 thousand rows (a small data volume), the QPS of HBase is slightly higher than that of OceanBase Database if the task parallelism is low, and OceanBase Database has obvious advantages given a high task parallelism.
* OceanBase Database delivers unsatisfactory performance in association by non-indexed columns. In a production environment, therefore, columns to be associated should be indexed for the association of a large dimension table. Beike's real-time computing platform can also be enhanced accordingly. For example, if a user joins on non-indexed columns, the SQL diagnostics feature will prompt the user to create indexes.
* OceanBase Database exhibits great performance in 1-to-N association by secondary index columns, which means it meets high QPS requirements.

**Summary**
------

The test results indicate that, in the same environment, OceanBase Database shows better overall performance than HBase, not to mention its native support for secondary indexes, simple deployment, and lower hardware and O&M costs. Eventually, Beike chose OceanBase Database to store dimension tables for its real-time computing platform.

Beike first deployed OceanBase Database Community Edition V3.1.2 and found that it does not support Time to Live (TTL) for regular relational tables. The good news is that OceanBase Database V3.1.4 and later support API models such as TableAPI and the HBase API, and OceanBase Database V4.0 supports global secondary indexes. Beike also suggested that OceanBase further strengthen its connections with the big data ecosystem to better support the import/export of big data to and from OceanBase Database.

\ No newline at end of file diff --git a/docs/blogs/users/CR-Vanguard.md b/docs/blogs/users/CR-Vanguard.md new file mode 100644 index 000000000..b4981611c --- /dev/null +++ b/docs/blogs/users/CR-Vanguard.md @@ -0,0 +1,169 @@ +--- +slug: CR-Vanguard +title: 'CR Vanguard Upgrades Its Core System Database and Improves System Performance By 70%' +tags: + - User Case +---

This article was originally published by Vanguard D-Tech on WeChat Official Accounts Platform.

> China Resources Vanguard (CR Vanguard) is a leading retail chain under CR Group, operating in the Chinese mainland and Hong Kong. With its vast business network, CR Vanguard urgently needs to strengthen the interconnection of its numerous business lines to adapt to the rapid development of its various interrelated business environments, such as online sales, in-store sales, logistics, and finance.
> With the rapid development of information technology and the advancement of digital transformation, databases are playing an increasingly important role as the cornerstone for data management and storage. CR Vanguard hopes to provide efficient, reliable, and secure data management solutions through database upgrades, innovative technologies, and intelligent applications. In this article, Vanguard D-Tech's technical team shares their experience in migrating CR Vanguard's database system to OceanBase Database.

Vanguard D-Tech has been actively working to implement the strategic information security plans of the state, CR Group, and CR Vanguard. We have introduced a home-grown database system to provide continuous support for key business and intelligent operations and improve the operational efficiency of business systems. This way, CR Vanguard can provide better services for end consumers in an efficient cycle that brings down costs, boosts efficiency, and ensures compliance. This will help CR Vanguard maintain sustainable development in the complex and changing market, and keep one step ahead amid fierce competition.

I. Conventional Databases and Their Vulnerabilities
--------------

(i) Conventional databases today

Conventional database systems such as MySQL and Oracle have played a valuable role in data storage and processing. However, in response to the trend of data explosion brought about by the popularization of the Internet and mobile devices, many companies choose to improve the performance and capacity of conventional databases by extending their architectures.

(ii) Common MySQL architectures and extended MySQL architectures

Three MySQL architectures are commonly used:

**Master-slave architecture**.
This architecture allows users to improve database performance and capacity by replicating data to one or more slave servers. + +![1709003527](/img/blogs/users/CR-Vanguard/image/1709003527466.png) + +**Sharded architecture**. In this architecture, the database and tables are sharded to achieve horizontal scaling, and data is distributed to multiple database instances. + +![1709003536](/img/blogs/users/CR-Vanguard/image/1709003536640.png) + +**Read/write splitting architecture**. In this architecture, read requests and write requests are processed on different database instances, which improves concurrency performance. + +![1709003545](/img/blogs/users/CR-Vanguard/image/1709003545687.png) + +When the business volume increases to a point where none of the preceding three architectures can ensure business stability, users often deploy an extended MySQL architecture to handle performance issues by integrating the capabilities of a sharded architecture and those of a read/write split architecture. + +![1709003557](https://obcommunityprod.oss-cn-shanghai.aliyuncs.com/prod/blog/2024-02/1709003557083.png) + +It is noteworthy that an extended cluster architecture is more complex, leading to soaring O&M and development costs and a variety of challenges, such as the barrel effect, where a fault may drag down the stability of the entire system. + +(iii) Vulnerabilities + +Conventional databases, while doing a great job in many scenarios, bother users due to some of their vulnerabilities. + +* Performance bottleneck: A conventional database will reach its performance limit soon when it handles a flood of concurrent requests. For example, a MySQL backend database can easily support a monitoring system that consists of hundreds of hosts and works with 10,000-20,000 monitoring metrics. However, when tens of thousands of hosts are working in the system to handle 500,000 or more metrics, severe data delay, sometimes more than 30 minutes, is likely to happen. In this case, the monitoring data is basically useless. +* Limited scalability: Conventional databases are not scalable enough to meet our growing demand for data processing due to limitations on hardware, such as CPU, memory, and storage. The increasing data volume has caused quite a few issues, such as database performance degradation and extended response time. To ensure the database health, we must keep an eye on the data volume and regularly clear our data, which is actually a compromise on maintaining database performance. For the best of the business, we should keep as much data available as possible. +* High maintenance costs: The maintenance and management of conventional databases are laborious and consume a large amount of resources. +* Security issues: The security of conventional databases is usually one of the top concerns. A variety of measures need to be taken to guarantee data security. For example, MySQL databases lack a holistic solution for data backup and recovery. This leads to incomplete backups, lost or corrupted backup files, prolonged recovery time, and other issues. In the middleware-based MySQL architecture, it is tricky to audit and track operations such as user access and data modifications. In most cases, it is difficult to track down the user who initiated a problematic SQL query. +* Insufficient high availability: Most conventional databases can hardly ensure high availability in the event of a failure, resulting in business interruptions. 
CR Vanguard has prepared all possible solutions, conventional and novel, for cluster high availability, such as the master-slave architecture, multi-slave architecture, database sharding, and geo-redundancy. In extreme cases, the recovery time objective (RTO) could reach 10 to 30 minutes. In some cases, we needed to manually decide whether to switch the system, and even the whole team had to work together analyzing risky and important database operations and making decisions. + +II. Database Selection and Upgrading +--------- + +Given the aforementioned reasons, CR Vanguard started researching domestically-developed databases, which had drawn so much attention in recent years. + +We considered the following factors in database selection. + +* Independent research and development: whether the database is a proprietary product independently developed by a Chinese company that owns its intellectual property rights, and whether the database is compatible with the systems developed by CR Vanguard. +* Compatibility: compatibility with our existing databases (such as MySQL and Oracle) and systems (such as CentOS and Red Hat) in terms of protocols, data formats, and APIs... +* High availability: faulty node troubleshooting, disaster recovery, and data backup... +* Scalability: scaling by adding nodes, data partitioning, and load balancing... +* Performance: satisfactory read/write speed, concurrent processing performance, and data processing performance... +* Costs: migration costs, development costs, and host storage costs... +* Business coupling: adaptability to business applications and performance jitter of SQL execution in different scenarios. + +We shortlisted two database products, OceanBase Database and a distributed database system (hereinafter referred to as Database A), and compared their performance, costs, and compatibility in benchmarks and stress tests. + +(i) Performance comparison in benchmark tests + +To get a fair conclusion, the two candidates featuring different architectures were compared based on a total of 64 CPU cores and 256 GB of memory, regardless of the number of hosts in use. The test results are shown in the following figure: + + +![1709003685](/img/blogs/users/CR-Vanguard/image/1709003685929.png) + +Details of the test results are described as follows: + +* In the oltp\_update\_index test, the QPS of OceanBase Database is roughly two times that of Database A in scenarios with different levels of concurrency. +* In the oltp\_read\_only, oltp\_read\_write, oltp\_update\_non\_index, and oltp\_insert tests, OceanBase Database outperforms Database A by a 40% higher QPS on average in scenarios with different levels of concurrency. +* In the oltp\_point\_select and oltp\_write\_only tests, both databases have their own strong points in scenarios with different levels of concurrency, showing comparable overall performance. + +![1709003708](/img/blogs/users/CR-Vanguard/image/1709003708334.png) + +(ii) Performance comparison in stress tests + +The stress tests were performed in the same environment as the benchmark tests, and the test results are as follows: + +![1709003720](/img/blogs/users/CR-Vanguard/image/1709003720277.png) +OceanBase Database outperformed Database A in the stress tests, delivering twice the write QPS and four times the query QPS, with the latency being only 1/4 of that of its rival. + +![1709003734](/img/blogs/users/CR-Vanguard/image/1709003734121.png) + +The comparison results indicate that OceanBase Database showed better overall performance. 
In addition, OceanBase Database could maximize the utilization of storage resources and reduce resource fragmentation, cutting storage costs by about 60% compared to MySQL. Conservatively, OceanBase Database could bring down the total cost of ownership by 30%. As for other features such as compatibility, high availability, and scalability, there was not much difference between the two, as shown in the following figure. + +![1709003747](/img/blogs/users/CR-Vanguard/image/1709003747778.png) + +III. Migration and Upgrade Solution for a Core System +------------- + +After exhaustive system testing, we selected a core business system to upgrade its database. + +We first assessed the performance, availability, and scalability of our existing database, and determined the migration objectives and plan. Then, we came up with a detailed migration solution covering data backup, data conversion, node migration, and post-migration testing based on the assessment results. After the migration, we merged the original shards and launched continuous monitoring and maintenance of the new system to ensure that it operates stably and meets our business requirements. + +(i) Migration assessment + +The system used a middleware-based MySQL database cluster of a sharded architecture, which is shown in the following figure. + +![1709003792](https://obcommunityprod.oss-cn-shanghai.aliyuncs.com/prod/blog/2024-02/1709003792059.png) + +The cluster consisted of 5 master database instances. Each master instance was divided into 10 shards and provided with two slave instances. Master and slave instances were integrated into a logical database based on middleware to achieve read-write splitting. We performed the following steps in the migration assessment. + +Step 1: performance estimation. The database contained 15 TB of data produced by the system. The estimated concurrency was 3,000. The top 50 high-frequency SQL statements were monitored in the backend. + +Step 2: consideration of availability and scalability. The scalability of our middleware-based MySQL architecture had already been greatly improved. We could quickly increase its capacity and computing power by adding new MySQL clusters and configuring middleware routing settings. However, a short service downtime was inevitable during cluster scaling. + +Step 3: evaluation of data volume after migration. The volume of migrated data might occupy 6 TB of space in OceanBase Database, which therefore must have a disk size of at least 7 TB to ensure the disk health. + +Step 4: stress test. We performed a high-frequency SQL stress test to verify the data loading capacity of the database. + +Step 5: evaluation and analysis of associated business. We made clear all business modules associated with the system and verified them one by one. + +In the assessment, we verified the feasibility of the new system and estimated the requirements of resources such as CPU, memory, and disks of OceanBase Database. + +(ii) Migration solution + +A challenge of the migration was how to do it smoothly without disturbing our business modules that were running stably around the clock. We designed a neat procedure and migrated the read business first, and then the write business. This read-write splitting strategy ensured a stable and smooth system migration and minimized the impact on end-user experience. + +![1709003832](/img/blogs/users/CR-Vanguard/image/1709003832627.png) + +Another challenge was to merge the shards of the original MySQL cluster into OceanBase Database. 
We had to check each large table to confirm the uniqueness of each data record and configure appropriate partitioning keys for large tables to ensure the optimal performance of hotspot SQL queries. It was also necessary to make sure that historical data could be purged quickly to guarantee easy and efficient O&M.

To those ends, we determined a migration and modification plan based on extensive analysis and verification.

First, we confirmed the large tables with no duplicate data. They needed no modifications after table merging. Second, we modified the large tables that might have duplicate data after migration to ensure data consistency.

![1709003848](/img/blogs/users/CR-Vanguard/image/1709003848217.png)

Finally, we adapted our read/write business to dual data sources, and migrated the business in batches based on rational rules.

![1709003876](/img/blogs/users/CR-Vanguard/image/1709003876473.png)

(iii) Real-time processing of streaming data

Kafka plays a crucial role in processing data streams associated with database operations. Kafka supports many message formats, such as Canal, SharePlex, and Debezium, which are widely used in the industry. OceanBase Migration Service (OMS), a data synchronization and migration tool provided with OceanBase Database, supports these formats well, making the data transfer process smoother and more stable and reliable while significantly reducing migration and development costs.

1\. Data stream processing in the original system based on binlogs + CA scheduling

![1709003892](/img/blogs/users/CR-Vanguard/image/1709003892136.png)

In our original system, Kafka connectors captured changes in cluster data in real time by listening to binlogs of all MySQL nodes, making the database O&M complicated. Besides, certificate authority (CA) scheduling suffered considerable push delay. Data was pushed inefficiently when the business traffic went high, resulting in poor system reliability.

2\. Real-time processing of streaming data based on OMS + Flink scheduling

![1709003909](/img/blogs/users/CR-Vanguard/image/1709003909382.png)

OMS provides a GUI-based console for centralized task management and supports data synchronization to a specific point in time with low maintenance costs. This solution uses Flink streams to achieve real-time data processing and pushes processed data to the destination system in real time through stream sinks and table sinks of Flink. This ensures that the destination system can receive and process data in real time. The solution also supports periodic state checks at checkpoints during task execution to ensure that a faulty task can be restored to the state at the checkpoint.

The OMS + Flink solution allows us to manage real-time data with simple operations and completes the entire data transfer process within 2 seconds. This way, every data record can be accurately and reliably pushed to end users for consumption in real time.

(iv) Migration and merging results

Our all-around preparation and verification paid off. We successfully migrated the core system to OceanBase Database and merged its shards with zero impact on end-user experience and business operation stability. The results of production verification indicate that the system performance was improved by about 70%, and the costs were reduced by about 50%.

IV. Vision for the Future
------

Next, we will make every endeavor to build a full-featured database system and strengthen our skills to get the most out of it.
We will also optimize resource allocation and improve monitoring and O&M mechanisms to boost efficiency at lower costs and achieve sustainable business development.

\ No newline at end of file diff --git a/docs/blogs/users/Yunji.md b/docs/blogs/users/Yunji.md new file mode 100644 index 000000000..33fc69ed7 --- /dev/null +++ b/docs/blogs/users/Yunji.md @@ -0,0 +1,83 @@ +--- +slug: Yunji +title: "Slashing Costs by 87.5%: Yunji's Cost-cutting Adventure with OceanBase Database" +tags: + - User Case +---

# Slashing Costs by 87.5%: Yunji's Cost-cutting Adventure with OceanBase Database

Yunji Sharing Technology Co., Ltd. (Yunji) is an e-commerce firm like Taobao and JD.com, except that its business is more driven by social networks. Yunji offers members killer deals by cherry-picking high-quality, value-for-money goods, helping millions of shoppers snag reliable goods at "wholesale prices". With recent market rollercoasters, we're on a mission to pinch every penny, especially when it comes to server and labor costs. Right now, our servers are eating up over 85% of our budget, so trimming costs is high on our to-do list.

For companies, cost optimization is about cutting expenses and enhancing efficiency. In today's cutthroat environment, many companies are confronted with such challenges. For developers, cost optimization is a kick in the pants, requiring them to clean up clunky code and boost the company's tech game. And for us database administrators, supporting the company's operations on a shoestring budget is a badge of honor. Plus, it's a great chance to pick up knowledge and methods, from cost analysis and assessment to server improvement and manpower optimization.

Business Challenges
====

Before making any decisions, we got the measure of our business. Like many other Internet companies, Yunji also hit bumps in the road with its original IT architecture, which is shown in the following figure. The application layer on top runs on microservices, with a cache provided to handle fast write operations during flash sales.

![1701329029](/img/blogs/users/Yunji/image/1701329029113.png)

The database layer consisted of database instances on Tencent Cloud. We created numerous database instances due to the architecture of microservices. In general, each system was empowered by a microservice that was supported by the following three database instances: a primary database, a standby database, and a user-dedicated standby database.

Business data was pumped through components such as Flink and Canal to the big data module, where the data was crunched to generate T+0 and T+1 reports. Some of the crunched data was synchronized back to the business database to support user queries. That process formed a data loop.

A link was built for OceanBase Migration Service (OMS) to migrate data from Tencent Cloud database instances to an OceanBase cluster. Why? For example, we created 32 database instances to support an order system, which needed to handle aggregation queries involving all its instances. As such system-wide queries were beyond the original sharded architecture, we had to synchronize the data to the OceanBase cluster and do the job there.

So, what's the hitch with this architecture? Well, here's the scoop:

**Data silos**: Technically, one query should do the trick with one execution.
However, to meet different business needs, multiple replicas were generated for the same data set and were stored in multiple storage systems, increasing the request volume and number of executions. A large number of data replicas also pushed up the storage costs.

**Database and table sharding**: Database and table sharding depended on middleware products, each with its own features, which caused trouble for the business and O&M teams:

* We had to design a variety of tables for different queries, and all queries were based on partitioning keys. This increased the business complexity.
* Cross-shard aggregation queries and join queries turned into a nightmare because data had to be collected for processing at the application layer.
* O&M was painful. Database scaling required tons of operations and data migration, and the backup and restore processes were complicated.

**Higher operational costs**: Horizontal or vertical decomposition of microservices resulted in increasing instance count and resource costs. In addition, instance resources were inefficiently used. CPU utilization rarely hit 20%, yet business spikes could still overwhelm the servers. We had to reserve some hardware resources.

**Data safety**: To meet the requirements of Multi-Level Protection Scheme (MLPS), our system must be deployed in at least three IDCs across two regions for disaster recovery, which jacked up costs. We created on-premises and remote backups on Tencent Cloud for our production environment, and suffered O&M headaches during a remote backup, such as time-consuming data pulls and frequent failures. The tremendous data volume also incurred sky-high traffic costs.

Cost Optimization Strategies
======

To address those architectural challenges, we came up with a few cost optimization strategies:

* Replace the complex sharded architecture that involves data loops and lengthy workflows, and is prone to faults.
* Move archived data to OceanBase Database, which supports a high compression ratio. This saves storage costs and raises capacity limits.
* Merge business instances to consolidate workloads onto fewer servers and increase resource utilization during off-peak hours. For example, more resources can be allocated to handle e-commerce business requests during the day, and to generate T+0 and T+1 reports at night.
* Consider replacing conventional databases with distributed ones that support hybrid transaction and analytical processing (HTAP), and executing online and analytical tasks within the same cluster. This simplifies data links, reduces the complexity of the business architecture, and brings down O&M workload. Plus, distributed databases can achieve higher performance with fewer machine resources under the same workload. This also saves costs.

What about the hurdles?

Switching to a new architecture takes time to prove it can carry our business growth. Convincing the development team to ramp up their workload is tough, too: it's about selling the benefits of architecture transformation and a new learning curve. So, human resources and technological adaptation were major hurdles.

A Budget-saving Solution based on OceanBase Database
=====================

We designed a cost optimization solution in line with the following rules:

* Strong system stability and zero business interruptions.
* High compatibility and simplified deployment with a short learning curve.
* No over-engineering, which may cripple the business system's adaptability to business fluctuations in the name of savings.

OceanBase Database got the nod for its:

* Compatibility with MySQL, which means less development work and stable version iterations.
* High throughput and solid ecosystem backing.
* HTAP capabilities and horizontal scaling, which are well suited for our transaction processing (TP) and analytical processing (AP) needs.

![1701329130](/img/blogs/users/Yunji/image/1701329130162.png)

Using OceanBase Database, we transformed the original architecture, which consisted of a cloud database layer, an extract, transform, and load (ETL) layer, and a big data module, into a streamlined OceanBase cluster supporting HTAP tasks. The new architecture hits our cost optimization goals because it works with fewer intermediate data links, reduces the development work, and provides a recovery time objective (RTO) within 8 seconds and a recovery point objective (RPO) of 0, meeting the MLPS requirements.

![1701329144](/img/blogs/users/Yunji/image/1701329144538.png)

Summary
===

This article delves into the reasons for cost optimization in today's market environment, our original database architecture and its vulnerabilities, and the new solution that we adopted to achieve cost optimization. Not only does the distributed architecture of OceanBase Database suffice for our business scenarios, but its high performance, high compression ratio, high reliability, and HTAP features also help reduce the hardware, manpower, and O&M costs. Amid the recent market changes, we have slashed the monthly server cost from more than 8 million Chinese yuan at the peak to less than 1 million Chinese yuan.

The result of cost optimization is remarkable, and the improvement of technology and system adaptability has played a significant role. Other companies may find our solution inspiring when making their own cost optimization strategies. We have proved that technical tweaks amid shifting sands can cut costs, boost efficiency, and bring more financial benefits.