feat: import braft #3182
Base branch: 3.5
Conversation
Pull Request Overview
This PR integrates braft (Baidu Raft) into Pika to provide distributed consensus and strong consistency guarantees, transitioning from asynchronous master-slave replication to Raft-based replication.
Key Changes:
- Added praft module with Raft consensus implementation
- Implemented binlog-based integration between Raft and storage layer
- Added Raft cluster management commands (RAFT.CLUSTER, RAFT.NODE, RAFT.CONFIG)
Reviewed Changes
Copilot reviewed 23 out of 23 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| src/praft/src/praft.cc | Core Raft implementation including RaftManager, PikaStateMachine, and command application logic |
| src/praft/include/praft/praft.h | Public API for Raft functionality including manager, node, and state machine classes |
| src/praft/src/binlog.proto | Protobuf definitions for binlog entries used in Raft log replication |
| src/storage/src/batch.cc | Batch operation abstraction supporting both direct RocksDB writes and Raft binlog submission |
| src/storage/include/storage/batch.h | Batch interface definitions for storage operations |
| src/pika_command.cc | Modified DoBinlog() to submit write commands to Raft instead of traditional binlog |
| src/pika_kv.cc | Modified SetCmd::Do() to skip RocksDB writes in Raft mode (writes occur in on_apply) |
| src/pika_raft.cc | Implementation of Raft management commands |
| include/pika_raft.h | Raft command declarations and g_in_raft_apply flag |
| CMakeLists.txt | Build configuration updates for braft, brpc, and leveldb dependencies |
`src/praft/src/praft.cc` (outdated)

```cpp
LOG(INFO) << "Parsed command: " << argv[0] << " with " << (argv.size() - 1) << " args";

// Set thread-local flag to indicate we're in on_apply context
g_in_raft_apply = true;
```
**Copilot AI**, Oct 24, 2025:
The thread-local flag g_in_raft_apply is set without RAII protection. If an exception is thrown between lines 872 and 889, the flag will remain true and potentially cause incorrect behavior. Consider using an RAII guard class to automatically reset the flag.
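The suggested RAII approach can be sketched as follows; `RaftApplyGuard` and `FlagAfterThrowingApply` are illustrative names, not part of the PR:

```cpp
#include <cassert>

// Hypothetical stand-in for the PR's thread-local flag (the real one lives
// in src/praft/src/praft.cc).
thread_local bool g_in_raft_apply = false;

// RAII guard: sets the flag on construction and restores the previous value
// on destruction, so the flag is cleared even if the guarded code throws.
class RaftApplyGuard {
 public:
  RaftApplyGuard() : prev_(g_in_raft_apply) { g_in_raft_apply = true; }
  ~RaftApplyGuard() { g_in_raft_apply = prev_; }
  RaftApplyGuard(const RaftApplyGuard&) = delete;
  RaftApplyGuard& operator=(const RaftApplyGuard&) = delete;

 private:
  bool prev_;
};

// Returns the flag value observed after a guarded scope that throws:
// the destructor runs during stack unwinding, so the flag is reset.
inline bool FlagAfterThrowingApply() {
  try {
    RaftApplyGuard guard;
    throw 42;  // simulate an exception inside the apply path
  } catch (...) {
  }
  return g_in_raft_apply;
}
```

With this guard, the `= true` / `= false` pair can no longer get out of sync regardless of the exit path.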
```cpp
auto node = GetRaftNode(db_name);
if (!node) {
  promise.set_value(rocksdb::Status::NotFound("Raft node not found"));
  return pstd::Status::NotFound("Raft node not found for db: " + db_name);
}
```
**Copilot AI**, Oct 24, 2025:
The GetRaftNode() call at line 771 acquires a shared lock on nodes_mutex_, but raft_nodes_ is also accessed in other methods like CreateRaftNode() with unique locks. Ensure all accesses to raft_nodes_ are properly synchronized to prevent race conditions.
`src/storage/src/batch.cc` (outdated)

```cpp
case pikiwidb::DataType::kSets: return kSets;
case pikiwidb::DataType::kZSets: return kZSets;
case pikiwidb::DataType::kStreams: return kStreams;
default: return kAll;  // should not happen
```
**Copilot AI**, Oct 24, 2025:
The default case returns kAll which may not be appropriate for an unknown data type. Consider logging an error and returning a more explicit error state, or throwing an exception to signal invalid input.
Suggested change:

```cpp
default:
  LOG(ERROR) << "Unknown proto DataType: " << static_cast<int>(proto_type) << ", returning kAll";
  return kAll;  // should not happen
```
`src/pika_server.cc` (outdated)

```cpp
// TODO: pass the promise into the closure so the result is set after the Raft apply completes
// For now this returns OK immediately; it should actually wait for the Raft apply to finish
```
**Copilot AI**, Oct 24, 2025:
The TODO comment indicates that the promise is set to OK immediately (line 286) without waiting for Raft to complete. This creates a race condition where the client may receive success before the data is actually replicated. The promise should only be set after Raft's on_apply completes successfully.
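A minimal sketch of the fix is to fulfil the promise only from the closure's `Run()`, so the client unblocks after the apply completes; `Status`, `WriteDoneClosure`, and `SimulateApply` here are simplified stand-ins for the PR's `rocksdb::Status` and braft closure types:

```cpp
#include <cassert>
#include <future>
#include <memory>
#include <string>

// Simplified stand-in for rocksdb::Status.
struct Status {
  bool ok_;
  std::string msg_;
  bool ok() const { return ok_; }
};

// Hypothetical closure: holds the client's promise and fulfils it only when
// Run() is invoked, i.e. after the Raft apply has actually completed.
class WriteDoneClosure {
 public:
  explicit WriteDoneClosure(std::shared_ptr<std::promise<Status>> p)
      : promise_(std::move(p)) {}
  void SetStatus(Status s) { status_ = std::move(s); }
  void Run() {
    // Self-delete like a braft closure; the guard frees us on return.
    std::unique_ptr<WriteDoneClosure> self_guard(this);
    promise_->set_value(status_);  // the client unblocks only now
  }

 private:
  std::shared_ptr<std::promise<Status>> promise_;
  Status status_{true, ""};
};

// Drives the closure the way on_apply would and returns what the client sees.
inline Status SimulateApply() {
  auto promise = std::make_shared<std::promise<Status>>();
  std::future<Status> future = promise->get_future();
  auto* done = new WriteDoneClosure(promise);
  done->SetStatus(Status{true, "applied"});
  done->Run();  // in the real code, called from on_apply
  return future.get();
}
```

The key property is that no `set_value()` happens outside `Run()`, so success cannot be reported before replication.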
`src/pika_command.cc` (outdated)

```cpp
// Wait for Raft to apply (with 10 second timeout)
auto wait_status = future.wait_for(std::chrono::seconds(10));
```
**Copilot AI**, Oct 24, 2025:
[nitpick] The 10-second timeout is hardcoded. Consider making this configurable through pika.conf to accommodate different deployment scenarios and network conditions.
Suggested change:

```cpp
// Wait for Raft to apply (with configurable timeout)
auto wait_status = future.wait_for(std::chrono::seconds(g_pika_server->raft_apply_timeout_sec()));
```
`src/praft/src/praft.cc` (outdated)

```cpp
// TODO: For now, use db0 as default. Need to encode db_name in log.
// Could prepend db_name to Redis protocol: "<db_name>|*3\r\n..."
std::string db_name = "db0";
```
**Copilot AI**, Oct 24, 2025:
[nitpick] The default db_name is hardcoded as 'db0'. This assumption may not hold for all deployments. Consider extracting this as a configuration parameter or determining it dynamically based on the actual database configuration.
Suggested change:

```cpp
// Use configured default db_name if not prepended in log.
// Could prepend db_name to Redis protocol: "<db_name>|*3\r\n..."
std::string db_name;
if (g_pika_conf && !g_pika_conf->default_db().empty()) {
  db_name = g_pika_conf->default_db();
} else {
  db_name = "db0";  // Fallback if config not available
}
```
`src/pika_kv.cc` (outdated)

```cpp
// Plan A: If Raft is enabled, skip actual write on first call
// The write will happen when on_apply executes this command
// Use thread-local flag to detect if we're in on_apply context
if (g_pika_server->GetRaftManager() && !pika_raft::g_in_raft_apply) {
```
**Copilot AI**, Oct 24, 2025:
There's a potential null pointer dereference if g_pika_server is null. Add a null check for g_pika_server before calling GetRaftManager().
Suggested change:

```cpp
if (g_pika_server && g_pika_server->GetRaftManager() && !pika_raft::g_in_raft_apply) {
```
`src/praft/src/praft.cc` (outdated)

```cpp
if (redis_proto_data[pos] != '*') {
  LOG(ERROR) << "Invalid Redis protocol: missing '*'";
```
**Copilot AI**, Oct 24, 2025:
No bounds checking before accessing redis_proto_data[pos]. If redis_proto_data is empty, this will cause undefined behavior. Add a check to ensure pos < redis_proto_data.size() before accessing.
Suggested change:

```cpp
if (redis_proto_data.empty() || pos >= redis_proto_data.size() || redis_proto_data[pos] != '*') {
  LOG(ERROR) << "Invalid Redis protocol: missing '*' or empty input";
```
`src/storage/src/batch.cc` (outdated)

```cpp
// TODO: populate the Raft and binlog metadata
// binlog_.set_term(/* current term */);
// binlog_.set_log_index(/* current index */);
// binlog_.set_filenum(/* current filenum */);
// binlog_.set_offset(/* current offset */);
```
**Copilot AI**, Oct 24, 2025:
Critical metadata fields (term, log_index, filenum, offset) are not being populated. These fields are essential for proper Raft log management and recovery. This TODO must be addressed before production use.
This reverts commit 7ad42a2.
Pull Request Overview
Copilot reviewed 24 out of 24 changed files in this pull request and generated 18 comments.
Comments suppressed due to low confidence (7)
src/pika_raft.cc:1
- The comment contains Chinese text '风格'. Consider translating to English:
// Append binlog (pikiwidb_raft style)
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comment is in Chinese. Consider translating to English:
// Create WriteDoneClosure and pass promise
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comment is in Chinese. Consider translating to English:
// Serialize binlog
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comment is in Chinese. Consider translating to English:
// Create Raft task
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comment is in Chinese. Consider translating to English:
// Submit to Raft
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comment is in Chinese. Consider translating to English:
// Parse binlog
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
src/pika_raft.cc:1
- The comments are in Chinese. Consider translating to English:
// Call Storage::OnBinlogWrite() directly to apply binlogand// Note: log_index is currently set to 0, can be passed from external caller later
// Copyright (c) 2015-present, Qihoo, Inc. All rights reserved.
`src/storage/src/redis_strings.cc` (outdated)

```cpp
ScopeRecordLock l(lock_mgr_, key);
return db_->Put(default_write_options_, key, strings_value.Encode());

// The Strings DB only has the default column family, index 0
```
**Copilot AI**, Oct 31, 2025:
The comment is in Chinese. For better maintainability and international collaboration, code comments should be in English. Consider translating to: // Strings DB only has the default column family, index 0
Suggested change:

```cpp
// Strings DB only has the default column family, index 0
```
`src/praft/src/praft.cc` (outdated)

```cpp
  continue;
}

// Parse the Protobuf binlog directly
```
**Copilot AI**, Oct 31, 2025:
The comment is in Chinese. For consistency with the rest of the codebase, consider translating to English: // Parse Protobuf binlog directly
Suggested change:

```cpp
// Parse Protobuf binlog directly
```
`src/praft/src/praft.cc` (outdated)

```cpp
  continue;
}

// Apply the binlog to storage and get the result
```
**Copilot AI**, Oct 31, 2025:
The comment is in Chinese. Consider translating to English: // Apply binlog to storage and get result
Suggested change:

```cpp
// Apply binlog to storage and get result
```
`src/praft/src/praft.cc` (outdated)

```cpp
// Apply the binlog to storage and get the result
rocksdb::Status apply_status = g_pika_server->GetRaftManager()->ApplyBinlogEntry(log_str);

// Set the closure status based on the apply result
```
**Copilot AI**, Oct 31, 2025:
The comment is in Chinese. Consider translating to English: // Set closure status based on application result
Suggested change:

```cpp
// Set closure status based on application result
```
```cpp
for (const auto& entry : binlog.entries()) {
  uint32_t cf_idx = entry.cf_idx();


```
**Copilot AI**, Oct 31, 2025:
There are extra blank lines before the db variable declaration. Consider removing one of the blank lines for consistency with the rest of the codebase.
```cpp
void Cmd::DoBinlog() {
  // In Raft mode, skip writing the traditional binlog (use the Protobuf binlog instead)
```
**Copilot AI**, Oct 31, 2025:
The comment is in Chinese. Consider translating to English: // If in Raft mode, skip writing binlog (use Protobuf binlog instead)
Suggested change:

```cpp
// If in Raft mode, skip writing binlog (use Protobuf binlog instead)
```
```cpp
uint64_t before_do_binlog_us = pstd::NowMicros();
this->command_duration_ms = (before_do_binlog_us - before_do_command_us) / 1000;
DoBinlog();
```
**Copilot AI**, Oct 31, 2025:
Trailing whitespace at the end of the line. Consider removing it for code cleanliness.
Suggested change: remove the trailing whitespace after `DoBinlog();`.
```diff
 -DCMAKE_INSTALL_PREFIX=${STAGED_INSTALL_PREFIX}
 -DCMAKE_BUILD_TYPE=${LIB_BUILD_TYPE}
--DWITH_GFLAGS=ON
+-DWITH_GFLAGS=OFF
```
**Copilot AI**, Oct 31, 2025:
The glog library is being configured with -DWITH_GFLAGS=OFF (changed from ON). Since gflags is a dependency and is already being built, this may cause compatibility issues. Consider verifying that glog without gflags integration is intentional and doesn't break existing logging configurations that may depend on gflags.
Suggested change:

```
-DWITH_GFLAGS=ON
```
```diff
 -DBUILD_TESTING=OFF
 -DBUILD_SHARED_LIBS=OFF
--DWITH_UNWIND=${LIBUNWIND_ON}
+-DWITH_UNWIND=OFF
```
**Copilot AI**, Oct 31, 2025:
The glog library is being configured with -DWITH_UNWIND=OFF (changed from ${LIBUNWIND_ON}). This disables stack trace support in glog, which can make debugging crashes more difficult. Consider verifying that disabling libunwind support is intentional and acceptable for production use.
Suggested change:

```
-DWITH_UNWIND=ON
```
```diff
 DEPENDS
 URL
-https://github.com/madler/zlib/releases/download/v1.2.13/zlib-1.2.13.tar.gz
+https://github.com/madler/zlib/releases/download/v1.3.1/zlib-1.3.1.tar.gz
```
**Copilot AI**, Oct 31, 2025:
The zlib version is being upgraded from 1.2.13 to 1.3.1. While version upgrades are generally good, ensure this has been tested thoroughly as zlib is a critical compression library used throughout the system. The corresponding MD5 hash has been updated, which is correct.
## Pika Raft Data Consistency Implementation - Code Review Document

### 📋 Contents

### Implementation Goals

**Problem to solve**

Upgrade Pika from asynchronous master-slave replication to a strongly consistent distributed system.

**Technology selection**

### Implementation Path from Scratch

#### Step 1: Integrate the braft framework

**1.1 Create the praft module**

Directory structure:

Core classes:

Files: src/praft/include/praft/praft.h, src/praft/src/praft.cc

**1.2 Implement Raft commands**

Files: src/pika_raft.cc, include/pika_raft.h

#### Step 2: Design the data consistency scheme
**Problem analysis**

Traditional flow (non-Raft):

Problem: the master returns success but a slave has not yet written the data → inconsistency.

Target flow (Raft):

Guarantee: when the client receives success, the data has already been written to a majority of nodes.

**Core design decisions**

- Decision 1: when to submit to Raft? Submit in `DoBinlog()`.
- Decision 2: when to write to RocksDB? Write in the `on_apply()` callback.
- Decision 3: how to avoid recursion? Use a `thread_local bool g_in_raft_apply` flag to detect `on_apply` calls and avoid a second submission.

#### Step 3: Implement the synchronous wait mechanism
**Problem**

Raft's `apply()` is asynchronous, but the client needs to wait synchronously for the result.

**Solution**

Use C++11 `promise`/`future`. Key point: the closure's `Run()` method calls `promise->set_value()`.

#### Step 4: Implement the command replay mechanism
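The client-side half of this wait mechanism can be sketched as below, assuming a simplified `std::string` result in place of `rocksdb::Status`; `WaitForApply`, `Demo`, and `TimeoutDemo` are illustrative names:

```cpp
#include <cassert>
#include <chrono>
#include <future>
#include <string>

// Client-side sketch of the synchronous wait: the command thread blocks on
// the future with a timeout while the Raft closure fulfils the promise.
inline std::string WaitForApply(std::future<std::string>& fut,
                                std::chrono::milliseconds timeout) {
  auto wait_status = fut.wait_for(timeout);
  if (wait_status == std::future_status::timeout) {
    return "ERR raft apply timeout";  // mapped to an error reply to the client
  }
  return fut.get();
}

// Happy path: the promise is fulfilled (by the closure's Run() in the PR).
inline std::string Demo() {
  std::promise<std::string> p;
  auto fut = p.get_future();
  p.set_value("OK");  // in the real code, done by WriteDoneClosure::Run()
  return WaitForApply(fut, std::chrono::milliseconds(100));
}

// Timeout path: the promise is never fulfilled, so wait_for times out.
inline std::string TimeoutDemo() {
  std::promise<std::string> p;
  auto fut = p.get_future();
  return WaitForApply(fut, std::chrono::milliseconds(1));
}
```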
**Log format**

The Redis protocol was chosen as the Raft log format.

Advantages:

- `ToRedisProtocol()` method

**on_apply implementation**
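A simplified version of the replay-side parsing might look like this; the real `ApplyCommandFromRedisProtocol` lives in src/praft/src/praft.cc, and `ParseRedisProtocol` here is an illustrative sketch that includes the bounds checks the review asked for:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Parses a RESP array such as "*3\r\n$3\r\nSET\r\n$1\r\nk\r\n$1\r\nv\r\n"
// into argv. Returns false on malformed or truncated input.
inline bool ParseRedisProtocol(const std::string& data,
                               std::vector<std::string>* argv) {
  size_t pos = 0;
  if (data.empty() || data[pos] != '*') return false;  // bounds-checked
  size_t crlf = data.find("\r\n", pos);
  if (crlf == std::string::npos) return false;
  int argc = std::stoi(data.substr(pos + 1, crlf - pos - 1));
  pos = crlf + 2;
  for (int i = 0; i < argc; ++i) {
    if (pos >= data.size() || data[pos] != '$') return false;
    crlf = data.find("\r\n", pos);
    if (crlf == std::string::npos) return false;
    int len = std::stoi(data.substr(pos + 1, crlf - pos - 1));
    pos = crlf + 2;
    if (pos + static_cast<size_t>(len) > data.size()) return false;
    argv->push_back(data.substr(pos, len));
    pos += len + 2;  // skip the bulk string and its trailing CRLF
  }
  return true;
}
```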
#### Step 5: Modify command execution logic

**Modify SetCmd::Do()**

Purpose: skip the RocksDB write when called from the client path.

**Modify Cmd::DoBinlog()**

Purpose: in Raft mode, submit to Raft instead of the traditional binlog.
#### Step 6: Performance optimization: disable the RocksDB WAL

**Problem**

The RocksDB WAL and the Raft log both provide durability, which leads to double writes.

**Solution**

Automatically disable the WAL in Raft mode:

Performance gain:
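A sketch of the decision, using a mock `WriteOptions` in place of `rocksdb::WriteOptions` (whose real `disableWAL` field this mirrors); `MakeWriteOptions` is an illustrative helper, not the PR's code:

```cpp
#include <cassert>

// Mock of the rocksdb::WriteOptions field this optimization toggles.
struct WriteOptions {
  bool disableWAL = false;
};

// In Raft mode the Raft log already provides durability, so writing the
// RocksDB WAL as well would mean two fsync'd logs per write; keep only one.
inline WriteOptions MakeWriteOptions(bool raft_mode) {
  WriteOptions opts;
  opts.disableWAL = raft_mode;
  return opts;
}
```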
### Core Implementation

**File list**

src/praft/src/praft.cc, src/praft/include/praft/praft.h, src/pika_command.cc, src/pika_kv.cc, src/pika_raft.cc, include/pika_raft.h

**Key code snippets**

1. WriteDoneClosure (synchronous wait)

File: src/praft/include/praft/praft.h:45-65

Review points:
- `unique_ptr` manages memory automatically
- `shared_ptr<promise>` avoids a dangling pointer in `Run()`

2. SubmitCommandWithPromise (submission entry point)

File: src/praft/src/praft.cc:715-747

Review points:
- the `promise` is passed via move semantics
- the closure is allocated with `new` and released in `Run()`
- access to `raft_nodes_` needs locking

3. ApplyCommandFromRedisProtocol (command replay)

File: src/praft/src/praft.cc:822-901

Review points:
- the `g_in_raft_apply` flag must be set and cleared in pairs

Improvement suggestions:
4. on_apply (Raft state machine)

File: src/praft/src/praft.cc:557-609

Review points:
- `dynamic_cast` may return nullptr and must be checked

5. SetCmd::Do() (write control)

File: src/pika_kv.cc:67-77

Review points:

6. Cmd::DoBinlog() (Raft submission)

File: src/pika_command.cc:962-1008

Review points:
### Data Flow Analysis

**Complete write flow**

**Key timing points**

Guarantee: at time T6, the data has been written to RocksDB on a majority of nodes.
### Code Review Checklist

**1. Consistency guarantees ✅**

Checks:
- Does the client-side call skip the RocksDB write? The `g_in_raft_apply` check in `SetCmd::Do()` (src/pika_kv.cc:71-77)
- Does the on_apply call perform the RocksDB write? The logic when `g_in_raft_apply = true` (src/praft/src/praft.cc:872-886)
- Is recursive submission to Raft avoided? The `g_in_raft_apply` check in `DoBinlog()` (src/pika_command.cc:965)
- Does the client return only after the Raft submission succeeds? `future.wait_for()` and `future.get()` (src/pika_command.cc:991-1005)

**2. Thread safety ⚠️**
Checks:
- Is `g_in_raft_apply` declared `thread_local`? (src/praft/src/praft.cc:31)
- Is `g_in_raft_apply` set and cleared in pairs? Each `= true` has a matching `= false` (src/praft/src/praft.cc:872, 889)
- Is the promise managed via shared_ptr? `std::shared_ptr<std::promise<...>>` (src/praft/include/praft/praft.h:50)
- Is the closure released automatically via unique_ptr? `std::unique_ptr<WriteDoneClosure> self_guard(this)` in `Run()` (src/praft/include/praft/praft.h:55)

Risk 1: the `g_in_raft_apply` flag is not cleared on exception

If `Execute()` throws, the flag may never be cleared. Suggestion: wrap it in an RAII guard.

Risk 2: concurrent access to `raft_nodes_`

`SubmitCommandWithPromise` accesses `raft_nodes_` without holding a lock. Suggestion: add a read-write lock.
**3. Error handling ✅**

Checks:
- Is a missing Raft node handled correctly? Handling of `raft_nodes_.find()` returning `end()` (src/praft/src/praft.cc:720-723)
- Is an error returned when the Raft submission fails? The `status.ok()` check (src/pika_command.cc:984-988)
- Are timeouts handled correctly? Handling of `future_status::timeout` (src/pika_command.cc:991-995)
- Is a command parse failure handled correctly? Handling of `GetCmd()` returning nullptr (src/praft/src/praft.cc:876-880)

**4. Memory management ✅**
Checks:
- Is WriteDoneClosure released correctly? `unique_ptr` in `Run()`, or a manual delete (src/praft/include/praft/praft.h:55)
- Is the promise's lifetime correct? (src/pika_command.cc:982, src/praft/src/praft.cc:733)
- Can the Redis protocol string leak memory? `std::string` manages itself (src/praft/src/praft.cc:860)

**5. Performance considerations ⚠️**
Checks:
- Is the WAL disabled? The `DisableWal(true)` call (src/pika_raft.cc:98, 113)
- Is the timeout reasonable? `wait_for(std::chrono::seconds(10))` (src/pika_command.cc:991)
- Is the log size reasonable?

**6. Backward compatibility ✅**

Checks:
- Does non-Raft mode still work? Handling of `GetRaftManager()` returning nullptr (src/pika_kv.cc:71, src/pika_command.cc:965)
- Is the old binlog format compatible? Format detection in `on_apply` (src/praft/src/praft.cc:571-577)

### Testing and Verification
**Test environment**

Node configuration:

**Basic functional tests**

Test 1: cluster initialization

Expected output:

Expected log (node1/log/pika.INFO):

Test 2: data replication

Expected log (Leader, node1/log/pika.INFO):

Expected log (Follower, node2/log/pika.INFO):

Test 3: concurrent writes

Expected: all writes succeed with no errors.

Test 4: exception handling

**Stress tests**

Test 5: high-concurrency writes

Test 6: leader failover
### Known Limitations

**Functional limitations**

- Only the SET command is supported (in its `Do()` method)
- No batch operation support
- Snapshot is not implemented

**Performance limitations**

- Synchronous wait latency
- Fixed timeout value
- Single-threaded on_apply
### Suggested Improvements

**Short-term (P0)**

- Add an RAII wrapper for `g_in_raft_apply`
- Add read-write lock protection for `raft_nodes_`: `std::shared_mutex nodes_mutex_;` with `std::shared_lock<std::shared_mutex> lock(nodes_mutex_);`
- Make the timeout configurable

**Mid-term (P1)**

- Support more write commands
- Snapshot support
- Monitoring metrics

**Long-term (P2)**

- Asynchronous callback mode
- Batch operation support
- Multi-threaded on_apply
### Summary

✅ Implemented

⏳ To be tested

**Review summary**

Core design ✅: the overall design is sound. A `thread_local` flag distinguishes client calls from on_apply calls, avoiding recursion and guaranteeing data consistency.

Code quality ✅

Potential risks ⚠️:
- the `g_in_raft_apply` flag needs an RAII wrapper (to prevent leaks on exceptions)
- access to `raft_nodes_` needs locking (concurrency safety)

Suggestions 💡

Mergeable ✅: the implementation meets expectations. Fix the P0 issues, merge to the main branch, and then run thorough test validation.

Document version: v1.0
Review date: 2025-10-24
Review conclusion: ✅ Approved (P0 issues must be fixed)