Changes from all commits
57 commits
c6ca56f
Activate improved compression.
Mustang98 Oct 13, 2025
5d60799
[merkle-update-compression] New V3 interfaces for block broadcast
Mustang98 Oct 13, 2025
b1b47f4
[merkle-update-compression] MU compression optimizations
Mustang98 Oct 13, 2025
81719c5
[merkle-update-compression] New interface
Mustang98 Oct 13, 2025
2d54f92
[merkle-update-compression] Add asynchronious state extraction for V3…
Mustang98 Oct 13, 2025
e02a8e4
[merkle-update-compression] Refactoring, applying new compression for…
Mustang98 Oct 27, 2025
efd9259
[merkle-update-compression] Revert experimental changes in the compre…
Mustang98 Nov 27, 2025
8f05ab5
Copy depth balance optimizations into improved compression
Mustang98 Nov 27, 2025
f7a8f36
[merkle-update-compression] Add left MU subtree state-based decompres…
Mustang98 Nov 27, 2025
f12c44b
[merkle-update-compression] Fix previous block extraction from proof …
Mustang98 Dec 1, 2025
9d03312
[merkle-update-compression] Implemented signatures check for compress…
Mustang98 Dec 9, 2025
1b130ea
[merkle-update-compression] Use updated state extraction intefrace fo…
Mustang98 Dec 9, 2025
5dc2e60
Merge remote-tracking branch 'upstream/testnet' into merkle-update-co…
Mustang98 Dec 10, 2025
2ca0c43
[merkle-update-compression] style format fix
Mustang98 Dec 10, 2025
3db052c
[merkle-update-compression] Rename new alogirthm
Mustang98 Dec 10, 2025
d61f037
[merkle-update-compression] refactor state extraction in custom overlay
Mustang98 Dec 11, 2025
610e143
[merkle-update-compression] Remove unnecessary includes
Mustang98 Dec 11, 2025
596ed23
[merkle-update-compression] small refactoring
Mustang98 Dec 11, 2025
315dffc
[merkle-update-compression] Remove redundant includes
Mustang98 Dec 11, 2025
ae8e9b0
[activate-improved-compression] Activate new compression for candidat…
Mustang98 Dec 11, 2025
03254bf
[activate-improved-compression] Activate improved compression in bloc…
Mustang98 Dec 11, 2025
3d3a205
[activate-improved-compression] Activate improved compression for blo…
Mustang98 Dec 11, 2025
8761b27
[activate-improved-compression] Activate improved compression for ser…
Mustang98 Dec 11, 2025
d73c3e0
[merkle-update-compression] Don't check signatures after decompressio…
Mustang98 Dec 17, 2025
847ecc2
[merkle-update-compression] Remove incorrect check
Mustang98 Dec 17, 2025
685c7cf
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Dec 17, 2025
bb83c7c
[merkle-update-compression] Extend benchmark logs for candidate_seria…
Mustang98 Dec 17, 2025
edffd1d
[merkle-update-compression] Compress only main Merkle Update tree, sk…
Mustang98 Dec 18, 2025
5673ce7
[merkle-update-compression] Reorder bool signatures_checked parameter…
Mustang98 Dec 18, 2025
98c5841
[merkle-update-compression] small refactoring
Mustang98 Dec 22, 2025
0762ab2
[merkle-update-compression] Style fix
Mustang98 Dec 22, 2025
a4344a7
Merge branch 'testnet' into merkle-update-compression
Mustang98 Dec 22, 2025
55ca643
[merkle-update-compression] Fix merge conflict
Mustang98 Dec 22, 2025
5f73338
[merkle-update-compression] Logs fix
Mustang98 Dec 22, 2025
ff6f968
[merkle-update-compression] Corrected state-based compression and dec…
Mustang98 Dec 31, 2025
4fee4bf
[merkle-update-compression] Replace prunned branch construction method
Mustang98 Dec 31, 2025
4df6040
Revert "[merkle-update-compression] Replace prunned branch constructi…
Mustang98 Jan 6, 2026
a7cf416
Revert "[merkle-update-compression] Corrected state-based compression…
Mustang98 Jan 6, 2026
fc29b86
[merkle-update-compression] Revert straightforward approach of PB rec…
Mustang98 Jan 6, 2026
3f986f4
Merge branch 'testnet' into merkle-update-compression
Mustang98 Jan 6, 2026
14b9c44
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 6, 2026
b6e27f4
[activate-improved-compression] style fix
Mustang98 Jan 6, 2026
24dd4e1
[merkle-update-compression] Add compression algorithm name to benchma…
Mustang98 Jan 13, 2026
e84c26c
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 13, 2026
0eef86a
[merkle-update-compression] fix style
Mustang98 Jan 13, 2026
1e92ba2
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 13, 2026
7743bd0
[activate-improved-compression] Add compression algorithm name to ben…
Mustang98 Jan 13, 2026
70d7079
[activate-improved-compression] Fix refactoring bug
Mustang98 Jan 14, 2026
53b0626
[merkle-update-compression] Set verbosity=2 for benchmark logs
Mustang98 Jan 14, 2026
006210f
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 14, 2026
bd3eb2d
[activate-improved-compression] Logs verbosity
Mustang98 Jan 15, 2026
43a9de9
[activate-improved-compression] Skip signatures validation temporarily
Mustang98 Jan 15, 2026
fdade48
Revert "[activate-improved-compression] Skip signatures validation te…
Mustang98 Jan 15, 2026
3a4e781
Merge branch 'testnet' into merkle-update-compression
Mustang98 Jan 24, 2026
59d88cc
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 24, 2026
63de10a
[merkle-update-compression] style fix
Mustang98 Jan 24, 2026
c556565
Merge branch 'merkle-update-compression' into activate-improved-compr…
Mustang98 Jan 24, 2026
242 changes: 204 additions & 38 deletions crypto/vm/boc-compression.cpp

Large diffs are not rendered by default.

26 changes: 20 additions & 6 deletions crypto/vm/boc-compression.h
@@ -27,18 +27,32 @@ This file is part of TON Blockchain Library.
 namespace vm {
 
 constexpr size_t kDecompressedSizeBytes = 4;
+constexpr size_t kMUCellOrderInRoot = 2;
 
-enum class CompressionAlgorithm : int { BaselineLZ4 = 0, ImprovedStructureLZ4 = 1 };
+enum class CompressionAlgorithm : int {
+  BaselineLZ4 = 0,
+  ImprovedStructureLZ4 = 1,
+  ImprovedStructureLZ4WithState = 2,
+};
 
 td::Result<td::BufferSlice> boc_compress_baseline_lz4(const std::vector<td::Ref<vm::Cell>>& boc_roots);
 td::Result<std::vector<td::Ref<vm::Cell>>> boc_decompress_baseline_lz4(td::Slice compressed, int max_decompressed_size);
 
-td::Result<td::BufferSlice> boc_compress_improved_structure_lz4(const std::vector<td::Ref<vm::Cell>>& boc_roots);
-td::Result<std::vector<td::Ref<vm::Cell>>> boc_decompress_improved_structure_lz4(td::Slice compressed,
-                                                                                 int max_decompressed_size);
+td::Result<td::BufferSlice> boc_compress_improved_structure_lz4(const std::vector<td::Ref<vm::Cell>>& boc_roots,
+                                                                bool compress_merkle_update = false,
+                                                                td::Ref<vm::Cell> state = td::Ref<vm::Cell>());
+td::Result<std::vector<td::Ref<vm::Cell>>> boc_decompress_improved_structure_lz4(
+    td::Slice compressed, int max_decompressed_size, bool decompress_merkle_update = false,
+    td::Ref<vm::Cell> state = td::Ref<vm::Cell>());
 
 td::Result<td::BufferSlice> boc_compress(const std::vector<td::Ref<vm::Cell>>& boc_roots,
-                                         CompressionAlgorithm algo = CompressionAlgorithm::BaselineLZ4);
-td::Result<std::vector<td::Ref<vm::Cell>>> boc_decompress(td::Slice compressed, int max_decompressed_size);
+                                         CompressionAlgorithm algo = CompressionAlgorithm::BaselineLZ4,
+                                         td::Ref<vm::Cell> state = td::Ref<vm::Cell>());
+td::Result<std::vector<td::Ref<vm::Cell>>> boc_decompress(td::Slice compressed, int max_decompressed_size,
+                                                          td::Ref<vm::Cell> state = td::Ref<vm::Cell>());
+
+td::Result<bool> boc_need_state_for_decompression(const td::Slice& compressed);
+
+td::Result<std::string> boc_get_algorithm_name(const td::Slice& compressed);
 
 }  // namespace vm
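
The extended interface makes state-based Merkle-update compression opt-in: the compressor can prune the parts of the update that the receiver can rebuild from its own copy of the previous state, and boc_need_state_for_decompression lets a receiver discover up front whether a payload needs that state. A minimal caller-side sketch; the helper names, the 16 MB limit, and the availability check are illustrative assumptions, only the vm:: calls come from the header above:

#include "vm/boc-compression.h"

// Compress roots against a known previous state; state-based mode prunes the
// Merkle-update cells the peer can rebuild locally.
td::Result<td::BufferSlice> compress_with_state(const std::vector<td::Ref<vm::Cell>>& roots,
                                                td::Ref<vm::Cell> prev_state) {
  return vm::boc_compress(roots, vm::CompressionAlgorithm::ImprovedStructureLZ4WithState, std::move(prev_state));
}

// Decompress, supplying the local state only when the payload asks for it.
td::Result<std::vector<td::Ref<vm::Cell>>> decompress_with_state(td::Slice compressed,
                                                                 td::Ref<vm::Cell> local_state) {
  TRY_RESULT(need_state, vm::boc_need_state_for_decompression(compressed));
  if (need_state && local_state.is_null()) {
    return td::Status::Error("compressed data requires the previous state, which is not available");
  }
  return vm::boc_decompress(compressed, 16 << 20, need_state ? std::move(local_state) : td::Ref<vm::Cell>());
}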
2 changes: 1 addition & 1 deletion tl/generate/scheme/ton_api.tl
@@ -470,7 +470,7 @@ ton.blockIdApprove root_cell_hash:int256 file_hash:int256 = ton.BlockId;
 
 tonNode.dataFull id:tonNode.blockIdExt proof:bytes block:bytes is_link:Bool = tonNode.DataFull;
 tonNode.dataFullCompressed id:tonNode.blockIdExt flags:# compressed:bytes is_link:Bool = tonNode.DataFull;
-tonNode.dataFullCompressedV2 id:tonNode.blockIdExt flags:# compressed:bytes is_link:Bool = tonNode.DataFull;
+tonNode.dataFullCompressedV2 id:tonNode.blockIdExt flags:# proof:bytes block_compressed:bytes is_link:Bool = tonNode.DataFull;
 tonNode.dataFullEmpty = tonNode.DataFull;
 
 tonNode.capabilities#f5bf60c0 version_major:int version_minor:int flags:# = tonNode.Capabilities;
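
The V2 constructor now carries the block proof and the compressed block as separate fields, so only the block payload goes through the cell-level compressor while the proof travels as plain bytes. A hedged sketch of producing the new payload; the surrounding variables are assumptions, and the field order follows the TL definition above (id, flags, proof, block_compressed, is_link):

// Illustrative only: block_id, proof_bytes, compressed_block and is_link are
// assumed to be in scope at the sender's serialization site.
auto data_full = create_serialize_tl_object<ton_api::tonNode_dataFullCompressedV2>(
    create_tl_block_id(block_id), 0 /* flags */, std::move(proof_bytes), std::move(compressed_block), is_link);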
56 changes: 31 additions & 25 deletions validator-session/candidate-serializer.cpp
@@ -25,24 +25,28 @@
 
 namespace ton::validatorsession {
 
+namespace {
+constexpr const char* k_called_from_validator_session = "validator_session";
+}  // namespace
+
 td::Result<td::BufferSlice> serialize_candidate(const tl_object_ptr<ton_api::validatorSession_candidate>& block,
                                                 bool compression_enabled) {
   if (!compression_enabled) {
     auto t_compression_start = td::Time::now();
     auto res = serialize_tl_object(block, true);
     VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark serialize_candidate block_id="
                                       << block->root_hash_.to_hex()
+                                      << " called_from=" << k_called_from_validator_session
                                       << " time_sec=" << (td::Time::now() - t_compression_start)
                                       << " compression=" << "none"
                                       << " original_size=" << block->data_.size() + block->collated_data_.size()
                                       << " compressed_size=" << block->data_.size() + block->collated_data_.size();
     return res;
   }
-  size_t decompressed_size;
-  TRY_RESULT(compressed,
-             compress_candidate_data(block->data_, block->collated_data_, decompressed_size, block->root_hash_))
-  return create_serialize_tl_object<ton_api::validatorSession_compressedCandidate>(
-      0, block->src_, block->round_, block->root_hash_, (int)decompressed_size, std::move(compressed));
+  TRY_RESULT(compressed, compress_candidate_data(block->data_, block->collated_data_, k_called_from_validator_session,
+                                                 block->root_hash_))
+  return create_serialize_tl_object<ton_api::validatorSession_compressedCandidateV2>(
+      0, block->src_, block->round_, block->root_hash_, std::move(compressed));
 }
 
 td::Result<tl_object_ptr<ton_api::validatorSession_candidate>> deserialize_candidate(td::Slice data,
@@ -52,7 +56,7 @@ td::Result<tl_object_ptr<ton_api::validatorSession_candidate>> deserialize_candidate(
     auto t_decompression_start = td::Time::now();
     TRY_RESULT(res, fetch_tl_object<ton_api::validatorSession_candidate>(data, true));
     VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark deserialize_candidate block_id="
-                                      << res->root_hash_.to_hex()
+                                      << res->root_hash_.to_hex() << " called_from=" << k_called_from_validator_session
                                       << " time_sec=" << (td::Time::now() - t_decompression_start)
                                       << " compression=" << "none"
                                       << " compressed_size=" << res->data_.size() + res->collated_data_.size();
@@ -70,8 +74,9 @@ td::Result<tl_object_ptr<ton_api::validatorSession_candidate>> deserialize_candidate(
           if (c.decompressed_size_ > max_decompressed_data_size) {
            return td::Status::Error("decompressed size is too big");
           }
-          TRY_RESULT(p, decompress_candidate_data(c.data_, false, c.decompressed_size_,
-                                                  max_decompressed_data_size, c.root_hash_));
+          TRY_RESULT(p,
+                     decompress_candidate_data(c.data_, false, c.decompressed_size_, max_decompressed_data_size,
+                                               k_called_from_validator_session, c.root_hash_));
           return create_tl_object<ton_api::validatorSession_candidate>(c.src_, c.round_, c.root_hash_,
                                                                        std::move(p.first), std::move(p.second));
         }();
@@ -81,15 +86,16 @@ td::Result<tl_object_ptr<ton_api::validatorSession_candidate>> deserialize_candidate(
           if (c.data_.size() > max_decompressed_data_size) {
            return td::Status::Error("Compressed data is too big");
           }
-          TRY_RESULT(p, decompress_candidate_data(c.data_, true, 0, max_decompressed_data_size, c.root_hash_));
+          TRY_RESULT(p, decompress_candidate_data(c.data_, true, 0, max_decompressed_data_size,
+                                                  k_called_from_validator_session, c.root_hash_));
           return create_tl_object<ton_api::validatorSession_candidate>(c.src_, c.round_, c.root_hash_,
                                                                        std::move(p.first), std::move(p.second));
         }();
       }));
   return res;
 }
 
-td::Result<td::BufferSlice> compress_candidate_data(td::Slice block, td::Slice collated_data, size_t& decompressed_size,
+td::Result<td::BufferSlice> compress_candidate_data(td::Slice block, td::Slice collated_data, std::string called_from,
                                                     td::Bits256 root_hash) {
   vm::BagOfCells boc1, boc2;
   TRY_STATUS(boc1.deserialize(block));
@@ -102,23 +108,21 @@ td::Result<td::BufferSlice> compress_candidate_data(td::Slice block, td::Slice collated_data,
     roots.push_back(boc2.get_root_cell(i));
   }
   auto t_compression_start = td::Time::now();
-  TRY_RESULT(data, vm::std_boc_serialize_multi(std::move(roots), 2));
-  decompressed_size = data.size();
-  td::BufferSlice compressed = td::lz4_compress(data);
+  TRY_RESULT(compressed, vm::boc_compress(roots, vm::CompressionAlgorithm::ImprovedStructureLZ4));
+  TRY_RESULT(algorithm_name, vm::boc_get_algorithm_name(compressed));
   LOG(DEBUG) << "Compressing block candidate: " << block.size() + collated_data.size() << " -> " << compressed.size();
-  VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark compress_candidate_data block_id=" << root_hash.to_hex()
+  VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark serialize_candidate block_id=" << root_hash.to_hex()
+                                    << " called_from=" << called_from
                                     << " time_sec=" << (td::Time::now() - t_compression_start)
-                                    << " compression=" << "compressed"
+                                    << " compression=" << "compressedV2_" << algorithm_name
                                     << " original_size=" << block.size() + collated_data.size()
                                     << " compressed_size=" << compressed.size();
   return compressed;
 }
 
-td::Result<std::pair<td::BufferSlice, td::BufferSlice>> decompress_candidate_data(td::Slice compressed,
-                                                                                  bool improved_compression,
-                                                                                  int decompressed_size,
-                                                                                  int max_decompressed_size,
-                                                                                  td::Bits256 root_hash) {
+td::Result<std::pair<td::BufferSlice, td::BufferSlice>> decompress_candidate_data(
    td::Slice compressed, bool improved_compression, int decompressed_size, int max_decompressed_size,
+    std::string called_from, td::Bits256 root_hash) {
   std::vector<td::Ref<vm::Cell>> roots;
   auto t_decompression_start = td::Time::now();
   if (!improved_compression) {
@@ -127,15 +131,17 @@ td::Result<std::pair<td::BufferSlice, td::BufferSlice>> decompress_candidate_data(
       return td::Status::Error("decompressed size mismatch");
     }
     TRY_RESULT_ASSIGN(roots, vm::std_boc_deserialize_multi(decompressed));
-    VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark decompress_candidate_data block_id=" << root_hash.to_hex()
+    VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark deserialize_candidate block_id=" << root_hash.to_hex()
+                                      << " called_from=" << called_from
                                       << " time_sec=" << (td::Time::now() - t_decompression_start)
-                                      << " compression=" << "compressed"
-                                      << " compressed_size=" << compressed.size();
+                                      << " compression=" << "compressed" << " compressed_size=" << compressed.size();
   } else {
     TRY_RESULT_ASSIGN(roots, vm::boc_decompress(compressed, max_decompressed_size));
-    VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark decompress_candidate_data block_id=" << root_hash.to_hex()
+    TRY_RESULT(algorithm_name, vm::boc_get_algorithm_name(compressed));
+    VLOG(VALIDATOR_SESSION_BENCHMARK) << "Broadcast_benchmark deserialize_candidate block_id=" << root_hash.to_hex()
+                                      << " called_from=" << called_from
                                       << " time_sec=" << (td::Time::now() - t_decompression_start)
-                                      << " compression=" << "compressedV2"
+                                      << " compression=" << "compressedV2_" << algorithm_name
                                       << " compressed_size=" << compressed.size();
   }
   if (roots.empty()) {
12 changes: 6 additions & 6 deletions validator-session/candidate-serializer.h
@@ -15,6 +15,8 @@
   along with TON Blockchain Library. If not, see <http://www.gnu.org/licenses/>.
 */
 #pragma once
+#include <string>
+
 #include "auto/tl/ton_api.h"
 #include "ton/ton-types.h"
 
@@ -26,12 +28,10 @@ td::Result<tl_object_ptr<ton_api::validatorSession_candidate>> deserialize_candidate(
                                                                                      bool compression_enabled,
                                                                                      int max_decompressed_data_size);
 
-td::Result<td::BufferSlice> compress_candidate_data(td::Slice block, td::Slice collated_data, size_t& decompressed_size,
+td::Result<td::BufferSlice> compress_candidate_data(td::Slice block, td::Slice collated_data, std::string called_from,
                                                     td::Bits256 root_hash);
-td::Result<std::pair<td::BufferSlice, td::BufferSlice>> decompress_candidate_data(td::Slice compressed,
-                                                                                  bool improved_compression,
-                                                                                  int decompressed_size,
-                                                                                  int max_decompressed_size,
-                                                                                  td::Bits256 root_hash);
+td::Result<std::pair<td::BufferSlice, td::BufferSlice>> decompress_candidate_data(
+    td::Slice compressed, bool improved_compression, int decompressed_size, int max_decompressed_size,
+    std::string called_from, td::Bits256 root_hash);
 
 }  // namespace ton::validatorsession
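
Both helpers now thread a called_from tag into the benchmark logs in place of the removed decompressed_size out-parameter. A round-trip sketch inside ton::validatorsession; the "example_caller" tag and the variable names are illustrative assumptions, the signatures come from the header above:

// Compress a candidate's block and collated data, then unpack them again.
TRY_RESULT(compressed, compress_candidate_data(block_data, collated_data, "example_caller", root_hash));
TRY_RESULT(unpacked, decompress_candidate_data(compressed.as_slice(), /*improved_compression=*/true,
                                               /*decompressed_size=*/0, max_decompressed_data_size,
                                               "example_caller", root_hash));
// unpacked.first holds the block data, unpacked.second the collated data.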
43 changes: 30 additions & 13 deletions validator/collator-node/utils.cpp
@@ -23,19 +23,28 @@
 
 namespace ton::validator {
 
+constexpr const char* k_called_from_collator_node = "collator_node";
+
 tl_object_ptr<ton_api::collatorNode_Candidate> serialize_candidate(const BlockCandidate& block, bool compress) {
   if (!compress) {
-    return create_tl_object<ton_api::collatorNode_candidate>(
+    auto t_compression_start = td::Time::now();
+    auto res = create_tl_object<ton_api::collatorNode_candidate>(
         PublicKey{pubkeys::Ed25519{block.pubkey.as_bits256()}}.tl(), create_tl_block_id(block.id), block.data.clone(),
         block.collated_data.clone());
+    VLOG(COLLATOR_NODE_BENCHMARK) << "Broadcast_benchmark serialize_candidate block_id=" << block.id.root_hash.to_hex()
+                                  << " called_from=" << k_called_from_collator_node
+                                  << " time_sec=" << (td::Time::now() - t_compression_start)
+                                  << " compression=" << "none"
+                                  << " original_size=" << block.data.size() + block.collated_data.size()
+                                  << " compressed_size=" << block.data.size() + block.collated_data.size();
+    return res;
   }
-  size_t decompressed_size;
-  td::BufferSlice compressed =
-      validatorsession::compress_candidate_data(block.data, block.collated_data, decompressed_size, block.id.root_hash)
-          .move_as_ok();
-  return create_tl_object<ton_api::collatorNode_compressedCandidate>(
+  td::BufferSlice compressed = validatorsession::compress_candidate_data(
+                                   block.data, block.collated_data, k_called_from_collator_node, block.id.root_hash)
+                                   .move_as_ok();
+  return create_tl_object<ton_api::collatorNode_compressedCandidateV2>(
       0, PublicKey{pubkeys::Ed25519{block.pubkey.as_bits256()}}.tl(), create_tl_block_id(block.id),
-      (int)decompressed_size, std::move(compressed));
+      std::move(compressed));
 }
 
 td::Result<BlockCandidate> deserialize_candidate(tl_object_ptr<ton_api::collatorNode_Candidate> f,
@@ -45,14 +54,21 @@ td::Result<BlockCandidate> deserialize_candidate(tl_object_ptr<ton_api::collatorNode_Candidate> f,
       *f, td::overloaded(
              [&](ton_api::collatorNode_candidate& c) {
                res = [&]() -> td::Result<BlockCandidate> {
+                  auto t_decompression_start = td::Time::now();
                   auto hash = td::sha256_bits256(c.collated_data_);
                   auto key = PublicKey{c.source_};
                   if (!key.is_ed25519()) {
                     return td::Status::Error("invalid pubkey");
                   }
                   auto e_key = Ed25519_PublicKey{key.ed25519_value().raw()};
-                  return BlockCandidate{e_key, create_block_id(c.id_), hash, std::move(c.data_),
-                                        std::move(c.collated_data_)};
+                  auto block_id = create_block_id(c.id_);
+                  BlockCandidate res{e_key, block_id, hash, std::move(c.data_), std::move(c.collated_data_)};
+                  VLOG(COLLATOR_NODE_BENCHMARK)
+                      << "Broadcast_benchmark deserialize_candidate block_id=" << block_id.root_hash.to_hex()
+                      << " called_from=" << k_called_from_collator_node
+                      << " time_sec=" << (td::Time::now() - t_decompression_start) << " compression=" << "none"
+                      << " compressed_size=" << res.data.size() + res.collated_data.size();
+                  return std::move(res);
                 }();
               },
              [&](ton_api::collatorNode_compressedCandidate& c) {
@@ -63,9 +79,9 @@ td::Result<BlockCandidate> deserialize_candidate(tl_object_ptr<ton_api::collatorNode_Candidate> f,
                   if (c.decompressed_size_ > max_decompressed_data_size) {
                     return td::Status::Error("decompressed size is too big");
                   }
-                  TRY_RESULT(p, validatorsession::decompress_candidate_data(c.data_, false, c.decompressed_size_,
-                                                                            max_decompressed_data_size,
-                                                                            create_block_id(c.id_).root_hash));
+                  TRY_RESULT(p, validatorsession::decompress_candidate_data(
+                                    c.data_, false, c.decompressed_size_, max_decompressed_data_size,
+                                    k_called_from_collator_node, create_block_id(c.id_).root_hash));
                   auto collated_data_hash = td::sha256_bits256(p.second);
                   auto key = PublicKey{c.source_};
                   if (!key.is_ed25519()) {
@@ -79,7 +95,8 @@ td::Result<BlockCandidate> deserialize_candidate(tl_object_ptr<ton_api::collatorNode_Candidate> f,
              [&](ton_api::collatorNode_compressedCandidateV2& c) {
                res = [&]() -> td::Result<BlockCandidate> {
                   TRY_RESULT(p, validatorsession::decompress_candidate_data(
-                                    c.data_, true, 0, max_decompressed_data_size, create_block_id(c.id_).root_hash));
+                                    c.data_, true, 0, max_decompressed_data_size, k_called_from_collator_node,
+                                    create_block_id(c.id_).root_hash));
                   auto collated_data_hash = td::sha256_bits256(p.second);
                   auto key = PublicKey{c.source_};
                   if (!key.is_ed25519()) {
2 changes: 2 additions & 0 deletions validator/collator-node/utils.hpp
@@ -21,6 +21,8 @@
 
 namespace ton::validator {
 
+constexpr int VERBOSITY_NAME(COLLATOR_NODE_BENCHMARK) = verbosity_WARNING;
+
 tl_object_ptr<ton_api::collatorNode_Candidate> serialize_candidate(const BlockCandidate& block, bool compress);
 td::Result<BlockCandidate> deserialize_candidate(tl_object_ptr<ton_api::collatorNode_Candidate> f,
                                                  int max_decompressed_data_size);
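
Because COLLATOR_NODE_BENCHMARK is mapped to verbosity_WARNING, the new Broadcast_benchmark lines are emitted at default log verbosity. The serializer pair is symmetric, so a candidate can be sanity-checked with a local round trip; a sketch, where the 16 MB limit is an illustrative assumption:

// Serialize with compression enabled, then restore and compare the block id.
auto serialized = serialize_candidate(candidate, /*compress=*/true);
TRY_RESULT(restored, deserialize_candidate(std::move(serialized), /*max_decompressed_data_size=*/16 << 20));
CHECK(restored.id == candidate.id);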