Merged
Showing changes from 21 of 62 commits.
88ccce7
Updated log-surgeon to use simulation branch.
SharafMohamed Mar 17, 2025
8aa0350
Update schema to have all timestamps and capture.
SharafMohamed Mar 17, 2025
b3f217b
Testing new log surgeon.
SharafMohamed Mar 24, 2025
ea09df0
Testing log surgeon x2.
SharafMohamed Mar 24, 2025
551447f
Add timers.
SharafMohamed Mar 31, 2025
ee3f682
Switch timers.
SharafMohamed Apr 7, 2025
042a4f6
Make make dictionaries compilable.
SharafMohamed Apr 23, 2025
c311fb8
Merge branch 'main' into new-log-surgeon
SharafMohamed Jun 25, 2025
4b53427
Lint and try to reduce the number of jobs during deps:core to avoid m…
SharafMohamed Jun 25, 2025
3d27fdf
Remove JOBS from task, the correct way is to set the parallel tasks v…
SharafMohamed Jun 25, 2025
018eccb
Remove components/core/submodules/log-surgeon.
davidlion Jul 10, 2025
5971025
Unset PROF_ENABLED.
davidlion Jul 10, 2025
b932b6a
Get clp building with log surgeon locally.
davidlion Jul 10, 2025
8637db5
Merge remote-tracking branch 'upstream/main' into pr-1033
davidlion Jul 10, 2025
50cfd39
Unit tests build, but fail with possible logical errors.
davidlion Jul 10, 2025
0e17e3f
Merge branch 'new-log-surgeon' of https://github.com/SharafMohamed/cl…
SharafMohamed Jul 14, 2025
2083aa4
Remove duplicate REQUIRE check.
SharafMohamed Jul 14, 2025
e778979
Fix unit-test bugs.
SharafMohamed Jul 16, 2025
da18646
Fix unit-test typo and spacing.
SharafMohamed Jul 16, 2025
8b4b24c
Merge branch 'main' into new-log-surgeon
SharafMohamed Jul 16, 2025
6dcadca
Update log-surgeon to newest version.
SharafMohamed Jul 16, 2025
0844041
Remove profiling changes.
SharafMohamed Jul 16, 2025
3c20be7
Test disabling macos-14.
SharafMohamed Jul 16, 2025
e8d5fbb
Test disabling macos-13.
SharafMohamed Jul 16, 2025
c8edd58
Readd macos-13 and macos-14 to the CI.
SharafMohamed Jul 16, 2025
c0f86e9
Bump log-surgeon version.
davidlion Jul 16, 2025
f631545
Add spacing to schemas.txt.
davidlion Jul 16, 2025
bb23a83
Drop unused parameter from load_lexer_from_file.
davidlion Jul 16, 2025
ea777c3
Remove a missed benchmark change.
SharafMohamed Jul 16, 2025
1d63a20
Format fix.
davidlion Jul 16, 2025
dcf3734
Merge commit 'refs/pull/1033/head' of https://github.com/y-scope/clp …
davidlion Jul 16, 2025
9552272
Remove reverse lexer; Rename forward lexer to just lexer.
SharafMohamed Jul 16, 2025
5cc0b14
Lint.
SharafMohamed Jul 16, 2025
23de2c3
Remove TODO.
SharafMohamed Jul 16, 2025
1fa8b79
Merge branch 'main' into new-log-surgeon
SharafMohamed Jul 16, 2025
bd2ae8e
Add heading comments to schemas.
davidlion Jul 17, 2025
b6c02bf
Merge commit 'refs/pull/1033/head' of https://github.com/y-scope/clp …
davidlion Jul 17, 2025
d43c4c7
Update schema timestamps to escape '.'.
SharafMohamed Jul 17, 2025
327d41e
Merge branch 'main' into new-log-surgeon
SharafMohamed Jul 17, 2025
d557467
Update regex in schema for leading spaces.
SharafMohamed Jul 17, 2025
a41ee92
Add issue link in schemas.txt.
davidlion Jul 19, 2025
f3f3c59
Merge commit 'refs/pull/1033/head' of https://github.com/y-scope/clp …
davidlion Jul 19, 2025
fbf6195
Make lexer safer in clo; Remove dead declaration in clg.
davidlion Jul 21, 2025
d1756ef
Update components/core/config/schemas.txt
davidlion Jul 21, 2025
8d12a2b
Update components/core/config/schemas.txt
davidlion Jul 21, 2025
46bc35f
Update components/core/config/schemas.txt
davidlion Jul 21, 2025
063297d
Delete redundant timestamps.
davidlion Jul 21, 2025
75dab67
Allow dates with space + single digit.
davidlion Jul 21, 2025
716e871
Tweak schemas.
davidlion Jul 21, 2025
70db06f
Consolidate timestamps into fewer regex supporting many more combinat…
davidlion Jul 21, 2025
64c0216
Add missing date case caught by rabbit.
davidlion Jul 21, 2025
4a05836
Avoid recreating the lexer object.
SharafMohamed Jul 21, 2025
4a730a4
Use Lexer object directly instead of a unique_ptr.
SharafMohamed Jul 21, 2025
d654f46
Throw error if parsed timestamp can't be encoded.
SharafMohamed Jul 21, 2025
a02efee
Reogranize schema slightly.
davidlion Jul 21, 2025
82f9825
Update error message; Reorded if statement; Lint.
SharafMohamed Jul 21, 2025
dd5a8a4
Merge branch 'new-log-surgeon' of https://github.com/SharafMohamed/cl…
SharafMohamed Jul 21, 2025
922053b
Add relative timestamp regex.
SharafMohamed Jul 21, 2025
88cea99
Merge branch 'main' into new-log-surgeon
davidlion Jul 23, 2025
e0169f1
Merge branch 'main' into new-log-surgeon
davidlion Jul 25, 2025
09889d6
Revert schemas.txt.
davidlion Jul 25, 2025
b37b7c1
Merge commit 'refs/pull/1033/head' of https://github.com/y-scope/clp …
davidlion Jul 25, 2025
59 changes: 51 additions & 8 deletions components/core/config/schemas.txt
@@ -1,19 +1,62 @@
// Delimiters
delimiters: \t\r\n!"#$%&'\(\)\*,:;<>?@\[\]\^_`\{\|\}~

// Timestamps (using the `timestamp` keyword)
// E.g. 2015-01-31T15:50:45.392
timestamp:\d{4}\-\d{2}\-\d{2}T\d{2}:\d{2}:\d{2}.\d{3}
// E.g. 2015-01-31T15:50:45,392
timestamp:\d{4}\-\d{2}\-\d{2}T\d{2}:\d{2}:\d{2},\d{3}
// E.g. [2015-01-31T15:50:45
timestamp:\[\d{4}\-\d{2}\-\d{2}T\d{2}:\d{2}:\d{2}
// E.g. [20170106-16:56:41]
timestamp:\[\d{4}\d{2}\d{2}\-\d{2}:\d{2}:\d{2}\]
// E.g. 2015-01-31 15:50:45,392
// E.g. INFO [main] 2015-01-31 15:50:45,085
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2},\d{3}
// E.g. 2015-01-31 15:50:45.392
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}.\d{3}
// E.g. [2015-01-31 15:50:45,085]
timestamp:\[\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2},\d{3}\]
// E.g. 2015-01-31 15:50:45
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}(\.\d{3}){0,1}
// E.g. [20150131-15:50:45]
timestamp:\[\d{8}\-\d{2}:\d{2}:\d{2}\]
// E.g. Started POST /api/v3/internal/allowed for 127.0.0.1 at 2017-06-18 00:20:44
// E.g. update-alternatives 2015-01-31 15:50:45
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}
// E.g. Start-Date: 2015-01-31 15:50:45
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}
// E.g. 2015/01/31 15:50:45
timestamp:\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}
// E.g. 15/01/31 15:50:45
timestamp:\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}
// E.g. 150131 9:50:45
timestamp:\d{2}\d{2}\d{2} [ 0-9]{2}:\d{2}:\d{2}
// E.g. 01 Jan 2016 15:50:17,085
timestamp:\d{2} [A-Z][a-z]{2} \d{4} \d{2}:\d{2}:\d{2},\d{3}
// E.g. Jan 01, 2016 3:50:17 PM
timestamp:[A-Z][a-z]{2} \d{2}, \d{4} [ 0-9]{2}:\d{2}:\d{2} [AP]M
// E.g. January 31, 2015 15:50
timestamp:[A-Z][a-z]+ \d{2}, \d{4} \d{2}:\d{2}
// E.g. E [31/Jan/2015:15:50:45
// E.g. localhost - - [01/Jan/2016:15:50:17
// E.g. 192.168.4.5 - - [01/Jan/2016:15:50:17
timestamp:\[\d{2}/[A-Z][a-z]{2}/\d{4}:\d{2}:\d{2}:\d{2}
// E.g. 192.168.4.5 - - [01/01/2016:15:50:17
timestamp:\[\d{2}/\d{2}/\d{4}:\d{2}:\d{2}:\d{2}
// E.g. ERROR: apport (pid 4557) Sun Jan 1 15:50:45 2015
timestamp:[A-Z][a-z]{2} [A-Z][a-z]{2} [ 0-9]{2} \d{2}:\d{2}:\d{2} \d{4}
// E.g. <<<2016-11-10 03:02:29:936
timestamp:\<\<\<\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}:\d{3}
// E.g. Jan 21 11:56:42
timestamp:[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}
// E.g. 01-21 11:56:42.392
timestamp:\d{2}\-\d{2} \d{2}:\d{2}:\d{2}.\d{3}
// E.g. 2016-05-08 11:34:04.083464
timestamp:\d{4}\-\d{2}\-\d{2} \d{2}:\d{2}:\d{2}.\d{6}

// Delimiters
delimiters: \t\r\n!"#$%&'\(\)\*,:;\<\>\?@\[\]\^_`\{\|\}~=

// Specially-encoded variables (using the `int` and `float` keywords)
int:\-{0,1}[0-9]+
float:\-{0,1}[0-9]+\.[0-9]+

// Dictionary variables
// Dictionary variables (the way `equals` and `hasNumber` are written only work if `=` is a delim)
hex:[a-fA-F]+
equals:.*=(?<val>.*[a-zA-Z0-9].*)
hasNumber:.*\d.*
equals:.*=.*[a-zA-Z0-9].*
14 changes: 9 additions & 5 deletions components/core/src/clp/Grep.cpp
@@ -866,21 +866,25 @@ bool Grep::get_bounds_of_next_potential_var(
             string_reader.open(value.substr(begin_pos, end_pos - begin_pos));
             parser_input_buffer.read_if_safe(reader_wrapper);
             forward_lexer.reset();
-            forward_lexer.scan(parser_input_buffer, search_token);
+            auto [err, token] = forward_lexer.scan(parser_input_buffer);
+            if (log_surgeon::ErrorCode::Success != err) {
+                return false;
+            }
+            search_token = SearchToken{token.value()};
             search_token.m_type_ids_set.insert(search_token.m_type_ids_ptr->at(0));
         }
         // TODO: use a set so its faster
         // auto const& set = search_token.m_type_ids_set;
-        // if (set.find(static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID))
+        // if (set.find(static_cast<int>(log_surgeon::SymbolId::TokenUncaughtStringID))
         // == set.end()
-        // && set.find(static_cast<int>(log_surgeon::SymbolID::TokenEndID))
+        // && set.find(static_cast<int>(log_surgeon::SymbolId::TokenEndID))
         // == set.end())
         // {
         // is_var = true;
         // }
         auto const& type = search_token.m_type_ids_ptr->at(0);
-        if (type != static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID)
-            && type != static_cast<int>(log_surgeon::SymbolID::TokenEndID))
+        if (type != static_cast<int>(log_surgeon::SymbolId::TokenUncaughtString)
+            && type != static_cast<int>(log_surgeon::SymbolId::TokenEnd))
         {
             is_var = true;
         }
24 changes: 20 additions & 4 deletions components/core/src/clp/Profiler.hpp
@@ -41,25 +41,41 @@ class Profiler {
     // Types
     enum class ContinuousMeasurementIndex : size_t {
         Compression = 0,
-        ParseLogFile,
         Search,
         Length
     };
     enum class FragmentedMeasurementIndex : size_t {
+        Watch,
+        ParseLogFile,
+        Parse,
+        GetNext,
+        ProcessState,
+        ScanWhileLoop,
+        TokenCreation,
+        ProcessCharWatch,
+        ProcessChar,
         Length
     };
 
     // Constants
     // NOTE: We use lambdas so that we can programmatically initialize the constexpr array
     static constexpr auto cContinuousMeasurementEnabled = []() {
         std::array<bool, enum_to_underlying_type(ContinuousMeasurementIndex::Length)> enabled{};
-        enabled[enum_to_underlying_type(ContinuousMeasurementIndex::Compression)] = true;
-        enabled[enum_to_underlying_type(ContinuousMeasurementIndex::ParseLogFile)] = true;
-        enabled[enum_to_underlying_type(ContinuousMeasurementIndex::Search)] = true;
+        enabled[enum_to_underlying_type(ContinuousMeasurementIndex::Compression)] = false;
+        enabled[enum_to_underlying_type(ContinuousMeasurementIndex::Search)] = false;
         return enabled;
     }();
     static constexpr auto cFragmentedMeasurementEnabled = []() {
         std::array<bool, enum_to_underlying_type(FragmentedMeasurementIndex::Length)> enabled{};
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::ParseLogFile)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::Parse)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::GetNext)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::ProcessState)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::ScanWhileLoop)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::Watch)] = true;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::TokenCreation)] = true;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::ProcessCharWatch)] = false;
+        enabled[enum_to_underlying_type(FragmentedMeasurementIndex::ProcessChar)] = false;
         return enabled;
     }();

41 changes: 21 additions & 20 deletions components/core/src/clp/Utils.cpp
@@ -11,6 +11,7 @@
 
 #include <boost/algorithm/string.hpp>
 #include <boost/lexical_cast.hpp>
+#include <log_surgeon/Constants.hpp>
 #include <log_surgeon/SchemaParser.hpp>
 #include <spdlog/spdlog.h>
 #include <string_utils/string_utils.hpp>
@@ -125,7 +126,6 @@ void load_lexer_from_file(
         bool reverse,
         log_surgeon::lexers::ByteLexer& lexer
 ) {
-    log_surgeon::SchemaParser sp;
     std::unique_ptr<log_surgeon::SchemaAST> schema_ast
             = log_surgeon::SchemaParser::try_schema_file(schema_file_path);
     if (!lexer.m_symbol_id.empty()) {
@@ -134,52 +134,52 @@
 
     // cTokenEnd and cTokenUncaughtString never need to be added as a rule to the lexer as they are
     // not parsed
-    lexer.m_symbol_id[log_surgeon::cTokenEnd] = static_cast<int>(log_surgeon::SymbolID::TokenEndID);
+    lexer.m_symbol_id[log_surgeon::cTokenEnd] = static_cast<int>(log_surgeon::SymbolId::TokenEnd);
     lexer.m_symbol_id[log_surgeon::cTokenUncaughtString]
-            = static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID);
+            = static_cast<int>(log_surgeon::SymbolId::TokenUncaughtString);
     // cTokenInt, cTokenFloat, cTokenFirstTimestamp, and cTokenNewlineTimestamp each have unknown
     // rule(s) until specified by the user so can't be explicitly added and are done by looping over
     // schema_vars (user schema)
-    lexer.m_symbol_id[log_surgeon::cTokenInt] = static_cast<int>(log_surgeon::SymbolID::TokenIntId);
+    lexer.m_symbol_id[log_surgeon::cTokenInt] = static_cast<int>(log_surgeon::SymbolId::TokenInt);
     lexer.m_symbol_id[log_surgeon::cTokenFloat]
-            = static_cast<int>(log_surgeon::SymbolID::TokenFloatId);
+            = static_cast<int>(log_surgeon::SymbolId::TokenFloat);
     lexer.m_symbol_id[log_surgeon::cTokenFirstTimestamp]
-            = static_cast<int>(log_surgeon::SymbolID::TokenFirstTimestampId);
+            = static_cast<int>(log_surgeon::SymbolId::TokenFirstTimestamp);
     lexer.m_symbol_id[log_surgeon::cTokenNewlineTimestamp]
-            = static_cast<int>(log_surgeon::SymbolID::TokenNewlineTimestampId);
+            = static_cast<int>(log_surgeon::SymbolId::TokenNewlineTimestamp);
     // cTokenNewline is not added in schema_vars and can be explicitly added as '\n' to catch the
     // end of non-timestamped log messages
     lexer.m_symbol_id[log_surgeon::cTokenNewline]
-            = static_cast<int>(log_surgeon::SymbolID::TokenNewlineId);
+            = static_cast<int>(log_surgeon::SymbolId::TokenNewline);
 
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenEndID)] = log_surgeon::cTokenEnd;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID)]
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenEnd)] = log_surgeon::cTokenEnd;
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenUncaughtString)]
             = log_surgeon::cTokenUncaughtString;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenIntId)] = log_surgeon::cTokenInt;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenFloatId)]
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenInt)] = log_surgeon::cTokenInt;
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenFloat)]
             = log_surgeon::cTokenFloat;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenFirstTimestampId)]
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenFirstTimestamp)]
             = log_surgeon::cTokenFirstTimestamp;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenNewlineTimestampId)]
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenNewlineTimestamp)]
            = log_surgeon::cTokenNewlineTimestamp;
-    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolID::TokenNewlineId)]
+    lexer.m_id_symbol[static_cast<int>(log_surgeon::SymbolId::TokenNewline)]
            = log_surgeon::cTokenNewline;
 
     lexer.add_rule(
             lexer.m_symbol_id["newLine"],
             std::move(
                     std::make_unique<log_surgeon::finite_automata::RegexASTLiteral<
-                            log_surgeon::finite_automata::RegexNFAByteState>>(
+                            log_surgeon::finite_automata::ByteNfaState>>(
                             log_surgeon::finite_automata::RegexASTLiteral<
-                                    log_surgeon::finite_automata::RegexNFAByteState>('\n')
+                                    log_surgeon::finite_automata::ByteNfaState>('\n')
                     )
             )
     );
 
     for (auto const& delimiters_ast : schema_ast->m_delimiters) {
         auto* delimiters_ptr = dynamic_cast<log_surgeon::DelimiterStringAST*>(delimiters_ast.get());
         if (delimiters_ptr != nullptr) {
-            lexer.add_delimiters(delimiters_ptr->m_delimiters);
+            lexer.set_delimiters(delimiters_ptr->m_delimiters);
         }
     }
     vector<uint32_t> delimiters;
@@ -203,7 +203,7 @@
             // transform '.' from any-character into any non-delimiter character
             rule->m_regex_ptr->remove_delimiters_from_wildcard(delimiters);
 
-            bool is_possible_input[log_surgeon::cUnicodeMax] = {false};
+            std::array<bool, log_surgeon::cSizeOfUnicode> is_possible_input{};
             rule->m_regex_ptr->set_possible_inputs_to_true(is_possible_input);
             bool contains_delimiter = false;
             uint32_t delimiter_name;
@@ -243,7 +243,8 @@
         lexer.add_rule(lexer.m_symbol_id[rule->m_name], std::move(rule->m_regex_ptr));
     }
     if (reverse) {
-        lexer.generate_reverse();
+        // TODO: This isn't used anymore for the new search, supporting it here is a waste of time
+        // lexer.generate_reverse();
     } else {
         lexer.generate();
     }
5 changes: 2 additions & 3 deletions components/core/src/clp/clp/FileCompressor.cpp
@@ -119,7 +119,7 @@ bool FileCompressor::compress_file(
     std::string file_name = std::filesystem::canonical(file_to_compress.get_path()).string();
 
     PROFILER_SPDLOG_INFO("Start parsing {}", file_name)
-    Profiler::start_continuous_measurement<Profiler::ContinuousMeasurementIndex::ParseLogFile>();
+    Profiler::start_fragmented_measurement<Profiler::FragmentedMeasurementIndex::ParseLogFile>();
 
     m_file_reader.open(file_to_compress.get_path());
 
@@ -186,8 +186,7 @@ bool FileCompressor::compress_file(
 
     m_file_reader.close();
 
-    Profiler::stop_continuous_measurement<Profiler::ContinuousMeasurementIndex::ParseLogFile>();
-    LOG_CONTINUOUS_MEASUREMENT(Profiler::ContinuousMeasurementIndex::ParseLogFile)
+    Profiler::stop_fragmented_measurement<Profiler::FragmentedMeasurementIndex::ParseLogFile>();
     PROFILER_SPDLOG_INFO("Done parsing {}", file_name)
 
     return succeeded;
9 changes: 9 additions & 0 deletions components/core/src/clp/clp/run.cpp
@@ -152,6 +152,15 @@ int run(int argc, char const* argv[]) {
 
     Profiler::stop_continuous_measurement<Profiler::ContinuousMeasurementIndex::Compression>();
     LOG_CONTINUOUS_MEASUREMENT(Profiler::ContinuousMeasurementIndex::Compression)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::Watch)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::Parse)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::GetNext)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::ProcessState)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::ScanWhileLoop)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::TokenCreation)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::ParseLogFile)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::ProcessCharWatch)
+    LOG_FRAGMENTED_MEASUREMENT(Profiler::FragmentedMeasurementIndex::ProcessChar)
 
     return 0;
 }
@@ -17,11 +17,15 @@ set(
         ../LogTypeDictionaryReader.hpp
         ../ParsedMessage.cpp
         ../ParsedMessage.hpp
+        ../Profiler.hpp
+        ../Profiler.cpp
         ../ReaderInterface.cpp
         ../ReaderInterface.hpp
         ../ReadOnlyMemoryMappedFile.cpp
         ../ReadOnlyMemoryMappedFile.hpp
         ../spdlog_with_specializations.hpp
+        ../Stopwatch.cpp
+        ../Stopwatch.hpp
         ../streaming_compression/Decompressor.hpp
         ../streaming_compression/passthrough/Decompressor.cpp
         ../streaming_compression/passthrough/Decompressor.hpp
12 changes: 6 additions & 6 deletions components/core/src/clp/streaming_archive/writer/Archive.cpp
@@ -364,8 +364,8 @@ void Archive::write_msg_using_schema(LogEventView const& log_view) {
         log_surgeon::Token& token = log_output_buffer->get_mutable_token(i);
         int token_type = token.m_type_ids_ptr->at(0);
         if (log_output_buffer->has_delimiters() && (timestamp_pattern != nullptr || i > 1)
-            && token_type != static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID)
-            && token_type != static_cast<int>(log_surgeon::SymbolID::TokenNewlineId))
+            && token_type != static_cast<int>(log_surgeon::SymbolId::TokenUncaughtString)
+            && token_type != static_cast<int>(log_surgeon::SymbolId::TokenNewline))
         {
             m_logtype_dict_entry.add_constant(token.get_delimiter(), 0, 1);
             if (token.m_start_pos == token.m_buffer_size - 1) {
@@ -375,12 +375,12 @@
             }
         }
         switch (token_type) {
-            case static_cast<int>(log_surgeon::SymbolID::TokenNewlineId):
-            case static_cast<int>(log_surgeon::SymbolID::TokenUncaughtStringID): {
+            case static_cast<int>(log_surgeon::SymbolId::TokenNewline):
+            case static_cast<int>(log_surgeon::SymbolId::TokenUncaughtString): {
                 m_logtype_dict_entry.add_constant(token.to_string(), 0, token.get_length());
                 break;
             }
-            case static_cast<int>(log_surgeon::SymbolID::TokenIntId): {
+            case static_cast<int>(log_surgeon::SymbolId::TokenInt): {
                 encoded_variable_t encoded_var;
                 if (!EncodedVariableInterpreter::convert_string_to_representable_integer_var(
                         token.to_string(),
@@ -397,7 +397,7 @@
                 m_encoded_vars.push_back(encoded_var);
                 break;
             }
-            case static_cast<int>(log_surgeon::SymbolID::TokenFloatId): {
+            case static_cast<int>(log_surgeon::SymbolId::TokenFloat): {
                 encoded_variable_t encoded_var;
                 if (!EncodedVariableInterpreter::convert_string_to_representable_float_var(
                         token.to_string(),