Add Logstash 9.2 support #3922
Conversation
This test validates that SST progress variables are properly exposed in SHOW STATUS during State Snapshot Transfer when a node joins a cluster. The test focuses on verifying that all SST progress variables exist:
- cluster_<name>_node_state (donor/joiner/synced)
- cluster_<name>_sst_total (0-100, or dash when complete)
- cluster_<name>_sst_stage (stage name or dash)
- cluster_<name>_sst_stage_total (0-100 or dash)
- cluster_<name>_sst_tables (count or dash)
- cluster_<name>_sst_table (name or dash)

The test creates a 2-node cluster with a simple table, triggers SST via JOIN CLUSTER, and verifies that all progress variables are present in SHOW STATUS output regardless of SST completion timing. Also adds watchdog = 0 to the base searchd config with an idempotent check.
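A self-contained sketch of the value check such a test performs. The variable names come from the list above; the sample SHOW STATUS lines and the helper name are illustrative, not actual searchd output:

```shell
#!/usr/bin/env bash
# Hypothetical check: each SST progress value must be a dash or an
# integer in 0-100, per the variable descriptions above.
valid_sst_value() {
  local v="$1"
  [ "$v" = "-" ] && return 0
  [[ "$v" =~ ^[0-9]+$ ]] && [ "$v" -le 100 ]
}

# Sample lines mimicking `SHOW STATUS LIKE 'cluster_%sst%'` output (tab-separated)
sample_status=$(printf 'cluster_test_sst_total\t42\ncluster_test_sst_stage_total\t-')
while IFS=$'\t' read -r name value; do
  if valid_sst_value "$value"; then
    echo "OK: $name=$value"
  else
    echo "BAD: $name=$value"
  fi
done <<< "$sample_status"
```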
- Fix output regex: now matches 'watchdog already exists' or 'watchdog added'
- Add cleanup step to remove watchdog from the base config after the test
- Prevents affecting other tests that run after this one
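The two regex alternatives and the cleanup step can be sketched like this, using a temp copy in place of the real base config (the path and file contents are illustrative):

```shell
#!/usr/bin/env bash
# Minimal sketch of the idempotent "watchdog = 0" step and its cleanup.
# A mktemp file stands in for the real base searchd config.
conf=$(mktemp)
printf 'searchd {\n    listen = 9306\n}\n' > "$conf"

add_watchdog() {
  if grep -q 'watchdog' "$conf"; then
    echo 'watchdog already exists'
  else
    sed -i '/searchd {/a\    watchdog = 0' "$conf"
    echo 'watchdog added'
  fi
}

first=$(add_watchdog)    # first run inserts the setting
second=$(add_watchdog)   # second run is a no-op
echo "$first"
echo "$second"
# Cleanup: remove the line so tests running afterwards see the original config
sed -i '/watchdog = 0/d' "$conf"
remaining=$(grep -c 'watchdog' "$conf" || true)
echo "watchdog lines after cleanup: $remaining"
rm -f "$conf"
```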
- Add 9.2 to TESTED_VERSIONS list
- Update LATEST_TESTED_VERSION to 9.2
- Update expected test output for the new version check

Logstash 9.2 will use the existing >= 9.1 configuration, which includes 'mode => read' for file input.

Closes #3866
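For reference, a >= 9.1 file-input pipeline with 'mode => read' looks roughly like this. The log path, index name, and the other options are illustrative assumptions, not the test's actual config:

```
input {
  file {
    path => "/var/log/dpkg.log"
    mode => "read"
    exit_after_read => true
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9308"]
    index => "dpkg_log"
  }
}
```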
…ticoresoftware/manticoresearch into test/issue-3866-logstash-9.2
- Fixed superuser handling for Logstash 9.2+: use logstash.yml with the allow_superuser setting instead of patching runner.rb
- Updated version detection logic to apply the correct configuration per version:
  - Versions 9.0-9.1: use the runner.rb patch with ALLOW_SUPERUSER=1
  - Versions 9.2+: use logstash.yml with allow_superuser: true and the --path.settings flag
- Configuration logic remains unchanged: version 9.0+ uses host metadata, 9.1 uses simple add_field
- Added a test section for Logstash 9.2.0
- Updated the version check file with 9.2 as LATEST_TESTED_VERSION
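The per-version branching described above can be sketched as a small helper. The function name and echoed strings are hypothetical; only the version thresholds come from the notes above:

```shell
#!/usr/bin/env bash
# Sketch: pick the superuser workaround based on the Logstash version.
superuser_method() {
  local version="$1" major minor
  major=${version%%.*}
  minor=${version#*.}; minor=${minor%%.*}
  if [ "$major" -gt 9 ] || { [ "$major" -eq 9 ] && [ "$minor" -ge 2 ]; }; then
    # 9.2+: settings file plus --path.settings
    echo "logstash.yml: allow_superuser: true"
  elif [ "$major" -eq 9 ]; then
    # 9.0-9.1: patch runner.rb
    echo "runner.rb patch: ALLOW_SUPERUSER=1"
  else
    # 8.x and earlier run without the workaround
    echo "no workaround needed"
  fi
}

for v in 8.19 9.0 9.1 9.2; do
  echo "$v -> $(superuser_method "$v")"
done
```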
test_support_filebeat_versions: ❌ CLT test failed: test/clt-tests/integrations/filebeat/test-integrations-support-filebeat-versions.rec

––– input –––
rm -f /var/log/manticore/searchd.log; stdbuf -oL searchd $SEARCHD_FLAGS > /dev/null; if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore/searchd.log;fi
––– output –––
OK
––– input –––
set -b
––– output –––
OK
––– input –––
export PATH=/usr/bin:/usr/local/bin:/usr/sbin:/sbin:/bin
––– output –––
OK
––– input –––
apt-get update > /dev/null 2>&1 && apt-get install -y curl jq > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
timeout 420 bash -c 'echo "[]" > /tmp/filebeat_tags.json; page=1; attempts=0; max_attempts=3; while [ $attempts -lt $max_attempts ]; do attempts=$((attempts+1)); if curl -s --fail --max-time 10 "https://hub.docker.com/v2/repositories/elastic/filebeat/tags/?page_size=1000&page=$page" | tee /tmp/page.json | jq -e ".next" > /dev/null; then jq -r ".results[].name" /tmp/page.json >> /tmp/filebeat_tags.json; page=$((page+1)); attempts=0; else break; fi; done; jq -r ".results[].name" /tmp/page.json >> /tmp/filebeat_tags.json; VERSIONS=$(cat /tmp/filebeat_tags.json | grep -E "^([7-9]|[1-9][0-9]+).[0-9]+.[0-9]+$" | grep -E "^(7.(1[7-9]|[2-9][0-9])|[8-9].[0-9]+|9.[0-9]+|[1-9][0-9]+.[0-9]+).[0-9]+$" | sed -E "s/^([0-9]+.[0-9]+).[0-9]+$/\1/" | grep -v "rc|beta|alpha" | sort -V | uniq); echo "$VERSIONS"; mkdir -p /tmp/filebeat_cache; echo "Preparation done"; for version in $VERSIONS; do archive="/tmp/filebeat_cache/filebeat-${version}.0-linux-x86_64.tar.gz"; echo ">>> Checking Filebeat $version ..."; if [ -f "$archive" ] && gzip -t "$archive" >/dev/null 2>&1; then echo "✓ Archive for $version is OK"; else echo ">>> Downloading Filebeat $version ..."; wget -q --timeout=30 "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${version}.0-linux-x86_64.tar.gz" -O "$archive" && { if gzip -t "$archive" >/dev/null 2>&1; then echo "✓ Archive for $version is OK"; else echo "✗ Archive for $version is corrupted"; rm -f "$archive"; fi; }; fi; done'
––– output –––
OK
––– input –––
set +H && mkdir -p /tmp/filebeat_cache && echo "Preparation done"
––– output –––
OK
––– input –––
cat << 'EOF' > /tmp/filebeat-single-test.sh
#!/usr/bin/env bash
set -euo pipefail
if [ $# -ne 1 ]; then
echo "✗ Usage: $0 <filebeat_version>" >&2
return 1 2>/dev/null || exit 1
fi
version="$1"
full_version="${version}.0"
echo ">>> Testing Filebeat version: $version"
# Prepare test log
echo -e "2023-05-31 10:42:55 trigproc systemd:amd64 245.4-4ubuntu3.21 <none>\n2023-05-31 10:42:55 trigproc libc-bin:amd64 2.31-0ubuntu9.9 <none>\n2023-05-31 10:42:55 status triggers-awaited ca-certificates-java:all 20190405ubuntu1.1\n2023-05-31 10:42:55 status installed libc-bin:amd64 2.31-0ubuntu9.9\n2023-05-31 10:42:55 status half-configured libc-bin:amd64 2.31-0ubuntu9.9" > /var/log/dpkg.log
log_lines=$(wc -l < /var/log/dpkg.log)
if [ "$log_lines" -eq 5 ]; then
echo "✓ Log file has 5 lines"
else
echo "✗ Error: Expected 5 lines, got $log_lines" >&2
return 1 2>/dev/null || exit 1
fi
# Check Manticore availability
if ! curl -s localhost:9308/cli_json -d 'SHOW TABLES' | jq -e '.[0].data' > /dev/null; then
echo "✗ Error: Manticore Search unavailable" >&2
return 1 2>/dev/null || exit 1
fi
echo "✓ Manticore Search available"
# Create table
mysql -h0 -P9306 -e "
DROP TABLE IF EXISTS dpkg_log;
CREATE TABLE dpkg_log (
id BIGINT,
message TEXT INDEXED STORED,
host JSON,
agent JSON,
input JSON,
log JSON,
ecs JSON,
\`@timestamp\` TEXT INDEXED STORED
);"
# Install Filebeat
mkdir -p /usr/share/filebeat /tmp/fb-data-${version}
tar -xzf "/tmp/filebeat_cache/filebeat-${full_version}-linux-x86_64.tar.gz" -C /usr/share/filebeat
FB_DIR="/usr/share/filebeat/filebeat-${full_version}-linux-x86_64"
# Clean previous registry data
rm -rf /tmp/fb-data-${version}/*
skip_standard_test=0
# For all 9.x versions, use filestream with fingerprint disabled
if [[ "$version" =~ ^9\. ]]; then
echo ">>> Testing Filebeat $version with filestream input and fingerprint disabled..."
cat > "${FB_DIR}/filebeat.yml" <<YML
filebeat.inputs:
- type: filestream
id: dpkg-filestream-input
enabled: true
paths: ["/var/log/dpkg.log"]
prospector.scanner.check_interval: 1s
prospector.scanner.fingerprint.enabled: false
output.elasticsearch:
hosts: ["http://localhost:9308"]
index: "dpkg_log"
compression_level: 0
allow_older_versions: true
path.data: /tmp/fb-data-${version}
setup.ilm.enabled: false
setup.template.enabled: false
setup.template.name: "dpkg_log"
setup.template.pattern: "dpkg_log"
YML
echo ">>> Starting Filebeat..."
"${FB_DIR}/filebeat" run -c "${FB_DIR}/filebeat.yml" > /tmp/fb-log-${version}.txt 2>&1 &
FB_PID=$!
echo ">>> Waiting for Filebeat to publish events..."
for i in {1..30}; do
row_count=$(mysql -N -s -h0 -P9306 -e "SELECT COUNT(*) FROM dpkg_log" 2>/dev/null | grep -o '[0-9]\+' || echo "0")
if [[ "$row_count" =~ ^[0-9]+$ ]] && [ "$row_count" -ge 5 ]; then
echo "✓ Filebeat $version processed logs"
kill $FB_PID 2>/dev/null || true
wait $FB_PID 2>/dev/null || true
if [ "$row_count" -eq 5 ]; then
echo "✓ Row count check for $version: $row_count rows"
structure=$(curl -s localhost:9308/cli_json -d 'DESCRIBE dpkg_log' | jq -c '[.[0].data[]] | sort_by(.Field)')
has_timestamp=$(echo "$structure" | grep -q "\"Field\":\"@timestamp\"" && echo "1" || echo "0")
has_message=$(echo "$structure" | grep -q "\"Field\":\"message\"" && echo "1" || echo "0")
if [ "$has_timestamp" = "1" ] && [ "$has_message" = "1" ]; then
echo "✓ Structure check for $version: passed"
echo "✓ Filebeat version $version tested successfully"
skip_standard_test=1
fi
fi
break
fi
sleep 1
done
if [ "$skip_standard_test" -eq 0 ]; then
kill $FB_PID 2>/dev/null || true
wait $FB_PID 2>/dev/null || true
mysql -h0 -P9306 -e "TRUNCATE TABLE dpkg_log" 2>/dev/null || true
echo "✗ Error: Filebeat $version failed to process logs" >&2
return 1 2>/dev/null || exit 1
fi
fi
# For all other versions (7.17, 8.x), use the standard approach
if [ "$skip_standard_test" -eq 0 ]; then
# Versions 7.17.0, 8.0.0, 8.1.0 crash on glibc 2.35+ due to missing rseq syscall support
# Fixed in 7.17.2+ and 8.2.0+
# See: https://github.com/elastic/beats/issues/30576
# See: https://github.com/elastic/beats/pull/30620
if [[ "$version" =~ ^(7\.17|8\.0|8\.1)$ ]]; then
echo ">>> Using special configuration for Filebeat $version (glibc 2.35+ compatibility fix)..."
cat > "${FB_DIR}/filebeat.yml" <<YML
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/dpkg.log
close_eof: true
scan_frequency: 1s
output.elasticsearch:
hosts: ["http://localhost:9308"]
index: "dpkg_log"
compression_level: 0
$(if [[ "$version" =~ ^8\.[1-9]$ ]]; then echo "allow_older_versions: true"; fi)
# Fix for glibc 2.35+ rseq syscall issue
seccomp:
default_action: allow
syscalls:
- action: allow
names:
- rseq
path.data: /tmp/fb-data-${version}
setup.ilm.enabled: false
setup.template.enabled: false
setup.template.name: "dpkg_log"
setup.template.pattern: "dpkg_log"
YML
elif [[ "$version" =~ ^8\.[1-9]$ || "$version" =~ ^8\.[1-9][0-9]+$ ]]; then
# For remaining 8.x versions (8.2 and higher; 8.0 and 8.1 are handled above), add the allow_older_versions option
cat > "${FB_DIR}/filebeat.yml" <<YML
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/dpkg.log
close_eof: true
scan_frequency: 1s
output.elasticsearch:
hosts: ["http://localhost:9308"]
index: "dpkg_log"
compression_level: 0
allow_older_versions: true
path.data: /tmp/fb-data-${version}
setup.ilm.enabled: false
setup.template.enabled: false
setup.template.name: "dpkg_log"
setup.template.pattern: "dpkg_log"
YML
else
# For versions before 8.1
cat > "${FB_DIR}/filebeat.yml" <<YML
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/dpkg.log
close_eof: true
scan_frequency: 1s
output.elasticsearch:
hosts: ["http://localhost:9308"]
index: "dpkg_log"
compression_level: 0
path.data: /tmp/fb-data-${version}
setup.ilm.enabled: false
setup.template.enabled: false
setup.template.name: "dpkg_log"
setup.template.pattern: "dpkg_log"
YML
fi
echo ">>> Starting Filebeat..."
"${FB_DIR}/filebeat" run -c "${FB_DIR}/filebeat.yml" > /tmp/fb-log-${version}.txt 2>&1 &
FB_PID=$!
echo ">>> Waiting for Filebeat to publish events..."
for i in {1..30}; do
row_count=$(mysql -N -s -h0 -P9306 -e "SELECT COUNT(*) FROM dpkg_log" 2>/dev/null | grep -o '[0-9]\+' || echo "0")
if [[ "$row_count" =~ ^[0-9]+$ ]] && [ "$row_count" -ge 5 ]; then
echo "✓ Filebeat $version processed logs"
kill $FB_PID 2>/dev/null || true
wait $FB_PID 2>/dev/null || true
break
fi
sleep 1
done
# Final verification
row_count=$(mysql -N -s -h0 -P9306 -e "SELECT COUNT(*) FROM dpkg_log" 2>/dev/null | grep -o '[0-9]\+' || echo "0")
if [ "$row_count" -eq 5 ]; then
echo "✓ Row count check for $version: $row_count rows"
structure=$(curl -s localhost:9308/cli_json -d 'DESCRIBE dpkg_log' | jq -c '[.[0].data[]] | sort_by(.Field)')
has_timestamp=$(echo "$structure" | grep -q "\"Field\":\"@timestamp\"" && echo "1" || echo "0")
has_message=$(echo "$structure" | grep -q "\"Field\":\"message\"" && echo "1" || echo "0")
if [ "$has_timestamp" = "1" ] && [ "$has_message" = "1" ]; then
echo "✓ Structure check for $version: passed"
echo "✓ Filebeat version $version tested successfully"
else
echo "✗ Structure check for $version: failed" >&2
return 1 2>/dev/null || exit 1
fi
else
echo "✗ Row count check for $version: expected 5, got $row_count" >&2
return 1 2>/dev/null || exit 1
fi
fi
EOF
––– output –––
OK
––– input –––
chmod +x /tmp/filebeat-single-test.sh
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 7.17
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.0
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.1
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.2
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.3
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.4
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.5
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.6
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.7
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.8
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.9
––– output –––
>>> Testing Filebeat version: 8.9
✓ Log file has 5 lines
✓ Manticore Search available
- >>> Starting Filebeat...
+ ERROR 2013 (HY000) at line 3: Lost connection to MySQL server during query
- >>> Waiting for Filebeat to publish events...
- ✓ Filebeat 8.9 processed logs
- ✓ Row count check for 8.9: 5 rows
- ✓ Structure check for 8.9: passed
- ✓ Filebeat version 8.9 tested successfully
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.10
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.11
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.12
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.13
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.14
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.15
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.16
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.17
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.18
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 8.19
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 9.0
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 9.1
––– output –––
OK
––– input –––
timeout 60 bash /tmp/filebeat-single-test.sh 9.2
––– output –––
OK
––– input –––
rm -rf /tmp/fb-data-* /tmp/fb-log-*.txt /tmp/page.json /tmp/filebeat_tags.json
––– output –––
OK
Updated documentation to reflect support for Logstash version 9.2:
- English documentation: manual/english/Integration/Logstash.md
- Russian documentation: manual/russian/Integration/Logstash.md
- Chinese documentation: manual/chinese/Integration/Logstash.md

Changed supported versions from '7.6+' to '7.6-9.2' to clarify the tested and supported version range. Related to PR #3922, which adds Logstash 9.2 support to integration tests.
clt: ❌ CLT test failed: test/clt-tests/mcl/auto-embeddings-openai-remote.rec

––– input –––
rm -f /var/log/manticore/searchd.log; stdbuf -oL searchd --stopwait > /dev/null; stdbuf -oL searchd ${SEARCHD_ARGS:-} > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 'accepting connections' <(tail -n 1000 -f /var/log/manticore/searchd.log); then echo 'Accepting connections!'; else echo 'Timeout or failed!'; fi
––– output –––
OK
––– input –––
cosine_similarity() {
local file1="$1" file2="$2"
awk '
NR==FNR { a[NR]=$1; suma2+=$1*$1; next }
{
dot += a[FNR]*$1
sumb2 += $1*$1
}
END {
print dot / (sqrt(suma2) * sqrt(sumb2))
}' "$file1" "$file2"
}
––– output –––
OK
––– input –––
export -f cosine_similarity
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "CREATE TABLE test_invalid_model (title TEXT, embedding FLOAT_VECTOR KNN_TYPE='hnsw' HNSW_SIMILARITY='l2' MODEL_NAME = 'openai/invalid-model-name-12345' FROM = 'title') " 2>&1
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "CREATE TABLE test_valid_model_no_api_key (title TEXT, embedding FLOAT_VECTOR KNN_TYPE='hnsw' HNSW_SIMILARITY='l2' MODEL_NAME = 'openai/text-embedding-ada-002' FROM = 'title') " 2>&1
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "CREATE TABLE test_openai_remote (title TEXT, content TEXT, description TEXT, embedding FLOAT_VECTOR KNN_TYPE='hnsw' HNSW_SIMILARITY='l2' MODEL_NAME = 'openai/text-embedding-ada-002' FROM = 'title, content' API_KEY='${OPENAI_API_KEY}') "; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "SHOW CREATE TABLE test_openai_remote"
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "INSERT INTO test_openai_remote (id, title, content, description) VALUES(1, 'machine learning algorithms', 'deep neural networks and artificial intelligence', 'advanced AI research')"; echo $?
––– output –––
- 0
+ ERROR 1064 (42000) at line 1: Failed to send request to remote model
+ 1
––– input –––
mysql -h0 -P9306 -e "SELECT COUNT(*) as record_count FROM test_openai_remote WHERE id=1"
––– output –––
+--------------+
| record_count |
+--------------+
- | 1 |
+ | 0 |
+--------------+
––– input –––
mysql -h0 -P9306 -e "INSERT INTO test_openai_remote (id, title, content, description) VALUES(2, 'machine learning algorithms', 'deep neural networks and artificial intelligence', 'different description')"
mysql -h0 -P9306 -e "SELECT embedding FROM test_openai_remote WHERE id=1" | \
grep -v embedding | \
sed 's/[0-9]\+\(\.[0-9]\+\)\?/\n&\n/g' | \
grep -E '^[0-9]+(\.[0-9]+)?$' | \
awk '{printf "%.5f\n", $1}' > /tmp/vector1.txt
mysql -h0 -P9306 -e "SELECT embedding FROM test_openai_remote WHERE id=2" | \
grep -v embedding | \
sed 's/[0-9]\+\(\.[0-9]\+\)\?/\n&\n/g' | \
grep -E '^[0-9]+(\.[0-9]+)?$' | \
awk '{printf "%.5f\n", $1}' > /tmp/vector2.txt
SIMILARITY=$(cosine_similarity /tmp/vector1.txt /tmp/vector2.txt)
echo "Cosine similarity: $SIMILARITY"
RESULT=$(awk -v sim="$SIMILARITY" 'BEGIN {
if (sim > 0.99)
print "SUCCESS: Same FROM fields produce similar vectors (similarity: " sim ")"
else
print "FAIL: Different vectors (FROM does not include description field and should not change generated vector value) (similarity: " sim ")"
}')
echo "$RESULT"
rm -f /tmp/vector1.txt /tmp/vector2.txt
––– output –––
- Cosine similarity: #!/(1|0\.[0-9]+)/!#
+ Cosine similarity: -nan
- SUCCESS: Same FROM fields produce similar vectors (similarity: #!/(1|0\.[0-9]+)/!#)
+ FAIL: Different vectors (FROM does not include description field and should not change generated vector value) (similarity: -nan)
––– input –––
mysql -h0 -P9306 -e "CREATE TABLE test_openai_title_only (title TEXT, content TEXT, embedding FLOAT_VECTOR KNN_TYPE='hnsw' HNSW_SIMILARITY='l2' MODEL_NAME = 'openai/text-embedding-ada-002' FROM = 'title' API_KEY='${OPENAI_API_KEY}') "; mysql -h0 -P9306 -e "INSERT INTO test_openai_title_only (id, title, content) VALUES(1, 'machine learning algorithms', 'completely different content here')"; MD5_MULTI=$(mysql -h0 -P9306 -e "SELECT embedding FROM test_openai_remote WHERE id=1" | grep -v embedding | md5sum | awk '{print $1}'); MD5_SINGLE=$(mysql -h0 -P9306 -e "SELECT embedding FROM test_openai_title_only WHERE id=1" | grep -v embedding | md5sum | awk '{print $1}'); echo "multi_field_md5: $MD5_MULTI"; echo "single_field_md5: $MD5_SINGLE"; if [ "$MD5_MULTI" != "$MD5_SINGLE" ]; then echo "SUCCESS: Different FROM specifications produce different vectors"; else echo "INFO: FROM field comparison result"; fi
––– output –––
OK
––– input –––
mysql -h0 -P9306 -e "CREATE TABLE test_openai_invalid_field (title TEXT, embedding FLOAT_VECTOR KNN_TYPE='hnsw' HNSW_SIMILARITY='l2' MODEL_NAME = 'openai/text-embedding-ada-002' FROM = 'nonexistent_field') " 2>&1
––– output –––
OK
––– input –––
if mysql -h0 -P9306 -e "SHOW TABLES LIKE 'test_openai_no_from'" | grep -q test_openai_no_from; then mysql -h0 -P9306 -e "INSERT INTO test_openai_no_from (id, title, embedding) VALUES(1, 'test title', '(0.1, 0.2, 0.3, 0.4, 0.5)')"; echo "insert_result: $?"; else echo "insert_result: skipped (table not created)"; fi
––– output –––
OK
––– input –––
if mysql -h0 -P9306 -e "SHOW TABLES LIKE 'test_openai_no_from'" | grep -q test_openai_no_from; then mysql -h0 -P9306 -e "SHOW CREATE TABLE test_openai_no_from"; else echo "table_structure: skipped (table not created)"; fi
––– output –––
OK
––– input –––
if [ -n "$OPENAI_API_KEY" ] && [ "$OPENAI_API_KEY" != "dummy_key_for_testing" ]; then echo "API key is available for testing"; else echo "API key not available - using dummy for error testing"; fi
––– output –––
OK
test_check_logstash_versions: ❌ CLT test failed: test/clt-tests/integrations/logstash/test-integrations-check-logstash-versions.rec

––– input –––
rm -f /var/log/manticore/searchd.log; stdbuf -oL searchd $SEARCHD_FLAGS > /dev/null; if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore/searchd.log;fi
––– output –––
OK
––– input –––
set -b
––– output –––
OK
––– input –––
export PATH=/usr/bin:/usr/local/bin:/usr/sbin:/sbin:/bin
––– output –––
OK
––– input –––
apt-get update > /dev/null 2>&1 && apt-get install -y curl jq > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
bash << 'SCRIPT'
# Static list of TESTED versions
TESTED_VERSIONS="7.17
8.0
8.1
8.2
8.3
8.4
8.5
8.6
8.7
8.8
8.9
8.10
8.11
8.12
8.13
8.14
8.15
8.16
8.17
8.18
8.19
9.0
9.1
9.2"
# Check for NEW versions (after latest tested)
LATEST_TESTED_VERSION="9.2"
NEW_VERSIONS=$(curl -s "https://hub.docker.com/v2/repositories/library/logstash/tags/?page_size=100" \
| jq -r ".results[].name" \
| grep "^[0-9][0-9]*\.[0-9][0-9]*\.[0-9][0-9]*$" \
| sed "s/\.[0-9]*$//" \
| awk '!/rc|beta|alpha/' \
| sort -t. -k1,1n -k2,2n | uniq \
| awk -v latest="$LATEST_TESTED_VERSION" '
function version_compare(v1, v2) {
split(v1, a, ".")
split(v2, b, ".")
if (a[1] != b[1]) return a[1] - b[1]
return a[2] - b[2]
}
version_compare($0, latest) > 0')
if [ -n "$NEW_VERSIONS" ]; then
echo "🆕 NEW Logstash versions detected:"
echo "$NEW_VERSIONS"
echo ""
echo "❌ Need to test new versions and update the following:"
echo ""
echo "📝 Files to update:"
echo " 1. test/clt-tests/integrations/logstash/test-integrations-check-logstash-versions.rec"
echo " - Update TESTED_VERSIONS list"
echo " - Update LATEST_TESTED_VERSION='$LATEST_TESTED_VERSION' -> new version"
echo ""
echo " 2. test/clt-tests/integrations/logstash/test-integrations-test-logstash-versions.rec"
echo " - Add new test section for new version"
echo ""
echo " 3. Update documentation with new version support:"
echo " - manual/english/Integration/Logstash.md"
echo " - Search for: 'versions up to X.X'"
echo " - Update version number to latest tested"
echo ""
echo " 4. Update website content (separate repository):"
echo " - https://github.com/manticoresoftware/site"
echo " - content/english/blog/integration-of-manticore-with-logstash-filebeat/index.md"
echo " - Search for: 'Logstash versions up to X.X'"
echo " - Update version number to latest tested"
echo ""
exit 1
else
echo "✅ No new versions found after $LATEST_TESTED_VERSION"
fi
# Use static list for testing
echo "Using tested versions:"
echo "$TESTED_VERSIONS"
SCRIPT
––– output –––
OK
––– input –––
bash << 'SCRIPT'
# Check documentation versions for Logstash
echo "Checking manual documentation for Logstash version references..."
# Latest tested version
LOGSTASH_LATEST="9.2"
# Check Logstash in manual documentation
LOGSTASH_MANUAL="manual/english/Integration/Logstash.md"
if [ -f "$LOGSTASH_MANUAL" ]; then
echo "✓ Checking $LOGSTASH_MANUAL"
if grep -q "versions up to ${LOGSTASH_LATEST}" "$LOGSTASH_MANUAL"; then
echo "✅ Logstash manual contains correct version: up to ${LOGSTASH_LATEST}"
else
echo "❌ Logstash manual does NOT contain expected version: up to ${LOGSTASH_LATEST}" >&2
echo "Found in manual:" >&2
grep -i "versions.*supported\|Currently.*versions" "$LOGSTASH_MANUAL" || echo "No version info found" >&2
exit 1
fi
else
echo "⚠ Logstash manual not found at $LOGSTASH_MANUAL (skipping check)"
fi
echo ""
echo "✅ Logstash manual documentation check passed!"
SCRIPT
––– output –––
Checking manual documentation for Logstash version references...
- ✓ Checking manual/english/Integration/Logstash.md
+ ⚠ Logstash manual not found at manual/english/Integration/Logstash.md (skipping check)
- ✅ Logstash manual contains correct version: up to 9.2
+ ✅ Logstash manual documentation check passed!
- ✅ Logstash manual documentation check passed!
Following the Grafana test pattern, this test now only:
- Checks for new Logstash versions
- Displays comprehensive update instructions when new versions are found

Manual documentation validation is not performed in this test because:
1. The test runs from a different directory, making file paths complex
2. The Grafana test follows the same pattern (instructions only)
3. Documentation updates are validated during PR review
This file was removed from the master branch because the translation workflow has changed. Removing it resolves the merge conflicts.
…Add automated documentation validation test
clt: ❌ CLT test failed: test/clt-tests/bugs/3847-conflict-handling-verification.rec

––– input –––
set -b +m
––– output –––
OK
––– input –––
grep -q 'threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf || sed -i '/searchd {/a\ threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf
––– output –––
OK
––– input –––
export INSTANCE=1
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=2
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=3
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
wait_for_sync() { sleep 0.5; for i in {1..10}; do c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ] && [ -n "$c1" ]; then return 0; fi; sleep 0.5; done; return 1; }
––– output –––
OK
––– input –––
mkdir /var/{lib,log}/manticore-{1,2,3}/test
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE CLUSTER test 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P3306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
sleep 2
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE TABLE tbl1 (id bigint, attr1 int)"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "ALTER CLUSTER test ADD tbl1"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (1,1), (3,2), (10,3), (11,4), (12,5), (13,6), (14,7), (15,8), (20,9)"; echo $?
––– output –––
OK
––– input –––
wait_for_sync && echo "Cluster synchronized" || echo "Sync timeout"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
manticore-load --host=127.0.0.1 --threads=4 --port=1306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" --together --host=127.0.0.1 --threads=4 --port=2306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" > /dev/null 2>&1 & LOAD_PID=$!; sleep 1; echo "Load started: $LOAD_PID"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 999)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (11, 111)" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 101)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 102)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (100, 1)" & sleep 0.05; mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (200, 2)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (13, 999)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id>13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 888)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 333)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
Conflicts: %{NUMBER}/30
- PASS
+ FAIL
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 303)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=111 WHERE id=15" 2>&1 & mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=222 WHERE id=15" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1001)" & sleep 0.05; mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 1010)" & sleep 0.05; mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (20, 1020)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=100 WHERE id=12" 2>&1 & mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=200 WHERE id=12" 2>&1 & mysql -h0 -P3306 -e "UPDATE test:tbl1 SET attr1=300 WHERE id=12" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 14)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (15, 1500)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
kill $LOAD_PID 2>/dev/null; wait $LOAD_PID 2>/dev/null; echo "Load stopped"
––– output –––
OK
––– input –––
wait_for_sync && echo "Final sync successful" || echo "Final sync failed"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ]; then echo "All nodes synchronized ($c1 rows)"; else echo "Discrepancies: node1=$c1, node2=$c2, node3=$c3"; fi
––– output –––
OK
––– input –––
for i in 1 2 3; do grep -q 'FATAL:' /var/log/manticore-${i}/searchd.log && echo "Node #$i has FATAL" || echo "Node #$i OK"; done
––– output –––
OK
❌ Failed test: test/clt-tests/integrations/kafka/test-integration-kafka-ms.rec
––– input –––
(dockerd > /var/log/dockerd.log 2>&1 &) > /dev/null
––– output –––
OK
––– input –––
if timeout 30 grep -qm1 'API listen on /var/run/docker.sock' <(tail -n 0 -f /var/log/dockerd.log); then echo 'Done'; else echo 'Timeout failed'; fi
––– output –––
OK
––– input –––
docker ps
––– output –––
OK
––– input –––
KAFKA_VERSION="4.1.0"
echo "Using Kafka version: $KAFKA_VERSION"
––– output –––
OK
––– input –––
docker network create app-network --driver bridge > /dev/null; echo $?
––– output –––
OK
––– input –––
docker run -it --network=app-network --platform linux/amd64 --name manticore -d ghcr.io/manticoresoftware/manticoresearch:test-kit-latest bash > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
docker exec manticore stdbuf -oL searchd --logdebugvv > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
docker run -it -d --network=app-network --name kafka --platform linux/amd64 \
-v ./test/clt-tests/integrations/kafka/import.sh:/import.sh \
-v ./test/clt-tests/integrations/kafka/dump.json:/tmp/dump.json \
-e KAFKA_NODE_ID=1 \
-e KAFKA_PROCESS_ROLES=broker,controller \
-e KAFKA_CONTROLLER_QUORUM_VOTERS=1@kafka:9093 \
-e KAFKA_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
-e KAFKA_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
-e KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS=0 \
-e CLUSTER_ID=MkU3OEVBNTcwNTJENDM2Qk \
apache/kafka:$KAFKA_VERSION > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
for i in $(seq 1 60); do
if docker exec kafka /opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092 >/dev/null 2>&1; then
echo "Kafka ready"
break
fi
sleep 3
done
––– output –––
OK
––– input –––
docker exec kafka /opt/kafka/bin/kafka-topics.sh --create --topic my-data --partitions 4 --bootstrap-server localhost:9092 2>&1 | grep -o 'Created topic my-data\.' | head -n 1
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, is_active bool) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka (id bigint, name text, short_name text, received_at text, size multi, is_active bool);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table TO destination_kafka AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, is_active FROM kafka;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW SOURCES;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW SOURCE kafka;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW MVS;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW MV view_table;"
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT * FROM destination_kafka ORDER BY id ASC;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "DROP SOURCE kafka;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "DROP table destination_kafka;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW TABLES;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_destination"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, location json, is_active bool) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_destination' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka (id bigint, name text, short_name text, received_at text, size multi, lat float, lon float, distance float, is_active bool);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_destination TO destination_kafka AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, location.lat as lat, location.lon as lon, GEODIST(lat, lon, 49.0, 3.0, {in=degrees, out=m}) AS distance, is_active FROM kafka"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT id, name, short_name, received_at, size, lat, lon, distance, is_active FROM destination_kafka ORDER BY id ASC;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_metadata"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_metadata (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, metadata json) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_metadata' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_metadata (id bigint, name text, short_name text, received_at text, size multi, views int, info text);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_metadata TO destination_kafka_metadata AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, metadata.views as views, metadata.info as info FROM kafka_metadata WHERE views > 1000;"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka_metadata" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_metadata;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT * FROM destination_kafka_metadata ORDER BY id ASC;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_tags"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_tags (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, tags json) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_tags' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_tags (id bigint, name text, short_name text, received_at text, size multi, tags json);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_tags TO destination_kafka_tags AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, tags FROM kafka_tags WHERE tags IN ('item1', 'item2');"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka_tags" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_tags;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT * FROM destination_kafka_tags ORDER BY id ASC;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_alter"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_alter (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, metadata json) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_alter' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_alter (id bigint, name text, short_name text, received_at text, size multi, views bigint);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_alter TO destination_kafka_alter AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, metadata.views as views FROM kafka_alter;"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m1 "REPLACE%20INTO%20destination_kafka_alter" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_table_alter suspended=1;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_alter;"
––– output –––
OK
––– input –––
sleep 10; docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_alter;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_table_alter suspended=0;"; echo $?
––– output –––
OK
––– input –––
timeout 120 bash -c 'while [[ $(docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_alter;" | grep -o "[0-9]*") -ne 57 ]]; do sleep 1; done && echo "Data processing completed."'
––– output –––
OK
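Steps like the one above poll a row count until it reaches the expected value, again with `timeout` as the safety net. A runnable sketch of that polling loop, using a line-counted temp file in place of `SELECT COUNT(*)` (assumption):

```shell
# Poll until a count reaches a target or the deadline passes.
# The background loop stands in for rows arriving in the table.
data=$(mktemp)
target=3
( for i in 1 2 3; do sleep 0.2; echo "row $i" >> "$data"; done ) &
loader=$!
status=failed
if timeout 10 bash -c "while [ \"\$(wc -l < '$data')\" -ne $target ]; do sleep 0.2; done"; then
  status=completed
fi
echo "Data processing $status."
wait "$loader"
rm -f "$data"
```

Polling for an exact count (here 3, in the test 57) is what lets the suspended/resumed materialized view checks distinguish "still catching up" from "done".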
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_alter;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_ts"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_ts (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, timestamp_unix timestamp) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_ts' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_ts (id bigint, name text, short_name text, received_at text, size multi, timestamp_field timestamp);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_ts TO destination_kafka_ts AS SELECT id, term as name, abbrev as short_name, UTC_TIMESTAMP() as received_at, GlossDef.size as size, timestamp_unix as timestamp_field FROM kafka_ts WHERE timestamp_field >= 1690761600;"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka_ts" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_ts;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT * FROM destination_kafka_ts ORDER BY id ASC;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_combined"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_combined (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, location json, metadata json, tags json, timestamp_unix timestamp) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_combined' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_combined (id bigint, name text, short_name text, received_at text, size multi, lat float, lon float, views int, info text, tags json, timestamp_combined timestamp);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_combined TO destination_kafka_combined AS SELECT id, term AS name, abbrev AS short_name, UTC_TIMESTAMP() AS received_at, GlossDef.size AS size, location.lat AS lat, location.lon AS lon, metadata.views AS views, metadata.info AS info, tags, timestamp_unix AS timestamp_combined FROM kafka_combined WHERE views > 1000 AND timestamp_combined >= 1690761600 AND tags IN ('item1', 'item2') AND lat > 50 AND lon > 5;"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_kafka_combined" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
- Data processing completed.
+ Data processing failed.
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_combined;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT * FROM destination_kafka_combined ORDER BY id ASC;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_stop"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_stop (id bigint, term text, abbrev '\$abbrev' text, GlossDef json, location json, metadata json, tags json, timestamp_unix timestamp) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_stop' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_kafka_stop (id bigint, name text, short_name text, received_at text, size multi, lat float, lon float, views int, info text, tags json, timestamp_combined timestamp);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_table_stop TO destination_kafka_stop AS SELECT id, term AS name, abbrev AS short_name, UTC_TIMESTAMP() AS received_at, GlossDef.size AS size, location.lat AS lat, location.lon AS lon, metadata.views AS views, metadata.info AS info, tags, timestamp_unix AS timestamp_combined FROM kafka_stop;"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m1 "REPLACE%20INTO%20destination_kafka_stop" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_stop;"
––– output –––
OK
––– input –––
sleep 1; docker exec manticore stdbuf -oL searchd --stopwait
––– output –––
OK
––– input –––
sleep 1; docker exec manticore stdbuf -oL searchd --logdebugvv > /dev/null 2>&1; echo $?
––– output –––
OK
––– input –––
timeout 120 bash -c 'while [[ $(docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_stop;" | grep -o "[0-9]*") -ne 57 ]]; do sleep 1; done && echo "Data processing completed."'
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_kafka_stop;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE invalid-source (id bigint, term text) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_invalid' num_consumers='1' batch=50;"
––– output –––
OK
––– input –––
CONSUMER_GROUP="manticore_drop_source"
––– input –––
docker exec kafka /opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group ${CONSUMER_GROUP} --reset-offsets --to-latest --topic ${TOPIC_NAME:-my-data} --execute > /dev/null; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_drop_source (id bigint, term text) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_drop_source' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE TABLE destination_drop_source (id bigint, name text);"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE MATERIALIZED VIEW view_drop_source TO destination_drop_source AS SELECT id, term as name FROM kafka_drop_source;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW MV view_drop_source\G;" | grep suspended
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "DROP SOURCE kafka_drop_source;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW MV view_drop_source\G;" | grep suspended
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "CREATE SOURCE kafka_drop_source (id bigint, term text) type='kafka' broker_list='kafka:9092' topic_list='my-data' consumer_group='manticore_drop_source' num_consumers='1' batch=50;"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SHOW MV view_drop_source\G;" | grep suspended
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_drop_source suspended=0"; echo $?
––– output –––
OK
––– input –––
timeout 60 bash -c 'docker exec manticore bash -c "tail -f /var/log/manticore/searchd.log" | grep -m2 "REPLACE%20INTO%20destination_drop_source" > /dev/null' & GREP_PID=$!; sleep 2; docker exec kafka bash /import.sh; wait $GREP_PID && echo "Data processing completed." || echo "Data processing failed."
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "SELECT COUNT(*) FROM destination_drop_source;"
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_drop_source suspended=1"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_drop_source suspended=1"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_drop_source suspended=0"; echo $?
––– output –––
OK
––– input –––
docker exec manticore mysql -h0 -P9306 -e "ALTER MATERIALIZED VIEW view_drop_source suspended=0"; echo $?
––– output –––
OK
❌ Failed test: test/clt-tests/bugs/3847-conflict-handling-verification.rec
––– input –––
set -b +m
––– output –––
OK
––– input –––
grep -q 'threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf || sed -i '/searchd {/a\ threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf
––– output –––
OK
––– input –––
export INSTANCE=1
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=2
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=3
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
wait_for_sync() { sleep 0.5; for i in {1..10}; do c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ] && [ -n "$c1" ]; then return 0; fi; sleep 0.5; done; return 1; }
––– output –––
OK
––– input –––
mkdir /var/{lib,log}/manticore-{1,2,3}/test
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE CLUSTER test 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P3306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
sleep 2
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE TABLE tbl1 (id bigint, attr1 int)"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "ALTER CLUSTER test ADD tbl1"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (1,1), (3,2), (10,3), (11,4), (12,5), (13,6), (14,7), (15,8), (20,9)"; echo $?
––– output –––
OK
––– input –––
wait_for_sync && echo "Cluster synchronized" || echo "Sync timeout"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
manticore-load --host=127.0.0.1 --threads=4 --port=1306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" --together --host=127.0.0.1 --threads=4 --port=2306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" > /dev/null 2>&1 & LOAD_PID=$!; sleep 1; echo "Load started: $LOAD_PID"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 999)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (11, 111)" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 101)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 102)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (100, 1)" & sleep 0.05; mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (200, 2)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (13, 999)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id>13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 888)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 333)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 303)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=111 WHERE id=15" 2>&1 & mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=222 WHERE id=15" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1001)" & sleep 0.05; mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 1010)" & sleep 0.05; mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (20, 1020)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=100 WHERE id=12" 2>&1 & mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=200 WHERE id=12" 2>&1 & mysql -h0 -P3306 -e "UPDATE test:tbl1 SET attr1=300 WHERE id=12" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 14)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..30}; do result=$( (mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (15, 1500)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/30"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
Conflicts: %{NUMBER}/30
- PASS
+ FAIL
––– input –––
kill $LOAD_PID 2>/dev/null; wait $LOAD_PID 2>/dev/null; echo "Load stopped"
––– output –––
OK
––– input –––
wait_for_sync && echo "Final sync successful" || echo "Final sync failed"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ]; then echo "All nodes synchronized ($c1 rows)"; else echo "Discrepancies: node1=$c1, node2=$c2, node3=$c3"; fi
––– output –––
OK
––– input –––
for i in 1 2 3; do grep -q 'FATAL:' /var/log/manticore-${i}/searchd.log && echo "Node #$i has FATAL" || echo "Node #$i OK"; done
––– output –––
OK

…ions (7.17, 8.0-9.1, 9.2+) with complete ready-to-use configuration examples. Each section explains version-specific requirements.

❌ CLT tests failed in: test/clt-tests/bugs/3847-conflict-handling-verification.rec

––– input –––
set -b +m
––– output –––
OK
––– input –––
grep -q 'threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf || sed -i '/searchd {/a\ threads = 4' test/clt-tests/base/searchd-with-flexible-ports.conf
––– output –––
OK
––– input –––
export INSTANCE=1
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=2
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=3
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
wait_for_sync() { sleep 0.5; for i in {1..10}; do c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" 2>/dev/null | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ] && [ -n "$c1" ]; then return 0; fi; sleep 0.5; done; return 1; }
––– output –––
OK
––– input –––
mkdir /var/{lib,log}/manticore-{1,2,3}/test
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE CLUSTER test 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P3306 -e "JOIN CLUSTER test at '127.0.0.1:1312' 'test' as path"; echo $?
––– output –––
OK
––– input –––
sleep 2
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "CREATE TABLE tbl1 (id bigint, attr1 int)"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "ALTER CLUSTER test ADD tbl1"; echo $?
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (1,1), (3,2), (10,3), (11,4), (12,5), (13,6), (14,7), (15,8), (20,9)"; echo $?
––– output –––
OK
––– input –––
wait_for_sync && echo "Cluster synchronized" || echo "Sync timeout"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
manticore-load --host=127.0.0.1 --threads=4 --port=1306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" --together --host=127.0.0.1 --threads=4 --port=2306 --total=1000000 --query="REPLACE INTO test:tbl1 (id, attr1) VALUES (%RAND, %RAND)" > /dev/null 2>&1 & LOAD_PID=$!; sleep 1; echo "Load started: $LOAD_PID"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 999)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (11, 111)" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 101)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" & sleep 0.05; mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 102)" & wait
––– output –––
OK
––– input –––
mysql -h0 -P2306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (100, 1)" & sleep 0.05; mysql -h0 -P1306 -e "INSERT INTO test:tbl1 (id, attr1) VALUES (200, 2)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id=13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (13, 999)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE id>13" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 888)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..100}; do mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 0.5; result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 333)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/100"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..100}; do mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 0.5; result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=1 WHERE attr1=2" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/100"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
Conflicts: %{NUMBER}/100
- PASS
+ FAIL
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 2)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=3" 2>&1 & mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (3, 303)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=1" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=111 WHERE id=15" 2>&1 & mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=222 WHERE id=15" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (1, 1001)" & sleep 0.05; mysql -h0 -P2306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (10, 1010)" & sleep 0.05; mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (20, 1020)" & wait
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P1306 -e "UPDATE test:tbl1 SET attr1=100 WHERE id=12" 2>&1 & mysql -h0 -P2306 -e "UPDATE test:tbl1 SET attr1=200 WHERE id=12" 2>&1 & mysql -h0 -P3306 -e "UPDATE test:tbl1 SET attr1=300 WHERE id=12" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 14)" > /dev/null 2>&1; sleep 2
––– output –––
OK
––– input –––
conflicts=0; for i in {1..50}; do result=$( (mysql -h0 -P1306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P2306 -e "DELETE FROM test:tbl1 WHERE id=14" 2>&1 & mysql -h0 -P3306 -e "REPLACE INTO test:tbl1 (id, attr1) VALUES (14, 1500)" 2>&1 & wait) ); if echo "$result" | grep -q "error at PostRollback"; then ((conflicts++)); fi; done; echo "Conflicts: $conflicts/50"; test $conflicts -ge 1 && echo "PASS" || echo "FAIL"
––– output –––
OK
––– input –––
kill $LOAD_PID 2>/dev/null; wait $LOAD_PID 2>/dev/null; echo "Load stopped"
––– output –––
OK
––– input –––
wait_for_sync && echo "Final sync successful" || echo "Final sync failed"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P2306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
mysql -h0 -P3306 -NB -e "SELECT COUNT(*) FROM test:tbl1\G"
––– output –––
OK
––– input –––
c1=$(mysql -h0 -P1306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c2=$(mysql -h0 -P2306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); c3=$(mysql -h0 -P3306 -sN -e "SELECT COUNT(*) FROM test:tbl1" | grep -oE '[0-9]+' | head -1); if [ "$c1" = "$c2" ] && [ "$c2" = "$c3" ]; then echo "All nodes synchronized ($c1 rows)"; else echo "Discrepancies: node1=$c1, node2=$c2, node3=$c3"; fi
––– output –––
OK
––– input –––
for i in 1 2 3; do grep -q 'FATAL:' /var/log/manticore-${i}/searchd.log && echo "Node #$i has FATAL" || echo "Node #$i OK"; done
––– output –––
OK

…r (apt-get) in Filebeat and Logstash version check tests to match the test-kit-latest Docker image OS (Ubuntu 24.04).

❌ CLT tests failed in: test/clt-tests/sharding/cluster/test-drop-sharded-clustering-table.rec

––– input –––
export INSTANCE=1
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export INSTANCE=2
––– output –––
OK
––– input –––
mkdir -p /var/{run,lib,log}/manticore-${INSTANCE}
––– output –––
OK
––– input –––
stdbuf -oL searchd -c test/clt-tests/base/searchd-with-flexible-ports.conf > /dev/null
––– output –––
OK
––– input –––
if timeout 10 grep -qm1 '\[BUDDY\] started' <(tail -n 1000 -f /var/log/manticore-${INSTANCE}/searchd.log); then echo 'Buddy started!'; else echo 'Timeout or failed!'; cat /var/log/manticore-${INSTANCE}/searchd.log; fi
––– output –––
OK
––– input –––
export CLUSTER_NAME=c
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "create cluster ${CLUSTER_NAME}"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "show status like 'cluster_${CLUSTER_NAME}_status'\G"
––– output –––
OK
––– input –––
for n in `seq 2 $INSTANCE`; do mysql -h0 -P${n}306 -e "join cluster ${CLUSTER_NAME} at '127.0.0.1:1312'"; done;
––– output –––
OK
––– input –––
mysql -h0 -P${INSTANCE}306 -e "show status like 'cluster_${CLUSTER_NAME}_status'\G"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "create table ${CLUSTER_NAME}:tbl1(id bigint) shards='3' rf='2';"; echo $?;
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:tbl1;"; echo $?;
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "create table ${CLUSTER_NAME}:Tbl2(id bigint) shards='3' rf='1';"; echo $?
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:Tbl2;"; echo $?
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "create table ${CLUSTER_NAME}:tbl_missing_type(id) shards='3' rf='1';"
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:tbl_missing_type;"; echo $?
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
LONG_TABLE_NAME=$(printf "tbl%065d" 1)
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "create table ${CLUSTER_NAME}:${LONG_TABLE_NAME}(id bigint) shards='3' rf='1';"
––– output –––
OK
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:${LONG_TABLE_NAME};"
––– output –––
+ ERROR 1064 (42000) at line 1: Waiting timeout exceeded.
––– input –––
echo "=== Node 1306 ==="; mysql -h0 -P1306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 1306 failed!"; echo "=== Node 2306 ==="; mysql -h0 -P2306 -e "SHOW TABLES\G" | sed 's/^[[:space:]]*//' || echo "Node 2306 failed!"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:nonexistent_table;"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE nonexistent_cluster:tbl1;"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:tbl1;"
––– output –––
OK
––– input –––
mysql -h0 -P1306 -e "INSERT INTO ${CLUSTER_NAME}:tbl1 VALUES (1);" & sleep 1; mysql -h0 -P1306 -e "DROP TABLE ${CLUSTER_NAME}:tbl1;"
––– output –––
OK
Type of Change
Description of the Change
Integration Test Changes
- Logstash 9.2+: `allow_superuser: true` setting via the `/etc/logstash/logstash.yml` configuration file
- Logstash 9.0-9.1: `ALLOW_SUPERUSER=1` environment variable with the runner.rb patch

This addresses an architectural change introduced in Logstash 9.2 where the superuser security check was refactored, making the previous patching approach incompatible. The new method uses the official configuration mechanism via the `--path.settings` flag.

Documentation Updates
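The version-gated superuser handling described above can be sketched as a shell fragment. This is a hypothetical illustration, not the PR's actual test code: the version value, settings directory, and pipeline config name are placeholders, and the Logstash commands are only echoed rather than executed.

```shell
#!/bin/sh
# Sketch of version-gated superuser handling (illustrative paths only).
LOGSTASH_VERSION="9.2.0"   # assumed; in practice taken from `logstash --version`

# sort -V orders versions numerically; if "9.2" sorts first (or ties),
# LOGSTASH_VERSION is 9.2 or newer, so use the logstash.yml mechanism.
if [ "$(printf '%s\n' "9.2" "$LOGSTASH_VERSION" | sort -V | head -n1)" = "9.2" ]; then
    SETTINGS_DIR=$(mktemp -d)
    # Official setting allowing Logstash to run as root (9.2+):
    printf 'allow_superuser: true\n' > "$SETTINGS_DIR/logstash.yml"
    echo "would run: logstash --path.settings $SETTINGS_DIR -f pipeline.conf"
else
    # Older 9.0-9.1 path: runner.rb patch plus environment variable.
    echo "would run: ALLOW_SUPERUSER=1 logstash -f pipeline.conf (runner.rb patched)"
fi
```

The `sort -V` comparison is a common portable idiom for "version >= X" checks in shell, avoiding fragile string or float comparisons on multi-part version numbers.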
1. Manual Documentation
- manual/english/Integration/Logstash.md
- manual/english/Integration/Filebeat.md

2. Automated Validation Test

- test/clt-tests/integrations/logstash/test-integrations-check-logstash-versions.rec

Related Issue

Closes #3866