
chore/test: use sqlite for testing put actions #43

Open

sysrex wants to merge 55 commits into main from viet-caching

Conversation

sysrex (Contributor) commented Nov 21, 2025

test

Flagged code (context lines from the CodeQL alert):

	key := hexutil.Encode(commitment)
	d.log.Info("Failed to store commitment to the DA server", "err", err, "key", key)
	w.WriteHeader(http.StatusInternalServerError)
	s.log.Error("Invalid commitment format", "error", err, "hex", commitmentHex)

Check failure

Code scanning / CodeQL

Log entries created from user input (High)

This log entry depends on a user-provided value.

Copilot Autofix (AI, 3 months ago)

The problem occurs because the unsanitized, user-controlled input (commitmentHex) is being written directly to a log entry. To fix this, we should sanitize commitmentHex before logging it. Since the log entries are plain text, the best mitigation is to strip newlines (\n, \r) and control characters from commitmentHex using strings.ReplaceAll or similar prior to logging.

This only affects line 481 in celestia_server.go, inside the HandleGet method block. We should sanitize the value immediately before logging. No new method definitions are required, just a couple of lines of sanitization using strings.ReplaceAll. No new imports are needed, since strings is already imported.


Suggested changeset 1: celestia_server.go

Autofix patch
Run the following command in your local git repository to apply this patch
cat << 'EOF' | git apply
diff --git a/celestia_server.go b/celestia_server.go
--- a/celestia_server.go
+++ b/celestia_server.go
@@ -478,7 +478,9 @@
 
 	encodedCommitment, err := hex.DecodeString(commitmentHex)
 	if err != nil {
-		s.log.Error("Invalid commitment format", "error", err, "hex", commitmentHex)
+		safeCommitmentHex := strings.ReplaceAll(commitmentHex, "\n", "")
+		safeCommitmentHex = strings.ReplaceAll(safeCommitmentHex, "\r", "")
+		s.log.Error("Invalid commitment format", "error", err, "hex", safeCommitmentHex)
 		http.Error(rw, "invalid commitment format", http.StatusBadRequest)
 		return
 	}
EOF
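The autofix above strips only `\n` and `\r`, while the explanation also mentions control characters more broadly. A stdlib-only sketch of that stricter variant, using a hypothetical `sanitizeForLog` helper that is not part of the actual patch:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// sanitizeForLog drops every control character (including \n and \r) from
// user-provided input before it reaches a plain-text log line, preventing
// log-entry forgery. Illustrative helper, not the PR's actual code.
func sanitizeForLog(s string) string {
	return strings.Map(func(r rune) rune {
		if unicode.IsControl(r) {
			return -1 // -1 tells strings.Map to drop the rune
		}
		return r
	}, s)
}

func main() {
	fmt.Println(sanitizeForLog("deadbeef\nfake log line\r"))
	// → deadbeeffake log line
}
```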
- Merge RPCClientConfig and TxClientConfig into single CelestiaClientConfig
- Rename celestia.server -> celestia.bridge-addr (clearer naming)
- Add explicit read-only mode with separate initialization path
- Simplify flag names: remove tx-client prefix from keyring/grpc flags
- Update config builder and tests for new structure
- Add MaxParallelSubmissions config for concurrent blob submissions
- Refactor submitPendingBlobs to process multiple batches in parallel
- Add prepareBatches for better batch organization before submission
- Update event listener and backfill worker for new architecture
- Add comprehensive worker tests for parallel submission scenarios
- Add 'da-server init' command to initialize keyring and print signer addresses
- Support text, JSON, and TOML output formats for trusted_signers config
- Essential for TxWorkerAccounts setup where all worker addresses must be trusted
- Includes clear usage instructions for writer/reader configuration workflow
- Handle UNIQUE constraint violations gracefully in InsertBlob
- Return existing blob ID on race conditions (idempotent behavior)
- Add UpdateBatchCommitment for TxWorkerAccounts signer changes
- Add MarkBatchConfirmedByID for batch-ID based confirmation
- Add GetBatchesByIDs for bulk batch retrieval
- Update manual test-writer-reader.go for parallel submission testing
- Update config-writer.toml and config-reader.toml with new flag names
- Update config.toml.example with comprehensive documentation
- Update integration tests for new config structure
- Update concurrency_test.go for new API
- Update go.mod/go.sum dependencies
- Update README.md with parallel submission and init command docs
- Update METRICS.md with new metric descriptions
- Update cmd/daserver/README.md for new flag names
- Add docs/ARCHITECTURE.md with system design documentation
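The idempotent InsertBlob behavior described above (returning the existing blob ID when a UNIQUE constraint fires during a race) can be sketched with a toy in-memory store; the real SQLite-backed store and its schema are not shown in this PR text, so all names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// blobStore is a toy in-memory stand-in for the SQLite-backed store.
type blobStore struct {
	mu     sync.Mutex
	nextID int64
	byKey  map[string]int64
}

// InsertBlob is idempotent: if another goroutine already inserted the same
// commitment (a race), the existing ID is returned instead of an error.
// This mirrors catching SQLite's UNIQUE-constraint violation and re-querying.
func (s *blobStore) InsertBlob(commitment string) (int64, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if id, ok := s.byKey[commitment]; ok {
		return id, nil // existing blob ID, no error
	}
	s.nextID++
	s.byKey[commitment] = s.nextID
	return s.nextID, nil
}

func main() {
	st := &blobStore{byKey: map[string]int64{}}
	a, _ := st.InsertBlob("0xabc")
	b, _ := st.InsertBlob("0xabc") // duplicate insert: same ID back
	fmt.Println(a == b)            // true
}
```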
Mock test file no longer needed after architecture refactor.
Implement a dual-mode backfill strategy for reliable blob discovery:

1. Subscription mode (runSubscriber):
   - Real-time block header subscription via WebSocket
   - Processes new blocks immediately as they arrive
   - Gracefully falls back when not supported (e.g., QuikNode)

2. GetAll mode (runBackfiller):
   - Historical backfilling using parallel GetAll calls
   - Uses NetworkHead() for reliable chain tip tracking
   - Configurable concurrency via blocks_per_scan

Key changes:
- Add headerAPI.Module to CelestiaStore for NetworkHead() support
- Replace binary search tip probing with single API call
- Parallel GetAll fetching with errgroup and SetLimit
- Automatic fallback when subscription not supported
- Improve submission worker with parallel batch submission
- Add comprehensive manual writer-reader test with timing metrics
- Update all tests for new architecture

This ensures bulletproof blob discovery without missing any heights,
addressing issues where blobs were getting stuck in read-only mode.
- Add immediate revert on submission failure (retry on next tick)
- Increase stuck batch threshold to 5 minutes (avoid racing with slow submissions)
- Run submission ticks in goroutines (non-blocking)
- Add RevertBatchToPending DB method
- Add GetStuckBatches for cleanup
- Fix WARN logs for normal 'batched' status
- Add cleanupStuckBatches to event listener
Key changes:
- Remove DB writes before Celestia submission (eliminates 1.5s bottleneck)
- Create batch records only AFTER successful Celestia submission
- Add CreateBatchAndConfirm for atomic batch creation + blob confirmation
- Handle duplicate submissions gracefully (UNIQUE constraint)
- Wait for all submission goroutines to complete before next tick
- Reduce log verbosity (most logs now DEBUG level)
- Simplify reconciliation (summary logs only)
- Trust Celestia Submit() return - no Get() verification needed

Flow changes:
- Before: prepareBatches (slow DB) → submit → updateHeight
- After: prepareBatches (in-memory) → submit → CreateBatchAndConfirm

This fixes race conditions where same blobs were submitted multiple times
and improves confirmation latency from ~30s to ~10s (Celestia inclusion time).
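The "after" flow can be sketched as follows; `submitFn` and `recordFn` stand in for the Celestia Submit call and CreateBatchAndConfirm, and the exact signatures are assumptions, not the PR's actual API:

```go
package main

import "fmt"

// submitFn stands in for Celestia's blob Submit call; it returns the
// inclusion height. recordFn stands in for CreateBatchAndConfirm, which
// atomically creates the batch row and confirms its blobs.
type submitFn func(blobs []string) (height uint64, err error)
type recordFn func(height uint64, blobs []string) error

// submitBatch follows the "after" flow from the commit message: prepare in
// memory, submit first, and only write to the DB once submission succeeded.
// On failure nothing is recorded, so the blobs are retried on the next tick.
func submitBatch(blobs []string, submit submitFn, record recordFn) error {
	if len(blobs) == 0 {
		return nil
	}
	height, err := submit(blobs) // no DB write before this point
	if err != nil {
		return fmt.Errorf("submit failed, will retry next tick: %w", err)
	}
	return record(height, blobs) // batch record exists only after success
}

func main() {
	err := submitBatch([]string{"b1", "b2"},
		func(blobs []string) (uint64, error) { return 42, nil },
		func(h uint64, blobs []string) error {
			fmt.Println("recorded at height", h) // → recorded at height 42
			return nil
		})
	if err != nil {
		panic(err)
	}
}
```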
- Add 'verified' status to blobs and batches schema
- Add GetUnverifiedBatches() and MarkBatchVerified() to store
- Refactor event_listener to verify batches using blob.Get
- Search nearby heights (±10 blocks) to detect blob.Submit height bugs
- Add header module to submission worker for timeout recovery
- Log height mismatch errors with actual vs recorded height for debugging

The verification worker now:
1. Fetches confirmed batches older than reconcile_age
2. Tries blob.Get with each trusted signer's commitment
3. On success: marks batch as 'verified'
4. On failure: searches nearby heights for blob
5. If found at different height: logs BLOB HEIGHT MISMATCH error
6. If not found anywhere: reverts batch to pending for resubmission
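Step 4's nearby-height search (±10 blocks) might look like this sketch, where `get` stands in for a blob.Get success check and `findNearby` is a hypothetical helper, not the PR's actual code:

```go
package main

import "fmt"

// findNearby checks the recorded height first, then fans out ±1, ±2, ...
// up to ±window, mirroring the verification worker's nearby-height search.
// get reports whether the blob exists at a given height.
func findNearby(recorded, window uint64, get func(h uint64) bool) (uint64, bool) {
	if get(recorded) {
		return recorded, true
	}
	for d := uint64(1); d <= window; d++ {
		if h := recorded + d; get(h) {
			return h, true
		}
		if recorded > d { // guard against uint underflow
			if h := recorded - d; get(h) {
				return h, true
			}
		}
	}
	return 0, false
}

func main() {
	// Simulate a blob that actually landed 3 blocks after the recorded height.
	actual := uint64(1003)
	h, ok := findNearby(1000, 10, func(h uint64) bool { return h == actual })
	if ok && h != 1000 {
		fmt.Printf("BLOB HEIGHT MISMATCH: actual=%d recorded=%d\n", h, 1000)
	}
}
```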
- Add getStatsAggregator to batch GET request logs every 10 seconds
- Replace per-request INFO logs with periodic summary:
  'GET requests served requests=N total_bytes=M avg_latency_ms=X heights=...'
- Remove emoji from log messages
- Accept 'verified' status (in addition to 'confirmed') in GET handler

This reduces log noise significantly when serving many GET requests.
- Update worker_test.go for new SubmissionWorker signature (header module)
- Update manual test configs for new verification parameters
Add TxPriority config option (1=low, 2=medium, 3=high) for controlling
Celestia transaction priority. Defaults to 2 (medium) if not set.
Return headerAPI.Module from initCelestiaClient for write mode,
matching the read-only client behavior. This enables NetworkHead()
queries for both server modes.
- Poll ALL unconfirmed blobs every round (never skip/abandon any)
- Add Celestia DA throughput calculation (MB/s)
- Track blob rate (blobs/s)
- Show recovery celebration for blobs that took >60s
- Update config with 800ms PUT interval
