
Inference msgs optimization: optimize key verification#779

Open
DimaOrekhovPS wants to merge 27 commits into upgrade-v0.2.11 from
do/inference-optimization-keys-2

Conversation

@DimaOrekhovPS (Collaborator) commented Feb 20, 2026

Optimize inference signature verification flow

Signature verification policy

Whichever message arrives first (start or finish) performs cryptographic signature verification. The second message only compares its signature-bound fields against what was already persisted — no re-verification.

  • Dev signature: Verified on first message. Second message compares original_prompt_hash, request_timestamp, transfer_agent, requested_by.
  • TA signature: Verified on first message only if it's finish. Start-first skips TA verification; comparison happens when finish arrives. Finish-first verifies TA cryptographically; comparison happens when start arrives.
  • Executor signature: Never verified (disabled by policy).
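The decision table above can be sketched as a small Go program. This is illustrative only, not the codebase's actual API: the function names and the "verify"/"compare"/"skip" labels are assumptions, where "verify" means a cryptographic signature check, "compare" an equality check against persisted fields, and "skip" no check at all.

```go
package main

import "fmt"

// devAction: whichever message lands first verifies the Dev signature;
// the second message only compares signature-bound fields.
func devAction(isFirstMessage bool) string {
	if isFirstMessage {
		return "verify"
	}
	return "compare"
}

// taAction: the TA signature is verified on the first message only when
// that message is finish; start-first skips TA verification, and the
// second message always falls back to comparison.
func taAction(isFirstMessage, msgIsFinish bool) string {
	switch {
	case isFirstMessage && msgIsFinish:
		return "verify"
	case isFirstMessage:
		return "skip"
	default:
		return "compare"
	}
}

// executorAction: executor signature is never verified (disabled by policy).
func executorAction() string {
	return "skip"
}

func main() {
	// finish-first flow: finish arrives first, start arrives second
	fmt.Println(devAction(true), taAction(true, true), executorAction())
	fmt.Println(devAction(false), taAction(false, false))
}
```

Running it prints `verify verify skip` for the first (finish) message and `compare compare` for the trailing start message, matching the policy bullets.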

Field persistence for cross-message comparison

The new flow requires the first message to persist all fields that the second message will compare against. Added OriginalPromptHash, PromptHash, and RequestedBy assignments to ProcessFinishInference, and OriginalPromptHash to ProcessStartInference in inference_state.go.

Also updated calculations.startProcessed() from PromptHash != "" to MaxTokens != 0 — now that ProcessFinishInference populates PromptHash, it's no longer a reliable start-processed marker.
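The reasoning behind the new marker can be shown with a minimal sketch. The `Inference` struct here is a stand-in for the chain's `types.Inference` with only the relevant fields; the field values in `main` are illustrative.

```go
package main

import "fmt"

// Inference is a cut-down stand-in for types.Inference.
type Inference struct {
	PromptHash string
	MaxTokens  uint64
}

// StartProcessed mirrors the updated check: StartInference always assigns
// MaxTokens (explicit or default), whereas PromptHash can now be populated
// early by a finish-first ProcessFinishInference, so it is no longer a
// reliable start-processed marker.
func (i *Inference) StartProcessed() bool {
	return i.MaxTokens != 0
}

func main() {
	// finish-first: PromptHash already persisted, but start not processed yet.
	// The old PromptHash != "" check would incorrectly report true here.
	inf := &Inference{PromptHash: "deadbeef"}
	fmt.Println(inf.StartProcessed()) // false

	// StartInference arrives and assigns MaxTokens.
	inf.MaxTokens = 512
	fmt.Println(inf.StartProcessed()) // true
}
```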

Configurable mismatch behavior

Added failOnCompareMismatch flag on msgServer (default true). Set to false via NewMsgServerImplWithCompareMismatchMode to log mismatches without rejecting — useful for safe rollout.

Copilot AI review requested due to automatic review settings February 20, 2026 05:07
Copilot AI (Contributor) left a comment

Pull request overview

This PR optimizes signature verification for inference messages by implementing a dual-path approach that handles out-of-order message arrival (start-first vs finish-first). The optimization reduces redundant cryptographic signature verification: the first message (whether Start or Finish) performs full signature verification, while the second message performs component comparison only. Additionally, the PR changes the StartProcessed() detection logic to use MaxTokens instead of PromptHash to avoid false positives in the finish-first flow.

Changes:

  • Modified StartProcessed() to use MaxTokens != 0 instead of PromptHash != "" as the marker for whether StartInference has been processed
  • Implemented signature verification policy: first message verifies signatures, second message performs equality checks
  • Added new error types for component mismatches: ErrDevComponentMismatch, ErrTAComponentMismatch, ErrInferenceRoleMismatch
  • Added comparison functions to validate component consistency between Start and Finish messages in out-of-order scenarios

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 16 comments.

Summary per file:
  • inference-chain/x/inference/types/inference.go: Changed StartProcessed() to use MaxTokens instead of PromptHash
  • inference-chain/x/inference/types/errors.go: Added three new error types for component mismatches
  • inference-chain/x/inference/keeper/msg_server_validation_test.go: Added missing Creator field to FinishInference test message
  • inference-chain/x/inference/keeper/msg_server_start_inference.go: Implemented dual-path signature verification: verify on first, compare on second
  • inference-chain/x/inference/keeper/msg_server_out_of_order_inference_test.go: Updated mock expectations and added OriginalPromptHash assertions
  • inference-chain/x/inference/keeper/msg_server_finish_inference_test.go: Added comprehensive tests for start-first/finish-first scenarios and mismatch detection
  • inference-chain/x/inference/keeper/msg_server_finish_inference.go: Implemented dual-path signature verification for FinishInference
  • inference-chain/x/inference/keeper/msg_server_finish_first_missing_signature_fields_test.go: New test documenting a bug where hash fields aren't persisted in the finish-first flow
  • inference-chain/x/inference/calculations/inference_state_test.go: Updated test to reflect new StartProcessed() logic using MaxTokens
  • inference-chain/x/inference/calculations/inference_state.go: Updated startProcessed() to use MaxTokens instead of PromptHash
  • decentralized-api/internal/server/public/post_chat_handler.go: Added TODO comment about rejecting empty PromptHash

Comment on lines +189 to +190
// TODO: We need to include inferenceId in the TA signature to make sure executor can't substitute the modified prompt
// TODO: any error here should lead to punishing the TA
Copilot AI Feb 20, 2026

This TODO highlights a critical security concern: without the inferenceId in the TA signature, an executor could potentially substitute a modified prompt while keeping the same hash. This should be addressed before the signature verification optimization is considered complete.

Additionally, the second TODO mentions that any error in these comparisons should lead to punishing the TA. Consider implementing this punishment mechanism as it's important for maintaining system integrity.

Comment on lines +82 to +87
// TODO: punish executor if Dev fails
if err := compareFinishDevComponents(msg, &existingInference); err != nil {
k.LogError("FinishInference: dev component mismatch", types.Inferences, "error", err, "inferenceId", msg.InferenceId)
return failedFinish(ctx, err, msg), nil
}
// TODO: re-check TA signature if any of the checks fail and punish the TA if the signature is invalid
Copilot AI Feb 20, 2026

These TODOs indicate incomplete implementation of the punishment mechanism. The comparison checks are meant to detect malicious behavior, but without punishing the bad actors, the system remains vulnerable to:

  1. Executors submitting incorrect dev components (line 82)
  2. Transfer agents submitting invalid signatures that fail comparison (line 87)

Implementing these punishment mechanisms should be a priority to ensure the signature verification optimization doesn't create security gaps.

@IgnatovFedor IgnatovFedor added this to the v0.2.11 milestone Feb 20, 2026
@IgnatovFedor IgnatovFedor linked an issue Feb 20, 2026 that may be closed by this pull request
@gmorgachev gmorgachev closed this Feb 20, 2026
@gmorgachev gmorgachev deleted the do/inference-optimization-keys-2 branch February 20, 2026 08:47
@github-project-automation github-project-automation bot moved this from Todo to Done in Upgrade v0.2.11 Feb 20, 2026
@gmorgachev gmorgachev restored the do/inference-optimization-keys-2 branch February 20, 2026 09:00
@gmorgachev gmorgachev reopened this Feb 20, 2026
@tcharchian tcharchian moved this from Done to In Progress in Upgrade v0.2.11 Feb 21, 2026
@DimaOrekhovPS DimaOrekhovPS marked this pull request as ready for review February 23, 2026 18:02
Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 13 out of 13 changed files in this pull request and generated no new comments.

@patimen patimen added the planned Ready to go in the milestone it's assigned in label Feb 24, 2026
// StartInference always assigns MaxTokens (explicit or default).
// Finish-first flow can populate PromptHash early, so use MaxTokens to detect
// whether StartInference has already been processed.
return inference.MaxTokens != 0
DimaOrekhovPS (Collaborator, Author)

Alternatively we can check inference.AssignedTo != ""

return currentInference, &payments, nil
}

func startProcessed(inference *types.Inference) bool {
DimaOrekhovPS (Collaborator, Author)

Removed duplicate implementations, see inference.go

@tcharchian tcharchian moved this from In Progress to Needs reviewer in Upgrade v0.2.11 Feb 28, 2026
@slandymani (Collaborator) left a comment

Change the PR description after removing failOnCompareMismatch.

}
if request.PromptHash != "" && computedPromptHash != request.PromptHash {
if request.PromptHash == "" {
logging.Error("Empty prompt hash", types.Inferences, "error", err)
Collaborator

err is always nil

inference.PromptHash,
)
}
if inference.RequestTimestamp != msg.RequestTimestamp {
Collaborator

Duplicated comparison of msg.RequestTimestamp and msg.TransferredBy; we already checked these in compareFinishDevComponents.

inference.PromptHash,
)
}
if inference.RequestTimestamp != msg.RequestTimestamp {
Collaborator

Duplicated comparison of msg.RequestTimestamp and inference.TransferredBy; we already checked these in compareStartDevComponents.


// Record the current price only if this is the first message (FinishInference not processed yet)
// This ensures consistent pricing regardless of message arrival order
if !existingInference.FinishedProcessed() {
Collaborator

Should this be moved into the previous if statement so as not to duplicate the check?


// Record the current price only if this is the first message (StartInference not processed yet)
// This ensures consistent pricing regardless of message arrival order
if !existingInference.StartProcessed() {
Collaborator

Move price recording into the previous if statement; merge the model checks with compareFinishModelField.

patimen and others added 9 commits March 4, 2026 20:12
…inference-ignite into do/inference-optimization-keys-2

# Conflicts:
#	inference-chain/x/inference/keeper/msg_server_out_of_order_inference_test.go
#	inference-chain/x/inference/types/errors.go
…inference-ignite into do/inference-optimization-keys-2

# Conflicts:
#	inference-chain/x/inference/keeper/msg_server_finish_inference.go
#	inference-chain/x/inference/keeper/msg_server_start_inference.go

Labels

planned Ready to go in the milestone it's assigned in

Projects

Status: Needs reviewer

Development

Successfully merging this pull request may close these issues.

[0/4] StartInference and FinishInference: optimization

6 participants