Releases: digital-asset/canton
canton v3.4.11
Release of Canton 3.4.11
Canton 3.4.11 has been released on February 16, 2026.
Summary
This release improves performance, observability, and reliability.
As part of the upgrade to this patch release, a DB migration is required; it is performed automatically by Canton.
What’s New
Free confirmation responses
A new traffic control parameter has been added: `freeConfirmationResponses`.
When set to `true` on a synchronizer where traffic control is enabled, confirmation responses do not cost traffic.
It defaults to `false`.
New Topology Processing and Client Architecture
Topology processing has been significantly refactored. Previously, it performed a large number of sequential database
lookups during the validation of a topology transaction. The new validation pre-fetches all data of a batch into a
write-through cache and only persists the data at the end of the batch processing. Where a batch of 100 transactions
previously needed 200-300 DB round trips, this is now reduced to effectively 2 DB operations
(one batch read, one batch write).
In addition, the same write-through cache is now also leveraged for read processing, where the topology state is built
from the cache directly, avoiding further database round trips.
The new components can be turned on and controlled using:

```hocon
canton.<type>.<name>.topology = {
  use-new-processor = true // can be used without the new client
  use-new-client = true // can only be used with the new processor
  // optional flags
  enable-topology-state-cache-consistency-checks = true // enable in the beginning for additional consistency checks
  topology-state-cache-eviction-threshold = 250 // the oversize threshold that must be reached before eviction starts
  max-topology-state-cache-items = 10000 // how many items (uid, transaction_type) to keep in the cache
}
```
The topology state cache also exposes cache metrics under the label `"topology"`.
Performance Improvements
- Fixed the private store cache to prevent an excessive number of database reads.
- Sequencer nodes serving many validator subscriptions are no longer overloaded with tasks reading from the database.
  The parallelism is configured via `canton.sequencers.<sequencer>.parameters.batching.parallelism`. Note that this
  config setting is not just used for limiting event reading, but elsewhere in the sequencer as well.
- Replaying of ACS changes for the ACS commitment processor has a smaller memory overhead:
  - Changes are loaded in batches from the DB.
  - ACS changes are potentially smaller because activations and deactivations that cancel out are removed.
    This is particularly useful for short-lived contracts.
- Additional DB indices on the ACS commitment tables improve the performance of commitment pruning. This requires a DB migration.
- Added a mode for the mediator to process events asynchronously. This is enabled by default.
  In the new asynchronous mode, events for the same request id are processed sequentially, but events for different request ids are processed in parallel.
  The asynchronous mode can be turned off using `canton.mediators.<mediator-name>.mediator.asynchronous-processing = false`.
- The mediator now batches the fetching and storing of finalized responses. The batch behavior can be configured via the following parameters:

  ```hocon
  canton.mediators.<mediator>.parameters.batching {
    mediator-fetch-finalized-responses-aggregator {
      maximum-in-flight = 2 // default
      maximum-batch-size = 500 // default
    }
    mediator-store-finalized-responses-aggregator {
      maximum-in-flight = 2 // default
      maximum-batch-size = 500 // default
    }
  }
  ```

- New participant config flag `canton.participants.<participant>.parameters.commitment-asynchronous-initialization` enables asynchronous initialization of the ACS commitment processor. This speeds up synchronizer connection if the participant manages active contracts for a large number of different stakeholder groups, at the expense of additional memory and DB load.
- Disabled the last-error log by default for performance reasons. You can re-enable the previous behavior by passing `--log-last-errors=true` to the Canton binary.
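For illustration, the performance-related settings called out above could be collected in one config file. This is a sketch using the config paths quoted in this section; the parallelism value of 8 is an arbitrary example, and the commitment-asynchronous-initialization flag is assumed to be a boolean:

```hocon
// parallelism for the sequencer's DB event reads (also used elsewhere in the sequencer)
canton.sequencers.<sequencer>.parameters.batching.parallelism = 8
// turn the mediator's asynchronous event processing off (it is on by default)
canton.mediators.<mediator-name>.mediator.asynchronous-processing = false
// asynchronous initialization of the ACS commitment processor (assumed boolean)
canton.participants.<participant>.parameters.commitment-asynchronous-initialization = true
```

The `<...>` placeholders must be replaced with the actual node names before use.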
Observability Improvements
- Additional sequencer metrics:
  - More `daml.sequencer.block.stream-element-count` metric values with `flow` labels from Pekko streams in the sequencer reader.
  - New `daml.sequencer.public-api.subscription-last-timestamp` metric with the last timestamp read via the member's subscription, labeled by the `subscriber`.
  - Two new metrics to monitor the time interval covered by the events buffer: `daml.sequencer.head_timestamp` and `daml.sequencer.last_timestamp`.
- Additional metrics for the ACS commitment processor: `daml.participant.sync.commitments.last-incoming-received`, `daml.participant.sync.commitments.last-incoming-processed`, `daml.participant.sync.commitments.last-locally-completed`, and `daml.participant.sync.commitments.last-locally-checkpointed`.
- Logging improvements in the sequencer (around the event signaller and the sequencer reader).
- Canton startup logging: it is now possible to configure a startup log level that resets after a timeout, e.g.:

  ```hocon
  canton.monitoring.logging.startup {
    log-level = "DEBUG"
    reset-after = "5 minutes"
  }
  ```

- Sequencer progress supervisor: it is now possible to enable a monitor for the sequencer node progressing on its own subscription.
  - False positives related to the asynchronous writer have been fixed.
  - Added a warn action to kill the sequencer node.
  - Configuration:

    ```hocon
    // Future supervision has to be enabled
    canton.monitoring.logging.log-slow-futures = true
    canton.sequencers.<sequencer>.parameters.progress-supervisor {
      enabled = true
      warn-action = enable-debug-logging // set to "restart-sequencer" for the sequencer node to exit when stuck
      // the default detection timeout has been bumped to
      // stuck-detection-timeout = 15 minutes
    }
    ```

- If the new connection pool is enabled, the health status of a node presents the following new components:
  - `sequencer-connection-pool`
  - `internal-sequencer-connection-<alias>` (one per defined sequencer connection)
  - `sequencer-subscription-pool`
  - `subscription-sequencer-connection-<alias>` (one per active subscription)
- The acknowledgement metric `daml_sequencer_block_acknowledgments_micros` is now monotonic within restarts and ignores late/delayed member acknowledgements.
- The mediator metric `daml_mediator_requests` now includes confirmation requests that are rejected due to reusing the request UUID. Such requests are labelled with `duplicate_request -> true` on the metric.
Other Minor Improvements
- Added an RPC and corresponding console command on the sequencer's admin API to
  generate an authentication token for a member for testing: `sequencer1.authentication.generate_authentication_token(participant1)`.
  Requires the following config: `canton.features.enable-testing-commands = yes`.
- Added a console command to log out a member using their token on a sequencer:
  `sequencer1.authentication.logout(token)`.
- Added support for adding table settings for PostgreSQL. One can use a repeatable migration (a Flyway feature) in a file
  provided to Canton externally.
  - Use the new config `repeatable-migrations-paths` under the `canton.<node_type>.<node>.storage.parameters` configuration section.
  - The config takes a list of directories where the repeatable migration files must be placed; paths must be prefixed with
    `filesystem:` for Flyway to recognize them.
  - Example: `canton.sequencers.sequencer1.storage.parameters.repeatable-migrations-paths = ["filesystem:community/common/src/test/resources/test_table_settings"]`.
  - Only repeatable migrations are allowed in these directories: files with names starting with `R__` and ending with `.sql`.
  - The files cannot be removed once added, but they can be modified (unlike the `V__` versioned schema migrations); if modified, they are reapplied on each Canton startup.
  - The files are applied in lexicographical order.
  - Example use case: adding `autovacuum_*` settings to existing tables.
  - Only add idempotent changes in repeatable migrations.
- KMS operations are now retried on HTTP/2 INTERNAL gRPC exceptions.
- New parameter `safeToPruneCommitmentState` in `ParticipantStoreConfig` allows optionally specifying
  under which conditions counter-participants that have not sent matching commitments cannot block pruning on
  the current participant. The parameter affects all pruning commands, including scheduled pruning.
- Extended the set of characters allowed in user IDs in the Ledger API to contain brackets: `()`.
  This also makes those characters accepted as part of the `sub` claims in JWT tokens.
- Set aggressive TCP keepalive settings on Postgres connections to allow quick HA failover in case of stuck DB connections.
- New configuration value for setting the sequencer in-flight aggregation query interval:
  `batching-config.in-flight-aggregation-query-interval`.
- Fixed an issue that could prevent the `SequencerAggregator` from performing a timely shutdown.
- Updated Bouncy Castle to 1.83, which fixes CVE-2024-29857 and CVE-2024-34447.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.8 (Debian 17.8-1.pgdg13+1) – Also tested: PostgreSQL 14.21 (Debian... |
canton v3.4.10
Release of Canton 3.4.10
Canton 3.4.10 has been released on January 07, 2026.
Summary
This is a maintenance release focused on stability improvements.
Notably, it upgrades gRPC to version 1.77.0 to address a known vulnerability (CVE-2025-58057).
What’s New
Minor Improvements
- Protect the participant admin from self lock-out. It is now impossible for an admin to remove their own admin rights or
  to delete themselves.
- The Ledger API `ListKnownParties` call supports an optional prefix filter argument `filterParty`.
  The respective JSON API endpoint now additionally supports `identity-provider-id` as
  an optional argument, as well as `filter-party`.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.7 (Debian 17.7-3.pgdg13+1) – Also tested: PostgreSQL 14.20 (Debian 14.20-1.pgdg13+1), PostgreSQL 15.15 (Debian 15.15-1.pgdg13+1), PostgreSQL 16.11 (Debian 16.11-1.pgdg13+1) |
canton v2.10.3
Release of Canton 2.10.3
Canton 2.10.3 has been released on January 12, 2026. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.
Summary
This maintenance release updates internal dependencies to address security vulnerabilities.
We recommend upgrading during your next scheduled maintenance window.
What’s New
Addressed security vulnerabilities
Notably, this maintenance release upgrades gRPC to version 1.75.0 and Netty to version 4.1.130.Final to address known
vulnerabilities (CVE-2025-55163 and CVE-2025-58057). Additionally, Flyway has been updated to 9.22.3, which includes
an upgrade to Jackson 2.15.2 to resolve a security vulnerability.
New gRPC Client Configuration Options
We have introduced keepAliveWithoutCalls and idleTimeout settings for gRPC client keep-alive configurations.
Please refer to the https://grpc.io/docs/guides/keepalive/#keepalive-configuration-specification for a detailed breakdown
of these parameters.
Backward compatibility note: These two configurations are disabled by default to maintain existing behavior.
Usage note: If keepAliveWithoutCalls is enabled on the client, you must ensure that permitKeepAliveWithoutCalls is
also enabled on the server side. Additionally, permitKeepAliveTime may need adjustment to accommodate the increased
frequency of keep-alive pings from the client.
Example Configuration:

Participant config:

```hocon
canton.participants.participant.sequencer-client.keep-alive-client.keep-alive-without-calls = true
# And / Or
canton.participants.participant.sequencer-client.keep-alive-client.idle-timeout = 5 minutes
```

Domain config:

```hocon
# Must be enabled if keep-alive-without-calls is enabled on the client side
# Single domain
canton.domains.mydomain.public-api.keep-alive-server.permit-keep-alive-without-calls = true
canton.domains.mydomain.public-api.keep-alive-server.permit-keep-alive-time = 5 minutes
# Sequencer node
canton.sequencers.sequencer.public-api.keep-alive-server.permit-keep-alive-without-calls = true
canton.sequencers.sequencer.public-api.keep-alive-server.permit-keep-alive-time = 5 minutes
```

Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 5, 7 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode) |
| Postgres | Recommended: PostgreSQL 12.22 (Debian 12.22-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.23 (Debian 13.23-1.pgdg13+1), PostgreSQL 14.20 (Debian 14.20-1.pgdg13+1), PostgreSQL 15.15 (Debian 15.15-1.pgdg13+1) |
| Oracle | 19.20.0 |
canton v3.4.9
Release of Canton 3.4.9
Canton 3.4.9 has been released on December 01, 2025.
Summary
This is a maintenance release focused on critical stability improvements and bugfixes for participant nodes,
primarily related to data pruning and the Ledger API's Interactive Submission Service.
What’s New
Minor Improvements
- Made the config option `...topology.use-time-proofs-to-observe-effective-time` work and changed its default to `false`.
  Disabling this option activates a more robust time advancement broadcast mechanism on the sequencers,
  which however still does not tolerate crashes or big gaps in block sequencing times. The parameters can be configured
  in the sequencer via `canton.sequencers.<sequencer>.parameters.time-advancing-topology`.
- The batching configuration now allows setting a different parallelism for pruning (currently only for sequencer pruning):
  the new option `canton.sequencers.sequencer.parameters.batching.pruning-parallelism` (defaults to `2`) can be used
  separately from the general `canton.sequencers.sequencer.parameters.batching.parallelism` setting.
- The `PrepareSubmission` RPC in the `InteractiveSubmissionService` now rejects requests with multiple commands.
  Such requests were never and are still not supported, but previously failed later in transaction processing.
  This provides earlier feedback and a better UX.
  This behavior can be reverted by setting `ledger-api.interactive-submission-service.enforce-single-root-node = false` in the participant node config object.
- The `ExecuteSubmission` RPC in the `InteractiveSubmissionService` now rejects requests with transactions containing multiple root nodes.
  Such requests were never and are still not supported, but previously failed later in transaction processing.
  This provides earlier feedback and a better UX.
  This behavior can be reverted by setting `ledger-api.interactive-submission-service.enforce-single-root-node = false` in the participant node config object.
- Fixed an issue preventing the clearing of the onboarding flag for new external parties created on Canton 3.4.8 using the
  `clearPartyOnboardingFlag` endpoint.
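For example, the dedicated pruning parallelism can be set alongside the general batching parallelism (a sketch using the config paths given above; the value 4 is an arbitrary example):

```hocon
// general parallelism for sequencer batching operations
canton.sequencers.sequencer.parameters.batching.parallelism = 8
// dedicated parallelism for sequencer pruning (defaults to 2)
canton.sequencers.sequencer.parameters.batching.pruning-parallelism = 4
```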
Bugfixes
(25-012, Low): Ledger prune call erroneously emits an UNSAFE_TO_PRUNE error
Issue Description
A pruning bug wrongly issues errors, but does not delete data that is unsafe to prune. When no offset is safe to
prune, i.e., the first unsafe offset is 1 (the first ledger offset), Canton pruning indicates in its logs that it is safe to
prune up to the ledger end, which is wrong. The prune call then logs an UNSAFE_TO_PRUNE error and the
pruning call fails. Note that this is not a safety bug: validation checks still prevent Canton from actually pruning
unsafe data.
Affected Deployments
Participant nodes
Affected Versions
All 3.x < 3.4.9
Impact
Minor: An error is logged when calling prune, which may alert an operator.
Symptom
Pruning logs that it is safe to prune up to some offset, but the pruning call fails when it tries to prune at that offset.
Workaround
Calls to prune eventually succeed when the unsafe to prune boundary advances.
Likelihood
Possible
Recommendation
Upgrade to >= 3.4.9
(25-013, Critical): Incorrect contract-store pruning related to immediately divulged events
Issue Description
Participant pruning incorrectly removes stored contracts if certain conditions are met.
Pruning of participant events that were immediately divulged to the participant (create events with no locally hosted stakeholders)
leads to unconditional removal of the contract instance from the participant contract store. This is problematic in
the following scenario (the three steps must follow each other in real time):
- The participant observes immediately divulged contract C at offset O1.
- The participant starts hosting one of the stakeholders at offset O3 (this leads to offline party replication, which adds the same create event again).
- The participant is pruned at offset O2.
Offset conditions: O1 < O2 < O3.
Affected Deployments
Participant nodes
Affected Versions
All 3.4.x < 3.4.9
Impact
Grave: Ledger API operations (command submission, pointwise/streaming update retrieval) related to the respective contracts fail with internal error due to the contract being removed from the store.
Symptom
If the conditions are met, the respective contract is removed from the participant contract store leading to (non-exhaustive list):
- Ledger API pointwise/streaming endpoints (Update, ACS) fail with internal error in case the respective import event is included in the results.
- Ledger API command submission fails with "contract not found" for interpretations that have the respective contract as an input.
Workaround
Do not use the ACS import/repair service (which adds contracts after bootstrapping), or do not prune the participant.
Likelihood
Possible
Recommendation
Upgrade to >= 3.4.9
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.7 (Debian 17.7-3.pgdg13+1) – Also tested: PostgreSQL 14.20 (Debian 14.20-1.pgdg13+1), PostgreSQL 15.15 (Debian 15.15-1.pgdg13+1), PostgreSQL 16.11 (Debian 16.11-1.pgdg13+1) |
canton v3.4.8
Release of Canton 3.4.8
Canton 3.4.8 has been released on November 14, 2025.
Summary
This is a maintenance release that fixes automatic synchronization of protocol feature flags.
What’s New
Minor Improvements
- The `generateExternalPartyTopology` endpoint on the Ledger API now returns a single `PartyToParticipant` topology transaction to onboard the party.
  The transaction contains the signing threshold and the signing keys. This effectively deprecates the usage of `PartyToKeyMapping`.
  For parties with signing keys in both `PartyToParticipant` and `PartyToKeyMapping`, the keys from `PartyToParticipant` take precedence.
Bugfixes
- Fixed a bug preventing automatic synchronization of protocol feature flags.
  Automatic synchronization can be disabled by setting `parameters.auto-sync-protocol-feature-flags = false` in the participant's configuration object.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.7 (Debian 17.7-3.pgdg13+1) – Also tested: PostgreSQL 14.20 (Debian 14.20-1.pgdg13+1), PostgreSQL 15.15 (Debian 15.15-1.pgdg13+1), PostgreSQL 16.11 (Debian 16.11-1.pgdg13+1) |
canton v3.4.7
Release of Canton 3.4.7
Canton 3.4.7 has been released on November 10, 2025.
Summary
This is a maintenance release that fixes an issue when replaying too many ACS changes during connection to a synchronizer.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.6 (Debian 17.6-2.pgdg13+1) – Also tested: PostgreSQL 14.19 (Debian 14.19-1.pgdg13+1), PostgreSQL 15.14 (Debian 15.14-1.pgdg13+1), PostgreSQL 16.10 (Debian 16.10-1.pgdg13+1) |
canton v3.4.6
Release of Canton 3.4.6
Canton 3.4.6 has been released on November 07, 2025.
Summary
This is a maintenance release that fixes an issue with importing existing topology transactions.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 34 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM (build 21.0.5+1-nixos, mixed mode, sharing) |
| Postgres | Recommended: PostgreSQL 17.6 (Debian 17.6-2.pgdg13+1) – Also tested: PostgreSQL 14.19 (Debian 14.19-1.pgdg13+1), PostgreSQL 15.14 (Debian 15.14-1.pgdg13+1), PostgreSQL 16.10 (Debian 16.10-1.pgdg13+1) |
canton v2.10.2
Release of Canton 2.10.2
Canton 2.10.2 has been released on July 23, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.
Summary
This is a maintenance release that provides minor stability and performance improvements, and resolves a dependency issue with the KMS Driver artifacts.
until 2025-07-23 (Exclusive)
- OTLP trace export configuration has been extended with several new parameters, allowing connection to OTLP servers
  that require a more elaborate set-up:
  - `trustCollectionPath` should point to a valid CA certificate file. When set, a TLS connection is created instead of a plain-text one.
  - `additionalHeaders` allows specifying key-value pairs that are added to the HTTP2 headers on all trace exporting calls to the OTLP server.
  - `timeout` sets the maximum time to wait for the collector to process an exported batch of spans. If unset, defaults to 10s.
What’s New
Improved Package Dependency Resolution
The package dependency resolver, which is used in various topology state checks and in transaction processing, has been improved as follows:
- The underlying cache is now configurable via `canton.parameters.general.caching.package-dependency-cache`.
  By default, the cache is size-bounded at 10000 entries with a 15-minute expire-after-access eviction policy.
- The parallelism of the DB package fetch loader used in the package dependency cache
  is bounded by the `canton.parameters.general.batching.parallelism` config parameter, which defaults to 8.
Contract Prefetching
Contract prefetching is now also supported for the createAndExercise command. In addition, recursive prefetching is now supported,
which also prefetches referenced contract IDs. The default maximum prefetching depth is 3 and can be configured using:

```hocon
canton.participants.participant.ledger-api.command-service.contract-prefetching-depth = 3
```
Resolve KMS Driver Artifact Dependency Issues
The kms-driver-api and kms-driver-testing artifacts declared invalid dependencies in their Maven pom.xml files, which caused issues when fetching those artifacts. The invalid dependency declarations have been fixed.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 5, 7 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode) |
| Postgres | Recommended: PostgreSQL 12.22 (Debian 12.22-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.21 (Debian 13.21-1.pgdg120+1), PostgreSQL 14.18 (Debian 14.18-1.pgdg120+1), PostgreSQL 15.13 (Debian 15.13-1.pgdg120+1) |
| Oracle | 19.20.0 |
canton v2.10.1
Release of Canton 2.10.1
Canton 2.10.1 has been released on May 30, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.
Summary
This is a maintenance release that fixes one high, one medium, and two low severity issues.
Please update during the next maintenance window.
What’s New
Daml packages validation on Ledger API start-up
On Ledger API start-up, the Daml package store of the participant node
is checked for upgrade compatibility across all persisted packages.
If the compatibility check fails, the participant is shut down with an error message.
To disable the check (not recommended), set `canton.participants.participant.parameters.unsafe-disable-upgrade-validation = true`.
JWT Tokens in Admin API
Background
For the Canton Admin API, user authorization is extended in 2.10 to all service types on participant nodes.
Specific Changes
Users can now configure authorization on the Admin API of the participant node in a manner similar to what
is currently possible on the Ledger API. However, it is necessary to specify explicitly which users are
allowed in and which gRPC services are accessible to them. An example configuration for both the Ledger and Admin API
looks like this:
```hocon
canton {
  participants {
    participant {
      ledger-api {
        port = 5001
        auth-services = [{
          type = jwt-rs-256-jwks
          url = "https://target.audience.url/jwks.json"
          target-audience = "https://rewrite.target.audience.url"
        }]
      }
      admin-api {
        port = 5002
        auth-services = [{
          type = jwt-rs-256-jwks
          url = "https://target.audience.url/jwks.json"
          target-audience = "https://rewrite.target.audience.url"
          users = [{
            user-id = alice
            allowed-services = [
              "admin.v0.ParticipantRepairService",
              "connection.v30.ApiInfoService",
              "v1alpha.ServerReflection",
            ]
          }]
        }]
      }
    }
  }
}
```
While the users appearing in the sub claims of the JWT tokens on the Ledger API always have to be present in
the participant’s user database, no such requirement exists for the Admin API. The user in the authorization service
config can be an arbitrary choice of the participant’s operator. This user also needs to be configured in the associated
IDP system issuing the JWT tokens.
The configuration can contain a definition of either the target audience or the target scope depending on the specific
preference of the client organization. If none is given, the JWT tokens minted by the IDP must specify daml_ledger_api
as their scope claim.
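As a sketch of the alternative described above, a target scope could be configured instead of a target audience. The key name `target-scope` is an assumption mirroring `target-audience`; verify it against your Canton version:

```hocon
canton.participants.participant.ledger-api.auth-services = [{
  type = jwt-rs-256-jwks
  url = "https://target.audience.url/jwks.json"
  // instead of target-audience, require a specific scope claim (assumed key name)
  target-scope = "daml_ledger_api"
}]
```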
Independent of the specific service that the operator wants to expose, it is a good practice to also give access rights
to the ServerReflection service. Some tools such as grpcurl or postman need to hit that service to construct their requests.
Impact and Migration
The changes are backwards compatible.
LF 1.17 templates cannot implement LF 1.15 interfaces.
Background
Bug 25-005, detailed below, prevents the execution of choices from LF
1.15 interfaces within LF 1.17 templates. To resolve this, we have
entirely restricted LF 1.17 templates from implementing LF 1.15
interfaces. However, this change is more than just a bug fix: it
reflects a deliberate alignment with upgradeability principles.
While mixing LF 1.15 interfaces with LF 1.17 templates may seem
advantageous, it is only useful if your model contains other LF 1.15
templates implementing those interfaces. Yet, this approach ultimately
disrupts the upgrade path. Maintaining two versions of a template
within the same model leads to inconsistency: LF 1.17 templates
support seamless upgrades, whereas LF 1.15 templates do not, requiring
an offline migration to fully upgrade the model.
Specific Changes
- The compiler now prevents Daml models from implementing an LF 1.15
  interface within an LF 1.17 template.
- For backward compatibility, participants can still load such models
  if they were compiled with SDK 2.10.0. However:
  - A warning is issued at load time if those models contain an LF 1.17 template implementing an LF 1.15 interface.
  - Any attempt to execute a choice on an LF 1.15 interface within an
    LF 1.17 template (compiled with SDK 2.10.0) will trigger a runtime
    error during submission.
Impact and Migration
These changes preserve backward compatibility, while preventing
the participant from crashing.
Minor Improvements
- The Daml values representing parties received over the Ledger API can be validated more strictly. When the
  `canton.participants.<participant>.http-ledger-api.daml-definitions-service-enabled` parameter is turned on,
  the parties must adhere to a format containing the hint and the fingerprint separated by a double colon:
  `<party-hint>::<fingerprint>`. The change affects the values embedded in the commands supplied to the `Submit*` calls
  of the `CommandSubmissionService` and the `CommandService`.
- Added configuration for the size of the inbound metadata on the Ledger API. Increasing this value allows
  the server to accept larger JWT tokens: `canton.participants.participant.ledger-api.max-inbound-metadata-size = 10240`.
- `canton.participants.participant.parameters.disable-upgrade-validation` is now explicitly deemed a dangerous configuration by:
  - renaming it to `unsafe-disable-upgrade-validation`;
  - only allowing it to be set in conjunction with `canton.parameters.non-standard-config = yes`.
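As noted above, disabling upgrade validation now requires both settings together (a sketch based on the config paths given in this section; not recommended in production):

```hocon
// required companion flag for any non-standard configuration
canton.parameters.non-standard-config = yes
// the renamed, explicitly "unsafe" flag disabling upgrade validation
canton.participants.participant.parameters.unsafe-disable-upgrade-validation = true
```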
Bugfixes
(25-005, Low): Cannot exercise LF 1.15 interface choices on LF 1.17 contracts
Issue Description
Daml-LF 1.17 versions exercise-by-interface nodes according to the
interface's language version. This ensures that the choice argument
and result are not normalized if the interface is defined in LF 1.15,
which is important for Ledger API clients that have been compiled against
the interface and expect non-normalized values. Yet, the package name
is still set on the exercise node based on the contract's language
version, namely 1.17.
This leads to at least two problems:
-
When Canton tries to serialize the transaction in Phase 7, the
TransactionCoderattempts to serialize the node with version 1.15
into the SingleDimensionEventLog, but cannot do so because package
names cannot be serialized in 1.15. This serialization exception
bubbles up into the application handler and causes the participant
to disconnect from the domain. This problem is sticky in that crash
recovery will run into the same problem again and disconnect
again. -
When the template defines a key, Canton serializes the global key
for the view participant according to the exercise node
version. Accordingly, the hash of the key according to LF
1.15. This trips up the consistency check in ViewParticipantData
because the input contract's serialized key was hashed according to
LF 1.17. This failure happens only Phase 3 during when decrypting
and parsing the received views. Participants discard such views and
reject the confirmation request. The failure does not happen in
Phase 1 because the global key hashes are still correct in Phase 1.
Serialization correctly sets the package name, but uses the wrong
version (1.15) from the exercise node. Deserialization sees the
wrong version and ignores the package name.
Accordingly, the participant discards the views it cannot
deserialize and rejects the confirmation request if it is a
confirming node.
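The Phase 7 failure can be pictured with a small sketch (illustrative Python, not Canton's actual Scala code; the function name and dict layout are invented): a serializer that rejects package names below transaction version 17 reproduces exactly the error seen in the logs.

```python
# Illustrative sketch of the version check behind bug 25-005.
# Not Canton's real code: serialize_exercise_node and its fields are
# invented to mirror the behaviour described above.
def serialize_exercise_node(node: dict, tx_version: int) -> str:
    if node.get("package_name") is not None and tx_version < 17:
        # Mirrors the logged error "packageName is not supported by
        # transaction version 15".
        raise ValueError(
            f"packageName is not supported by transaction version {tx_version}"
        )
    return f"node@{tx_version}"

# An LF 1.17 contract exercised through an LF 1.15 interface carries a
# package name but is serialized at the interface's version (15) -> failure.
node = {"choice": "Transfer", "package_name": "my-package"}
try:
    serialize_exercise_node(node, tx_version=15)
except ValueError as e:
    print(e)  # packageName is not supported by transaction version 15
```

Serializing the same node at version 17 (or a node without a package name at version 15) succeeds, which is why recompiling the interface with LF 1.17 resolves the issue.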
Affected Deployments
Participant nodes.
Affected Versions
- 2.10.0
Impact
- Participant node crashes and may need manual repair.
- Command submission fails with INVALID_ARGUMENT.
Symptom
If the template defines a contract key, the participant logs
LOCAL_VERDICT_MALFORMED_PAYLOAD with error message "Inconsistencies
for resolved keys: GlobalKey(...) -> Assigned(...)" and rejects the
confirmation request. The command submission fails with
INVALID_ARGUMENT.
If the template does not define a contract key, the transaction gets
processed, but the participant fails to store the accepted transaction
in its database with the error message "Failed to serialize versioned
transaction: packageName is not supported by transaction version 15",
disconnects from the domain with an ApplicationHandlerFailure and
possibly crashes (depending on configuration). Reconnection attempts
will run into the same problem over and over again.
Workaround
Recompile the interfaces with LF 1.17. This may break ledger API
clients that expected trailing None record fields in exercise arguments
and results.
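If you take the recompilation route, the LF target version can be pinned in the project's daml.yaml via build-options. A sketch, with placeholder project fields:

```yaml
# daml.yaml (sketch): pin the compiler's LF target so the interface is
# compiled to LF 1.17. name/version/sdk-version are placeholders; keep
# your project's actual values.
sdk-version: 2.10.1
name: my-interfaces        # placeholder
version: 1.0.0             # placeholder
source: daml
dependencies:
  - daml-prim
  - daml-stdlib
build-options:
  - --target=1.17
```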
Likeliness
Deterministic.
Recommendation
Upgrade to 2.10.1. If your Daml models contain LF 1.17 templates
implementing LF 1.15 interfaces, and you face issues rewriting
everything in LF 1.17, please contact Digital Asset support.
(25-006, High): Confidential configuration fields are logged in plain text when using specific configuration syntax or CLI flags
Issue Description
If the logConfigWithDefaults config parameter is set to false (which is the default),
the config rendering logic fails to redact confidential information (e.g. DB credentials) when config substitution is used in combination with a config element override.
Suppose we have the following configuration file:
canton.conf
_storage {
  password = confidential
}
canton {
  storage = ${_storage}
}
Now the confidential config element is changed via another config file:
override.conf
canton.storage.password = confidential2
and then:
canton v2.9.7
Release of Canton 2.9.7
Canton 2.9.7 has been released on April 14, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.
Summary
This is a maintenance release that fixes one medium-severity issue that prevents migration of contracts with keys via ACS export / import. Please update if affected.
Bugfixes
(25-004, Medium): RepairService contract import discards re-computed contract keys in the repaired contract
Issue Description
The repair service re-computes contract metadata when adding new contracts.
However, instead of repairing the contract with the re-computed keys, it re-uses the keys from the input contract.
Combined with a gap in the console macros which do not propagate contract keys during ACS export,
migrating contracts with keys in that way can result in an inconsistency between the ACS and contract key store,
which crashes the participant when attempting to fetch a contract by key.
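The bug pattern can be sketched in a few lines (plain Python with invented names; the real RepairService is Scala): metadata is recomputed, but the repaired contract keeps the key taken from the input contract, so a recomputed key is silently discarded.

```python
# Sketch of the 25-004 pattern; repair_contract_buggy and the dict layout
# are invented for illustration only.
def recompute_metadata(contract: dict) -> dict:
    # Stand-in for the real metadata recomputation: derive the key from
    # the contract's current arguments.
    return {"key": ("Account", contract["owner"])}

def repair_contract_buggy(input_contract: dict) -> dict:
    metadata = recompute_metadata(input_contract)  # recomputed, then ignored
    repaired = dict(input_contract)
    # Bug: the recomputed key is discarded and the input's (possibly
    # missing or stale) key is kept.
    repaired["key"] = input_contract.get("key")
    return repaired

# An ACS export that dropped the key yields a repaired contract without a
# key, even though recomputation would have restored it.
exported = {"owner": "alice", "key": None}
print(repair_contract_buggy(exported)["key"])   # stays None
print(recompute_metadata(exported)["key"])      # the key repair should keep
```

The divergence between the stored contract and the recomputed key is what later trips the "Unknown keys are to be reassigned" check when the contract is fetched by key.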
Affected Deployments
Participant nodes.
Affected Versions
- 2.10.0
- 2.9.0-2.9.6
- 2.8.x
Impact
Contracts with keys cannot be used after migration via ACS export / import.
Symptom
The participant crashes with
"java.lang.IllegalStateException: Unknown keys are to be reassigned. Either the persisted ledger state corrupted or this is a malformed transaction"
when attempting to look up a contract by key that has been migrated via ACS export / import.
Workaround
No workaround available. Update to 2.9.7 if affected.
Likeliness
Deterministic if migrating contracts with keys using ACS export to an unpatched version.
Recommendation
Upgrade to 2.9.7 if affected by this issue.
Compatibility
The following Canton protocol versions are supported:
| Dependency | Version |
|---|---|
| Canton protocol versions | 5 |
Canton has been tested against the following versions of its dependencies:
| Dependency | Version |
|---|---|
| Java Runtime | OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode) |
| Postgres | Recommended: PostgreSQL 12.22 (Debian 12.22-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.20 (Debian 13.20-1.pgdg120+1), PostgreSQL 14.17 (Debian 14.17-1.pgdg120+1), PostgreSQL 15.12 (Debian 15.12-1.pgdg120+1) |
| Oracle | 19.20.0 |