
Commit 2c5fcff

A note on SigV4
1 parent e828fc5 commit 2c5fcff

File tree

1 file changed: +3 -0 lines changed


text/0000-cargo-asymmetric-tokens.md

Lines changed: 3 additions & 0 deletions
@@ -144,6 +144,9 @@ In practice, I suspect many registries will not, leading to an ecosystem where m
We could use PASETO `v4.public` instead of `v3.public`. The `v4` standard uses more modern crypto, based on Ed25519.
Unfortunately, most existing hardware tokens do not support Ed25519. By using `v3.public` based on P-384 we allow a `credential-process` to keep keys on the hardware.

+We could use [Amazon's SigV4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). In SigV4 the client constructs a string from the request (URL, headers, and body) and signs it. It sends the signature, and only the signature, as the authentication with the request; importantly, the client does not send the constructed string. The server looks at the request it receives and constructs its own copy of the string, then checks that the signature it got is valid for the string it constructed. This scheme means the authentication field stays the same size no matter how much is being signed, and any large data sent in the request is not duplicated in the authentication header. Most importantly, there is no way for a server to have a bug where it forgets to check that some field in the token matches the request it came with!
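
A minimal sketch of the shape of this scheme, assuming the `hmac`, `sha2`, and `hex` crates. Real SigV4 additionally derives a scoped signing key and builds a separate "string to sign"; the endpoint, headers, and secret below are made up for illustration.

```rust
use hmac::{Hmac, Mac};
use sha2::{Digest, Sha256};

type HmacSha256 = Hmac<Sha256>;

// Stand-in for SigV4's "canonical request": method, path, sorted
// lower-cased headers, and a hash of the body so a large body is not
// copied into the authentication header.
fn canonical_string(method: &str, path: &str, headers: &[(&str, &str)], body: &[u8]) -> String {
    let mut canonical_headers: Vec<String> = headers
        .iter()
        .map(|(name, value)| format!("{}:{}", name.to_ascii_lowercase(), value.trim()))
        .collect();
    canonical_headers.sort();
    format!(
        "{}\n{}\n{}\n{}",
        method,
        path,
        canonical_headers.join("\n"),
        hex::encode(Sha256::digest(body))
    )
}

// Client side: sign the canonical string; only the signature travels
// with the request.
fn sign(secret: &[u8], canonical: &str) -> Vec<u8> {
    let mut mac = HmacSha256::new_from_slice(secret).expect("HMAC accepts any key length");
    mac.update(canonical.as_bytes());
    mac.finalize().into_bytes().to_vec()
}

// Server side: rebuild the canonical string from the request that was
// actually received and check the signature against it.
fn verify(secret: &[u8], canonical: &str, signature: &[u8]) -> bool {
    let mut mac = HmacSha256::new_from_slice(secret).expect("HMAC accepts any key length");
    mac.update(canonical.as_bytes());
    mac.verify_slice(signature).is_ok()
}

fn main() {
    // Hypothetical shared secret and publish request, for illustration only.
    let secret = b"shared secret between Cargo and the registry";
    let headers = [("Host", "registry.example.com"), ("Content-Type", "application/json")];
    let body = br#"{"name":"foo","vers":"1.0.0"}"#;

    let signature = sign(secret, &canonical_string("PUT", "/api/v1/crates/new", &headers, body));

    // The server recomputes the string from the request it actually received;
    // any field that differs from what the client signed makes verification
    // fail, so no check can be "forgotten".
    let rebuilt = canonical_string("PUT", "/api/v1/crates/new", &headers, body);
    assert!(verify(secret, &rebuilt, &signature));
}
```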
+Unfortunately this scheme is more complicated than it seems. There is a lot of complexity hidden in "constructs a string". SigV4 does not get us out of having to specify exactly which fields are important for each request. Furthermore, HTTP headers and URLs can be canonicalized differently by different hops on the network. When calling Amazon's services, Amazon provides client libraries that do all the heavy lifting of making sure the fields are canonicalized the same way on the client and the server if and only if the requests are for the same resource. A lot of this complexity has been standardized and generalized in the [HTTP Message Signatures](https://www.ietf.org/archive/id/draft-ietf-httpbis-message-signatures-08.html) draft specification. Unfortunately, implementations of that specification are not yet widely available.
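
To illustrate why "constructs a string" is the hard part, consider a hypothetical hop that rewrites the request in a semantically equivalent way; a naive canonicalization then produces different strings on the two ends. The paths and header below are made up for illustration.

```rust
// Two wire-level forms of what the registry would consider the same
// request. An intermediate hop may decode a percent-encoded path or
// change header-name case, so a naive canonicalization diverges
// between what the client signed and what the server rebuilds.
fn naive_canonical(method: &str, path: &str, host_header: &str) -> String {
    format!("{}\n{}\n{}", method, path, host_header)
}

fn main() {
    // What the client signed.
    let signed = naive_canonical("GET", "/api/v1/crates/serde/%31.0.0/download", "Host: crates.io");
    // What the server rebuilds after a proxy decoded "%31" to "1"
    // and lower-cased the header name.
    let rebuilt = naive_canonical("GET", "/api/v1/crates/serde/1.0.0/download", "host: crates.io");
    assert_ne!(signed, rebuilt); // the signature would no longer verify
}
```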
Mutating operations include signed proof that the asymmetric token was intended for that package, version, and hash. Why not do the same for read operations? When reading from an HTTP-based index, we may need to request many files in quick succession without being able to enumerate them in advance. When using a credential process to communicate with a hardware token that requires human interaction for each signing operation, we do not want to require hundreds of interactions.

Use [Biscuit](https://www.biscuitsec.org/) instead of PASETO. Biscuit is a format that adds delegation and a logic-based policy engine for attenuation and fine-grained usage controls to the other properties tokens have. The Biscuit logic language provides a centralized place to do authorization as part of the token format; for example, a token can be made that can only publish one crate on a particular day (good for a CI/CD use case), or a token that can only yank particular crates (good for giving to a security scanner). Once Biscuit is adopted as your token format, [the crates.io token scopes RFC](https://github.com/rust-lang/rfcs/pull/2947) becomes easy to implement. Authorization with tokens that have limited scope is definitely something more widely used registries should support.
