This document describes the format of the driver spec tests included as JSON and YAML files in the legacy
sub-directory. Tests in the unified directory are written using the Unified Test Format.
The timeoutMS.yml/timeoutMS.json files in this directory contain tests for the timeoutMS option and its
application to the client-side encryption feature. Drivers MUST only run these tests after implementing the
Client Side Operations Timeout specification.
Additional prose tests, which are not represented in the spec tests, are described below and MUST be implemented by all drivers.
Running the spec and prose tests requires that the driver and server both support Client-Side Field Level Encryption. CSFLE is supported when all of the following are true:

- Server version is 4.2.0 or higher. Legacy spec test runners can rely on `runOn.minServerVersion` for this check.
- Driver has libmongocrypt enabled.
- At least one of crypt_shared and/or mongocryptd is available.
The spec test format is an extension of the transactions spec legacy test format with some additions:

- A `json_schema` to set on the collection used for operations.
- An `encrypted_fields` to set on the collection used for operations.
- A `key_vault_data` of data that should be inserted in the key vault collection before each test.
- Introduction of `autoEncryptOpts` to `clientOptions`.
- Addition of `$db` to the command in `command_started_event`.
- Addition of `$$type` to `command_started_event` and outcome.
The semantics of `$$type` are that any actual value matching one of the types indicated by either a BSON type string or
an array of BSON type strings is considered a match.
For example, the following matches a command_started_event for an insert of a document where random must be of type
binData:
```yaml
- command_started_event:
    command:
      insert: *collection_name
      documents:
        - { random: { $$type: "binData" } }
      ordered: true
    command_name: insert
```
The following matches a command_started_event for an insert of a document where random must be of type binData or
string:
```yaml
- command_started_event:
    command:
      insert: *collection_name
      documents:
        - { random: { $$type: ["binData", "string"] } }
      ordered: true
    command_name: insert
```
The values of $$type correspond to
these documented string representations of BSON types.
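The matching rule above can be sketched in Python. The helper name and the partial type table below are illustrative assumptions, not part of the spec; a real runner would cover every documented BSON type string (e.g. using PyMongo's `bson.Binary` for `binData`):

```python
# Illustrative $$type matcher: maps a few BSON type strings to Python-side
# checks and accepts either a single type name or a list of names.
_BSON_TYPE_CHECKS = {
    "string": lambda v: isinstance(v, str),
    "int": lambda v: isinstance(v, int) and not isinstance(v, bool),
    "binData": lambda v: isinstance(v, (bytes, bytearray)),
}

def matches_dollar_type(actual, expected):
    """Return True if `actual` matches the $$type value (a BSON type
    string or an array of BSON type strings)."""
    names = [expected] if isinstance(expected, str) else expected
    return any(_BSON_TYPE_CHECKS[name](actual) for name in names)
```

For example, a raw `bytes` value matches `["binData", "string"]` (the second YAML example above), while an integer does not match `"binData"`.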
Each YAML file has the following keys:

- `runOn`: Unchanged from Transactions spec tests.
- `database_name`: Unchanged from Transactions spec tests.
- `collection_name`: Unchanged from Transactions spec tests.
- `data`: Unchanged from Transactions spec tests.
- `json_schema`: A JSON Schema that should be set on the collection (using `createCollection`) before each test run.
- `encrypted_fields`: An `encryptedFields` option that should be set on the collection (using `createCollection`)
  before each test run.
- `key_vault_data`: The data that should exist in the key vault collection under test before each test run.
- `tests`: An array of tests that are to be run independently of each other. Each test will have some or all of the
  following fields:
  - `description`: Unchanged from Transactions spec tests.
  - `skipReason`: Unchanged from Transactions spec tests.
  - `useMultipleMongoses`: Unchanged from Transactions spec tests.
  - `failPoint`: Unchanged from Transactions spec tests.
  - `clientOptions`: Optional, parameters to pass to `MongoClient()`.
    - `autoEncryptOpts`: Optional.
      - `kmsProviders`: A dictionary of KMS providers to set on the key vault ("aws" or "local").
        - `aws`: The AWS KMS provider. An empty object. Drivers MUST fill in AWS credentials (`accessKeyId`,
          `secretAccessKey`) from the environment.
        - `azure`: The Azure KMS provider credentials. An empty object. Drivers MUST fill in Azure credentials
          (`tenantId`, `clientId`, and `clientSecret`) from the environment.
        - `gcp`: The GCP KMS provider credentials. An empty object. Drivers MUST fill in GCP credentials (`email`,
          `privateKey`) from the environment.
        - `local` or `local:name2`: The local KMS provider.
          - `key`: A 96 byte local key.
        - `kmip`: The KMIP KMS provider credentials. An empty object. Drivers MUST fill in KMIP credentials
          (`endpoint`, and TLS options).
      - `schemaMap`: Optional, a map from namespaces to local JSON schemas.
      - `keyVaultNamespace`: Optional, a namespace to the key vault collection. Defaults to "keyvault.datakeys".
      - `bypassAutoEncryption`: Optional, a boolean to indicate whether or not auto encryption should be bypassed.
        Defaults to `false`.
      - `encryptedFieldsMap`: An optional document. The document maps collection namespaces to `EncryptedFields`
        documents.
  - `operations`: Array of documents, each describing an operation to be executed. Each document has the following
    fields:
    - `name`: Unchanged from Transactions spec tests.
    - `object`: Unchanged from Transactions spec tests. Defaults to "collection" if omitted.
    - `collectionOptions`: Unchanged from Transactions spec tests.
    - `command_name`: Unchanged from Transactions spec tests.
    - `arguments`: Unchanged from Transactions spec tests.
    - `result`: Same as the Transactions spec test format with one addition: if the operation is expected to return an
      error, the `result` document may contain an `isTimeoutError` boolean field. If `true`, the test runner MUST
      assert that the error represents a timeout due to the use of the `timeoutMS` option. If `false`, the test runner
      MUST assert that the error does not represent a timeout.
  - `expectations`: Unchanged from Transactions spec tests.
  - `outcome`: Unchanged from Transactions spec tests.
Test credentials are available in AWS Secrets Manager. See https://wiki.corp.mongodb.com/display/DRIVERS/Using+AWS+Secrets+Manager+to+Store+Testing+Secrets for more background on how the secrets are managed.
Test credentials to KMS are located in "drivers/csfle".
Test credentials to create environments are available in "drivers/gcpkms" and "drivers/azurekms".
Do the following before running spec tests:
- If available for the platform under test, obtain a crypt_shared binary and place it in a location accessible to the tests. Refer to: Using crypt_shared
- Start the mongocryptd process.
- Start a mongod process with server version 4.2.0 or later.
- Place credentials somewhere in the environment outside of tracked code. (If testing on Evergreen, project variables are a good place.)
- Start a KMIP test server on port 5698 by running drivers-evergreen-tools/.evergreen/csfle/kms_kmip_server.py.
Load each YAML (or JSON) file using a Canonical Extended JSON parser.
If the test file name matches the regular expression fle2-Range-.*-Correctness, drivers MAY skip the test on macOS.
The fle2-Range tests are very slow on macOS and do not provide significant additional test coverage.
Then for each element in `tests`:

1. If the `skipReason` field is present, skip this test completely.

2. If the `key_vault_data` field is present:

   - Drop the `keyvault.datakeys` collection using writeConcern "majority".
   - Insert the data specified into `keyvault.datakeys` with writeConcern "majority".

3. Create a MongoClient.

4. Create a collection object from the MongoClient, using the `database_name` and `collection_name` fields from the
   YAML file. Drop the collection with writeConcern "majority". If a `json_schema` is defined in the test, use the
   `createCollection` command to explicitly create the collection:
   `{"create": <collection>, "validator": {"$jsonSchema": <json_schema>}}`

   If `encrypted_fields` is defined in the test, the required collections and index described in Create and Drop
   Collection Helpers must be created:

   - Use the `dropCollection` helper with `encrypted_fields` as an option and writeConcern "majority".
   - Use the `createCollection` helper with `encrypted_fields` as an option.

5. If the YAML file contains a `data` array, insert the documents in `data` into the test collection, using
   writeConcern "majority".
6. Create a new MongoClient using `clientOptions`.

   - If `autoEncryptOpts` includes `aws`, `awsTemporary`, `awsTemporaryNoSessionToken`, `azure`, `gcp`, and/or `kmip`
     as a KMS provider, pass in credentials from the environment.

     - `awsTemporary` and `awsTemporaryNoSessionToken` require temporary AWS credentials. These can be retrieved using
       the csfle set-temp-creds.sh script.
     - `aws`, `awsTemporary`, and `awsTemporaryNoSessionToken` are mutually exclusive.

     `aws` should be substituted with:
     `"aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }`

     `awsTemporary` should be substituted with:
     `"aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment>, "sessionToken": <set from environment> }`

     `awsTemporaryNoSessionToken` should be substituted with:
     `"aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }`

     `gcp` should be substituted with:
     `"gcp": { "email": <set from environment>, "privateKey": <set from environment> }`

     `azure` should be substituted with:
     `"azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment> }`

     `local` should be substituted with:
     `"local": { "key": <base64 decoding of LOCAL_MASTERKEY> }`

     `kmip` should be substituted with:
     `"kmip": { "endpoint": "localhost:5698" }`

     Configure KMIP TLS connections to use the following options:

     - `tlsCAFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/ca.pem. This MAY be configured
       system-wide.
     - `tlsCertificateKeyFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/client.pem.

     The method of passing TLS options for KMIP TLS connections is driver dependent.

   - If `autoEncryptOpts` does not include `keyVaultNamespace`, default it to `keyvault.datakeys`.
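The substitution step can be sketched as below. This is a minimal, assumption-laden sketch: the environment variable names (`FLE_AWS_KEY`, `FLE_AWS_SECRET`, `LOCAL_MASTERKEY`) and the helper name are hypothetical; use whatever your CI actually exposes, and extend the mapping for the remaining providers:

```python
import base64
import os

def build_kms_providers(requested, env=None):
    """Substitute KMS provider placeholders with credentials from the
    environment, per the rules above (hypothetical variable names)."""
    env = os.environ if env is None else env
    providers = {}
    if "aws" in requested:
        providers["aws"] = {
            "accessKeyId": env["FLE_AWS_KEY"],
            "secretAccessKey": env["FLE_AWS_SECRET"],
        }
    if "local" in requested:
        # LOCAL_MASTERKEY is base64; the local provider takes raw key bytes.
        providers["local"] = {"key": base64.b64decode(env["LOCAL_MASTERKEY"])}
    if "kmip" in requested:
        # The KMIP test server started during setup listens on port 5698.
        providers["kmip"] = {"endpoint": "localhost:5698"}
    return providers
```

Passing `env` explicitly keeps the helper testable without mutating the real process environment.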
7. For each element in `operations`:

   - Enter a "try" block or your programming language's closest equivalent.
   - Create a Database object from the MongoClient, using the `database_name` field at the top level of the test file.
   - Create a Collection object from the Database, using the `collection_name` field at the top level of the test
     file. If `collectionOptions` is present, create the Collection object with the provided options. Otherwise create
     the object with the default options.
   - Execute the named method on the provided `object`, passing the arguments listed.
   - If the driver throws an exception / returns an error while executing this series of operations, store the error
     message and server error code.
   - If the `result` document has an "errorContains" field, verify that the method threw an exception or returned an
     error, and that the value of the "errorContains" field matches the error string. "errorContains" is a substring
     (case-insensitive) of the actual error message.

     If the `result` document has an "errorCodeName" field, verify that the method threw a command failed exception or
     returned an error, and that the value of the "errorCodeName" field matches the "codeName" in the server error
     response.

     If the `result` document has an "errorLabelsContain" field, verify that the method threw an exception or returned
     an error. Verify that all of the error labels in "errorLabelsContain" are present in the error or exception using
     the `hasErrorLabel` method.

     If the `result` document has an "errorLabelsOmit" field, verify that the method threw an exception or returned an
     error. Verify that none of the error labels in "errorLabelsOmit" are present in the error or exception using the
     `hasErrorLabel` method.
   - If the operation returns a raw command response, e.g. from `runCommand`, then compare only the fields present in
     the expected result document. Otherwise, compare the method's return value to `result` using the same logic as
     the CRUD Spec Tests runner.

8. If the test includes a list of command-started events in `expectations`, compare them to the actual command-started
   events using the same logic as the Command Monitoring spec legacy test runner.

9. For each element in `outcome`:

   - If `name` is "collection", create a new MongoClient without encryption and verify that the test collection
     contains exactly the documents in the `data` array. Ensure this find reads the latest data by using primary read
     preference with local read concern even when the MongoClient is configured with another read preference or read
     concern.
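The error assertions in step 7 can be collected into a single helper. The function and parameter names below are illustrative, assuming the runner has already captured the error message, the server "codeName", and the error labels:

```python
def check_error_expectations(result, message, code_name=None, labels=()):
    """Apply the errorContains / errorCodeName / errorLabelsContain /
    errorLabelsOmit assertions described above to a captured error."""
    if "errorContains" in result:
        # errorContains is a case-insensitive substring of the message.
        assert result["errorContains"].lower() in message.lower()
    if "errorCodeName" in result:
        assert result["errorCodeName"] == code_name
    for label in result.get("errorLabelsContain", []):
        assert label in labels
    for label in result.get("errorLabelsOmit", []):
        assert label not in labels
```

Note the case-folding on both sides for "errorContains"; the other checks are exact comparisons.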
The spec tests MUST be run with and without auth.
On platforms where crypt_shared is available, drivers should prefer to test
with the crypt_shared library instead of spawning mongocryptd.
crypt_shared is released alongside the server. crypt_shared is only available in versions 6.0 and above.
mongocryptd is released alongside the server. mongocryptd is available in versions 4.2 and above.
Drivers MUST run all tests with mongocryptd on at least one platform for all tested server versions.
Drivers MUST run all tests with crypt_shared on at least one platform for all tested server versions. For server versions < 6.0, drivers MUST test with the latest major release of crypt_shared. Using the latest major release of crypt_shared is supported with older server versions.
Note that some tests assert on mongocryptd-related behaviors (e.g. the mongocryptdBypassSpawn test).
Drivers under test should load the crypt_shared library using either the
cryptSharedLibPath public API option (as part of the AutoEncryption extraOptions), or by setting a special search
path.
Some tests will require not using crypt_shared. For such tests, one
should ensure that crypt_shared will not be loaded. Refer to the client-side-encryption documentation for information
on "disabling" crypt_shared and setting library search paths.
Note
The crypt_shared dynamic library can be obtained using the mongodl Python script from drivers-evergreen-tools:
```
$ python3 mongodl.py --component=crypt_shared --version=<VERSION> --out=./crypt_shared/
```

Other versions of crypt_shared are also available. Please use the --list option to see available versions.
Tests for the ClientEncryption type are not included as part of the YAML tests.
In the prose tests LOCAL_MASTERKEY refers to the following base64:
Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk
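As a quick sanity check (the local KMS provider requires a 96-byte key), this base64 string decodes to exactly 96 bytes:

```python
import base64

# LOCAL_MASTERKEY as given above, split across two lines for readability.
LOCAL_MASTERKEY = (
    "Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFB"
    "MUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk"
)
key = base64.b64decode(LOCAL_MASTERKEY)
assert len(key) == 96  # a 96-byte local key, as the format requires
```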
Perform all applicable operations on key vault collections (e.g. inserting an example data key, or running a find command) with readConcern/writeConcern "majority".
1. Create a `MongoClient` object (referred to as `client`).

2. Using `client`, drop the collection `keyvault.datakeys`.

3. Create a `ClientEncryption` object (referred to as `client_encryption`) with `client` set as the `keyVaultClient`.

4. Using `client_encryption`, create a data key with a `local` KMS provider and the following custom key material
   (given as base64):
   `xPTAjBRG5JiPm+d3fj6XLi2q5DMXUS/f1f+SMAlhhwkhDRL0kr8r9GDLIGTAGlvC+HVjSIgdL+RKwZCvpXSyxTICWSXTUYsWYPyu3IoHbuBZdmw2faM3WhcRIgbMReU5`

5. Find the resulting key document in `keyvault.datakeys`, save a copy of the key document, then remove the key
   document from the collection.

6. Replace the `_id` field in the copied key document with a UUID with base64 value `AAAAAAAAAAAAAAAAAAAAAA==` (16
   bytes all equal to `0x00`) and insert the modified key document into `keyvault.datakeys` with majority write
   concern.

7. Using `client_encryption`, encrypt the string `"test"` with the modified data key using the
   `AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic` algorithm and assert the resulting value is equal to the following
   (given as base64):
   `AQAAAAAAAAAAAAAAAAAAAAACz0ZOLuuhEYi807ZXTdhbqhLaS2/t9wLifJnnNYwiw79d75QYIZ6M/aYC1h9nCzCjZ7pGUpAuNnkUhnIXM3PjrA==`
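Two of the fixed values in these steps can be verified offline: the custom key material decodes to 96 bytes, and the replacement `_id` decodes to the all-zero 16-byte UUID:

```python
import base64
import uuid

# Custom key material from step 4, split across two lines for readability.
key_material = base64.b64decode(
    "xPTAjBRG5JiPm+d3fj6XLi2q5DMXUS/f1f+SMAlhhwkhDRL0kr8r9GDLIGTAGlvC"
    "+HVjSIgdL+RKwZCvpXSyxTICWSXTUYsWYPyu3IoHbuBZdmw2faM3WhcRIgbMReU5"
)
assert len(key_material) == 96  # local KMS keys are 96 bytes

# Replacement _id from step 6: 16 bytes, all 0x00.
key_id_bytes = base64.b64decode("AAAAAAAAAAAAAAAAAAAAAA==")
assert key_id_bytes == b"\x00" * 16
assert uuid.UUID(bytes=key_id_bytes) == uuid.UUID(int=0)
```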
First, perform the setup.

1. Create a MongoClient without encryption enabled (referred to as `client`). Enable command monitoring to listen for
   command_started events.

2. Using `client`, drop the collections `keyvault.datakeys` and `db.coll`.

3. Create the following:

   - A MongoClient configured with auto encryption (referred to as `client_encrypted`)
   - A `ClientEncryption` object (referred to as `client_encryption`)

   Configure both objects with the following KMS providers:
   `{ "aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment> }, "gcp": { "email": <set from environment>, "privateKey": <set from environment> }, "local": { "key": <base64 decoding of LOCAL_MASTERKEY> }, "kmip": { "endpoint": "localhost:5698" } }`

   Configure KMIP TLS connections to use the following options:

   - `tlsCAFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/ca.pem. This MAY be configured
     system-wide.
   - `tlsCertificateKeyFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/client.pem.

   The method of passing TLS options for KMIP TLS connections is driver dependent.

   Configure both objects with `keyVaultNamespace` set to `keyvault.datakeys`.

   Configure the MongoClient with the following `schema_map`:
   `{ "db.coll": { "bsonType": "object", "properties": { "encrypted_placeholder": { "encrypt": { "keyId": "/placeholder", "bsonType": "string", "algorithm": "AEAD_AES_256_CBC_HMAC_SHA_512-Random" } } } } }`

   Configure `client_encryption` with the `keyVaultClient` of the previously created `client`.
For each KMS provider (`aws`, `azure`, `gcp`, `local`, and `kmip`), referred to as `provider_name`, run the following
test.

1. Call `client_encryption.createDataKey()`.

   - Set keyAltNames to `["<provider_name>_altname"]`.

   - Set the masterKey document based on `provider_name`.

     For "aws":
     `{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0" }`

     For "azure":
     `{ "keyVaultEndpoint": "key-vault-csfle.vault.azure.net", "keyName": "key-name-csfle" }`

     For "gcp":
     `{ "projectId": "devprod-drivers", "location": "global", "keyRing": "key-ring-csfle", "keyName": "key-name-csfle" }`

     For "kmip":
     `{}`

     For "local", do not set a masterKey document.

   - Expect a BSON binary with subtype 4 to be returned, referred to as `datakey_id`.

   - Use `client` to run a `find` on `keyvault.datakeys` by querying with the `_id` set to the `datakey_id`.

   - Expect that exactly one document is returned with the "masterKey.provider" equal to `provider_name`.

   - Check that `client` captured a command_started event for the `insert` command containing a majority writeConcern.

2. Call `client_encryption.encrypt()` with the value `"hello <provider_name>"`, the algorithm
   `AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic`, and the `key_id` of `datakey_id`.

   - Expect the return value to be a BSON binary subtype 6, referred to as `encrypted`.
   - Use `client_encrypted` to insert `{ _id: "<provider_name>", "value": <encrypted> }` into `db.coll`.
   - Use `client_encrypted` to run a find querying with `_id` of `"<provider_name>"` and expect `value` to be
     `"hello <provider_name>"`.

3. Call `client_encryption.encrypt()` with the value `"hello <provider_name>"`, the algorithm
   `AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic`, and the `key_alt_name` of `<provider_name>_altname`.

   - Expect the return value to be a BSON binary subtype 6. Expect the value to exactly match the value of
     `encrypted`.

4. Test explicitly encrypting an auto encrypted field.

   - Use `client_encrypted` to attempt to insert `{ "encrypted_placeholder": <encrypted> }`.
   - Expect an exception to be thrown, since this is an attempt to auto encrypt an already encrypted value.
Run the following tests twice, parameterized by a boolean `withExternalKeyVault`.

1. Create a MongoClient without encryption enabled (referred to as `client`).

2. Using `client`, drop the collections `keyvault.datakeys` and `db.coll`. Insert the document
   external/external-key.json into `keyvault.datakeys`.

3. Create the following:

   - A MongoClient configured with auto encryption (referred to as `client_encrypted`)
   - A `ClientEncryption` object (referred to as `client_encryption`)

   Configure both objects with the `local` KMS provider as follows:
   `{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }`

   Configure both objects with `keyVaultNamespace` set to `keyvault.datakeys`.

   Configure `client_encrypted` to use the schema external/external-schema.json for `db.coll` by setting a schema map
   like: `{ "db.coll": <contents of external-schema.json> }`

   If `withExternalKeyVault == true`, configure both objects with an external key vault client. The external client
   MUST connect to the same MongoDB cluster that is being tested against, except it MUST use the username `fake-user`
   and password `fake-pwd`.

4. Use `client_encrypted` to insert the document `{"encrypted": "test"}` into `db.coll`. If
   `withExternalKeyVault == true`, expect an authentication exception to be thrown. Otherwise, expect the insert to
   succeed.

5. Use `client_encryption` to explicitly encrypt the string `"test"` with key ID `LOCALAAAAAAAAAAAAAAAAA==` and
   deterministic algorithm. If `withExternalKeyVault == true`, expect an authentication exception to be thrown.
   Otherwise, expect the operation to succeed.
First, perform the setup.

1. Create a MongoClient without encryption enabled (referred to as `client`).

2. Using `client`, drop and create the collection `db.coll` configured with the included JSON schema
   limits/limits-schema.json.

3. If using MongoDB 8.0+, use `client` to drop and create the collection `db.coll2` configured with the included
   encryptedFields limits/limits-encryptedFields.json.

4. Using `client`, drop the collection `keyvault.datakeys`. Insert the document limits/limits-key.json.

5. Create a MongoClient configured with auto encryption (referred to as `client_encrypted`).

   Configure with the `local` KMS provider as follows:
   `{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }`

   Configure with `keyVaultNamespace` set to `keyvault.datakeys`.
Using `client_encrypted`, perform the following operations:

1. Insert `{ "_id": "over_2mib_under_16mib", "unencrypted": <the string "a" repeated 2097152 times> }` into `coll`.

   Expect this to succeed since this is still under the `maxBsonObjectSize` limit.

2. Insert the document limits/limits-doc.json concatenated with
   `{ "_id": "encryption_exceeds_2mib", "unencrypted": <the string "a" repeated (2097152 - 2000) times> }` into
   `coll`. Note: limits-doc.json is a 1005 byte BSON document that encrypts to a ~10,000 byte document.

   Expect this to succeed since after encryption this still is below the normal maximum BSON document size. Note,
   before auto encryption this document is under the 2 MiB limit. After encryption it exceeds the 2 MiB limit, but
   does NOT exceed the 16 MiB limit.

3. Use MongoCollection.bulkWrite to insert the following into `coll`:

   - `{ "_id": "over_2mib_1", "unencrypted": <the string "a" repeated 2097152 times> }`
   - `{ "_id": "over_2mib_2", "unencrypted": <the string "a" repeated 2097152 times> }`

   Expect the bulk write to succeed and split after the first doc (i.e. two inserts occur). This may be verified using
   command monitoring.

4. Use MongoCollection.bulkWrite to insert the following into `coll`:

   - The document limits/limits-doc.json concatenated with
     `{ "_id": "encryption_exceeds_2mib_1", "unencrypted": <the string "a" repeated (2097152 - 2000) times> }`
   - The document limits/limits-doc.json concatenated with
     `{ "_id": "encryption_exceeds_2mib_2", "unencrypted": <the string "a" repeated (2097152 - 2000) times> }`

   Expect the bulk write to succeed and split after the first doc (i.e. two inserts occur). This may be verified using
   command logging and monitoring.

5. Insert `{ "_id": "under_16mib", "unencrypted": <the string "a" repeated (16777216 - 2000) times> }` into `coll`.

   Expect this to succeed since this is still (just) under the `maxBsonObjectSize` limit.

6. Insert the document limits/limits-doc.json concatenated with
   `{ "_id": "encryption_exceeds_16mib", "unencrypted": <the string "a" repeated (16777216 - 2000) times> }` into
   `coll`.

   Expect this to fail indicating the document exceeded the `maxBsonObjectSize` limit. If the write is sent to the
   server (i.e. does not fail due to a driver-side check), expect a server error with code 2 or 10334.

7. If using MongoDB 8.0+, use MongoClient.bulkWrite to insert the following into `coll2`:

   - `{ "_id": "over_2mib_3", "unencrypted": <the string "a" repeated (2097152 - 1500) times> }`
   - `{ "_id": "over_2mib_4", "unencrypted": <the string "a" repeated (2097152 - 1500) times> }`

   Expect the bulk write to succeed and split after the first doc (i.e. two inserts occur). This may be verified using
   command logging and monitoring.

8. If using MongoDB 8.0+, use MongoClient.bulkWrite to insert the following into `coll2`:

   - The document limits/limits-qe-doc.json concatenated with
     `{ "_id": "encryption_exceeds_2mib_3", "foo": <the string "a" repeated (2097152 - 2000 - 1500) times> }`
   - The document limits/limits-qe-doc.json concatenated with
     `{ "_id": "encryption_exceeds_2mib_4", "foo": <the string "a" repeated (2097152 - 2000 - 1500) times> }`

   Expect the bulk write to succeed and split after the first doc (i.e. two inserts occur). This may be verified using
   command logging and monitoring.

Optionally, if it is possible to mock the maxWriteBatchSize (i.e. the maximum number of documents in a batch), test
that setting maxWriteBatchSize=1 and inserting the two documents `{ "_id": "a" }`, `{ "_id": "b" }` with
`client_encrypted` splits the operation into two inserts.
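The sizes in these steps are chosen relative to the 2 MiB auto-encryption batch-splitting threshold and the 16 MiB `maxBsonObjectSize` limit. The arithmetic, using the ~10,000-byte post-encryption growth noted in step 2, works out as follows:

```python
# 2 MiB and 16 MiB thresholds used by the limits tests above.
TWO_MIB = 2 * 1024 * 1024          # 2097152
SIXTEEN_MIB = 16 * 1024 * 1024     # 16777216
assert TWO_MIB == 2097152 and SIXTEEN_MIB == 16777216

# "encryption_exceeds_2mib": under 2 MiB before encryption (limits-doc.json
# is 1005 bytes), over 2 MiB after encryption adds roughly 10,000 bytes,
# but still far below 16 MiB.
before_encryption = (2097152 - 2000) + 1005
after_encryption = (2097152 - 2000) + 10000
assert before_encryption < TWO_MIB < after_encryption < SIXTEEN_MIB
```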
1. Create a MongoClient without encryption enabled (referred to as `client`).

2. Using `client`, drop and create a view named `db.view` with an empty pipeline. E.g. using the command
   `{ "create": "view", "viewOn": "coll" }`.

3. Create a MongoClient configured with auto encryption (referred to as `client_encrypted`).

   Configure with the `local` KMS provider as follows:
   `{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }`

   Configure with `keyVaultNamespace` set to `keyvault.datakeys`.

4. Using `client_encrypted`, attempt to insert a document into `db.view`. Expect an exception to be thrown containing
   the message: "cannot auto encrypt a view".
The corpus test exhaustively enumerates all ways to encrypt all BSON value types. Note, the test data includes BSON binary subtype 4 (or standard UUID), which MUST be decoded and encoded as subtype 4. Run the test as follows.
1. Create a MongoClient without encryption enabled (referred to as `client`).

2. Using `client`, drop and create the collection `db.coll` configured with the included JSON schema
   corpus/corpus-schema.json.

3. Using `client`, drop the collection `keyvault.datakeys`. Insert the documents corpus/corpus-key-local.json,
   corpus/corpus-key-aws.json, corpus/corpus-key-azure.json, corpus/corpus-key-gcp.json, and
   corpus/corpus-key-kmip.json.

4. Create the following:

   - A MongoClient configured with auto encryption (referred to as `client_encrypted`)
   - A `ClientEncryption` object (referred to as `client_encryption`)

   Configure both objects with `aws`, `azure`, `gcp`, `local`, and `kmip` KMS providers as follows:
   `{ "aws": { <AWS credentials> }, "azure": { <Azure credentials> }, "gcp": { <GCP credentials> }, "local": { "key": <base64 decoding of LOCAL_MASTERKEY> }, "kmip": { "endpoint": "localhost:5698" } }`

   Configure KMIP TLS connections to use the following options:

   - `tlsCAFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/ca.pem. This MAY be configured
     system-wide.
   - `tlsCertificateKeyFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/client.pem.

   The method of passing TLS options for KMIP TLS connections is driver dependent.

   Where LOCAL_MASTERKEY is the following base64:
   `Mng0NCt4ZHVUYUJCa1kxNkVyNUR1QURhZ2h2UzR2d2RrZzh0cFBwM3R6NmdWMDFBMUN3YkQ5aXRRMkhGRGdQV09wOGVNYUMxT2k3NjZKelhaQmRCZGJkTXVyZG9uSjFk`

   Configure both objects with `keyVaultNamespace` set to `keyvault.datakeys`.
5. Load corpus/corpus.json to a variable named `corpus`. The corpus contains subdocuments with the following fields:

   - `kms` is `aws`, `azure`, `gcp`, `local`, or `kmip`
   - `type` is a BSON type string (names coming from here)
   - `algo` is either `rand` or `det` for random or deterministic encryption
   - `method` is either `auto`, for automatic encryption, or `explicit` for explicit encryption
   - `identifier` is either `id` or `altname` for the key identifier
   - `allowed` is a boolean indicating whether the encryption for the given parameters is permitted
   - `value` is the value to be tested

   Create a new BSON document, named `corpus_copied`. Iterate over each field of `corpus`.

   - If the field name is `_id`, `altname_aws`, `altname_local`, `altname_azure`, `altname_gcp`, or `altname_kmip`,
     copy the field to `corpus_copied`.

   - If `method` is `auto`, copy the field to `corpus_copied`.

   - If `method` is `explicit`, use `client_encryption` to explicitly encrypt the value.

     - Encrypt with the algorithm described by `algo`.
     - If `identifier` is `id`:
       - If `kms` is `local` set the key_id to the UUID with base64 value `LOCALAAAAAAAAAAAAAAAAA==`.
       - If `kms` is `aws` set the key_id to the UUID with base64 value `AWSAAAAAAAAAAAAAAAAAAA==`.
       - If `kms` is `azure` set the key_id to the UUID with base64 value `AZUREAAAAAAAAAAAAAAAAA==`.
       - If `kms` is `gcp` set the key_id to the UUID with base64 value `GCPAAAAAAAAAAAAAAAAAAA==`.
       - If `kms` is `kmip` set the key_id to the UUID with base64 value `KMIPAAAAAAAAAAAAAAAAAA==`.
     - If `identifier` is `altname`:
       - If `kms` is `local` set the key_alt_name to "local".
       - If `kms` is `aws` set the key_alt_name to "aws".
       - If `kms` is `azure` set the key_alt_name to "azure".
       - If `kms` is `gcp` set the key_alt_name to "gcp".
       - If `kms` is `kmip` set the key_alt_name to "kmip".

     If `allowed` is true, copy the field and encrypted value to `corpus_copied`. If `allowed` is false, verify that
     an exception is thrown. Copy the unencrypted value to `corpus_copied`.

6. Using `client_encrypted`, insert `corpus_copied` into `db.coll`.

7. Using `client_encrypted`, find the inserted document from `db.coll` to a variable named `corpus_decrypted`. Since
   it should have been automatically decrypted, assert the document exactly matches `corpus`.

8. Load corpus/corpus_encrypted.json to a variable named `corpus_encrypted_expected`. Using `client`, find the
   inserted document from `db.coll` to a variable named `corpus_encrypted_actual`.

   Iterate over each field of `corpus_encrypted_expected` and check the following:

   - If the `algo` is `det`, that the value equals the value of the corresponding field in `corpus_encrypted_actual`.
   - If the `algo` is `rand` and `allowed` is true, that the value does not equal the value of the corresponding field
     in `corpus_encrypted_actual`.
   - If `allowed` is true, decrypt the value with `client_encryption`. Decrypt the value of the corresponding field of
     `corpus_encrypted_actual` and validate that they are both equal.
   - If `allowed` is false, validate the value exactly equals the value of the corresponding field of `corpus`
     (neither was encrypted).

9. Repeat steps 1-8 with a local JSON schema. I.e. amend step 4 to configure the schema on `client_encrypted` with the
   `schema_map` option.
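The fixed key IDs used for explicit encryption in step 5 are 16-byte UUIDs spelled out in base64, which can be checked offline:

```python
import base64

# The per-provider key IDs listed above.
CORPUS_KEY_IDS = {
    "local": "LOCALAAAAAAAAAAAAAAAAA==",
    "aws": "AWSAAAAAAAAAAAAAAAAAAA==",
    "azure": "AZUREAAAAAAAAAAAAAAAAA==",
    "gcp": "GCPAAAAAAAAAAAAAAAAAAA==",
    "kmip": "KMIPAAAAAAAAAAAAAAAAAA==",
}
for kms, b64 in CORPUS_KEY_IDS.items():
    # Each must decode to exactly 16 bytes (a UUID, i.e. BSON binary subtype 4).
    assert len(base64.b64decode(b64)) == 16
```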
For each test case, start by creating two ClientEncryption objects. Recreate the ClientEncryption objects for each
test case.
Create a ClientEncryption object (referred to as `client_encryption`).

Configure with `keyVaultNamespace` set to `keyvault.datakeys`, and a default MongoClient as the `keyVaultClient`.

Configure with KMS providers as follows:

```
{
    "aws": {
        "accessKeyId": <set from environment>,
        "secretAccessKey": <set from environment>
    },
    "azure": {
        "tenantId": <set from environment>,
        "clientId": <set from environment>,
        "clientSecret": <set from environment>,
        "identityPlatformEndpoint": "login.microsoftonline.com:443"
    },
    "gcp": {
        "email": <set from environment>,
        "privateKey": <set from environment>,
        "endpoint": "oauth2.googleapis.com:443"
    },
    "kmip": {
        "endpoint": "localhost:5698"
    }
}
```

Create a ClientEncryption object (referred to as `client_encryption_invalid`).
Configure with `keyVaultNamespace` set to `keyvault.datakeys`, and a default MongoClient as the `keyVaultClient`.

Configure with KMS providers as follows:

```
{
    "azure": {
        "tenantId": <set from environment>,
        "clientId": <set from environment>,
        "clientSecret": <set from environment>,
        "identityPlatformEndpoint": "doesnotexist.invalid:443"
    },
    "gcp": {
        "email": <set from environment>,
        "privateKey": <set from environment>,
        "endpoint": "doesnotexist.invalid:443"
    },
    "kmip": {
        "endpoint": "doesnotexist.invalid:5698"
    }
}
```

Configure KMIP TLS connections to use the following options:

- `tlsCAFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/ca.pem. This MAY be configured
  system-wide.
- `tlsCertificateKeyFile` (or equivalent) set to drivers-evergreen-tools/.evergreen/x509gen/client.pem.

The method of passing TLS options for KMIP TLS connections is driver dependent.
-
Call
client_encryption.createDataKey()with "aws" as the provider and the following masterKey:{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "kms.us-east-1.amazonaws.com" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "kms.us-east-1.amazonaws.com:443" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
-
Call client_encryption.createDataKey() with "kmip" as the provider and the following masterKey:

{ "keyId": "1", "endpoint": "localhost:12345" }
Expect this to fail with a socket connection error.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "kms.us-east-2.amazonaws.com" }
Expect this to fail with an exception.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ region: "us-east-1", key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", endpoint: "doesnotexist.invalid" }
Expect this to fail with a network exception indicating failure to resolve "doesnotexist.invalid".
-
Call client_encryption.createDataKey() with "azure" as the provider and the following masterKey:

{ "keyVaultEndpoint": "key-vault-csfle.vault.azure.net", "keyName": "key-name-csfle" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
Call client_encryption_invalid.createDataKey() with the same masterKey. Expect this to fail with a network exception indicating failure to resolve "doesnotexist.invalid".
-
Call client_encryption.createDataKey() with "gcp" as the provider and the following masterKey:

{ "projectId": "devprod-drivers", "location": "global", "keyRing": "key-ring-csfle", "keyName": "key-name-csfle", "endpoint": "cloudkms.googleapis.com:443" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
Call client_encryption_invalid.createDataKey() with the same masterKey. Expect this to fail with a network exception indicating failure to resolve "doesnotexist.invalid".
-
Call client_encryption.createDataKey() with "gcp" as the provider and the following masterKey:

{ "projectId": "devprod-drivers", "location": "global", "keyRing": "key-ring-csfle", "keyName": "key-name-csfle", "endpoint": "doesnotexist.invalid:443" }
Expect this to fail with an exception with a message containing the string: "Invalid KMS response".
-
Call client_encryption.createDataKey() with "kmip" as the provider and the following masterKey:

{ "keyId": "1" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
Call client_encryption_invalid.createDataKey() with the same masterKey. Expect this to fail with a network exception indicating failure to resolve "doesnotexist.invalid".
-
Call client_encryption.createDataKey() with "kmip" as the provider and the following masterKey:

{ "keyId": "1", "endpoint": "localhost:5698" }
Expect this to succeed. Use the returned UUID of the key to explicitly encrypt and decrypt the string "test" to validate it works.
-
Call client_encryption.createDataKey() with "kmip" as the provider and the following masterKey:

{ "keyId": "1", "endpoint": "doesnotexist.invalid:5698" }
Expect this to fail with a network exception indicating failure to resolve "doesnotexist.invalid".
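The custom endpoint cases above can be summarized as a table of expected outcomes, which is convenient when parameterizing the test. The outcome labels are shorthand for the expectations stated in each case; the table is a planning aid, not part of the spec.

```python
# Shorthand summary of the custom endpoint cases above, keyed by
# (provider, masterKey endpoint); None means no endpoint field is set.
# azure's endpoint field is keyVaultEndpoint; gcp/kmip use endpoint.
EXPECTED = {
    ("aws", None): "ok",
    ("aws", "kms.us-east-1.amazonaws.com"): "ok",
    ("aws", "kms.us-east-1.amazonaws.com:443"): "ok",
    ("aws", "kms.us-east-2.amazonaws.com"): "exception",      # region/endpoint mismatch
    ("aws", "doesnotexist.invalid"): "dns failure",
    ("azure", "key-vault-csfle.vault.azure.net"): "ok",
    ("gcp", "cloudkms.googleapis.com:443"): "ok",
    ("gcp", "doesnotexist.invalid:443"): "Invalid KMS response",
    ("kmip", None): "ok",                 # resolves via the provider's endpoint
    ("kmip", "localhost:5698"): "ok",
    ("kmip", "localhost:12345"): "socket error",
    ("kmip", "doesnotexist.invalid:5698"): "dns failure",
}

# Every "ok" case must also round-trip the string "test" through explicit
# encrypt/decrypt using the returned key UUID.
round_trip_cases = [case for case, outcome in EXPECTED.items() if outcome == "ok"]
```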
Note
CONSIDER: To reduce the chances of tests interfering with each other, drivers MAY use a different port for each test
in this group, and include it in --pidfilepath. Interference can occur because, once spawned by a test,
mongocryptd stays up and running for some time.
The following tests that loading crypt_shared bypasses spawning mongocryptd.
Note
IMPORTANT: This test requires the crypt_shared library be loaded. If the crypt_shared library is not available, skip the test.
-
Create a MongoClient configured with auto encryption (referred to as client_encrypted)

Configure the required options. Use the local KMS provider as follows:

{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }

Configure with the keyVaultNamespace set to keyvault.datakeys.

Configure client_encrypted to use the schema external/external-schema.json for db.coll by setting a schema map like:

{ "db.coll": <contents of external-schema.json> }

Configure the following extraOptions:

{ "mongocryptdURI": "mongodb://localhost:27021/?serverSelectionTimeoutMS=1000", "mongocryptdSpawnArgs": [ "--pidfilepath=bypass-spawning-mongocryptd.pid", "--port=27021"], "cryptSharedLibPath": "<path to shared library>", "cryptSharedLibRequired": true }
Drivers MAY pass a different port if they expect their testing infrastructure to be using port 27021. Pass a port that should be free.
-
Use client_encrypted to insert the document {"unencrypted": "test"} into db.coll. Expect this to succeed.
-
Validate that mongocryptd was not spawned. Create a MongoClient to localhost:27021 (or whatever was passed via
--port) with serverSelectionTimeoutMS=1000. Run a handshake command and ensure it fails with a server selection timeout.
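The "mongocryptd was not spawned" check can be approximated without a driver by verifying that nothing accepts TCP connections on the mongocryptd port; a driver-level implementation should instead run a handshake command with serverSelectionTimeoutMS=1000 and expect a server selection timeout, as described above. A minimal stdlib sketch:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP server accepts connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

# After step 2, mongocryptd must not have been spawned on the --port value
# (27021 unless the test chose a different free port); expect False here.
mongocryptd_spawned = is_listening("127.0.0.1", 27021)
```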
Note
IMPORTANT: If crypt_shared is visible to the operating system's library
search mechanism, the expected server error generated by the Via mongocryptdBypassSpawn, Via bypassAutoEncryption,
Via bypassQueryAnalysis tests will not appear because libmongocrypt will load the crypt_shared library instead of
consulting mongocryptd. For the following tests, it is required that libmongocrypt not load crypt_shared. Refer to
the client-side-encryption document for more information on "disabling" crypt_shared. Take into account that once
loaded (for example, by another test), crypt_shared cannot be unloaded and may be used by a MongoClient, causing
the tests to misbehave in unexpected ways.
The following tests that setting mongocryptdBypassSpawn=true really does bypass spawning mongocryptd.
-
Insert the document external/external-key.json into keyvault.datakeys with majority write concern. This step is not required to run this test, and drivers MAY skip it. But if the driver misbehaves, then not having the encryption fully set up may complicate the process of figuring out what is wrong.
-
Create a MongoClient configured with auto encryption (referred to as client_encrypted)

Configure the required options. Use the local KMS provider as follows:

{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }

Configure with the keyVaultNamespace set to keyvault.datakeys.

Configure client_encrypted to use the schema external/external-schema.json for db.coll by setting a schema map like:

{ "db.coll": <contents of external-schema.json> }

Configure the following extraOptions:

{ "mongocryptdBypassSpawn": true, "mongocryptdURI": "mongodb://localhost:27021/?serverSelectionTimeoutMS=1000", "mongocryptdSpawnArgs": [ "--pidfilepath=bypass-spawning-mongocryptd.pid", "--port=27021"] }
Drivers MAY pass a different port if they expect their testing infrastructure to be using port 27021. Pass a port that should be free.
-
Use client_encrypted to insert the document {"encrypted": "test"} into db.coll. Expect a server selection error propagated from the internal MongoClient failing to connect to mongocryptd on port 27021.
The following tests that setting bypassAutoEncryption=true really does bypass spawning mongocryptd.
-
Create a MongoClient configured with auto encryption (referred to as client_encrypted)

Configure the required options. Use the local KMS provider as follows:

{ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }

Configure with the keyVaultNamespace set to keyvault.datakeys.

Configure with bypassAutoEncryption=true.

Configure the following extraOptions:

{ "mongocryptdSpawnArgs": [ "--pidfilepath=bypass-spawning-mongocryptd.pid", "--port=27021"] }
Drivers MAY pass a different value to --port if they expect their testing infrastructure to be using port 27021. Pass a port that should be free.
-
Use client_encrypted to insert the document {"unencrypted": "test"} into db.coll. Expect this to succeed.
-
Validate that mongocryptd was not spawned. Create a MongoClient to localhost:27021 (or whatever was passed via
--port) with serverSelectionTimeoutMS=1000. Run a handshake command and ensure it fails with a server selection timeout.
Repeat the steps from the "Via bypassAutoEncryption" test, replacing "bypassAutoEncryption=true" with "bypassQueryAnalysis=true".
The following tests only apply to drivers that have implemented a connection pool (see the Connection Monitoring and Pooling specification).
There are multiple parameterized test cases. Before each test case, perform the setup.
Create a MongoClient for setup operations named client_test.
Create a MongoClient for key vault operations with maxPoolSize=1 named client_keyvault. Capture command started
events.
Using client_test, drop the collections keyvault.datakeys and db.coll.
Insert the document external/external-key.json into keyvault.datakeys with majority
write concern.
Create a collection db.coll configured with a JSON schema
external/external-schema.json as the validator, like so:
{"create": "coll", "validator": {"$jsonSchema": <json_schema>}}

Create a ClientEncryption object, named client_encryption, configured with:

- keyVaultClient=client_test
- keyVaultNamespace="keyvault.datakeys"
- kmsProviders={ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }
Use client_encryption to encrypt the value "string0" with algorithm="AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic"
and keyAltName="local". Store the result in a variable named ciphertext.
Proceed to run the test case.
Each test case configures a MongoClient with automatic encryption (named client_encrypted).
Each test must assert the number of unique MongoClient objects created. This can be accomplished by capturing
TopologyOpeningEvent, or by checking command started events for a client identifier (not possible in all drivers).
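Counting unique MongoClient objects by capturing topology-opening events can be sketched as a listener that collects topology identifiers. The event shape below (an object with a topology_id attribute) mirrors PyMongo's topology events but is an assumption; adapt to your driver's monitoring API.

```python
from collections import namedtuple

class TopologyOpenedCounter:
    """Collects topology ids; each MongoClient opens exactly one topology,
    so the number of unique ids equals the number of clients created."""

    def __init__(self):
        self._topology_ids = set()

    def opened(self, event):
        # `topology_id` mirrors PyMongo's TopologyEvent attribute; other
        # drivers expose an equivalent identifier on their opening event.
        self._topology_ids.add(event.topology_id)

    @property
    def unique_clients(self):
        return len(self._topology_ids)

# Stand-in event for illustration; a real test registers the listener on
# every MongoClient it creates, including internal ones where possible.
FakeEvent = namedtuple("FakeEvent", ["topology_id"])

counter = TopologyOpenedCounter()
counter.opened(FakeEvent("client_encrypted"))
counter.opened(FakeEvent("internal-client"))
counter.opened(FakeEvent("client_encrypted"))  # duplicate id: same client
assert counter.unique_clients == 2
```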
-
Create a MongoClient named client_encrypted configured as follows:

- Set AutoEncryptionOpts:
  - keyVaultNamespace="keyvault.datakeys"
  - kmsProviders={ "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }
  - Append TestCase.AutoEncryptionOpts (defined below)
- Capture command started events.
- Set maxPoolSize=TestCase.MaxPoolSize
-
-
If the test case sets AutoEncryptionOpts.bypassAutoEncryption=true:

- Use client_test to insert { "_id": 0, "encrypted": <ciphertext> } into db.coll.
-
Otherwise:

- Use client_encrypted to insert { "_id": 0, "encrypted": "string0" }.
-
Use client_encrypted to run a findOne operation on db.coll, with the filter { "_id": 0 }.
-
Expect the result to be { "_id": 0, "encrypted": "string0" }.
-
Check captured events against TestCase.Expectations.
-
Check that the number of unique MongoClient objects created is equal to TestCase.ExpectedNumberOfClients.
-
MaxPoolSize: 1
-
AutoEncryptionOpts:
- bypassAutoEncryption=false
- keyVaultClient=unset
-
Expectations:
- Expect client_encrypted to have captured four CommandStartedEvent:
  - a listCollections to "db".
  - a find on "keyvault".
  - an insert on "db".
  - a find on "db".
-
ExpectedNumberOfClients: 2
-
MaxPoolSize: 1
-
AutoEncryptionOpts:
- bypassAutoEncryption=false
- keyVaultClient=client_keyvault
-
Expectations:
-
Expect client_encrypted to have captured three CommandStartedEvent:

- a listCollections to "db".
- an insert on "db".
- a find on "db".
-
Expect client_keyvault to have captured one CommandStartedEvent:

- a find on "keyvault".
-
-
ExpectedNumberOfClients: 2
-
MaxPoolSize: 1
-
AutoEncryptionOpts:
- bypassAutoEncryption=true
- keyVaultClient=unset
-
Expectations:
- Expect client_encrypted to have captured three CommandStartedEvent:
  - a find on "db".
  - a find on "keyvault".
-
ExpectedNumberOfClients: 2
-
MaxPoolSize: 1
-
AutoEncryptionOpts:
- bypassAutoEncryption=true
- keyVaultClient=client_keyvault
-
Expectations:
-
Expect client_encrypted to have captured two CommandStartedEvent:

- a find on "db".
-
Expect client_keyvault to have captured one CommandStartedEvent:

- a find on "keyvault".
-
-
ExpectedNumberOfClients: 1
Drivers that do not support an unlimited maximum pool size MUST skip this test.
-
MaxPoolSize: 0
-
AutoEncryptionOpts:
- bypassAutoEncryption=false
- keyVaultClient=unset
-
Expectations:
- Expect client_encrypted to have captured five CommandStartedEvent:
  - a listCollections to "db".
  - a listCollections to "keyvault".
  - a find on "keyvault".
  - an insert on "db".
  - a find on "db".
-
ExpectedNumberOfClients: 1
Drivers that do not support an unlimited maximum pool size MUST skip this test.
-
MaxPoolSize: 0
-
AutoEncryptionOpts:
- bypassAutoEncryption=false
- keyVaultClient=client_keyvault
-
Expectations:
-
Expect client_encrypted to have captured three CommandStartedEvent:

- a listCollections to "db".
- an insert on "db".
- a find on "db".
-
Expect client_keyvault to have captured one CommandStartedEvent:

- a find on "keyvault".
-
-
ExpectedNumberOfClients: 1
Drivers that do not support an unlimited maximum pool size MUST skip this test.
-
MaxPoolSize: 0
-
AutoEncryptionOpts:
- bypassAutoEncryption=true
- keyVaultClient=unset
-
Expectations:
- Expect client_encrypted to have captured three CommandStartedEvent:
  - a find on "db".
  - a find on "keyvault".
-
ExpectedNumberOfClients: 1
Drivers that do not support an unlimited maximum pool size MUST skip this test.
-
MaxPoolSize: 0
-
AutoEncryptionOpts:
- bypassAutoEncryption=true
- keyVaultClient=client_keyvault
-
Expectations:
-
Expect client_encrypted to have captured two CommandStartedEvent:

- a find on "db".
-
Expect client_keyvault to have captured one CommandStartedEvent:

- a find on "keyvault".
-
-
ExpectedNumberOfClients: 1
The following tests that connections to KMS servers with TLS verify peer certificates.
The two tests below make use of mock KMS servers which can be run on Evergreen using the mock KMS server script. Drivers can set up their local Python environment for the mock KMS server by running the virtualenv activation script.
To start two mock KMS servers, one on port 9000 with ca.pem as a CA file and expired.pem as a cert file, and one on
port 9001 with ca.pem as a CA file and wrong-host.pem as a cert file, run the following commands from the
.evergreen/csfle directory:
. ./activate_venv.sh
python -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/expired.pem --port 9000 &
python -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/wrong-host.pem --port 9001 &

For both tests, do the following:
- Start a mongod process with server version 4.2.0 or later.
- Create a MongoClient for key vault operations.
- Create a ClientEncryption object (referred to as client_encryption) with keyVaultNamespace set to keyvault.datakeys.
-
Start a mock KMS server on port 9000 with ca.pem as a CA file and expired.pem as a cert file.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ "region": "us-east-1", "key": "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", "endpoint": "127.0.0.1:9000" }
Expect this to fail with an exception with a message referencing an expired certificate. This message will be language dependent. In Python, this message is "certificate verify failed: certificate has expired". In Go, this message is "certificate has expired or is not yet valid". If the language of implementation has a single, generic error message for all certificate validation errors, drivers may inspect other fields of the error to verify its meaning.
-
Start a mock KMS server on port 9001 with ca.pem as a CA file and wrong-host.pem as a cert file.
-
Call client_encryption.createDataKey() with "aws" as the provider and the following masterKey:

{ "region": "us-east-1", "key": "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0", "endpoint": "127.0.0.1:9001" }
Expect this to fail with an exception with a message referencing an incorrect or unexpected host. This message will be language dependent. In Python, this message is "certificate verify failed: IP address mismatch, certificate is not valid for '127.0.0.1'". In Go, this message is "cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs". If the language of implementation has a single, generic error message for all certificate validation errors, drivers may inspect other fields of the error to verify its meaning.
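Because the certificate error messages are language dependent, a helper that matches any known substring keeps the assertions portable. The substring lists below cover only the Python and Go messages quoted above and would need extending for other driver languages:

```python
# Known substrings per failure mode, taken from the example messages above.
EXPIRED_SUBSTRINGS = (
    "certificate has expired",        # present in both the Python and Go messages
)
WRONG_HOST_SUBSTRINGS = (
    "IP address mismatch",            # Python (ssl module)
    "doesn't contain any IP SANs",    # Go (crypto/x509)
)

def is_expired_cert_error(message: str) -> bool:
    return any(s in message for s in EXPIRED_SUBSTRINGS)

def is_wrong_host_error(message: str) -> bool:
    return any(s in message for s in WRONG_HOST_SUBSTRINGS)

assert is_expired_cert_error("certificate verify failed: certificate has expired")
assert is_wrong_host_error(
    "cannot validate certificate for 127.0.0.1 because it doesn't contain any IP SANs"
)
```

As noted above, languages with a single generic validation error may instead inspect other fields of the error.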
Start a mongod process with server version 4.2.0 or later.
Four mock KMS server processes must be running:
-
The mock KMS HTTP server.
Run on port 9000 with ca.pem as a CA file and expired.pem as a cert file.
Example:
python -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/expired.pem --port 9000
-
The mock KMS HTTP server.
Run on port 9001 with ca.pem as a CA file and wrong-host.pem as a cert file.
Example:
python -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/wrong-host.pem --port 9001
-
The mock KMS HTTP server.
Run on port 9002 with ca.pem as a CA file and server.pem as a cert file.
Run with the --require_client_cert option.

Example:
python -u kms_http_server.py --ca_file ../x509gen/ca.pem --cert_file ../x509gen/server.pem --port 9002 --require_client_cert
-
The mock KMS KMIP server.
Create the following ClientEncryption objects.
Configure each with keyVaultNamespace set to keyvault.datakeys, and a default MongoClient as the keyVaultClient.
-
Create a ClientEncryption object named client_encryption_no_client_cert with the following KMS providers:

{ "aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9002" }, "gcp": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9002" }, "kmip": { "endpoint": "127.0.0.1:5698" } }
Add TLS options for the aws, azure, gcp, and kmip providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
-
Create a ClientEncryption object named client_encryption_with_tls with the following KMS providers:

{ "aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9002" }, "gcp": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9002" }, "kmip": { "endpoint": "127.0.0.1:5698" } }
Add TLS options for the aws, azure, gcp, and kmip providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
- tlsCertificateKeyFile (or equivalent) set to client.pem.
-
Create a ClientEncryption object named client_encryption_expired with the following KMS providers:

{ "aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9000" }, "gcp": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9000" }, "kmip": { "endpoint": "127.0.0.1:9000" } }
Add TLS options for the aws, azure, gcp, and kmip providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
-
Create a ClientEncryption object named client_encryption_invalid_hostname with the following KMS providers:

{ "aws": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9001" }, "gcp": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9001" }, "kmip": { "endpoint": "127.0.0.1:9001" } }
Add TLS options for the aws, azure, gcp, and kmip providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
-
Create a ClientEncryption object named client_encryption_with_names with the following KMS providers:

{ "aws:no_client_cert": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure:no_client_cert": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9002" }, "gcp:no_client_cert": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9002" }, "kmip:no_client_cert": { "endpoint": "127.0.0.1:5698" }, "aws:with_tls": { "accessKeyId": <set from environment>, "secretAccessKey": <set from environment> }, "azure:with_tls": { "tenantId": <set from environment>, "clientId": <set from environment>, "clientSecret": <set from environment>, "identityPlatformEndpoint": "127.0.0.1:9002" }, "gcp:with_tls": { "email": <set from environment>, "privateKey": <set from environment>, "endpoint": "127.0.0.1:9002" }, "kmip:with_tls": { "endpoint": "127.0.0.1:5698" } }
Support for named KMS providers requires libmongocrypt 1.9.0.
Add TLS options for the aws:no_client_cert, azure:no_client_cert, gcp:no_client_cert, and kmip:no_client_cert providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
Add TLS options for the aws:with_tls, azure:with_tls, gcp:with_tls, and kmip:with_tls providers to use the following options:

- tlsCAFile (or equivalent) set to ca.pem. This MAY be configured system-wide.
- tlsCertificateKeyFile (or equivalent) set to client.pem.
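Named KMS providers use map keys of the form <type>:<name>, where the type selects the credential shape and the name distinguishes configurations (here, differing TLS options). A small sketch of splitting such a key:

```python
def parse_kms_provider(key: str):
    """Split a KMS provider map key into (type, name); name is None for
    plain (unnamed) providers such as "aws"."""
    provider_type, _, name = key.partition(":")
    return provider_type, (name or None)

assert parse_kms_provider("aws") == ("aws", None)
assert parse_kms_provider("aws:no_client_cert") == ("aws", "no_client_cert")
assert parse_kms_provider("kmip:with_tls") == ("kmip", "with_tls")
```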
Call client_encryption_no_client_cert.createDataKey() with "aws" as the provider and the following masterKey:
{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
endpoint: "127.0.0.1:9002"
}

Expect an error indicating TLS handshake failed.
Call client_encryption_with_tls.createDataKey() with "aws" as the provider and the following masterKey:
{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
endpoint: "127.0.0.1:9002"
}

Expect an error from libmongocrypt with a message containing the string: "parse error". This implies TLS handshake succeeded.
Call client_encryption_expired.createDataKey() with "aws" as the provider and the following masterKey:
{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
endpoint: "127.0.0.1:9000"
}

Expect an error indicating TLS handshake failed due to an expired certificate.
Call client_encryption_invalid_hostname.createDataKey() with "aws" as the provider and the following masterKey:
{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
endpoint: "127.0.0.1:9001"
}

Expect an error indicating TLS handshake failed due to an invalid hostname.
Call client_encryption_no_client_cert.createDataKey() with "azure" as the provider and the following masterKey:
{ 'keyVaultEndpoint': 'doesnotexist.invalid', 'keyName': 'foo' }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_tls.createDataKey() with "azure" as the provider and the same masterKey.
Expect an error from libmongocrypt with a message containing the string: "HTTP status=404". This implies TLS handshake succeeded.
Call client_encryption_expired.createDataKey() with "azure" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an expired certificate.
Call client_encryption_invalid_hostname.createDataKey() with "azure" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an invalid hostname.
Call client_encryption_no_client_cert.createDataKey() with "gcp" as the provider and the following masterKey:
{ 'projectId': 'foo', 'location': 'bar', 'keyRing': 'baz', 'keyName': 'foo' }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_tls.createDataKey() with "gcp" as the provider and the same masterKey.
Expect an error from libmongocrypt with a message containing the string: "HTTP status=404". This implies TLS handshake succeeded.
Call client_encryption_expired.createDataKey() with "gcp" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an expired certificate.
Call client_encryption_invalid_hostname.createDataKey() with "gcp" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an invalid hostname.
Call client_encryption_no_client_cert.createDataKey() with "kmip" as the provider and the following masterKey:
{ }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_tls.createDataKey() with "kmip" as the provider and the same masterKey.
Expect success.
Call client_encryption_expired.createDataKey() with "kmip" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an expired certificate.
Call client_encryption_invalid_hostname.createDataKey() with "kmip" as the provider and the same masterKey.
Expect an error indicating TLS handshake failed due to an invalid hostname.
This test does not apply if the driver does not support the option tlsDisableOCSPEndpointCheck.
Create a ClientEncryption object with the following KMS providers:
{ "aws": { "accessKeyId": "foo", "secretAccessKey": "bar" } }

Add TLS options for the aws provider with the following options:

- tlsDisableOCSPEndpointCheck (or equivalent) set to true.
Expect no error on construction.
Call client_encryption_with_names.createDataKey() with "aws:no_client_cert" as the provider and the following
masterKey:

{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0",
endpoint: "127.0.0.1:9002"
}

Expect an error indicating TLS handshake failed.
Call client_encryption_with_names.createDataKey() with "aws:with_tls" as the provider and the same masterKey.
Expect an error from libmongocrypt with a message containing the string: "parse error". This implies TLS handshake succeeded.
Call client_encryption_with_names.createDataKey() with "azure:no_client_cert" as the provider and the following
masterKey:
{ 'keyVaultEndpoint': 'doesnotexist.invalid', 'keyName': 'foo' }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_names.createDataKey() with "azure:with_tls" as the provider and the same masterKey.
Expect an error from libmongocrypt with a message containing the string: "HTTP status=404". This implies TLS handshake succeeded.
Call client_encryption_with_names.createDataKey() with "gcp:no_client_cert" as the provider and the following
masterKey:
{ 'projectId': 'foo', 'location': 'bar', 'keyRing': 'baz', 'keyName': 'foo' }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_names.createDataKey() with "gcp:with_tls" as the provider and the same masterKey.
Expect an error from libmongocrypt with a message containing the string: "HTTP status=404". This implies TLS handshake succeeded.
Call client_encryption_with_names.createDataKey() with "kmip:no_client_cert" as the provider and the following
masterKey:
{ }

Expect an error indicating TLS handshake failed.
Call client_encryption_with_names.createDataKey() with "kmip:with_tls" as the provider and the same masterKey.
Expect success.
The Explicit Encryption tests require MongoDB server 7.0+. The tests must not run against a standalone.
Note
MongoDB Server 7.0 introduced a backwards breaking change to the Queryable Encryption (QE) protocol: QEv2. libmongocrypt 1.8.0 is configured to use the QEv2 protocol.
Before running each of the following test cases, perform the following Test Setup.
Load the file
encryptedFields.json
as encryptedFields.
Load the file
key1-document.json
as key1Document.
Read the "_id" field of key1Document as key1ID.
Drop and create the collection db.explicit_encryption using encryptedFields as an option. See
FLE 2 CreateCollection() and Collection.Drop().
Drop and create the collection keyvault.datakeys.
Insert key1Document in keyvault.datakeys with majority write concern.
Create a MongoClient named keyVaultClient.
Create a ClientEncryption object named clientEncryption with these options:
class ClientEncryptionOpts {
keyVaultClient: <keyVaultClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
}

Create a MongoClient named encryptedClient with these AutoEncryptionOpts:
class AutoEncryptionOpts {
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
bypassQueryAnalysis: true,
}

Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
contentionFactor: 0,
}

Store the result in insertPayload.
Use encryptedClient to insert the document { "encryptedIndexed": <insertPayload> } into db.explicit_encryption.
Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
queryType: "equality",
contentionFactor: 0,
}

Store the result in findPayload.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter
{ "encryptedIndexed": <findPayload> }.
Assert one document is returned containing the field { "encryptedIndexed": "encrypted indexed value" }.
Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
contentionFactor: 10,
}

Store the result in insertPayload.
Use encryptedClient to insert the document { "encryptedIndexed": <insertPayload> } into db.explicit_encryption.
Repeat the above steps 10 times to insert 10 total documents. The insertPayload must be regenerated each iteration.
Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
queryType: "equality",
contentionFactor: 0,
}

Store the result in findPayload.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter
{ "encryptedIndexed": <findPayload> }.
Assert that fewer than 10 documents are returned (0 documents may be returned). Assert each returned document contains
the field { "encryptedIndexed": "encrypted indexed value" }.
Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
queryType: "equality",
contentionFactor: 10,
}

Store the result in findPayload2.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter
{ "encryptedIndexed": <findPayload2> }.
Assert 10 documents are returned. Assert each returned document contains the field
{ "encryptedIndexed": "encrypted indexed value" }.
Use clientEncryption to encrypt the value "encrypted unindexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Unindexed",
}

Store the result in insertPayload.
Use encryptedClient to insert the document { "_id": 1, "encryptedUnindexed": <insertPayload> } into
db.explicit_encryption.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter
{ "_id": 1 }.
Assert one document is returned containing the field { "encryptedUnindexed": "encrypted unindexed value" }.
Use clientEncryption to encrypt the value "encrypted indexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Indexed",
contentionFactor: 0,
}

Store the result in `payload`.
Use clientEncryption to decrypt payload. Assert the returned value equals "encrypted indexed value".
Use clientEncryption to encrypt the value "encrypted unindexed value" with these EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Unindexed",
}

Store the result in `payload`.
Use clientEncryption to decrypt payload. Assert the returned value equals "encrypted unindexed value".
The following setup must occur before running each of the following test cases.
1. Create a `MongoClient` object (referred to as `client`).

2. Using `client`, drop the collection `keyvault.datakeys`.

3. Using `client`, create a unique index on `keyAltNames` with a partial index filter for only documents where
   `keyAltNames` exists, using writeConcern "majority". The command should be equivalent to:

   db.runCommand( { createIndexes: "datakeys", indexes: [ { name: "keyAltNames_1", key: { "keyAltNames": 1 }, unique: true, partialFilterExpression: { keyAltNames: { $exists: true } } } ], writeConcern: { w: "majority" } } )

4. Create a `ClientEncryption` object (referred to as `client_encryption`) with `client` set as the `keyVaultClient`.

5. Using `client_encryption`, create a data key with a `local` KMS provider and the keyAltName "def".
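The unique-index step above can be sketched as a command document built in Python. This is a pure-Python illustration of the shell command in this section, not a driver API; how the document is sent to the server (e.g. via a `runCommand`-style helper) is left to the driver.

```python
# The createIndexes command from the setup steps, expressed as a Python
# document. Field values mirror the shell command above.
create_indexes_cmd = {
    "createIndexes": "datakeys",
    "indexes": [
        {
            "name": "keyAltNames_1",
            "key": {"keyAltNames": 1},
            "unique": True,
            # Only documents where keyAltNames exists participate in the
            # unique constraint.
            "partialFilterExpression": {"keyAltNames": {"$exists": True}},
        }
    ],
    "writeConcern": {"w": "majority"},
}

index = create_indexes_cmd["indexes"][0]
assert index["unique"]
assert "$exists" in index["partialFilterExpression"]["keyAltNames"]
```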
1. Use `client_encryption` to create a new local data key with a keyAltName "abc" and assert the operation does not
   fail.
2. Repeat Step 1 and assert the operation fails due to a duplicate key server error (error code 11000).
3. Use `client_encryption` to create a new local data key with a keyAltName "def" and assert the operation fails due
   to a duplicate key server error (error code 11000).
1. Use `client_encryption` to create a new local data key and assert the operation does not fail.
2. Use `client_encryption` to add a keyAltName "abc" to the key created in Step 1 and assert the operation does not
   fail.
3. Repeat Step 2, assert the operation does not fail, and assert the returned key document contains the keyAltName
   "abc" added in Step 2.
4. Use `client_encryption` to add a keyAltName "def" to the key created in Step 1 and assert the operation fails due
   to a duplicate key server error (error code 11000).
5. Use `client_encryption` to add a keyAltName "def" to the existing key, assert the operation does not fail, and
   assert the returned key document contains the keyAltName "def" added during Setup.
Before running each of the following test cases, perform the following Test Setup.
Create a MongoClient named setupClient.
Drop and create the collection db.decryption_events.
Create a ClientEncryption object named clientEncryption with these options:
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
}

Create a data key with the "local" KMS provider. Store the result in a variable named `keyID`.
Use clientEncryption to encrypt the string "hello" with the following EncryptOpts:
class EncryptOpts {
keyId: <keyID>,
algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
}

Store the result in a variable named `ciphertext`.
Copy ciphertext into a variable named malformedCiphertext. Change the last byte to a different value. This will
produce an invalid HMAC tag.
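The copy-and-corrupt step can be sketched in pure Python. The `ciphertext` bytes below are a stand-in, not a real encrypted payload; the point is only that changing the final byte makes the stored value differ from what the HMAC was computed over, so decryption must fail.

```python
# Stand-in for the ciphertext produced in the previous step (illustrative
# bytes only, not a real payload).
ciphertext = bytes([0x06, 0x02, 0x00, 0x01, 0xAA, 0xBB, 0xCC])

# Copy, then change the last byte. The HMAC tag appended to the payload no
# longer matches, so decryption will raise a decryption error.
malformed = bytearray(ciphertext)
malformed[-1] ^= 0x01  # flip one bit to guarantee a different value
malformedCiphertext = bytes(malformed)

assert malformedCiphertext != ciphertext          # the value changed...
assert malformedCiphertext[:-1] == ciphertext[:-1]  # ...but only the last byte
```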
Create a MongoClient named encryptedClient with these AutoEncryptionOpts:
class AutoEncryptionOpts {
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
}

Configure encryptedClient with "retryReads=false". Register a listener for CommandSucceeded events on
encryptedClient. The listener must store the most recent CommandSucceededEvent reply for the "aggregate" command.
The listener must store the most recent CommandFailedEvent error for the "aggregate" command.
Use setupClient to configure the following failpoint:
{
"configureFailPoint": "failCommand",
"mode": {
"times": 1
},
"data": {
"errorCode": 123,
"failCommands": [
"aggregate"
]
}
}

Use encryptedClient to run an aggregate on db.decryption_events.
Expect an exception to be thrown from the command error. Expect a CommandFailedEvent.
Use setupClient to configure the following failpoint:
{
"configureFailPoint": "failCommand",
"mode": {
"times": 1
},
"data": {
"errorCode": 123,
"closeConnection": true,
"failCommands": [
"aggregate"
]
}
}

Use encryptedClient to run an aggregate on db.decryption_events.
Expect an exception to be thrown from the network error. Expect a CommandFailedEvent.
Use encryptedClient to insert the document { "encrypted": <malformedCiphertext> } into db.decryption_events.
Use encryptedClient to run an aggregate on db.decryption_events with an empty pipeline.
Expect an exception to be thrown from the decryption error. Expect a CommandSucceededEvent. Expect the
CommandSucceededEvent.reply to contain BSON binary for the field cursor.firstBatch.encrypted.
Use encryptedClient to insert the document { "encrypted": <ciphertext> } into db.decryption_events.
Use encryptedClient to run an aggregate on db.decryption_events with an empty pipeline.
Expect no exception. Expect a CommandSucceededEvent. Expect the CommandSucceededEvent.reply to contain BSON binary
for the field cursor.firstBatch.encrypted.
These tests require valid AWS credentials. Refer: Automatic AWS Credentials.
For these cases, create a ClientEncryption object
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "aws": {} },
}

Do not run this test case in an environment where AWS credentials are available (e.g. via environment variables or a
metadata URL). (Refer: Obtaining credentials for AWS.)
Attempt to create a datakey with "aws" KMS provider. Expect this to fail due to a lack of KMS provider
credentials.
For this test case, the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be defined and set to
a valid set of AWS credentials.
Use the client encryption to create a datakey using the "aws" KMS provider. This should successfully load and use the
AWS credentials that were defined in the environment.
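A test runner might guard the success case by first confirming the two required variables are defined. A minimal sketch (the skip behavior is an assumption about the runner, not part of the test itself):

```python
import os

# Confirm the credentials this case requires before attempting the "aws"
# createDataKey call; otherwise the failure would be misleading.
required = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY")
missing = [name for name in required if not os.environ.get(name)]
if missing:
    print("skipping: missing " + ", ".join(missing))
```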
When the following test case requests setting masterKey, use the following values based on the KMS provider:
For "aws":
{
"region": "us-east-1",
"key": "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0"
}For "azure":
{
"keyVaultEndpoint": "key-vault-csfle.vault.azure.net",
"keyName": "key-name-csfle"
}For "gcp":
{
"projectId": "devprod-drivers",
"location": "global",
"keyRing": "key-ring-csfle",
"keyName": "key-name-csfle"
}For "kmip":
{}For "local", do not set a masterKey document.
Run the following test case for each pair of KMS providers (referred to as srcProvider and dstProvider). Include
pairs where srcProvider equals dstProvider.
1. Drop the collection `keyvault.datakeys`.

2. Create a `ClientEncryption` object named `clientEncryption1` with these options:

   class ClientEncryptionOpts {
      keyVaultClient: <new MongoClient>,
      keyVaultNamespace: "keyvault.datakeys",
      kmsProviders: <all KMS providers>,
   }

3. Call `clientEncryption1.createDataKey` with `srcProvider` and these options:

   class DataKeyOpts {
      masterKey: <depends on srcProvider>,
   }

   Store the return value in `keyID`.

4. Call `clientEncryption1.encrypt` with the value "test" and these options:

   class EncryptOpts {
      keyId : keyID,
      algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
   }

   Store the return value in `ciphertext`.

5. Create a `ClientEncryption` object named `clientEncryption2` with these options:

   class ClientEncryptionOpts {
      keyVaultClient: <new MongoClient>,
      keyVaultNamespace: "keyvault.datakeys",
      kmsProviders: <all KMS providers>,
   }

6. Call `clientEncryption2.rewrapManyDataKey` with an empty `filter` and these options:

   class RewrapManyDataKeyOpts {
      provider: dstProvider,
      masterKey: <depends on dstProvider>,
   }

   Assert that the returned `RewrapManyDataKeyResult.bulkWriteResult.modifiedCount` is 1.

7. Call `clientEncryption1.decrypt` with the `ciphertext`. Assert the return value is "test".

8. Call `clientEncryption2.decrypt` with the `ciphertext`. Assert the return value is "test".
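Enumerating the provider pairs for this test, including pairs where both providers are the same, is a one-liner. A sketch (the provider list is an assumption based on the masterKey values in this document):

```python
from itertools import product

# Assumed set of configured KMS providers.
providers = ["aws", "azure", "gcp", "kmip", "local"]

# Every ordered (srcProvider, dstProvider) pair, including src == dst.
pairs = list(product(providers, repeat=2))

assert ("local", "local") in pairs      # same-provider pairs are included
assert len(pairs) == len(providers) ** 2
```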
Drivers MAY choose not to implement this prose test if their implementation of RewrapManyDataKeyOpts makes it
impossible by design to omit RewrapManyDataKeyOpts.provider when RewrapManyDataKeyOpts.masterKey is set.
1. Create a `ClientEncryption` object named `clientEncryption` with these options:

   class ClientEncryptionOpts {
      keyVaultClient: <new MongoClient>,
      keyVaultNamespace: "keyvault.datakeys",
      kmsProviders: <all KMS providers>,
   }

2. Call `clientEncryption.rewrapManyDataKey` with an empty `filter` and these options:

   class RewrapManyDataKeyOpts {
      masterKey: {}
   }

   Assert that `clientEncryption.rewrapManyDataKey` raises a client error indicating that the required
   `RewrapManyDataKeyOpts.provider` field is missing.
Refer: Automatic GCP Credentials.
For these cases, create a ClientEncryption object
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "gcp": {} },
}

Do not run this test case in an environment where a GCP service account is attached (e.g. any GCE-equivalent
runtime). This may be run in an AWS EC2 instance.

Attempt to create a datakey with the "gcp" KMS provider and the following DataKeyOpts:
class DataKeyOpts {
masterKey: {
"projectId": "devprod-drivers",
"location": "global",
"keyRing": "key-ring-csfle",
"keyName": "key-name-csfle",
}
}

Expect the attempt to obtain "gcp" credentials from the environment to fail.
This test case must run in a Google Compute Engine (GCE) Virtual Machine with a service account attached. See
drivers-evergreen-tools/.evergreen/csfle/gcpkms
for scripts to create a GCE instance for testing. The Evergreen task SHOULD set a batchtime of 14 days to reduce how
often this test case runs.
Attempt to create a datakey with the "gcp" KMS provider and the following DataKeyOpts:
class DataKeyOpts {
masterKey: {
"projectId": "devprod-drivers",
"location": "global",
"keyRing": "key-ring-csfle",
"keyName": "key-name-csfle",
}
}

This should successfully load and use the GCP credentials of the service account attached to the virtual machine.
Expect the key to be successfully created.
Refer: Automatic Azure Credentials
The test cases for IMDS communication are specially designed to not require an Azure environment, while still exercising the core of the functionality. The design of these test cases encourages an implementation to separate the concerns of IMDS communication from the logic of KMS key manipulation. The purpose of these test cases is to ensure drivers will behave appropriately regardless of the behavior of the IMDS server.
For these IMDS credentials tests, a simple stand-in IMDS-imitating HTTP server is available in drivers-evergreen-tools,
at .evergreen/csfle/fake_azure.py. fake_azure.py is a very simple bottle.py application. For the easiest use, it
is recommended to execute it through bottle.py (which is a sibling file in the same directory):
python .evergreen/csfle/bottle.py fake_azure:imds

This will run the imds Bottle application defined in the fake_azure Python module. bottle.py accepts additional
command line arguments to control the bind host and TCP port (use --help for more information).
For each test case, follow the process for obtaining the token as outlined in the automatic Azure credentials section with the following changes:
- Instead of the standard IMDS TCP endpoint of `169.254.169.254:80`, communicate with the running `fake_azure` HTTP
  server.
- For each test case, the behavior of the server may be controlled by attaching an additional HTTP header to the sent
  request: `X-MongoDB-HTTP-TestParams`.
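A sketch of building such a request against the fake server, using only the standard library. The host/port and the Azure IMDS path and `Metadata: true` header are illustrative assumptions; the request is constructed here but not sent.

```python
import urllib.request

def build_imds_request(host, case=None):
    """Build an IMDS token request aimed at the fake_azure server.

    `case` selects a fake_azure failure mode via the
    X-MongoDB-HTTP-TestParams header; None means the success case.
    """
    url = (f"http://{host}/metadata/identity/oauth2/token"
           "?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    if case is not None:
        req.add_header("X-MongoDB-HTTP-TestParams", f"case={case}")
    return req

req = build_imds_request("localhost:8080", case="404")
# urllib stores header names with .capitalize() applied.
assert req.get_header("X-mongodb-http-testparams") == "case=404"
```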
Do not set an X-MongoDB-HTTP-TestParams header.
Upon receiving a response from fake_azure, the driver must decode the following information:
- HTTP status will be 200 OK.
- The HTTP body will be a valid JSON string.
- The access token will be the string "magic-cookie".
- The expiry duration of the token will be seventy seconds.
- The token will have a resource of "https://vault.azure.net".
This case addresses a server returning valid JSON with invalid content.
Set X-MongoDB-HTTP-TestParams to case=empty-json.
Upon receiving a response:
- HTTP status will be 200 OK.
- The HTTP body will be a valid JSON string.
- There will be no access token, expiry duration, or resource.
The test case should ensure that this error condition is handled gracefully.
This case addresses a server returning malformed JSON.
Set X-MongoDB-HTTP-TestParams to case=bad-json.
Upon receiving a response:
- HTTP status will be 200 OK.
- The response body will contain a malformed JSON string.
The test case should ensure that this error condition is handled gracefully.
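Both the empty-json and bad-json cases reduce to the same requirement: decode the body defensively and surface a clean error instead of an unhandled exception. A minimal sketch, assuming the Azure IMDS field name `access_token`:

```python
import json

def parse_imds_body(body):
    """Return the access token from an IMDS response body.

    Raises ValueError (rather than leaking a JSON or attribute error) for
    malformed JSON or valid JSON that lacks the token.
    """
    try:
        doc = json.loads(body)
    except json.JSONDecodeError as exc:
        raise ValueError(f"malformed JSON from IMDS: {exc}") from None
    token = doc.get("access_token") if isinstance(doc, dict) else None
    if not token:
        raise ValueError("IMDS response is missing 'access_token'")
    return token

assert parse_imds_body('{"access_token": "magic-cookie"}') == "magic-cookie"
for bad in ("{}", "{not json"):
    try:
        parse_imds_body(bad)
    except ValueError:
        pass  # handled gracefully, as the test cases require
    else:
        raise AssertionError("expected ValueError for: " + bad)
```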
This case addresses a server returning a "Not Found" response. This is documented to occur spuriously within an Azure environment.
Set X-MongoDB-HTTP-TestParams to case=404.
Upon receiving a response:
- HTTP status will be 404 Not Found.
- The response body is unspecified.
The test case should ensure that this error condition is handled gracefully.
This case addresses an IMDS server reporting an internal error. This is documented to occur spuriously within an Azure environment.
Set X-MongoDB-HTTP-TestParams to case=500.
Upon receiving a response:
- HTTP status code will be 500.
- The response body is unspecified.
The test case should ensure that this error condition is handled gracefully.
This case addresses an IMDS server responding very slowly. Drivers should not halt the application waiting on a peer to communicate.
Set X-MongoDB-HTTP-TestParams to case=slow.
The HTTP response from the fake_azure server will take at least 1000 seconds to complete. The request should fail with
a timeout.
Refer: Automatic Azure Credentials
For these cases, create a ClientEncryption object
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "azure": {} },
}

Do not run this test case in an Azure environment with an attached identity. This may be run in an AWS EC2 instance.

Attempt to create a datakey with the "azure" KMS provider and the following DataKeyOpts:
class DataKeyOpts {
masterKey: {
"keyVaultEndpoint": "https://keyvault-drivers-2411.vault.azure.net/keys/",
"keyName": "KEY-NAME",
}
}

Expect the attempt to obtain "azure" credentials from the environment to fail.
This test case must run in an Azure environment with an attached identity. See
drivers-evergreen-tools/.evergreen/csfle/azurekms
for scripts to create an Azure instance for testing. The Evergreen task SHOULD set a batchtime of 14 days to reduce how
often this test case runs.
Attempt to create a datakey with the "azure" KMS provider and the following DataKeyOpts:
class DataKeyOpts {
masterKey: {
"keyVaultEndpoint": "https://keyvault-drivers-2411.vault.azure.net/keys/",
"keyName": "KEY-NAME",
}
}

This should successfully load and use the Azure credentials of the identity attached to the virtual machine.
Expect the key to be successfully created.
Note
IMPORTANT: If crypt_shared is not visible to the operating system's library search mechanism, this test should be skipped.
The following test verifies that a mongocryptd client is not created when the crypt_shared library is in use.
1. Start a new thread (referred to as `listenerThread`).

2. On `listenerThread`, create a TcpListener on the 127.0.0.1 endpoint and port 27021. Start the listener and wait
   for incoming connections. If any connection is established, signal the main thread.

   Drivers MAY pass a different port if they expect their testing infrastructure to be using port 27021. Pass a port
   that should be free.

3. Create a MongoClient configured with auto encryption (referred to as `client_encrypted`).

   Configure the required options. Use the `local` KMS provider as follows:

   { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } }

   Configure with the `keyVaultNamespace` set to `keyvault.datakeys`.

   Configure the following `extraOptions`:

   { "mongocryptdURI": "mongodb://localhost:27021/?serverSelectionTimeoutMS=1000" }

4. Use `client_encrypted` to insert the document `{"unencrypted": "test"}` into `db.coll`.

5. Expect no signal from `listenerThread`.
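The listener thread can be sketched with a plain TCP socket. This is an illustrative stand-in for a driver's TcpListener: it uses an ephemeral port instead of 27021, and an accept timeout so the sketch terminates on its own; in the real test the thread would run for the duration of the insert.

```python
import socket
import threading

connected = threading.Event()  # set if anything connects (it must not)

def listen(sock):
    sock.settimeout(0.5)  # bounded wait so this sketch finishes
    try:
        sock.accept()
        connected.set()   # signal the main thread on any connection
    except socket.timeout:
        pass

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
sock.listen(1)

thread = threading.Thread(target=listen, args=(sock,))
thread.start()
# ... the encrypted insert would run here; mongocryptd must NOT connect ...
thread.join()
sock.close()

assert not connected.is_set()  # expect no signal from listenerThread
```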
The Automatic Data Encryption Keys tests require MongoDB server 7.0+. The tests must not run against a standalone.
Note
MongoDB Server 7.0 introduced a backwards breaking change to the Queryable Encryption (QE) protocol: QEv2. libmongocrypt 1.8.0 is configured to use the QEv2 protocol.
For each of the following test cases, assume DB is a valid open database handle, and assume a
ClientEncryption object CE has been created using the following options:
clientEncryptionOptions: {
keyVaultClient: <new MongoClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: {
local: { key: base64Decode(LOCAL_MASTERKEY) },
aws: {
accessKeyId: <set from environment>,
secretAccessKey: <set from environment>
},
},
}

Run each test case with each of these KMS providers: aws, local. The KMS provider name is referred to as
kmsProvider. When testing aws, use the following as the masterKey option:
{
region: "us-east-1",
key: "arn:aws:kms:us-east-1:579766882180:key/89fcc2c4-08b0-4bd9-9f25-e30687b580d0"
}

When testing local, set masterKey to null.
This test is the most basic verification that CreateEncryptedCollection creates a collection with queryable encryption enabled. It verifies that the server rejects an attempt to insert plaintext into an encrypted field.
1. Create new create-collection options `Opts` including the following:

   { encryptedFields: { fields: [{ path: "ssn", bsonType: "string", keyId: null }] } }

2. Invoke `CreateEncryptedCollection(CE, DB, "testing1", Opts, kmsProvider, masterKey)` to obtain a new collection
   `Coll`. Expect success.

3. Attempt to insert the following document into `Coll`:

   { ssn: "123-45-6789" }

4. Expect an error from the insert operation that indicates that the document failed validation. This error indicates
   that the server expects to receive an encrypted field for `ssn`, but we tried to insert a plaintext field via a
   client that is unaware of the encryption requirements.
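The keyId-substitution step a CreateEncryptedCollection helper performs on these options can be sketched in pure Python. Names here are illustrative: `create_data_key` stands in for the real `ClientEncryption.createDataKey` call, and a UUID stands in for the returned key.

```python
import copy
import uuid

def fill_data_keys(encrypted_fields, create_data_key):
    """Return a copy of encryptedFields where every null keyId has been
    replaced with a freshly created data key. All other data is forwarded
    as-is, per the helper's contract."""
    filled = copy.deepcopy(encrypted_fields)
    for field in filled.get("fields", []):
        if field.get("keyId") is None:
            field["keyId"] = create_data_key()
    return filled

opts = {"fields": [{"path": "ssn", "bsonType": "string", "keyId": None}]}
filled = fill_data_keys(opts, create_data_key=lambda: uuid.uuid4())

assert isinstance(filled["fields"][0]["keyId"], uuid.UUID)
assert opts["fields"][0]["keyId"] is None  # input left untouched
```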
The CreateEncryptedCollection helper should not create a regular collection if there are no encryptedFields for the
collection being created. Instead, it should generate an error indicating that the encryptedFields option is missing.
1. Create new empty create-collection options `Opts` (i.e. it must not contain any `encryptedFields` options).
2. Invoke `CreateEncryptedCollection(CE, DB, "testing1", Opts, kmsProvider, masterKey)`.
3. Expect the invocation to fail with an error indicating that `encryptedFields` is not defined for the collection,
   and expect that no collection was created within the database. It would be incorrect for CreateEncryptedCollection
   to create a regular collection without queryable encryption enabled.
The CreateEncryptedCollection helper only inspects encryptedFields.fields for a keyId of null.
CreateEncryptedCollection should forward all other data as-is, even if it would be malformed. The server should
generate an error when attempting to create a collection with such invalid settings.
Note
This test is not required if the type system of the driver has a compile-time check that fields' keyIds are of the
correct type.
1. Create new create-collection options `Opts` including the following:

   { encryptedFields: { fields: [{ path: "ssn", bsonType: "string", keyId: false }] } }

2. Invoke `CreateEncryptedCollection(CE, DB, "testing1", Opts, kmsProvider, masterKey)`.

3. Expect an error from the server indicating a validation error at `create.encryptedFields.fields.keyId`, which
   must be a UUID and not a boolean value.
This test is a continuation of Case 1 and provides a way to complete the insert with an encrypted value.
1. Create new create-collection options `Opts` including the following:

   { encryptedFields: { fields: [{ path: "ssn", bsonType: "string", keyId: null }] } }

2. Invoke `CreateEncryptedCollection(CE, DB, "testing1", Opts, kmsProvider, masterKey)` to obtain a new collection
   `Coll` and data key `key1`. Expect success.

3. Use `CE` to explicitly encrypt the string "123-45-6789" using algorithm `Unindexed` and data key `key1`. Refer to
   the result as `encryptedPayload`.

4. Attempt to insert the following document into `Coll`:

   { ssn: <encryptedPayload> }

   Expect success.
The Range Explicit Encryption tests utilize Queryable Encryption (QE) range protocol V2 and require MongoDB server 8.0.0-rc14+ for SERVER-91889 and libmongocrypt 1.11.0+ for MONGOCRYPT-705. The tests must not run against a standalone.
Each of the following test cases must pass for each of the supported types (DecimalNoPrecision, DecimalPrecision,
DoublePrecision, DoubleNoPrecision, Date, Int, and Long), unless it is stated the type should be skipped.
Tests for DecimalNoPrecision must only run against a replica set. DecimalNoPrecision queries are expected to take a
long time and may exceed the default mongos timeout.
Before running each of the following test cases, perform the following Test Setup.
Load the file for the specific data type being tested, range-encryptedFields-<type>.json. For example, for Int load
range-encryptedFields-Int.json
as encryptedFields.
Load the file
key1-document.json
as key1Document.
Read the "_id" field of key1Document as key1ID.
Drop and create the collection db.explicit_encryption using encryptedFields as an option. See
FLE 2 CreateCollection() and Collection.Drop().
Drop and create the collection keyvault.datakeys.
Insert key1Document in keyvault.datakeys with majority write concern.
Create a MongoClient named keyVaultClient.
Create a ClientEncryption object named clientEncryption with these options:
class ClientEncryptionOpts {
keyVaultClient: <keyVaultClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
}Create a MongoClient named encryptedClient with these AutoEncryptionOpts:
class AutoEncryptionOpts {
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
bypassQueryAnalysis: true,
}

The remaining tasks require setting RangeOpts. Test Setup: RangeOpts lists the values to use for RangeOpts for each
of the supported data types.
Use clientEncryption to encrypt these values: 0, 6, 30, and 200. Ensure the type matches that of the encrypted field.
For example, if the encrypted field is encryptedDoubleNoPrecision encrypt the value 6.0.
Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Use encryptedClient to insert the following documents into db.explicit_encryption:

{ "_id": 0, "encrypted<Type>": <encrypted 0> }
{ "_id": 1, "encrypted<Type>": <encrypted 6> }
{ "_id": 2, "encrypted<Type>": <encrypted 30> }
{ "_id": 3, "encrypted<Type>": <encrypted 200> }

This section lists the values to use for RangeOpts for each of the supported data types, since each data type
requires a different RangeOpts.
Each test listed in the cases below must pass for all supported data types unless it is stated the type should be skipped.
-
DecimalNoPrecision
class RangeOpts { trimFactor: 1, sparsity: 1, }
-
DecimalPrecision
class RangeOpts { min: { "$numberDecimal": "0" }, max: { "$numberDecimal": "200" }, trimFactor: 1, sparsity: 1, precision: 2, }
-
DoubleNoPrecision
class RangeOpts { trimFactor: 1, sparsity: 1, }
-
DoublePrecision
class RangeOpts { min: { "$numberDouble": "0" }, max: { "$numberDouble": "200" }, trimFactor: 1, sparsity: 1, precision: 2, }
-
Date
class RangeOpts { min: {"$date": { "$numberLong": "0" } } , max: {"$date": { "$numberLong": "200" } }, trimFactor: 1, sparsity: 1, }
-
Int
class RangeOpts { min: {"$numberInt": "0" } , max: {"$numberInt": "200" }, trimFactor: 1, sparsity: 1, }
-
Long
class RangeOpts { min: {"$numberLong": "0" } , max: {"$numberLong": "200" }, trimFactor: 1, sparsity: 1, }
Use clientEncryption.encrypt() to encrypt the value 6. Ensure the type matches that of the encrypted field. For
example, if the encrypted field is encryptedLong encrypt a BSON int64 type, not a BSON int32 type.
Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Store the result in `insertPayload`.
Use clientEncryption to decrypt insertPayload. Assert the returned value equals 6 and has the expected type.
Note
The type returned by clientEncryption.decrypt() may differ from the input type to clientEncryption.encrypt()
depending on how the driver unmarshals BSON numerics to language native types. Example: a driver may unmarshal a BSON
int64 to a numeric type that does not distinguish between int64 and int32.
Use clientEncryption.encryptExpression() to encrypt this query:
// Convert 6 and 200 to the encrypted field type
{ "$and": [ { "encrypted<Type>": { "$gte": 6 } }, { "encrypted<Type>": { "$lte": 200 } } ] }Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
queryType: "range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Store the result in `findPayload`.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter findPayload
and sort the results by _id.
Assert the following three documents are returned:
// Convert 6, 30, and 200 to the encrypted field type
{ "_id": 1, "encrypted<Type>": 6 }
{ "_id": 2, "encrypted<Type>": 30 }
{ "_id": 3, "encrypted<Type>": 200 }Use clientEncryption.encryptExpression() to encrypt this query:
// Convert 0 and 6 to the encrypted field type
{ "$and": [ { "encrypted<Type>": { "$gte": 0 } }, { "encrypted<Type>": { "$lte": 6 } } ] }Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
queryType: "range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Store the result in `findPayload`.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter findPayload
and sort the results by _id.
Assert the following two documents are returned:
// Convert 0 and 6 to the encrypted field type
{ "_id": 0, "encrypted<Type>": 0 }
{ "_id": 1, "encrypted<Type>": 6 }Use clientEncryption.encryptExpression() to encrypt this query:
// Convert 30 to the encrypted field type
{ "$and": [ { "encrypted<Type>": { "$gt": 30 } } ] }Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
queryType: "range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Store the result in `findPayload`.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter findPayload
and sort the results by _id.
Assert the following document is returned:
// Convert 200 to the encrypted field type
{ "_id": 3, "encrypted<Type>": 200 }Use clientEncryption.encryptExpression() to encrypt this query:
// Convert 30 to the encrypted field type
{ "$and": [ { "$lt": [ "$encrypted<Type>", 30 ] } ] } }Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
queryType: "range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Store the result in `findPayload`.
Use encryptedClient to run a "find" operation on the db.explicit_encryption collection with the filter
{ "$expr": <findPayload> } and sort the results by _id.
Assert the following two documents are returned:
// Convert 0 and 6 to the encrypted field type
{ "_id": 0, "encrypted<Type>": 0 }
{ "_id": 1, "encrypted<Type>": 6 }This test case should be skipped if the encrypted field is encryptedDoubleNoPrecision or
encryptedDecimalNoPrecision.
Use clientEncryption.encrypt() to encrypt the value 201. Ensure the type matches that of the encrypted field.
Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Assert that an error was raised because 201 is greater than the maximum value in RangeOpts.
This test case should be skipped if the encrypted field is encryptedDoubleNoPrecision or
encryptedDecimalNoPrecision.
Use clientEncryption.encrypt() to encrypt the value 6 with a type that does not match that of the encrypted field.
If the encrypted field is encryptedInt use a BSON double type. Otherwise, use a BSON int32 type.
Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type>,
}

Ensure that RangeOpts corresponds to the type of the encrypted field (i.e. the expected type) and not that of the
value being passed to clientEncryption.encrypt().
Assert that an error was raised.
This test case should be skipped if the encrypted field is encryptedDoublePrecision, encryptedDoubleNoPrecision,
encryptedDecimalPrecision, or encryptedDecimalNoPrecision.
Use clientEncryption.encrypt() to encrypt the value 6. Ensure the type matches that of the encrypted field.
Add { precision: 2 } to the encrypted field's RangeOpts (see: Test Setup: RangeOpts).
Encrypt using the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: <RangeOpts for Type with precision added>,
}

Assert that an error was raised.
This test requires libmongocrypt with changes in 14ccd9ce (MONGOCRYPT-698).
Create a MongoClient named keyVaultClient.
Create a ClientEncryption object named clientEncryption with these options:
class ClientEncryptionOpts {
keyVaultClient: keyVaultClient,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": "<base64 decoding of LOCAL_MASTERKEY>" } },
}

Create a key with clientEncryption.createDataKey. Store the returned key ID in a variable named keyId.
Call clientEncryption.encrypt to encrypt the int32 value 123 with these options:
class EncryptOpts {
keyId : keyId,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: RangeOpts {
min: 0,
max: 1000
}
}

Store the result in a variable named `payload_defaults`.
Call clientEncryption.encrypt to encrypt the int32 value 123 with these options:
class EncryptOpts {
keyId : keyId,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: RangeOpts {
min: 0,
max: 1000,
sparsity: 2,
trimFactor: 6
}
}

Assert the returned payload size equals the size of `payload_defaults`.
Note
Do not compare the payload contents. The payloads include random data. The trimFactor and sparsity directly affect
the payload size.
Call clientEncryption.encrypt to encrypt the int32 value 123 with these options:
class EncryptOpts {
keyId : keyId,
algorithm: "Range",
contentionFactor: 0,
rangeOpts: RangeOpts {
min: 0,
max: 1000,
trimFactor: 0
}
}

Assert the returned payload size is greater than the size of `payload_defaults`.
Note
Do not compare the payload contents. The payloads include random data. The trimFactor and sparsity directly affect
the payload size.
The following tests that certain AWS, Azure, and GCP KMS operations are retried on transient errors.
This test uses a mock server with configurable failpoints to simulate network failures. To start the server:
python -u kms_failpoint_server.py --port 9003

See the TLS tests for running the mock server on Evergreen. See the mock server implementation and the C driver tests
for how to configure failpoints.
1. Start a `mongod` process with server version 4.2.0 or later.
2. Start the failpoint KMS server with: `python -u kms_failpoint_server.py --port 9003`.
3. Create a `MongoClient` for key vault operations.
4. Create a `ClientEncryption` object (referred to as `client_encryption`) with `keyVaultNamespace` set to
   `keyvault.datakeys`.
The failpoint server is configured using HTTP requests. Example request to simulate a network failure:
curl -X POST https://localhost:9003/set_failpoint/network -d '{"count": 1}' --cacert drivers-evergreen-tools/.evergreen/x509gen/ca.pem
To simulate an HTTP failure, replace network with http.
When the following test cases request setting masterKey, use the following values based on the KMS provider:
For "aws":
{
"region": "foo",
"key": "bar",
"endpoint": "127.0.0.1:9003",
}

For "azure":
{
"keyVaultEndpoint": "127.0.0.1:9003",
"keyName": "foo",
}

For "gcp":
{
"projectId": "foo",
"location": "bar",
"keyRing": "baz",
"keyName": "qux",
"endpoint": "127.0.0.1:9003"
}

- Configure the mock server to simulate one network failure.
- Call `client_encryption.createDataKey()` with "aws" as the provider. Expect this to succeed. Store the returned key ID in a variable named `keyId`.
- Configure the mock server to simulate another network failure.
- Call `clientEncryption.encrypt` with the following `EncryptOpts` to encrypt the int32 value 123 with the newly created key. Expect this to succeed:

  class EncryptOpts {
     keyId : <keyId>,
     algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  }
Repeat this test with the azure and gcp masterKeys.
- Configure the mock server to simulate one HTTP failure.
- Call `client_encryption.createDataKey()` with "aws" as the provider. Expect this to succeed. Store the returned key ID in a variable named `keyId`.
- Configure the mock server to simulate another HTTP failure.
- Call `clientEncryption.encrypt` with the following `EncryptOpts` to encrypt the int32 value 123 with the newly created key. Expect this to succeed:

  class EncryptOpts {
     keyId : <keyId>,
     algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
  }
Repeat this test with the azure and gcp masterKeys.
- Configure the mock server to simulate four network failures.
- Call `client_encryption.createDataKey()` with "aws" as the provider. Expect this to fail.
Repeat this test with the azure and gcp masterKeys.
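The behavior these cases exercise (one transient failure is absorbed, four are not) can be sketched as a bounded retry loop. The attempt limit below is an assumption chosen only to reproduce the pass/fail pattern of the test cases; the actual retry policy is defined by the driver and the CSOT/KMS retry requirements.

```python
class TransientKmsError(Exception):
    """Stands in for a network or HTTP error from the KMS server."""

def with_retries(op, max_attempts=4):
    # Retry a KMS operation on transient errors, giving up after
    # max_attempts total attempts (illustrative limit, not a spec value).
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except TransientKmsError:
            if attempt == max_attempts:
                raise

# One simulated failure: the operation ultimately succeeds,
# matching the single-failure test cases above.
failures = [TransientKmsError()]
def flaky():
    if failures:
        raise failures.pop()
    return "key-id"

assert with_retries(flaky) == "key-id"
```

With four consecutive simulated failures the same loop exhausts its attempts and re-raises, matching the "expect this to fail" case.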
All tests require libmongocrypt 1.13.0, server 7.0+, and must be skipped on standalone topologies. Individual tests may define additional constraints.
The syntax <filename.json> is used to refer to the content of the corresponding file in ../etc/data/lookup.
Create an encrypted MongoClient named encryptedClient configured with:
AutoEncryptionOpts(
keyVaultNamespace="db.keyvault",
kmsProviders={"local": { "key": "<base64 decoding of LOCAL_MASTERKEY>" }}
)

Use encryptedClient to drop `db.keyvault`. Insert `<key-doc.json>` into `db.keyvault` with majority write concern.
Use encryptedClient to drop and create the following collections:
- `db.csfle` with options: `{ "validator": { "$jsonSchema": "<schema-csfle.json>" } }`.
- `db.csfle2` with options: `{ "validator": { "$jsonSchema": "<schema-csfle2.json>" } }`.
- `db.qe` with options: `{ "encryptedFields": "<schema-qe.json>" }`.
- `db.qe2` with options: `{ "encryptedFields": "<schema-qe2.json>" }`.
- `db.no_schema` with no options.
- `db.no_schema2` with no options.
- `db.non_csfle_schema` with options: `{ "validator": { "$jsonSchema": "<schema-non-csfle.json>" } }`.
Create an unencrypted MongoClient named unencryptedClient.
Insert documents with encryptedClient:
- Insert `{"csfle": "csfle"}` into `db.csfle`. Use `unencryptedClient` to retrieve it. Assert the `csfle` field is BSON binary.
- Insert `{"csfle2": "csfle2"}` into `db.csfle2`. Use `unencryptedClient` to retrieve it. Assert the `csfle2` field is BSON binary.
- Insert `{"qe": "qe"}` into `db.qe`. Use `unencryptedClient` to retrieve it. Assert the `qe` field is BSON binary.
- Insert `{"qe2": "qe2"}` into `db.qe2`. Use `unencryptedClient` to retrieve it. Assert the `qe2` field is BSON binary.
- Insert `{"no_schema": "no_schema"}` into `db.no_schema`.
- Insert `{"no_schema2": "no_schema2"}` into `db.no_schema2`.
- Insert `{"non_csfle_schema": "non_csfle_schema"}` into `db.non_csfle_schema`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.csfle with the following pipeline:
[
{"$match" : {"csfle" : "csfle"}},
{
"$lookup" : {
"from" : "no_schema",
"as" : "matched",
"pipeline" : [ {"$match" : {"no_schema" : "no_schema"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect one document to be returned matching: `{"csfle" : "csfle", "matched" : [ {"no_schema" : "no_schema"} ]}`.
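Ignoring encryption, the join semantics of this pipeline can be modeled in a few lines of plain Python. The `lookup` helper below is hypothetical and only models `$match` + `$lookup` with a pipeline + `$project {_id: 0}`; the real evaluation happens server-side over encrypted fields.

```python
def lookup(local_docs, foreign_docs, local_pred, foreign_pred, as_field):
    # For each local document passing local_pred, attach every foreign
    # document passing foreign_pred (with _id projected out) under
    # as_field, and project _id out of the local document too.
    results = []
    for doc in local_docs:
        if not local_pred(doc):
            continue
        matched = [{k: v for k, v in f.items() if k != "_id"}
                   for f in foreign_docs if foreign_pred(f)]
        out = {k: v for k, v in doc.items() if k != "_id"}
        out[as_field] = matched
        results.append(out)
    return results

csfle = [{"_id": 1, "csfle": "csfle"}]
no_schema = [{"_id": 1, "no_schema": "no_schema"}]
result = lookup(csfle, no_schema,
                lambda d: d.get("csfle") == "csfle",
                lambda d: d.get("no_schema") == "no_schema",
                "matched")
assert result == [{"csfle": "csfle", "matched": [{"no_schema": "no_schema"}]}]
```

The final assertion is exactly the expected document of this test case; the remaining `$lookup` cases differ only in which collections and predicates are joined.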
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.qe with the following pipeline:
[
{"$match" : {"qe" : "qe"}},
{
"$lookup" : {
"from" : "no_schema",
"as" : "matched",
"pipeline" :
[ {"$match" : {"no_schema" : "no_schema"}}, {"$project" : {"_id" : 0, "__safeContent__" : 0}} ]
}
},
{"$project" : {"_id" : 0, "__safeContent__" : 0}}
]

Expect one document to be returned matching: `{"qe" : "qe", "matched" : [ {"no_schema" : "no_schema"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.no_schema with the following pipeline:
[
{"$match" : {"no_schema" : "no_schema"}},
{
"$lookup" : {
"from" : "csfle",
"as" : "matched",
"pipeline" : [ {"$match" : {"csfle" : "csfle"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect one document to be returned matching: `{"no_schema" : "no_schema", "matched" : [ {"csfle" : "csfle"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.no_schema with the following pipeline:
[
{"$match" : {"no_schema" : "no_schema"}},
{
"$lookup" : {
"from" : "qe",
"as" : "matched",
"pipeline" : [ {"$match" : {"qe" : "qe"}}, {"$project" : {"_id" : 0, "__safeContent__" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect one document to be returned matching: `{"no_schema" : "no_schema", "matched" : [ {"qe" : "qe"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.csfle with the following pipeline:
[
{"$match" : {"csfle" : "csfle"}},
{
"$lookup" : {
"from" : "csfle2",
"as" : "matched",
"pipeline" : [ {"$match" : {"csfle2" : "csfle2"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect one document to be returned matching: `{"csfle" : "csfle", "matched" : [ {"csfle2" : "csfle2"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.qe with the following pipeline:
[
{"$match" : {"qe" : "qe"}},
{
"$lookup" : {
"from" : "qe2",
"as" : "matched",
"pipeline" : [ {"$match" : {"qe2" : "qe2"}}, {"$project" : {"_id" : 0, "__safeContent__" : 0}} ]
}
},
{"$project" : {"_id" : 0, "__safeContent__" : 0}}
]

Expect one document to be returned matching: `{"qe" : "qe", "matched" : [ {"qe2" : "qe2"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.no_schema with the following pipeline:
[
{"$match" : {"no_schema" : "no_schema"}},
{
"$lookup" : {
"from" : "no_schema2",
"as" : "matched",
"pipeline" : [ {"$match" : {"no_schema2" : "no_schema2"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect one document to be returned matching: `{"no_schema" : "no_schema", "matched" : [ {"no_schema2" : "no_schema2"} ]}`.
Test requires server 8.1+ and mongocryptd/crypt_shared 8.1+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.csfle with the following pipeline:
[
{"$match" : {"csfle" : "qe"}},
{
"$lookup" : {
"from" : "qe",
"as" : "matched",
"pipeline" : [ {"$match" : {"qe" : "qe"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect an exception to be thrown with a message containing one of the following substrings, depending on the mongocryptd/crypt_shared and libmongocrypt versions:
- mongocryptd/crypt_shared <8.2 or libmongocrypt <1.17.0: `not supported`.
- mongocryptd/crypt_shared 8.2+ and libmongocrypt 1.17.0+: `Cannot specify both encryptionInformation and csfleEncryptionSchemas unless csfleEncryptionSchemas only contains non-encryption JSON schema validators`.
This case requires mongocryptd/crypt_shared <8.1.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.csfle with the following pipeline:
[
{"$match" : {"csfle" : "csfle"}},
{
"$lookup" : {
"from" : "no_schema",
"as" : "matched",
"pipeline" : [ {"$match" : {"no_schema" : "no_schema"}}, {"$project" : {"_id" : 0}} ]
}
},
{"$project" : {"_id" : 0}}
]

Expect an exception to be thrown with a message containing the substring `Upgrade`.
Test requires server 8.2+, mongocryptd/crypt_shared 8.2+, and libmongocrypt 1.17.0+.
Recreate encryptedClient with the same AutoEncryptionOpts as the setup. (Recreating prevents schema caching from
impacting the test).
Run an aggregate operation on db.qe with the following pipeline:
[
{"$match" : {"qe" : "qe"}},
{
"$lookup" : {
"from" : "non_csfle_schema",
"as" : "matched",
"pipeline" : [ {"$match" : {"non_csfle_schema" : "non_csfle_schema"}}, {"$project" : {"_id" : 0, "__safeContent__" : 0}} ]
}
},
{"$project" : {"_id" : 0, "__safeContent__" : 0}}
]

Expect one document to be returned matching: `{"qe" : "qe", "matched" : [ {"non_csfle_schema" : "non_csfle_schema"} ]}`.
These tests require valid AWS credentials for the remote KMS provider, obtained via the secrets manager (FLE_AWS_KEY and FLE_AWS_SECRET). These tests MUST NOT run inside an AWS environment that has the same credentials set, in order to ensure that the tests would fail if on-demand credentials were used.
Create a MongoClient named setupClient.
Create a ClientEncryption object with the following options:
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "aws": { "accessKeyId": <set from secrets manager>, "secretAccessKey": <set from secrets manager> } },
credentialProviders: { "aws": <default provider from AWS SDK> }
}

Assert that an error is thrown.
Create a MongoClient named setupClient.
Create a ClientEncryption object with the following options:
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "aws": {} },
credentialProviders: { "aws": <object/function that returns valid credentials from the secrets manager> }
}

Use the ClientEncryption object to create a data key using the "aws" KMS provider. This should successfully load and use the AWS credentials that were provided by the secrets manager for the remote provider. Assert that the data key was created and that the custom credential provider was called at least once.
An example of this in Node.js:
import { ClientEncryption, MongoClient } from 'mongodb';
let calledCount = 0;
const masterKey = {
region: '<aws region>',
key: '<key for arn>'
};
const keyVaultClient = new MongoClient(process.env.MONGODB_URI);
const options = {
keyVaultNamespace: 'keyvault.datakeys',
kmsProviders: { aws: {} },
credentialProviders: {
aws: async () => {
calledCount++;
return {
accessKeyId: process.env.FLE_AWS_KEY,
secretAccessKey: process.env.FLE_AWS_SECRET
};
}
}
};
const clientEncryption = new ClientEncryption(keyVaultClient, options);
const dk = await clientEncryption.createDataKey('aws', { masterKey });
expect(dk).to.be.a(Binary);
expect(calledCount).to.be.greaterThan(0);

Create a MongoClient object with the following options:
class AutoEncryptionOpts {
autoEncryption: {
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "aws": { "accessKeyId": <set from secrets manager>, "secretAccessKey": <set from secrets manager> } },
credentialProviders: { "aws": <default provider from AWS SDK> }
}
}

Assert that an error is thrown.
Ensure a valid AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are present in the environment.
Create a MongoClient named setupClient.
Create a ClientEncryption object with the following options:
class ClientEncryptionOpts {
keyVaultClient: <setupClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "aws": {} },
credentialProviders: { "aws": <object/function that returns valid credentials from the secrets manager> }
}

Use the ClientEncryption object to create a data key using the "aws" KMS provider. This should successfully load and use the AWS credentials that were provided by the secrets manager for the remote provider. Assert that the data key was created and that the custom credential provider was called at least once.
An example of this in Node.js:
import { ClientEncryption, MongoClient } from 'mongodb';
let calledCount = 0;
const masterKey = {
region: '<aws region>',
key: '<key for arn>'
};
const keyVaultClient = new MongoClient(process.env.MONGODB_URI);
const options = {
keyVaultNamespace: 'keyvault.datakeys',
kmsProviders: { aws: {} },
credentialProviders: {
aws: async () => {
calledCount++;
return {
accessKeyId: process.env.FLE_AWS_KEY,
secretAccessKey: process.env.FLE_AWS_SECRET
};
}
}
};
const clientEncryption = new ClientEncryption(keyVaultClient, options);
const dk = await clientEncryption.createDataKey('aws', { masterKey });
expect(dk).to.be.a(Binary);
expect(calledCount).to.be.greaterThan(0);

The Text Explicit Encryption tests utilize Queryable Encryption (QE) range protocol V2 and require MongoDB server 8.2.0+ and libmongocrypt 1.15.1+. The tests must not run against a standalone.
Before running each of the following test cases, perform the following Test Setup.
Using QE CreateCollection() and Collection.Drop(), drop and create the following collections with majority write concern:
- `db.prefix-suffix` using the `encryptedFields` option set to the contents of encryptedFields-prefix-suffix.json
- `db.substring` using the `encryptedFields` option set to the contents of encryptedFields-substring.json
Load the file key1-document.json as `key1Document`.
Read the "_id" field of key1Document as key1ID.
Drop and create the collection keyvault.datakeys.
Insert key1Document in keyvault.datakeys with majority write concern.
Create a MongoClient named keyVaultClient.
Create a ClientEncryption object named clientEncryption with these options:
class ClientEncryptionOpts {
keyVaultClient: <keyVaultClient>,
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
}

Create a MongoClient named encryptedClient with these AutoEncryptionOpts:
class AutoEncryptionOpts {
keyVaultNamespace: "keyvault.datakeys",
kmsProviders: { "local": { "key": <base64 decoding of LOCAL_MASTERKEY> } },
bypassQueryAnalysis: true,
}

Use clientEncryption to encrypt the string "foobarbaz" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
prefix: PrefixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
},
suffix: SuffixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
},
},
}

Use encryptedClient to insert the following document into db.prefix-suffix with majority write concern:
{ "_id": 0, "encryptedText": <encrypted 'foobarbaz'> }

Use clientEncryption to encrypt the string "foobarbaz" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
substring: SubstringOpts {
strMaxLength: 10,
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to insert the following document into db.substring with majority write concern:
{ "_id": 0, "encryptedText": <encrypted 'foobarbaz'> }

Use clientEncryption.encrypt() to encrypt the string "foo" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "prefixPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
prefix: PrefixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.prefix-suffix collection with the following filter:
{ $expr: { $encStrStartsWith: {input: '$encryptedText', prefix: <encrypted 'foo'>} } }

Assert the following document is returned:
{ "_id": 0, "encryptedText": "foobarbaz" }

Use clientEncryption.encrypt() to encrypt the string "baz" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "suffixPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
suffix: SuffixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.prefix-suffix collection with the following filter:
{ $expr: { $encStrEndsWith: {input: '$encryptedText', suffix: <encrypted 'baz'>} } }

Assert the following document is returned:
{ "_id": 0, "encryptedText": "foobarbaz" }

Use clientEncryption.encrypt() to encrypt the string "baz" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "prefixPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
prefix: PrefixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.prefix-suffix collection with the following filter:
{ $expr: { $encStrStartsWith: {input: '$encryptedText', prefix: <encrypted 'baz'>} } }

Assert that no documents are returned.
Use clientEncryption.encrypt() to encrypt the string "foo" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "suffixPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
suffix: SuffixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.prefix-suffix collection with the following filter:
{ $expr: { $encStrEndsWith: {input: '$encryptedText', suffix: <encrypted 'foo'>} } }

Assert that no documents are returned.
Use clientEncryption.encrypt() to encrypt the string "bar" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "substringPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
substring: SubstringOpts {
strMaxLength: 10,
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.substring collection with the following filter:
{ $expr: { $encStrContains: {input: '$encryptedText', substring: <encrypted 'bar'>} } }

Assert the following document is returned:
{ "_id": 0, "encryptedText": "foobarbaz" }

Use clientEncryption.encrypt() to encrypt the string "qux" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "substringPreview",
contentionFactor: 0,
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
substring: SubstringOpts {
strMaxLength: 10,
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Use encryptedClient to run a "find" operation on the db.substring collection with the following filter:
{ $expr: { $encStrContains: {input: '$encryptedText', substring: <encrypted 'qux'>} } }

Assert that no documents are returned.
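Over plaintext, the predicates these six cases evaluate (server-side, on ciphertext) reduce to ordinary string checks on the inserted value:

```python
doc = "foobarbaz"  # plaintext of the inserted encryptedText field

assert doc.startswith("foo")      # Case 1: $encStrStartsWith matches
assert doc.endswith("baz")        # Case 2: $encStrEndsWith matches
assert not doc.startswith("baz")  # Case 3: no prefix match, no documents
assert not doc.endswith("foo")    # Case 4: no suffix match, no documents
assert "bar" in doc               # Case 5: $encStrContains matches
assert "qux" not in doc           # Case 6: no substring match, no documents
```

Each line corresponds to the expected find result of the case named in its comment.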
Use clientEncryption.encrypt() to encrypt the string "foo" with the following EncryptOpts:
class EncryptOpts {
keyId : <key1ID>,
algorithm: "TextPreview",
queryType: "prefixPreview",
textOpts: TextOpts {
caseSensitive: true,
diacriticSensitive: true,
prefix: PrefixOpts {
strMaxQueryLength: 10,
strMinQueryLength: 2,
}
},
}

Expect an error from libmongocrypt with a message containing the string: "contention factor is required for textPreview algorithm".