Skip to content

Commit 2f403a4

minor: sync retryable reads tests (fixed)
1 parent 629055f commit 2f403a4

85 files changed, +22759 -0 lines changed

Lines changed: 173 additions & 0 deletions
@@ -0,0 +1,173 @@
=====================
Retryable Reads Tests
=====================

.. contents::

----

Introduction
============

The YAML and JSON files in this directory tree are platform-independent tests
that drivers can use to prove their conformance to the Retryable Reads spec.

Prose tests, which are not easily expressed in YAML, are also presented
in this file. Those tests will need to be manually implemented by each driver.

Tests will require a MongoClient created with options defined in the tests.
Integration tests will require a running MongoDB cluster with server versions
4.0 or later.

N.B. The spec specifies 3.6 as the minimum server version; however,
``failCommand`` is not supported on 3.6, so for now, testing requires MongoDB
4.0. Once `DRIVERS-560`_ is resolved, we will attempt to adapt its live failure
integration tests to test Retryable Reads on MongoDB 3.6.

.. _DRIVERS-560: https://jira.mongodb.org/browse/DRIVERS-560

Server Fail Point
=================

See: `Server Fail Point`_ in the Transactions spec test suite.

.. _Server Fail Point: ../../transactions/tests#server-fail-point

Disabling Fail Point after Test Execution
-----------------------------------------

After each test that configures a fail point, drivers should disable the
``failCommand`` fail point to avoid spurious failures in
subsequent tests. The fail point may be disabled like so::

    db.runCommand({
        configureFailPoint: "failCommand",
        mode: "off"
    });

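In a driver's test harness, the same cleanup can be issued through the driver
itself. The following is a minimal sketch in Python, assuming PyMongo; the
helper names are illustrative and not part of the spec::

    # Sketch only: apply a test's ``failPoint`` document before the test and
    # always disable failCommand afterwards.
    from pymongo import MongoClient


    def apply_fail_point(client: MongoClient, fail_point: dict) -> None:
        # The failPoint value is the configureFailPoint command itself, so it
        # can be sent verbatim to the admin database.
        client.admin.command(fail_point)


    def disable_fail_point(client: MongoClient) -> None:
        # Driver-side equivalent of the shell snippet above.
        client.admin.command({"configureFailPoint": "failCommand", "mode": "off"})
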
Network Error Tests
===================

Network error tests are expressed in YAML and should be run against a
standalone, sharded cluster, or single-node replica set.


Test Format
-----------

Each YAML file has the following keys:

- ``runOn`` (optional): An array of server version and/or topology requirements
  for which the tests can be run. If the test environment satisfies one or more
  of these requirements, the tests may be executed; otherwise, this file should
  be skipped. If this field is omitted, the tests can be assumed to have no
  particular requirements and should be executed. Each element will have some or
  all of the following fields:

  - ``minServerVersion`` (optional): The minimum server version (inclusive)
    required to successfully run the tests. If this field is omitted, it should
    be assumed that there is no lower bound on the required server version.

  - ``maxServerVersion`` (optional): The maximum server version (inclusive)
    against which the tests can be run successfully. If this field is omitted,
    it should be assumed that there is no upper bound on the required server
    version.

  - ``topology`` (optional): An array of server topologies against which the
    tests can be run successfully. Valid topologies are "single", "replicaset",
    and "sharded". If this field is omitted, the default is all topologies (i.e.
    ``["single", "replicaset", "sharded"]``).

- ``database_name`` and ``collection_name``: Optional. The database and
  collection to use for testing.

- ``bucket_name``: Optional. The GridFS bucket name to use for testing.

- ``data``: The data that should exist in the collection(s) under test before
  each test run. This will typically be an array of documents to be inserted
  into the collection under test (i.e. ``collection_name``); however, this field
  may also be an object mapping collection names to arrays of documents to be
  inserted into the specified collection.

- ``tests``: An array of tests that are to be run independently of each other.
  Each test will have some or all of the following fields:

  - ``description``: The name of the test.

  - ``clientOptions``: Optional, parameters to pass to MongoClient().

  - ``useMultipleMongoses`` (optional): If ``true``, the MongoClient for this
    test should be initialized with multiple mongos seed addresses. If ``false``
    or omitted, only a single mongos address should be specified. This field has
    no effect for non-sharded topologies.

  - ``skipReason``: Optional, string describing why this test should be skipped.

  - ``failPoint``: Optional, a server fail point to enable, expressed as the
    configureFailPoint command to run on the admin database.

  - ``operations``: An array of documents describing an operation to be
    executed. Each document has the following fields:

    - ``name``: The name of the operation on ``object``.

    - ``object``: The name of the object to perform the operation on. Can be
      "database", "collection", "client", or "gridfsbucket."

    - ``arguments``: Optional, the names and values of arguments.

    - ``result``: Optional. The return value from the operation, if any. This
      field may be a scalar (e.g. in the case of a count), a single document, or
      an array of documents in the case of a multi-document read.

    - ``error``: Optional. If ``true``, the test should expect an error or
      exception.

  - ``expectations``: Optional list of command-started events.

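Taken together, these keys describe everything a test runner needs. Below is a
minimal, illustrative runner sketch in Python; it assumes PyMongo, handles only
the ``collection`` object and the array form of ``data``, and the helper names
are assumptions rather than spec requirements::

    # Illustrative runner sketch only; nothing here is mandated by the spec.
    from pymongo import MongoClient
    from pymongo.errors import PyMongoError


    def _version(s):
        # "4.1.11" -> (4, 1, 11)
        return tuple(int(part) for part in s.split("."))


    def requirements_met(run_on, server_version, topology):
        # An omitted runOn means the file has no particular requirements.
        if not run_on:
            return True
        for req in run_on:
            if ("minServerVersion" in req and
                    _version(server_version) < _version(req["minServerVersion"])):
                continue
            if ("maxServerVersion" in req and
                    _version(server_version) > _version(req["maxServerVersion"])):
                continue
            if topology not in req.get(
                    "topology", ["single", "replicaset", "sharded"]):
                continue
            return True
        return False


    def run_operation(client, db, coll, op):
        # Map "object" to a driver object and call the named method; mapping
        # the spec's operation names onto driver spellings (and handling the
        # "gridfsbucket" object) is driver-specific.
        target = {"client": client, "database": db, "collection": coll}[op["object"]]
        return getattr(target, op["name"])(**op.get("arguments", {}))


    def run_test_file(spec, server_version, topology):
        if not requirements_met(spec.get("runOn"), server_version, topology):
            return  # skip the entire file
        for test in spec["tests"]:
            if "skipReason" in test:
                continue
            client = MongoClient(**test.get("clientOptions", {}))
            db = client[spec["database_name"]]
            coll = db[spec["collection_name"]]
            coll.drop()
            coll.insert_many(spec["data"])  # assumes the array form of ``data``
            if "failPoint" in test:
                client.admin.command(test["failPoint"])
            try:
                for op in test["operations"]:
                    try:
                        result = run_operation(client, db, coll, op)
                    except PyMongoError:
                        assert op.get("error", False), "unexpected error"
                    else:
                        assert not op.get("error", False), "expected an error"
                        # If present, op["result"] is compared against the
                        # return value here (cursors materialized into lists).
            finally:
                client.admin.command(
                    {"configureFailPoint": "failCommand", "mode": "off"})
            # ``expectations`` would be checked against command-started events
            # captured by a CommandListener registered on the MongoClient.

Real runners additionally need to map operation names onto driver method
spellings, support the ``database``, ``client``, and ``gridfsbucket`` objects,
and verify ``expectations`` via command monitoring.
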
GridFS Tests
------------

GridFS tests are denoted by the presence of ``bucket_name`` in the YAML file.
In these tests the ``data`` field is an object that maps collection names
(e.g. ``fs.files``) to an array of documents that should be inserted into
the specified collection.

``fs.files`` and ``fs.chunks`` should be created in the database
specified by ``database_name``. This could be done via inserts or by
creating GridFS buckets (using the GridFS ``bucketName`` specified by the
``bucket_name`` field in the YAML file; see the `GridFSBucket spec`_) and
calling ``upload_from_stream_with_id`` with the appropriate data.

``Download`` tests should be run against ``GridFS.download_to_stream``.
``DownloadByName`` tests should be run against
``GridFS.download_to_stream_by_name``.


.. _GridFSBucket spec: https://github.com/mongodb/specifications/blob/master/source/gridfs/gridfs-spec.rst#configurable-gridfsbucket-class

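As an illustration, with PyMongo (whose bucket class is ``GridFSBucket``) the
fixtures and download operations above might be exercised roughly as follows;
the database name, bucket name, and file contents are placeholders rather than
values mandated by the spec::

    # Sketch: seed a GridFS bucket and run Download / DownloadByName operations.
    import io

    from gridfs import GridFSBucket
    from pymongo import MongoClient

    client = MongoClient()
    db = client["retryable-reads-tests"]          # database_name from the YAML
    bucket = GridFSBucket(db, bucket_name="fs")   # bucket_name from the YAML

    # Either insert the ``data`` documents directly into fs.files/fs.chunks,
    # or upload equivalent content through the bucket:
    bucket.upload_from_stream_with_id(1, "abc", io.BytesIO(b"hello world"))

    # Download -> GridFS.download_to_stream
    destination = io.BytesIO()
    bucket.download_to_stream(1, destination)

    # DownloadByName -> GridFS.download_to_stream_by_name
    destination = io.BytesIO()
    bucket.download_to_stream_by_name("abc", destination)
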
Speeding Up Tests
-----------------

Drivers may benefit from reducing `minHeartbeatFrequencyMS`_ in order to speed
up tests. Python was able to decrease the run time of the tests greatly by
lowering SDAM's ``minHeartbeatFrequencyMS`` from 500ms to 50ms, thus decreasing
the waiting time after a "not master" error.

.. _minHeartbeatFrequencyMS: https://github.com/mongodb/specifications/blob/master/source/server-discovery-and-monitoring/server-discovery-and-monitoring.rst#minheartbeatfrequencyms

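How the heartbeat interval is lowered is driver-specific. As an illustration
only, a Python test harness might patch the relevant constant before creating
clients; the ``MIN_HEARTBEAT_INTERVAL`` name below is an assumption about
PyMongo internals, not a public API::

    # Assumption: PyMongo keeps its minimum heartbeat interval (in seconds) in
    # pymongo.common.MIN_HEARTBEAT_INTERVAL. Other drivers will expose or
    # hard-code this differently.
    from pymongo import common

    common.MIN_HEARTBEAT_INTERVAL = 0.05  # 50ms instead of the default 500ms
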
Optional Enumeration Commands
=============================

A driver only needs to test the optional enumeration commands it has chosen to
implement (e.g. ``Database.listCollectionNames()``).

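For instance, PyMongo spells these helpers ``list_database_names`` and
``list_collection_names``; a driver that implements them would exercise them
roughly like so (sketch only, with a placeholder database name)::

    # Only the optional enumeration helpers the driver actually implements
    # need to be covered by these tests.
    from pymongo import MongoClient

    client = MongoClient()
    client.list_database_names()                              # listDatabaseNames
    client["retryable-reads-tests"].list_collection_names()   # listCollectionNames
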
Changelog
=========

:2019-03-19: Add top-level ``runOn`` field to denote server version and/or
             topology requirements for the test file. Remove the
             ``minServerVersion`` and ``topology`` top-level fields, which are
             now expressed within ``runOn`` elements.

             Add test-level ``useMultipleMongoses`` field.

Lines changed: 98 additions & 0 deletions
@@ -0,0 +1,98 @@
{
    "runOn": [
        {
            "minServerVersion": "4.1.11"
        }
    ],
    "database_name": "retryable-reads-tests",
    "collection_name": "coll",
    "data": [
        {
            "_id": 1,
            "x": 11
        },
        {
            "_id": 2,
            "x": 22
        },
        {
            "_id": 3,
            "x": 33
        }
    ],
    "tests": [
        {
            "description": "Aggregate with $merge does not retry",
            "failPoint": {
                "configureFailPoint": "failCommand",
                "mode": {
                    "times": 1
                },
                "data": {
                    "failCommands": [
                        "aggregate"
                    ],
                    "closeConnection": true
                }
            },
            "operations": [
                {
                    "object": "collection",
                    "name": "aggregate",
                    "arguments": {
                        "pipeline": [
                            {
                                "$match": {
                                    "_id": {
                                        "$gt": 1
                                    }
                                }
                            },
                            {
                                "$sort": {
                                    "x": 1
                                }
                            },
                            {
                                "$merge": {
                                    "into": "output-collection"
                                }
                            }
                        ]
                    },
                    "error": true
                }
            ],
            "expectations": [
                {
                    "command_started_event": {
                        "command": {
                            "aggregate": "coll",
                            "pipeline": [
                                {
                                    "$match": {
                                        "_id": {
                                            "$gt": 1
                                        }
                                    }
                                },
                                {
                                    "$sort": {
                                        "x": 1
                                    }
                                },
                                {
                                    "$merge": {
                                        "into": "output-collection"
                                    }
                                }
                            ]
                        },
                        "command_name": "aggregate",
                        "database_name": "retryable-reads-tests"
                    }
                }
            ]
        }
    ]
}
Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@
runOn:
    -
        minServerVersion: "4.1.11"

database_name: &database_name "retryable-reads-tests"
collection_name: &collection_name "coll"

data:
    - {_id: 1, x: 11}
    - {_id: 2, x: 22}
    - {_id: 3, x: 33}

tests:
    -
        description: "Aggregate with $merge does not retry"
        failPoint:
            configureFailPoint: failCommand
            mode: { times: 1 }
            data:
                failCommands: [aggregate]
                closeConnection: true
        operations:
            -
                object: collection
                name: aggregate
                arguments:
                    pipeline: &pipeline
                        - $match: {_id: {$gt: 1}}
                        - $sort: {x: 1}
                        - $merge: { into: "output-collection" }
                error: true
        expectations:
            -
                command_started_event:
                    command:
                        aggregate: *collection_name
                        pipeline: *pipeline
                    command_name: aggregate
                    database_name: *database_name
