lib/mongo/bulk_write.ex (16 additions, 12 deletions)
@@ -20,7 +20,6 @@ defmodule Mongo.BulkWrite do
2. updates
3. deletes

-
## Example:

```
@@ -68,19 +67,19 @@ defmodule Mongo.BulkWrite do

## Stream bulk writes

- The examples shown initially filled the bulk with a few operations and then the bulk was written to the database.
+ The examples shown initially fill the bulk with a few operations, and then the bulk is written to the database.
This is all done in memory. For larger numbers of operations or imports of very large files, main memory would
be burdened unnecessarily. This could lead to resource problems.

- For such cases you could use streams. Unordered and ordered bulk writes can also be combined with Stream.
+ For such cases you can use streams. Unordered and ordered bulk writes can also be combined with streams.
You set the maximum size of the bulk. Once the number of bulk operations has been reached,
it is sent to the database. While streaming, you can limit the memory consumption of the current task.

In the following example we import 1,000,000 integers into MongoDB using the stream API:

We need to create an insert operation (`BulkOps.get_insert_one()`) for each number. Then we call the `UnorderedBulk.stream`
function to import them. This function returns a stream function which accumulates
- all inserts operations until the limit `1000` is reached. In this case the operation group is send to
+ all insert operations until the limit `1000` is reached. In this case the operation group is written to
MongoDB.

## Example
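
For orientation, a minimal sketch of the streaming import described above might look like the following. The connection name `:mongo`, the collection name `"numbers"`, and the exact arity and argument order of `UnorderedBulk.stream` are assumptions for illustration, not taken from this diff:

```elixir
alias Mongo.BulkOps
alias Mongo.UnorderedBulk

# Build one insert operation per integer and let the bulk stream flush every
# 1_000 operations. The signature of UnorderedBulk.stream is assumed here:
# (source enumerable, connection, collection, batch limit).
1..1_000_000
|> Stream.map(fn i -> BulkOps.get_insert_one(%{number: i}) end)
|> UnorderedBulk.stream(:mongo, "numbers", 1_000)
|> Stream.run()
```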
@@ -94,7 +93,7 @@ defmodule Mongo.BulkWrite do

## Benchmark

- The following benchmark compares single `Mongo.insert_one()` calls with stream unordered bulk writes.
+ The following benchmark compares multiple `Mongo.insert_one()` calls with a stream using unordered bulk writes.
Both tests insert documents into a replica set with `w: 1`.
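
A rough sketch of such a comparison (not the code that produced the numbers below; the connection name `:mongo`, the collection `"bench"`, and the bulk API shape are assumptions) could be:

```elixir
# Illustrative sketch only; :mongo, "bench" and the UnorderedBulk.stream signature are assumptions.
docs = Enum.map(1..50_000, fn i -> %{number: i} end)

# Variant 1: one insert_one call per document.
{single_us, _} =
  :timer.tc(fn ->
    Enum.each(docs, fn doc -> Mongo.insert_one(:mongo, "bench", doc) end)
  end)

# Variant 2: the same documents as a stream of unordered bulk writes.
{bulk_us, _} =
  :timer.tc(fn ->
    docs
    |> Stream.map(&Mongo.BulkOps.get_insert_one/1)
    |> Mongo.UnorderedBulk.stream(:mongo, "bench", 1_000)
    |> Stream.run()
  end)

IO.puts("insert_one: #{div(single_us, 1000)} ms, bulk stream: #{div(bulk_us, 1000)} ms")
```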

```
@@ -159,7 +158,8 @@ defmodule Mongo.BulkWrite do
alias Mongo.UnorderedBulk
alias Mongo.OrderedBulk

- @max_batch_size 100_000
+ @max_batch_size 100_000 ## todo: the maxWriteBatchSize limit of a database, which indicates the maximum number of write operations permitted in a write batch, was raised from 1,000 to 100,000.
+

@doc """
Executes unordered and ordered bulk writes.
@@ -179,14 +179,16 @@ defmodule Mongo.BulkWrite do
If a group (inserts, updates, or deletes) exceeds the limit `maxWriteBatchSize`, it will be split into chunks.
Everything is done in memory, so this use case is limited by memory. A better approach seems to be streaming bulk writes.
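
As an illustration of the chunking idea only (the driver's internal implementation may differ), splitting a group of operations into batches that respect `maxWriteBatchSize` could look like this:

```elixir
# Illustrative only: split 250_000 insert documents into maxWriteBatchSize-sized
# batches; each batch would then be sent as a single write command.
max_write_batch_size = 100_000

inserts = Enum.map(1..250_000, fn i -> %{number: i} end)

batches = Enum.chunk_every(inserts, max_write_batch_size)  # chunks of 100k, 100k, 50k

Enum.each(batches, fn batch ->
  IO.puts("would send #{length(batch)} operations in one write command")
end)
```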