sources/Codable/Encodable/EncodingStrategy.swift
15 additions & 5 deletions
@@ -41,17 +41,27 @@ extension Strategy {
     case custom((Data, Encoder) throws -> Void)
   }

-  /// Indication on how encoded CSV rows are cached and actually written to the output target (file, data blocb, or string).
+  /// Indication of how encoded CSV rows are cached and written to the output target (file, data blob, or string).
   ///
-  /// CSV encoding is an inherently sequential operation, i.e. row 2 must be encoded after row 1. On the other hand, the `Encodable` protocol allows CSV rows to be encoded in a random-order
+  /// CSV encoding is an inherently sequential operation, i.e. row 2 must be encoded after row 1. On the other hand, the `Encodable` protocol allows CSV rows to be encoded in any order through *keyed containers*. Selecting the appropriate buffering strategy lets you pick your encoding style while keeping memory usage to a minimum.
   public enum EncodingBuffer {
-    /// Encoded rows are being kept in memory till it is their turn to be written to the targeted output.
+    /// All encoded rows/fields are cached and the *writing* only occurs at the end of the encoding process.
     ///
-    /// Foward encoding jumps are allowed and the user may jump backward to continue encoding.
+    /// *Keyed containers* can be used to encode rows/fields out of order; that is, a row at position 5 may be encoded before the row at position 3. The same behavior is supported for fields within a row.
+    /// - attention: This strategy consumes the largest amount of memory of all the supported options.
+    case keepAll
+    /// Encoded rows may be cached, but the encoder keeps the buffer as small as possible by writing completed, ordered rows.
+    ///
+    /// *Keyed containers* can be used to encode rows/fields out of order. The writer will, however, consume rows in order.
+    ///
+    /// For example, an encoder starts encoding row 1 and receives all its fields. That row is written and no cache for it is kept. The same happens when row 2 is encoded.
+    /// However, the user may decide to jump to row 5 and encode it. That row is kept in the cache until rows 3 and 4 are encoded, at which point rows 3, 4, 5, and any subsequent rows are written.
+    /// - attention: This strategy tries to keep the cache to a minimum, but memory usage may grow large if there are holes while encoding rows. Those holes are filled with empty rows at the end of the encoding process.
     case unfulfilled
     /// No rows are kept in memory and writes are performed sequentially.
     ///
-    /// If a keyed container is used to encode rows and a jump forward is requested all the in-between rows are filled with empty fields.
+    /// *Keyed containers* can be used; however, when forward jumps are performed, any in-between rows are filled with empty fields.
+    /// - attention: This strategy has the smallest memory footprint of all the supported options.
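For reference, these buffering strategies are selected through the encoder's configuration. The following is a minimal sketch, assuming a public `CSVEncoder` whose configuration exposes this `EncodingBuffer` value through a `bufferingStrategy` property and an `encode(_:into:)` method; neither appears in this diff, so the exact names may differ in this revision of the library.

```swift
import Foundation
import CodableCSV

struct Pet: Encodable {
  let name: String
  let age: Int
}

let pets: [Pet] = [.init(name: "Rocky", age: 3), .init(name: "Slissy", age: 4)]

// Assumed configuration hook: `.keepAll` caches every row until the end of the
// encoding process, while `.unfulfilled` writes completed, ordered rows as soon
// as possible to keep the in-memory buffer small.
let encoder = CSVEncoder {
  $0.headers = ["name", "age"]
  $0.bufferingStrategy = .unfulfilled
}

let data = try encoder.encode(pets, into: Data.self)
```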
A later hunk in the diff removes the old buffer documentation:

-    /// Retrieves and removes from the buffer all rows/fields from the given indices.
-    ///
-    /// This function never returns rows at an index smaller than the passed `rowIndex`. Also, for that `rowIndex`, it does not return the fields before the `fieldIndex`.
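The removed doc comments describe a buffer whose rows are handed to the writer strictly in order. As a purely illustrative sketch (the type below is hypothetical and not the library's internal buffer), the ordered-flush behavior behind the `unfulfilled` strategy can be modeled like this:

```swift
/// Hypothetical illustration only: caches rows encoded ahead of their turn and
/// releases every row that can now be written in order, mirroring the
/// `unfulfilled` strategy documented in the diff above.
struct OrderedRowBuffer {
  private var pending: [Int: [String]] = [:]  // rows encoded out of order
  private var nextIndex = 0                   // next row index the writer expects

  /// Stores `row` at `index` and returns all rows that are now writable in order.
  mutating func store(_ row: [String], at index: Int) -> [[String]] {
    pending[index] = row
    var writable: [[String]] = []
    while let next = pending.removeValue(forKey: nextIndex) {
      writable.append(next)
      nextIndex += 1
    }
    return writable
  }
}
```

Storing rows at indices 0 and 1 flushes them immediately; storing a row at index 4 holds it in the cache until the rows at indices 2 and 3 arrive, at which point all three are released together.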