
Commit 17feeb4

Update dependencies
1 parent b812b1d commit 17feeb4

4 files changed: 8 additions & 15 deletions


.gitattributes

Lines changed: 1 addition & 0 deletions
@@ -1 +1,2 @@
 *.min.js binary
+yarn.lock binary

.prettierignore

Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 legacy/
 dist/
 bin/
-*.md
+*.md
+*.html

README.md

Lines changed: 2 additions & 8 deletions
@@ -264,7 +264,7 @@ Now, what is relevant to notice is how `sed` only takes 4.9 seconds longer for t
 ```
 Speed relative to fastest tool for each file size
 ---------------------------------------------------
-Bytes       sed     rr     Time it took longer (seconds)
+Bytes       sed     rr     Diff (seconds)
 1           1       39     0,5   <= sed is 39x faster
 5           1       32     0,4
 10          1       27     0,4
@@ -282,19 +282,13 @@ So even though the speed evolves very differently, there is only little practica
 
 Please note that speeds might look very different when files get as large as the memory available.
 
-Sure, I can help sharpen up the text for better clarity and impact. While I don't have specific knowledge about the rexreplace project, I understand the general idea conveyed in the text. Here's a revised version:
-
----
-
-Here's an improved version of the text for your README:
-
 ### Tips and Tricks for Performance
 
 Reading many files multiple times is far more time-consuming than creating a complex regex and reading each file once.
 
 > **Anecdote time**
 >
-> Imagine you need to duplicate a set of files, but each duplicate must have unique keys. To achieve this, you can append `_v2` to each key in the duplicated files. Running a separate `rexreplace` command for each key is a reasonable approach.
+> Imagine you need to duplicate a set of files, but the content contains references (let's call them keys) that must be unique across both the old and the new files. Since we have the complete list of keys, a reasonable approach is to append `_v2` to each old key in the new files by running a separate `rexreplace` command for each key.
 >
 > However, in a real-world scenario with 5,000 keys across 10,000 files, this approach took **57 minutes**. The bottleneck was identifying the 10,000 files 5,000 times, opening and reading each file 5,000 times, and the startup time of node x 5,000.
 >
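The tip the README hunk above adds (one combined regex, one read per file) can be illustrated with a short TypeScript sketch. The key list, file names, and `escape` helper are hypothetical, invented for the example; this is not code from rexreplace itself:

```typescript
import {readFileSync, writeFileSync} from 'fs';

// Hypothetical inputs: the complete key list and the duplicated files.
const keys = ['userId', 'orderId', 'sessionId'];
const files = ['copy/a.json', 'copy/b.json'];

// Escape regex metacharacters so each key is matched literally.
const escape = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

// Build one alternation regex covering every key...
const allKeys = new RegExp('(' + keys.map(escape).join('|') + ')', 'g');

// ...so every file is found, read, and written exactly once.
for (const file of files) {
    writeFileSync(file, readFileSync(file, 'utf8').replace(allKeys, '$1_v2'));
}
```

Compared to one `rexreplace` invocation per key, the file lookup, the reads, and the process startup each happen once instead of 5,000 times.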

src/engine.ts

Lines changed: 3 additions & 6 deletions
@@ -87,15 +87,12 @@ function openFile(file, config) {
 function doReplacement(_file_rr: string, _config_rr: any, _data_rr: string) {
     debug('Work on content from: ' + _file_rr);
 
-    // Variables to be accessible from js.
-    if (_config_rr.replacementJs) {
-        _config_rr.replacementDynamic = dynamicReplacement(_file_rr, _config_rr, _data_rr);
-    }
-
     // Main regexp of the whole thing
     const result = _data_rr.replace(
         _config_rr.regex,
-        _config_rr.replacementJs ? _config_rr.replacementDynamic : _config_rr.replacement
+        _config_rr.replacementJs
+            ? dynamicReplacement(_file_rr, _config_rr, _data_rr)
+            : _config_rr.replacement
     );
 
     // The output of matched strings is done from the replacement, so no need to continue
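A note on this refactor: `String.prototype.replace` accepts either a plain string or a replacer function as its second argument, so whatever `dynamicReplacement` returns can be passed straight into the ternary, and the config object no longer needs to be mutated with a `replacementDynamic` property. A minimal standalone sketch of the two call shapes (not the project's code):

```typescript
const data = 'foo bar foo';
const regex = /foo/g;

// Static replacement: a plain string.
console.log(data.replace(regex, 'baz')); // 'baz bar baz'

// Dynamic replacement: a function called once per match.
console.log(data.replace(regex, (match) => match.toUpperCase())); // 'FOO bar FOO'
```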
