Description
Summary
String replacement sometimes doesn't replace the token.
Somewhat related to #3318, since both are issues with how chunks are handled.
Steps To Reproduce
I don't have exact steps because the behavior may depend on the system: streams don't always use the same chunk size when reading a file.
I will do my best to explain what is happening and where, based on my findings.
The way string replacement works is:
- Get all file paths with replacement tokens
- Read each file chunk by chunk and replace any tokens found before writing the result file (a rough sketch follows below)
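In other words, the processing follows roughly this per-chunk pattern. This is only a minimal sketch to illustrate the mechanism, not the actual code from replacements.ts; the names naiveReplacementStream, token, and replacementValue are made up for illustration:

```ts
import { Transform, TransformCallback } from 'node:stream';

// Minimal sketch of per-chunk replacement: each chunk is searched and replaced
// independently, so a token that straddles a chunk boundary is never seen in full.
const naiveReplacementStream = (token: string, replacementValue: string): Transform =>
  new Transform({
    transform(chunk: Buffer, _encoding: BufferEncoding, callback: TransformCallback): void {
      // Only this chunk's text is inspected; nothing is carried over to the next chunk.
      callback(null, chunk.toString('utf8').replaceAll(token, replacementValue));
    },
  });
```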
If we take a simple example file like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CaseSettings xmlns="http://soap.sforce.com/2006/04/metadata">
    <someTag>#SOME_REPLACEMENT#</someTag>
</CaseSettings>
```

And an sfdx-project.json like this:
```json
{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true
    }
  ],
  "namespace": "",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "65.0",
  "replacements": [
    {
      "filename": "<pathToTestFile>",
      "stringToReplace": "#SOME_REPLACEMENT#",
      "replaceWithEnv": "ENV_SOME_REPLACEMENT"
    }
  ]
}
```

Now let's imagine the stream reader splits this file into two chunks:
- Chunk 1

```xml
<?xml version="1.0" encoding="UTF-8"?>
<CaseSettings xmlns="http://soap.sforce.com/2006/04/metadata">
    <someTag>#SOME
```

- Chunk 2

```xml
_REPLACEMENT#</someTag>
</CaseSettings>
```

The conversion will fail to find the token #SOME_REPLACEMENT# and therefore leave it unreplaced in the converted file.
In src/convert/replacements.ts, more specifically on line 70, the token is searched for inside a single chunk of the file, which is what creates this issue.
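To make the boundary problem concrete, here is a tiny illustration using the two chunks from the example above (the variable names are made up; this is not the actual code from replacements.ts):

```ts
const token = '#SOME_REPLACEMENT#';
const chunk1 = '    <someTag>#SOME';
const chunk2 = '_REPLACEMENT#</someTag>';

// Neither chunk contains the full token, so a per-chunk search misses it.
console.log(chunk1.includes(token)); // false
console.log(chunk2.includes(token)); // false
console.log((chunk1 + chunk2).includes(token)); // true: the token only exists across the boundary
```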
This is hard to reproduce with a real-life example because it depends on the size of the file, the size of the chunks, and where the tokens to replace are located.
I cannot share my real-life example as it contains private data, and modifying the file would probably hide the issue.
Expected result
Correctly replace all token instances in the file.
Actual result
Some tokens (one in my case) are left unreplaced in the file.
Solutions?
Read the files line by line, since it is unlikely that a token would be split across two lines.
Alternatively, we could retain the previous chunk so that two chunks are always processed at a time, though I am not sure that approach is compatible with the Node Transform class; a rough sketch of a related idea follows below.
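For what it's worth, a Transform can keep state between calls to transform, so rather than buffering a whole previous chunk it could carry over only the last token.length - 1 characters. The sketch below is just an illustration of that idea under those assumptions, not a drop-in patch for replacements.ts; all names are made up:

```ts
import { Transform, TransformCallback } from 'node:stream';

// Sketch: hold back a small tail from each chunk so a token split across a
// chunk boundary is still visible to the search on the next call.
const boundarySafeReplacement = (token: string, replacementValue: string): Transform => {
  let carry = ''; // tail held back from the previous chunk

  return new Transform({
    transform(chunk: Buffer, _encoding: BufferEncoding, callback: TransformCallback): void {
      // Prepend the held-back tail so a split token becomes whole again.
      const replaced = (carry + chunk.toString('utf8')).replaceAll(token, replacementValue);
      // Hold back the last token.length - 1 characters: the longest possible
      // prefix of a token still waiting for the next chunk to complete it.
      const keep = Math.min(token.length - 1, replaced.length);
      carry = replaced.slice(replaced.length - keep);
      callback(null, replaced.slice(0, replaced.length - keep));
    },
    flush(callback: TransformCallback): void {
      // Emit whatever tail is left once the input ends.
      callback(null, carry);
    },
  });
};
```

A real fix would also need to handle multi-byte characters split across chunks (for example via string_decoder) and multiple replacement markers per file, but the carry-over idea itself fits the Transform model.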
System Information
CLI:
@salesforce/cli/2.113.6 win32-x64 node-v24.11.1
Plugin Version:
@oclif/plugin-autocomplete 3.2.39 (core)
@oclif/plugin-commands 4.1.37 (core)
@oclif/plugin-help 6.2.35 (core)
@oclif/plugin-not-found 3.2.72 (core)
@oclif/plugin-plugins 5.4.53 (core)
@oclif/plugin-search 1.2.36 (core)
@oclif/plugin-update 4.7.14 (core)
@oclif/plugin-warn-if-update-available 3.1.52 (core)
@oclif/plugin-which 3.2.42 (core)
@salesforce/cli 2.113.6 (core)
agent 1.24.27 (core)
apex 3.8.7 (core)
api 1.3.3 (core)
auth 3.9.19 (core)
data 4.0.62 (core)
deploy-retrieve 3.23.16 (core)
info 3.4.96 (core)
limits 3.3.71 (core)
marketplace 1.3.8 (core)
org 5.9.45 (core)
packaging 2.23.3 (core)
schema 3.3.88 (core)
settings 2.4.51 (core)
sobject 1.4.79 (core)
telemetry 3.6.66 (core)
templates 56.3.71 (core)
trust 3.7.113 (core)
user 3.6.41 (core)
Windows: true
Shell: powershell
Channel: stable