#3171 Solved the problem of systematic data archiving #3173
base: release/2.8.0
Conversation
- Using the word archive in all archive-related contexts;
- Using TimePeriods and the number of periods; minutes can also be added here for testing;
- Keeping the entire Archiver configuration in a single JSON file, saved in a single archiveSettings parameter;
- It's good that you added validation, but from what I see, it isn't actually implemented there. You could create a static method for this in Utils that accepts the entire object, runs all validations, and returns messages. BatchSize can be used to limit the number of values in the query; it's good that you noticed this problem;
- We don't hardcode anything in JavaScript except parameter names; that shouldn't be there;
- We move js functions from common.js to systemSettings.jsp or create a separate js file;
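The suggestion above can be sketched as a standalone static validator. This is a minimal illustration, not the PR's actual code: the class name, message keys, and the stand-in `ArchiveConfig` fields are all assumptions, and the real implementation would live in the project's existing Utils class and use its i18n response object.

```java
import java.util.ArrayList;
import java.util.List;

public class ArchiveConfigValidator {

    // Accepts the whole config object, runs every check, returns messages.
    // An empty list means the config is valid; nothing is persisted here.
    public static List<String> validate(ArchiveConfig config) {
        List<String> messages = new ArrayList<>();
        if (config == null) {
            messages.add("systemSettings.archive.invalidConfig");
            return messages;
        }
        // BatchSize limits the number of values per query; keep it in a sane range.
        if (config.getBatchSize() <= 0 || config.getBatchSize() >= 32767) {
            messages.add("systemSettings.archive.invalidBatchSize");
        }
        if (config.getDbUrl() == null || config.getDbUrl().trim().isEmpty()) {
            messages.add("systemSettings.archive.missingDbUrl");
        }
        return messages;
    }

    // Minimal stand-in so the sketch compiles on its own; the real
    // ArchiveConfig has more fields (credentials, cron, tasks, ...).
    public static class ArchiveConfig {
        private int batchSize;
        private String dbUrl;
        public int getBatchSize() { return batchSize; }
        public void setBatchSize(int b) { this.batchSize = b; }
        public String getDbUrl() { return dbUrl; }
        public void setDbUrl(String u) { this.dbUrl = u; }
    }
}
```

Returning messages instead of mutating shared state keeps the validator callable from both the DWR layer and any future REST endpoint.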
Data Archiver – Implemented Features
1. Core Service and Scheduling
2. Configuration
3. Archiving Rules
4. Database Vendor Support
5. Safety and Reliability
@coderabbitai review
✅ Actions performed: Review triggered.
📝 Walkthrough
A comprehensive data archiving subsystem has been introduced, enabling scheduled archiving of point values and events to external databases. The system includes Quartz-based job scheduling, database-specific query providers, configuration management, and multi-language UI support. New Java classes implement archive operations with PostgreSQL and MySQL support, while JSP and DWR layers provide configuration and retrieval endpoints.
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Application Startup
    participant MGL as MangoContextListener
    participant AU as ArchiveUtils
    participant Sched as Archive Scheduler
    participant AJ as ArchiveJob
    participant AS as ArchiveService
    participant AQPF as ArchiveQueryProviderFactory
    participant DAQP as DB Query Provider
    participant SDB as Source Database
    participant ADB as Archive Database
    App->>MGL: initSchedule()
    MGL->>AU: init()
    AU->>Sched: schedule(trigger, jobDetail)
    Note over Sched: Waits for cron trigger
    Sched-->>AJ: execute() [on schedule]
    AJ->>AS: runArchive()
    AS->>AQPF: get()
    AQPF-->>AS: MySqlArchiveQueryProvider<br/>or PostgresArchiveQueryProvider
    AS->>DAQP: tryLockTable()
    DAQP->>ADB: GET_LOCK / pg_advisory_lock
    ADB-->>DAQP: lock acquired
    AS->>DAQP: listColumns(), tableExists()
    DAQP->>SDB: metadata queries
    SDB-->>DAQP: schema info
    AS->>SDB: SELECT batch (paginated)
    SDB-->>AS: data rows
    AS->>ADB: INSERT/DELETE (per function)
    ADB-->>AS: rows affected
    AS->>DAQP: releaseLockTable()
    DAQP->>ADB: RELEASE_LOCK / pg_advisory_unlock
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🤖 Fix all issues with AI agents
In @src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java:
- Around line 588-617: The code currently calls
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled) before validating the archive config via
SystemSettingsUtils.validateArchiveConfig(configJson, response), which can leave
ARCHIVE_ENABLED persisted when validation fails; move or defer all persistence
(the setBooleanValue and later setValue(SystemSettingsDAO.ARCHIVE_CONFIG,
configJson)) until after validateArchiveConfig returns with no messages, or wrap
the whole sequence in a transaction so that if validateArchiveConfig throws or
response.hasMessages() is true you roll back and no changes are saved; ensure
the logic that creates a new SystemSettingsDAO and calls setValue only executes
after successful validation and that setBooleanValue is executed in the same
post-validation block or transaction.
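The ordering fix described above can be sketched as a self-contained analog: nothing reaches the settings store unless validation produced no messages. The `Map`-backed store, the toy `validate`, and all names here are stand-ins; the real code would use `SystemSettingsDAO` and `SystemSettingsUtils.validateArchiveConfig`.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SaveAfterValidate {
    // Stand-in for SystemSettingsDAO persistence.
    static final Map<String, String> store = new HashMap<>();

    // Toy validation standing in for validateArchiveConfig(configJson, response).
    static List<String> validate(String configJson) {
        List<String> messages = new ArrayList<>();
        if (configJson == null || !configJson.trim().startsWith("{")) {
            messages.add("invalid archive config");
        }
        return messages;
    }

    static boolean save(boolean enabled, String configJson) {
        List<String> messages = validate(configJson); // 1. validate first
        if (!messages.isEmpty()) {
            return false;                             // 2. fail without persisting anything
        }
        // 3. both writes happen only after validation succeeded,
        //    so ARCHIVE_ENABLED can never outlive a rejected config.
        store.put("archiveEnabled", Boolean.toString(enabled));
        store.put("archiveConfig", configJson);
        return true;
    }
}
```

Wrapping the two writes in a transaction, as the comment suggests, achieves the same invariant when validation can throw mid-sequence.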
In @src/org/scada_lts/archive/ArchiveDataSourceManager.java:
- Around line 15-27: The cache key for getOrCreate currently omits the password
causing stale credentials; update the Key used by cache (class Key) to include
password (or alternatively implement cache invalidation when credentials change)
and modify getOrCreate to construct Key with (url, username, password) so
DriverManagerDataSource/JdbcTemplate are recreated for credential changes;
ensure Key's equals and hashCode incorporate the password field and adjust any
Key constructors/accessors accordingly.
- Around line 38-54: The Key class must include the password field so cache
lookups reflect credential changes: add a final String password field to class
Key, requireNonNull it in the Key(String url, String user, String password)
constructor (and update all call sites that construct Key to pass the password),
and update equals and hashCode to include password alongside url and user so
cached DataSource instances are keyed by url+user+password.
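A minimal sketch of the `Key` change described above, with the password folded into `equals` and `hashCode` so a credential change maps to a different cache entry. Field and constructor shapes are assumed from the review text, not copied from the PR.

```java
import java.util.Objects;

public final class Key {
    private final String url;
    private final String user;
    private final String password; // included so stale credentials miss the cache

    public Key(String url, String user, String password) {
        this.url = Objects.requireNonNull(url);
        this.user = Objects.requireNonNull(user);
        this.password = Objects.requireNonNull(password);
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        Key k = (Key) o;
        return url.equals(k.url) && user.equals(k.user) && password.equals(k.password);
    }

    @Override
    public int hashCode() {
        return Objects.hash(url, user, password);
    }
}
```

With this key, `getOrCreate(url, user, password)` naturally recreates the `DriverManagerDataSource`/`JdbcTemplate` pair whenever any credential component changes.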
In @src/org/scada_lts/archive/ArchiveQueryProviderFactory.java:
- Around line 20-22: Remove the confusing newInstance() and instance get() API
and standardize on the static forTarget(JdbcTemplate targetJdbc, JdbcTemplate
sourceJdbc) factory: delete newInstance() and get(), update callers (e.g.,
ArchiveService usage which already calls forTarget(archiveJdbc, jdbc)) to use
forTarget, and ensure forTarget fully wires the created
PostgresArchiveQueryProvider by setting its internal jdbc/source JdbcTemplate
fields so showCreateTable() and ensureJdbcFor() cannot see a null jdbc;
alternatively if you prefer to keep instance APIs, refactor both newInstance/get
and forTarget to accept explicit JdbcTemplate parameters and consistently set
the PostgresArchiveQueryProvider.jdbc field before returning.
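The single-entry-point shape the comment asks for can be sketched as follows. `JdbcTemplate` is modeled by a stand-in type and the provider is simplified; names mirror the review text (`forTarget`, `PostgresArchiveQueryProvider`'s wired `jdbc` field), not the PR verbatim.

```java
public class QueryProviderFactorySketch {
    static class JdbcTemplate { } // stand-in for Spring's JdbcTemplate

    interface ArchiveQueryProvider {
        JdbcTemplate targetJdbc();
    }

    static class PostgresProvider implements ArchiveQueryProvider {
        private final JdbcTemplate jdbc;   // set once in the constructor, never null afterwards
        private final JdbcTemplate source;
        PostgresProvider(JdbcTemplate jdbc, JdbcTemplate source) {
            if (jdbc == null || source == null) {
                throw new IllegalArgumentException("both JdbcTemplates must be provided");
            }
            this.jdbc = jdbc;
            this.source = source;
        }
        public JdbcTemplate targetJdbc() { return jdbc; }
    }

    // The one entry point: no newInstance()/get() pair, no partially wired
    // provider whose jdbc field can still be null when showCreateTable() runs.
    static ArchiveQueryProvider forTarget(JdbcTemplate targetJdbc, JdbcTemplate sourceJdbc) {
        return new PostgresProvider(targetJdbc, sourceJdbc);
    }
}
```

Making the fields final and constructor-injected removes the whole class of "forgot to call the setter" bugs the review describes.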
In @src/org/scada_lts/archive/ArchiveService.java:
- Around line 66-133: runArchive() already uses the provider's locking API
(provider.tryLockTable(...) and provider.releaseLockTable(...)), so remove the
duplicate private lock helper methods in this class (the unused service-level
lock helpers and the Postgres-specific SQL string they reference) to avoid dual
ownership and potential drift; ensure only the provider's
tryLockTable/releaseLockTable are used and delete any private methods/fields
related to table locking and their SQL constants so the class has a single clear
owner of lock semantics.
- Around line 39-65: The ArchiveConfig.driverClassName is never applied to the
DriverManagerDataSource so the wrong JDBC driver may be chosen; in runArchive
retrieve the driverClassName from ArchiveConfig and call
archiveDataSource.setDriverClassName(driverClassName) (use
ArchiveConfig.getDriverClassName()), and if provided optionally attempt
Class.forName(driverClassName) to fail fast on invalid drivers before creating
the JdbcTemplate; ensure you guard null/empty driver names and only set/load
when non-blank, referencing runArchive, archiveDataSource, and ArchiveConfig.
- Around line 24-37: ArchiveService currently accepts an injected
IArchiveQueryProvider (field queries) but runArchive() ignores it and constructs
a new provider via ArchiveQueryProviderFactory.forTarget(...); change this by
either removing the unused constructor parameter and field or (preferred) inject
a factory/provider creator and use it inside runArchive() instead of calling
ArchiveQueryProviderFactory.forTarget(...) so the injected dependency is
consistently used (e.g., replace direct factory call with
injectedArchiveQueryProviderFactory.forTarget(...)). Also ensure the
ArchiveConfig-provided driver class is applied when building the JDBC source:
after creating DriverManagerDataSource archiveDataSource set
archiveDataSource.setDriverClassName(config.getDriverClassName()) when
config.getDriverClassName() is non-null/non-empty so the configured driver is
used.
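The driver-class guard described above can be sketched without the Spring dependency; `applied` stands in for `archiveDataSource.setDriverClassName(...)`, and the real code would read the name from `ArchiveConfig.getDriverClassName()`.

```java
public class DriverGuard {
    static String applied; // stand-in for DriverManagerDataSource.setDriverClassName

    // Only set/load when a non-blank class name is configured; fail fast
    // (via ClassNotFoundException) on an invalid driver before any
    // JdbcTemplate is built against it.
    static void applyDriver(String driverClassName) throws ClassNotFoundException {
        if (driverClassName == null || driverClassName.trim().isEmpty()) {
            return; // nothing configured: keep default driver resolution
        }
        Class.forName(driverClassName); // throws immediately if the class is missing
        applied = driverClassName;
    }
}
```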
In @src/org/scada_lts/archive/ArchiveUtils.java:
- Around line 35-37: The current early-return in ArchiveUtils checks
cronExpression.equals(lastCronApplied) which can throw a NullPointerException if
cronExpression is null; change the comparison to a null-safe check such as
Objects.equals(cronExpression, lastCronApplied) (or an equivalent null-safe
comparison) and add the necessary import for java.util.Objects, so the
conditional becomes if (enabled == lastEnabled && Objects.equals(cronExpression,
lastCronApplied)) return; to prevent NPEs when
ScadaConfig.getInstance().getArchiveCron() returns null.
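The null-safe comparison suggested above, reduced to its essentials (the wrapper class and method are illustrative only):

```java
import java.util.Objects;

public class NullSafeCompare {
    // Objects.equals tolerates a null cron expression on either side,
    // unlike cronExpression.equals(lastCronApplied), which NPEs when
    // the configured cron is null.
    static boolean unchanged(boolean enabled, boolean lastEnabled,
                             String cron, String lastCron) {
        return enabled == lastEnabled && Objects.equals(cron, lastCron);
    }
}
```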
In @src/org/scada_lts/archive/IArchiveQueryProvider.java:
- Around line 8-29: The interface methods that accept identifier fragments
(showCreateTable, buildSelectBatchSql, buildVerifyIdsSql, buildDeleteChunkSql,
buildCountIdsSql, buildAddColumnSql, buildInsertWithExtraColumnSql, hasColumn,
columnTypes and related) must validate or document that callers supply
allowlisted/validated identifiers and implementations must quote identifiers
safely; add or reference a helper (e.g., validateIdentifier/quoteIdentifier) and
throw IllegalArgumentException on invalid input. Add precondition checks in
bindSelectBatchArgs/buildSelectBatchSql/buildVerifyIdsSql/buildDeleteChunkSql/buildCountIdsSql
to require idCount > 0 and limit > 0 (or explicitly document allowed zero
behavior) and throw IllegalArgumentException when violated. Clarify the lock
contract for tryLockTable/releaseLockTable in the JavaDoc (which DB the lock is
intended for) and ensure implementations use the provided archiveJdbc
consistently to acquire a global lock across nodes (or change signature if main
DB locking is required). Finally, add short JavaDoc to
archivedAtColumnName/archivedAtSqlType indicating they are safe constants and
any caller constraints.
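The `validateIdentifier`/`quoteIdentifier` helper mentioned above might look like the sketch below. The allowlist pattern and ANSI double-quote quoting are one possible policy; MySQL backtick quoting or a stricter table allowlist would be equally valid choices for the actual implementations.

```java
import java.util.regex.Pattern;

public class SqlIdentifiers {
    // Only simple, optionally schema-qualified names: foo or schema.foo.
    private static final Pattern IDENT =
            Pattern.compile("[A-Za-z_][A-Za-z0-9_]*(\\.[A-Za-z_][A-Za-z0-9_]*)?");

    // Throws IllegalArgumentException so callers cannot smuggle SQL
    // through identifier fragments passed to the query-builder methods.
    static String validateIdentifier(String name) {
        if (name == null || !IDENT.matcher(name).matches()) {
            throw new IllegalArgumentException("Illegal SQL identifier: " + name);
        }
        return name;
    }

    // ANSI double-quote quoting; each dotted part is quoted separately.
    static String quoteIdentifier(String name) {
        String[] parts = validateIdentifier(name).split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) sb.append('.');
            sb.append('"').append(parts[i]).append('"');
        }
        return sb.toString();
    }
}
```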
In @src/org/scada_lts/dao/SystemSettingsDAO.java:
- Around line 178-181: getArchiveConfig() in SystemSettingsDAO currently lets
JSON parse errors propagate as RuntimeException; change it to catch JSON
parsing/IO exceptions (e.g., JsonProcessingException/IOException), log an error
with context (including the bad JSON and key ARCHIVE_CONFIG) via the class
logger, and return a safe default new ArchiveConfig() (so the archive job
no-ops) instead of throwing; apply the same pattern to the other parsing sites
noted (the other getArchiveConfig-like blocks around the file) so none of them
crash unrelated flows and all log the invalid payloads.
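A self-contained analog of the hardening described above: catch the parse failure, log it with the offending payload, and return a safe default so the archive job no-ops. The toy `parse` stands in for Jackson's `ObjectMapper.readValue`; everything else here is illustrative.

```java
public class SafeArchiveConfig {
    public static class ArchiveConfig {
        boolean enabled; // default false: a no-op archive job
    }

    static ArchiveConfig getArchiveConfig(String json) {
        try {
            return parse(json);
        } catch (RuntimeException e) {
            // In the real code: LOG.error("Invalid ARCHIVE_CONFIG payload: " + json, e);
            return new ArchiveConfig(); // safe default instead of propagating
        }
    }

    // Toy parser standing in for ObjectMapper; throws on anything non-JSON-ish.
    static ArchiveConfig parse(String json) {
        if (json == null || !json.trim().startsWith("{")) {
            throw new IllegalArgumentException("not JSON: " + json);
        }
        ArchiveConfig c = new ArchiveConfig();
        c.enabled = json.contains("\"enabled\":true");
        return c;
    }
}
```

With Jackson, the catch clause would target `JsonProcessingException`/`IOException` rather than `RuntimeException`.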
In @src/org/scada_lts/utils/SystemSettingsUtils.java:
- Around line 24-25: The code in SystemSettingsUtils.getArchiveEnabled() (and
the similar block around lines 898-907) uses the ARCHIVE_ENABLED constant from
SystemSettingsDAO ("archiveEnabled") which is the wrong property key and causes
defaults to be used; update those calls to read ScadaConfig with the correct key
"systemsettings.archive.enabled" (or a new/appropriate constant named e.g.
SYSTEMSETTINGS_ARCHIVE_ENABLED) instead of ARCHIVE_ENABLED so the configured
environment value is picked up; ensure you change all occurrences in
getArchiveEnabled() and the repeated block at 898-907 to use the new key.
In @webapp-resources/env.properties:
- Around line 219-222: Replace the unsafe example archive config: remove
hard-coded credentials and primary DB name in systemsettings.archive.config (and
avoid duplicating enabled flags), replace the "dbUrl", "dbUsername",
"dbPassword" values with clearly-marked placeholders (e.g. ARCHIVE_DB_URL,
ARCHIVE_DB_USER, ARCHIVE_DB_PASS) and use an explicit archive-specific database
name (not "scadalts"); ensure only one source of truth for enabling (either the
"enabled" inside systemsettings.archive.config or the separate
systemsettings.archive.enabled) and set the default to false so the shipped
defaults cannot accidentally archive to the primary DB.
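Such safer shipped defaults might look like the fragment below. This is a sketch only: the placeholder names come from the review comment, and the exact keys in env.properties may differ.

```properties
# Disabled by default so shipped defaults cannot archive anywhere by accident.
systemsettings.archive.enabled=false
# Placeholders instead of real credentials; archive-specific DB name, not "scadalts".
systemsettings.archive.config={"dbUrl":"ARCHIVE_DB_URL","dbUsername":"ARCHIVE_DB_USER","dbPassword":"ARCHIVE_DB_PASS"}
```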
In @WebContent/WEB-INF/applicationContext.xml:
- Around line 381-392: The bean wiring injects archiveQueryProvider into
ArchiveService but ArchiveService.runArchive() ignores it and calls
ArchiveQueryProviderFactory.forTarget(...); update the service to use the
injected provider (remove the internal call to
ArchiveQueryProviderFactory.forTarget in ArchiveService.runArchive and use the
constructor-injected ArchiveQueryProvider instance) or alternatively change the
wiring to inject the factory instead (replace the constructor-arg on the
archiveService bean with a reference to archiveQueryProviderFactory and update
ArchiveService constructor/signature to accept and use
ArchiveQueryProviderFactory.forTarget), and remove the unused
archiveQueryProvider bean if you choose the factory injection route to avoid
conflicting/unused beans.
🟡 Minor comments (12)
WebContent/WEB-INF/dox/fr/dataArchiveSettings.htm-6-67 (1)
6-67: FR locale doc appears English + “move” wording risks implying deletion. Since this is under dox/fr/, the content should likely be French (or moved to the correct locale). Also consider changing “move records” (Line 58) to “copy records” to match the “Archive (copy)” action and avoid implying data removal.
WebContent/WEB-INF/dox/de/dataArchiveSettings.htm-6-67 (1)
6-67: DE locale doc appears English + “move” wording risks implying deletion. Since this is under dox/de/, it should likely be German (or moved). Also consider changing “move records” (Line 58) to “copy records” to match “Archive (copy)”.
src/org/scada_lts/config/ScadaConfig.java-433-440 (1)
433-440: Consider the null return case from this method. When config.getCron() is null/empty AND the ARCHIVE_CRON property is not set in the configuration file, conf.getProperty(ARCHIVE_CRON) will return null. This could cause issues in callers like ArchiveUtils.init() where cronExpression.equals(lastCronApplied) would throw an NPE.
Suggested defensive fix:
```diff
 public String getArchiveCron() {
     ArchiveConfig config = SystemSettingsDAO.getArchiveConfig();
     if (config != null && config.getCron() != null && !config.getCron().isEmpty()) {
         return config.getCron();
     }
-    return conf.getProperty(ARCHIVE_CRON);
+    String fallback = conf.getProperty(ARCHIVE_CRON);
+    return fallback != null ? fallback : "";
 }
```
webapp-resources/messages_en.properties-3461-3481 (1)
3461-3481: Align batch size error text with backend validation, and tighten wording. systemSettings.archive.invalidBatchSize says “between 0 and 32767”, but validation rejects <= 0 and >= 32767 (so effectively 1..32766). Also consider renaming “archiver config” to “archiving config” for consistency. Based on learnings, focus on EN only.
Proposed diff:
```diff
-systemSettings.archive.invalidBatchSize=Batch size must be a positive integer between 0 and 32767.
+systemSettings.archive.invalidBatchSize=Batch size must be a positive integer between 1 and 32766.
-systemSettings.archive.invalidConfig=Invalid archiver config
-systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
+systemSettings.archive.invalidConfig=Invalid archiving config
+systemSettings.archive.duplicateTask=Invalid archiving config - duplicate tasks
```
src/org/scada_lts/archive/MySqlArchiveQueryProvider.java-204-211 (1)
204-211: hasColumn also doesn't handle schema-qualified table names. Similar to columnTypes, this method should use splitTable to handle schema-qualified names consistently.
src/org/scada_lts/archive/PostgresArchiveQueryProvider.java-242-252 (1)
242-252: hasColumn has the same schema handling issue. Same as columnTypes: it doesn't use splitTable for schema-qualified names.
WebContent/WEB-INF/jsp/systemSettings.jsp-686-688 (1)
686-688: parseInt without fallback may produce NaN. If the archiveBatchSize input is empty or contains non-numeric text, parseInt returns NaN, which will be serialized and may cause backend validation or processing errors.
♻️ Suggested fix:
```diff
- batchSize: parseInt($get("archiveBatchSize")),
+ batchSize: parseInt($get("archiveBatchSize"), 10) || 1000,
```
WebContent/WEB-INF/jsp/systemSettings.jsp-768-770 (1)
768-770: Duplicate element IDs will be generated. Each row creates an img with id="removeArchiveRuleRowImg", resulting in duplicate IDs in the DOM when multiple rules exist. This is invalid HTML and may cause issues with element selection.
♻️ Suggested fix: remove the id or make it unique:
```diff
- tdDel.innerHTML = `<img id="removeArchiveRuleRowImg" src="images/delete.png" alt="Remove" title="Remove" onclick="removeArchiveRuleRow(${idx})" border="0"/>`;
+ tdDel.innerHTML = `<img src="images/delete.png" alt="Remove" title="Remove" onclick="removeArchiveRuleRow(${idx})" style="cursor:pointer"/>`;
```
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java-667-681 (1)
667-681: Hardcoded English labels should use i18n message keys. The labelKey values for tables are hardcoded English strings ("Point values", "Events", "Report instances") rather than message keys. This breaks the i18n pattern used for archiveFunctions and will not translate properly in non-English locales.
♻️ Suggested fix: use message keys:
```diff
 List<Map<String, String>> table = new ArrayList<>();
 Map<String, String> pv = new HashMap<>();
 pv.put("value", "pointValues");
-pv.put("labelKey", "Point values");
+pv.put("labelKey", "systemSettings.archive.table.pointValues");
 table.add(pv);
 Map<String, String> events = new HashMap<>();
 events.put("value", "events");
-events.put("labelKey", "Events");
+events.put("labelKey", "systemSettings.archive.table.events");
 table.add(events);
 Map<String, String> reportInstances = new HashMap<>();
 reportInstances.put("value", "reportInstances");
-reportInstances.put("labelKey", "Report instances");
+reportInstances.put("labelKey", "systemSettings.archive.table.reportInstances");
 table.add(reportInstances);
```
src/org/scada_lts/archive/PostgresArchiveQueryProvider.java-197-216 (1)
197-216: columnTypes doesn't handle schema-qualified table names consistently. The query filters by current_schema() but doesn't extract the schema from a qualified table name. If archive.pointValues is passed, it will search for a table literally named archive.pointValues in the current schema rather than pointValues in the archive schema.
♻️ Suggested fix:
```diff
 @Override
 public Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName) {
+    final String[] parts = splitTable(tableName);
+    final String schema = parts[0];
+    final String table = parts[1];
     final String sql =
         "SELECT a.attname AS column_name, format_type(a.atttypid, a.atttypmod) AS column_type " +
         "FROM pg_attribute a " +
         "JOIN pg_class c ON a.attrelid = c.oid " +
         "JOIN pg_namespace n ON c.relnamespace = n.oid " +
         "WHERE c.relkind = 'r' " +
-        " AND (n.nspname = current_schema() OR n.nspname = ANY (current_schemas(true))) " +
+        " AND n.nspname = ? " +
         " AND c.relname = ? " +
         " AND a.attnum > 0 AND NOT a.attisdropped " +
         "ORDER BY a.attnum";
-    List<Map<String,Object>> rows = jdbc.queryForList(sql, tableName);
+    List<Map<String,Object>> rows = jdbc.queryForList(sql, schema, table);
```
Committable suggestion skipped: line range outside the PR's diff.
WebContent/WEB-INF/jsp/systemSettings.jsp-1396-1402 (1)
1396-1402: Invalid HTML: <div> inside <tbody>. The archiveAgeFieldTemplate div is placed directly inside the <tbody> element, which is invalid HTML. A <tbody> can only contain <tr> elements. This may cause rendering issues in some browsers. Move the template outside the table.
♻️ Suggested fix:
```diff
-    <tbody id="archiveBody">
+    </tbody>
+    </table>
+    <div id="archiveAgeFieldTemplate" style="display:none">
+        <input type="number" name="ageValue" value="30" min="1" class="formShort" />
+        <select name="ageUnit">
+            <tag:timePeriodOptions h="true" d="true" w="true" mon="true" y="true"/>
+        </select>
+    </div>
+    <table>
+    <tbody id="archiveBody">
     ...
-        <div id="archiveAgeFieldTemplate" style="display:none">
-            <input type="number" name="ageValue" value="30" min="1" class="formShort" />
-            <select name="ageUnit">
-                <tag:timePeriodOptions h="true" d="true" w="true" mon="true" y="true"/>
-            </select>
-        </div>
     </tbody>
```
Committable suggestion skipped: line range outside the PR's diff.
src/org/scada_lts/archive/MySqlArchiveQueryProvider.java-166-179 (1)
166-179: columnTypes doesn't handle schema-qualified table names. Unlike tableExists and listColumns, this method passes tableName directly to the query without splitting schema from table name. If a schema-qualified name like archive.pointValues is passed, the query will fail to find columns.
♻️ Suggested fix for consistency:
```diff
 @Override
 public Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName) {
+    final String[] parts = splitTable(tableName);
+    final String table = parts[1];
     final String sql =
         "SELECT COLUMN_NAME, COLUMN_TYPE " +
         "FROM information_schema.columns " +
         "WHERE table_schema = DATABASE() AND table_name = ? " +
         "ORDER BY ORDINAL_POSITION";
-    List<Map<String,Object>> rows = jdbc.queryForList(sql, tableName);
+    List<Map<String,Object>> rows = jdbc.queryForList(sql, table);
```
🧹 Nitpick comments (15)
webapp-resources/messages_zh.properties (1)
3560-3562: Three archive-related keys lack Chinese translation. Lines 3560–3562 add new keys for the data archiving feature, but the values are in English instead of Chinese:
validate.unsupportedDataType=Data type {0} is not supported
systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
systemSettings.dataRetentionAndArchiveSettings=Data Retention and Archiving Settings
All other entries in this file use Chinese characters (e.g., 名称必须,不能超过40个字符). Per learnings, SCADA-LTS maintains translations only for Polish and English; Chinese is not an actively maintained locale, and AI translation may be used in the future. These untranslated entries should not block the PR. However, if translations are available elsewhere in the codebase (e.g., messages_en.properties or messages_pl.properties), consider adding proper Chinese translations to maintain consistency.
src/org/scada_lts/archive/ArchiveDataSourceManager.java (1)
17-26: Consider using a connection pool instead of DriverManagerDataSource. DriverManagerDataSource creates a new physical connection for every getConnection() call, which can be inefficient for batch archiving operations that may execute many queries. Consider using a pooled DataSource like HikariCP for better performance and connection reuse.
This is acceptable for low-frequency archive jobs, but worth noting for future optimization if archiving becomes more intensive.
WebContent/WEB-INF/dox/pt/dataArchiveSettings.htm (1)
1-67: English documentation in non-English locale directories. This Portuguese locale documentation file contains English text. The same pattern appears in the Russian locale file (WebContent/WEB-INF/dox/ru/dataArchiveSettings.htm). Based on project learnings, SCADA-LTS actively maintains only Polish and English translations, so English content in non-maintained locales is acceptable.
The documentation content itself is well-structured and clearly describes the Data Archiving module features. However, consider adding a note in the manifest or a comment in these locale-specific files indicating that they currently contain English text and may be translated in the future.
src/com/serotonin/mango/vo/TimePeriodFactory.java (1)
5-17: Unify fallback semantics between getByCode() and getByName(). getByCode() (line 16) returns TimeLongPeriodType.DEFAULT, while getByName(null) (line 20) returns TimePeriodType.DEFAULT; these different defaults create inconsistency. Consider unifying them to a single constant for clarity. The codes are unique across enums, so the resolution order is not an issue.
Proposed fix (consistent default + basic normalization):
```diff
 public class TimePeriodFactory {
+    private static final TimePeriod DEFAULT = TimeLongPeriodType.DEFAULT;
+
     public static TimePeriod getByCode(int code) {
         for (TimePeriodType tpt : TimePeriodType.values()) {
             if (tpt.getCode() == code) return tpt;
         }
         for (TimeLongPeriodType tlpt : TimeLongPeriodType.values()) {
             if (tlpt.getCode() == code) return tlpt;
         }
-        return TimeLongPeriodType.DEFAULT;
+        return DEFAULT;
     }

     public static TimePeriod getByName(String name) {
-        if (name == null) return TimePeriodType.DEFAULT;
+        if (name == null) return DEFAULT;
+        name = name.trim();
+        if (name.isEmpty()) return DEFAULT;
         ...
         for (TimeLongPeriodType tlpt : TimeLongPeriodType.values()) {
             if (tlpt.name().equalsIgnoreCase(name)) return tlpt;
         }
-        return TimeLongPeriodType.DEFAULT;
+        return DEFAULT;
     }
 }
```
WebContent/WEB-INF/dox/manifest.xml (1)
443-443: Remove extra space in attribute for consistency. The attribute has an extra space: id ="dataArchiveSettings" should be id="dataArchiveSettings" (without the space after id). All other items in this manifest follow the id="value" format without a space.
✨ Proposed fix:
```diff
- <item id ="dataArchiveSettings"/>
+ <item id="dataArchiveSettings"/>
```
webapp-resources/messages_ru.properties (1)
3608-3610: Minor wording/consistency: “archiver” vs “archiving/archive”. Given RU isn’t actively maintained, English text here is fine, but “Invalid archiver config” reads a bit off compared to the feature naming (“archiving”). Consider adjusting for consistency.
Proposed tweak:
```diff
-systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
+systemSettings.archive.duplicateTask=Invalid archiving config - duplicate tasks
```
src/com/serotonin/mango/vo/TimePeriodDeserializer.java (1)
6-7: Remove redundant imports from the same package. These imports are unnecessary since TimePeriod and TimePeriodFactory are in the same package (com.serotonin.mango.vo) as this class.
Suggested fix:
```diff
 import com.fasterxml.jackson.core.JsonParser;
 import com.fasterxml.jackson.databind.DeserializationContext;
 import com.fasterxml.jackson.databind.JsonDeserializer;
-import com.serotonin.mango.vo.TimePeriod;
-import com.serotonin.mango.vo.TimePeriodFactory;
 import java.io.IOException;
```
39-49: Consider validating the cron expression before scheduling. If cronExpression is null or empty when archiving is enabled, scheduler.schedule(cronExpression) may fail with an unclear error. Consider adding an explicit check and logging a warning if the cron expression is missing.
Suggested validation:
```diff
 try {
+    if (cronExpression == null || cronExpression.trim().isEmpty()) {
+        LOG.warn("[ARCHIVER] Cannot schedule: cron expression is missing or empty");
+        scheduler.stop();
+        lastCronApplied = null;
+        lastEnabled = false;
+        return;
+    }
     LOG.info("[ARCHIVER] (Re)Scheduling archiver with cron: " + cronExpression);
     scheduler.schedule(cronExpression);
```
366-380: DDL “sanitize via regex” is fragile; expect syntax edge cases.
santizeDDL() can easily produce invalid SQL (dangling commas, MySQL version-specific clauses, etc.). If you need simplified schema creation, it’s safer to have provider-built CREATE TABLE per supported table, or introspect columns and generate DDL rather than regex-transforming.
825-885: Validation: treat whitespace-only password as missing; align batchSize bounds with message.
- isEmpty(config.getDbPassword()) allows " "; consider isBlank/trim for password.
- Batch size check is <= 0 || >= 32767; ensure the UI message matches (see i18n comment).
649-654: Consider logging or preserving exception context. Wrapping IOException in a RuntimeException obscures the original cause and may hinder debugging. Consider logging the exception before re-throwing or using a more specific runtime exception.
♻️ Suggested improvement:
```diff
 try {
     ArchiveUtils.init();
 } catch (IOException e) {
+    LOG.error("Failed to initialize archive utilities", e);
     throw new RuntimeException(e);
 }
```
8-26: Consider adding field validation. The ArchiveTask class accepts any values without validation. Negative ageValue, null function, or empty table strings could cause issues downstream. Consider adding validation either via Bean Validation annotations or in the constructor.
♻️ Suggested improvement with validation:
```diff
+import javax.validation.constraints.Min;
+import javax.validation.constraints.NotBlank;
+import javax.validation.constraints.NotNull;
 public class ArchiveTask {
+    @Min(1)
     private int ageValue;
     @JsonDeserialize(using = com.serotonin.mango.vo.TimePeriodDeserializer.class)
+    @NotNull
     private TimePeriod ageUnit;
+    @NotNull
     private ArchiveFunction function;
+    @NotBlank
     private String table;
```
727-728: Remove debug console.log statement. Debug logging should be removed before merging to the release branch.
```diff
- console.log("Inside: loadArchiveConfig");
```
817-829: Missing null checks and input validation in getArchiveRulesFromTable. The function assumes all query selectors return valid elements and that parseInt will succeed. If the DOM is malformed or inputs are empty, this could produce unexpected results or errors.
♻️ Suggested defensive improvement:
```diff
 rows.forEach(row => {
-    const ageValue = parseInt(row.querySelector('input[name="ageValue"]').value);
+    const ageInput = row.querySelector('input[name="ageValue"]');
+    const ageValue = ageInput ? parseInt(ageInput.value, 10) || 30 : 30;
     const ageUnit = row.querySelector('select[name="ageUnit"]').value;
     const func = row.querySelector('select[name="function"]').value;
     const table = row.querySelector('select[name="table"]').value;
```
10-21: Inconsistent state management pattern. This class has an instance-level jdbc field and setter, but most methods accept JdbcTemplate as a parameter. Only showCreateTable uses the instance field. This mixed approach is confusing and error-prone: callers might not realize they need to set the instance field for showCreateTable to work. Consider either:
- Make all methods use the parameter (stateless), or
- Make all methods use the instance field (stateful)
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
WebContent/WEB-INF/applicationContext.xml
WebContent/WEB-INF/dox/de/dataArchiveSettings.htm
WebContent/WEB-INF/dox/en/dataArchiveSettings.htm
WebContent/WEB-INF/dox/es/dataArchiveSettings.htm
WebContent/WEB-INF/dox/fr/dataArchiveSettings.htm
WebContent/WEB-INF/dox/manifest.xml
WebContent/WEB-INF/dox/pt/dataArchiveSettings.htm
WebContent/WEB-INF/dox/ru/dataArchiveSettings.htm
WebContent/WEB-INF/dwr.xml
WebContent/WEB-INF/jsp/systemSettings.jsp
src/com/serotonin/mango/MangoContextListener.java
src/com/serotonin/mango/vo/TimeLongPeriodType.java
src/com/serotonin/mango/vo/TimePeriodDeserializer.java
src/com/serotonin/mango/vo/TimePeriodFactory.java
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java
src/org/scada_lts/archive/ArchiveConfig.java
src/org/scada_lts/archive/ArchiveDataSourceManager.java
src/org/scada_lts/archive/ArchiveFunction.java
src/org/scada_lts/archive/ArchiveJob.java
src/org/scada_lts/archive/ArchiveQueryProviderFactory.java
src/org/scada_lts/archive/ArchiveService.java
src/org/scada_lts/archive/ArchiveTask.java
src/org/scada_lts/archive/ArchiveUtils.java
src/org/scada_lts/archive/IArchiveQueryProvider.java
src/org/scada_lts/archive/MySqlArchiveQueryProvider.java
src/org/scada_lts/archive/PostgresArchiveQueryProvider.java
src/org/scada_lts/config/ScadaConfig.java
src/org/scada_lts/dao/SystemSettingsDAO.java
src/org/scada_lts/utils/SystemSettingsUtils.java
webapp-resources/env.properties
webapp-resources/messages_de.properties
webapp-resources/messages_en.properties
webapp-resources/messages_es.properties
webapp-resources/messages_fi.properties
webapp-resources/messages_fr.properties
webapp-resources/messages_lu.properties
webapp-resources/messages_nl.properties
webapp-resources/messages_pl.properties
webapp-resources/messages_pt.properties
webapp-resources/messages_ru.properties
webapp-resources/messages_zh.properties
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-01-05T14:39:51.606Z
Learnt from: Limraj
Repo: SCADA-LTS/Scada-LTS PR: 3239
File: webapp-resources/messages_fi.properties:3565-3565
Timestamp: 2026-01-05T14:39:51.606Z
Learning: SCADA-LTS maintains translations only for Polish and English. Do not invest in or expect active maintenance for other languages (e.g., messages_fi.properties, messages_de.properties). If a messages_*.properties file contains English text and is not maintained, it can be left as-is for now. AI translation may be used in the future. During reviews, prioritize Polish and English keys/text; non-maintained locales should not block changes.
Applied to files:
webapp-resources/messages_pt.properties, webapp-resources/messages_pl.properties, webapp-resources/messages_es.properties, webapp-resources/messages_de.properties, webapp-resources/messages_fr.properties, webapp-resources/messages_fi.properties, webapp-resources/messages_nl.properties, webapp-resources/messages_lu.properties, webapp-resources/messages_ru.properties, webapp-resources/messages_zh.properties, webapp-resources/messages_en.properties
🧬 Code graph analysis (5)
src/com/serotonin/mango/MangoContextListener.java (1)
- src/org/scada_lts/archive/ArchiveUtils.java (1): ArchiveUtils (12-59)

src/org/scada_lts/config/ScadaConfig.java (2)
- src/org/scada_lts/archive/ArchiveConfig.java (1): ArchiveConfig (8-83)
- src/org/scada_lts/dao/SystemSettingsDAO.java (1): SystemSettingsDAO (52-516)

src/org/scada_lts/archive/ArchiveUtils.java (4)
- src/org/scada_lts/config/ScadaConfig.java (1): ScadaConfig (43-441)
- src/org/scada_lts/dao/SystemSettingsDAO.java (1): SystemSettingsDAO (52-516)
- src/org/scada_lts/quartz/CronTriggerScheduler.java (1): CronTriggerScheduler (7-38)
- src/org/scada_lts/web/beans/ApplicationBeans.java (1): ApplicationBeans (31-206)

src/org/scada_lts/dao/SystemSettingsDAO.java (3)
- src/org/scada_lts/archive/ArchiveConfig.java (1): ArchiveConfig (8-83)
- src/org/scada_lts/utils/SystemSettingsUtils.java (1): SystemSettingsUtils (29-915)
- scadalts-ui/build-config.js (1): json (17-17)

src/org/scada_lts/utils/SystemSettingsUtils.java (3)
- src/org/scada_lts/archive/ArchiveConfig.java (1): ArchiveConfig (8-83)
- src/org/scada_lts/archive/ArchiveTask.java (1): ArchiveTask (8-61)
- src/org/scada_lts/dao/SystemSettingsDAO.java (1): SystemSettingsDAO (52-516)
🔇 Additional comments (30)
webapp-resources/messages_fi.properties (1)
3585-3587: New archive-related keys are correctly positioned in a non-maintained locale.

The two new keys (`systemSettings.archive.duplicateTask` and `systemSettings.dataRetentionAndArchiveSettings`) are appropriately left in English. Per documented practice, SCADA-LTS maintains active translations only for Polish and English; Finnish and other language files are non-maintained, and English text is acceptable pending future translation. This approach aligns with the project's localization strategy.

webapp-resources/messages_lu.properties (1)
3473-3475: Archive feature localization entries added; no translation provided for Luxembourgish.

The three new entries for the archiving feature (`validate.unsupportedDataType`, `systemSettings.archive.duplicateTask`, `systemSettings.dataRetentionAndArchiveSettings`) are in English. Based on learnings, SCADA-LTS maintains translations only for Polish and English, and non-maintained locales such as Luxembourgish can remain in English as placeholder text for now. These entries follow the correct naming convention and are syntactically sound.

src/org/scada_lts/archive/ArchiveDataSourceManager.java (2)
33-36: `getDataSource` will break if the password is added to `Key`.

If the cache key is updated to include the password (as recommended above), this method would need to accept the password parameter as well to perform the lookup. Consider updating the signature or providing an alternative approach, such as storing the `DataSource` directly in the cache rather than the `JdbcTemplate`.

29-31: LGTM!

The `clear()` method appropriately clears the cache. Since `DriverManagerDataSource` doesn't hold pooled connections, there are no resources to explicitly close here. If connection pooling is added later, this method should be updated to close the underlying pools.

webapp-resources/messages_pt.properties (1)
3613-3614: Portuguese translations missing for new archive keys.

Lines 3613-3614 contain English text instead of Portuguese translations:

- `systemSettings.archive.duplicateTask`: "Invalid archiver config - duplicate tasks"
- `systemSettings.dataRetentionAndArchiveSettings`: "Data Retention and Archiving Settings"

While, per learnings, Portuguese (messages_pt.properties) is not actively maintained and an English fallback is acceptable, it would be inconsistent with the file's purpose. Consider providing Portuguese translations if possible, or accept English as a temporary fallback pending future AI-assisted translation.
⛔ Skipped due to learnings
Learnt from: Limraj
Repo: SCADA-LTS/Scada-LTS PR: 3239
File: webapp-resources/messages_fi.properties:3565-3565
Timestamp: 2026-01-05T14:39:59.665Z
Learning: The Scada-LTS project's translation policy focuses only on Polish and English translations. Other language files (such as messages_fi.properties, messages_de.properties, etc.) may contain English text and are not actively maintained per project guidelines. AI translation may be used for these files in the future.

webapp-resources/messages_nl.properties (1)

3575-3577: Non-maintained locale with English text is acceptable.

The Dutch (nl) locale is not actively maintained for this project. Per project learnings, English text in non-maintained locale files is acceptable and can be left as-is. These new archive-related keys correctly support the Data Archiving feature.
Based on learnings, SCADA-LTS maintains translations only for Polish and English. Non-maintained locales should not block changes.
src/com/serotonin/mango/vo/TimeLongPeriodType.java (1)
36-37: LGTM! Consistent default value.

The DEFAULT constant provides a public, reusable default value that aligns with the existing fallback behavior in both `getType()` methods (lines 70 and 77), which already default to `DAYS`. This improves maintainability and provides a single source of truth for the default time period type.

WebContent/WEB-INF/dox/en/dataArchiveSettings.htm (1)
1-67: LGTM! Clear and comprehensive documentation.

The English documentation for the Data Archiving module is well-structured and provides clear guidance on configuration, rules, and usage. The inclusion of a practical example and additional notes about schema compatibility and backup recommendations is helpful for users.
src/com/serotonin/mango/MangoContextListener.java (2)
66-66: LGTM: Import addition is appropriate.

The import is correctly placed and will be used in the initialization block.

712-718: Archive subsystem initialization looks good.

The initialization block follows the established pattern for scheduled jobs (EverySecond, EverySecondTool) with appropriate exception handling and logging. The `ArchiveUtils.init()` method handles the enable/disable state and cron configuration internally.

WebContent/WEB-INF/dwr.xml (1)
248-253: LGTM: DWR converter mappings are correctly configured.

The archive-related converters (ArchiveConfig, ArchiveTask, ArchiveFunction, ChronoUnit) are properly declared to enable marshaling/unmarshaling of these types for DWR calls. The inclusion of `java.time.temporal.ChronoUnit` is appropriate for handling time unit enumerations in the archive configuration.

webapp-resources/messages_pl.properties (1)
3597-3599: LGTM: Archive-related translation keys added.

The new Polish translation entries for archive validation and settings are correctly added:

- `systemSettings.archive.duplicateTask` for configuration validation
- `systemSettings.dataRetentionAndArchiveSettings` for the UI section label

These align with the archive feature additions described in the PR objectives.
Based on learnings, SCADA-LTS maintains translations for Polish and English as priority languages.
webapp-resources/messages_es.properties (1)
3500-3502: Translation keys contain English text in the Spanish locale file.

The three new keys added for the data archiving feature contain English text rather than Spanish translations. Based on learnings, this is acceptable since SCADA-LTS actively maintains only Polish and English translations; other locales like Spanish may use English as placeholder text for now.
webapp-resources/messages_de.properties (1)
3457-3459: German localization file contains English values, acceptable per maintenance policy.

The three new entries supporting the archiving feature contain English text rather than German translations. While this creates an inconsistency with the German locale file's purpose, it aligns with the documented practice for non-maintained languages in SCADA-LTS.
Based on learnings, German translations are not actively maintained; AI-assisted translation may be applied in the future. These entries should not block the PR.
webapp-resources/messages_fr.properties (1)
3454-3456: New archive-related localization keys added.

The new keys are in English rather than French, which aligns with the project policy that only Polish and English translations are actively maintained. Based on learnings, non-maintained locales like French can retain English text until AI translation is considered in the future.
WebContent/WEB-INF/dox/es/dataArchiveSettings.htm (1)
1-67: Documentation file for the Data Archiving feature looks good.

The documentation is well-structured, covering overview, settings, archiving rules, usage examples, and important notes. The content is in English despite being in the Spanish locale folder (`es/`), which is consistent with how other non-maintained locale documentation is handled in this project.

src/com/serotonin/mango/vo/TimePeriodDeserializer.java (1)
11-22: Deserialization logic is clean and handles both numeric and string inputs correctly.

The implementation gracefully handles both integer codes and named identifiers by falling back to `getByName` when parsing fails. Since `TimePeriodFactory` methods return sensible defaults (`TimeLongPeriodType.DEFAULT`), this deserializer will never return null.

src/org/scada_lts/archive/ArchiveFunction.java (1)
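The deserializer's dual parse path (try a numeric code first, then fall back to a name lookup, always returning a non-null default) can be sketched without Jackson. The enum members and integer codes below are illustrative stand-ins, not the actual SCADA-LTS `TimePeriodFactory` values:

```java
// Hypothetical sketch of the deserializer's fallback logic; the enum
// members and integer codes are illustrative, not the project's real ones.
public class TimePeriodParseDemo {
    public enum TimePeriod {
        MINUTES(1), HOURS(2), DAYS(3);
        public static final TimePeriod DEFAULT = DAYS;
        private final int code;
        TimePeriod(int code) { this.code = code; }

        // Try an integer code first, then fall back to a name lookup;
        // always return a non-null value, mirroring the "never null" contract.
        public static TimePeriod parse(String raw) {
            String s = raw.trim();
            try {
                int code = Integer.parseInt(s);
                for (TimePeriod t : values()) if (t.code == code) return t;
            } catch (NumberFormatException ignored) {
                for (TimePeriod t : values()) if (t.name().equalsIgnoreCase(s)) return t;
            }
            return DEFAULT;
        }
    }

    public static void main(String[] args) {
        System.out.println(TimePeriod.parse("2"));     // HOURS
        System.out.println(TimePeriod.parse("days"));  // DAYS
        System.out.println(TimePeriod.parse("bogus")); // DAYS (default)
    }
}
```

The same shape drops straight into a Jackson `JsonDeserializer#deserialize` body once the real factory methods are substituted.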
1-16: Clean enum design for archiving operations.

The enum provides a type-safe representation of archiving actions with associated UI labels. The implementation is straightforward and follows Java conventions.
src/org/scada_lts/archive/ArchiveUtils.java (1)
16-17: Volatile fields provide visibility but not atomicity for the compound check.

The combination of `lastEnabled` and `lastCronApplied` is checked together on line 35, but since they are updated separately, there's a theoretical race window. For the current usage pattern (startup/config refresh), this is acceptable, but be aware of this limitation if concurrent calls become more common.

src/org/scada_lts/archive/ArchiveJob.java (3)
38-41: Catching `Throwable` is intentionally broad for job robustness.

Catching `Throwable` ensures that even `Error` types (like `OutOfMemoryError`) are logged before being rethrown as `JobExecutionException`. This prevents silent failures in the Quartz scheduler. The approach is acceptable for a background job, though be aware that some `Error` types may indicate unrecoverable states.

11-23: Good design with testable constructor injection.

The dual constructor pattern, a default for production (using `ApplicationBeans`) and a parameterized one for testing, enables clean unit testing of the archive job without requiring the Spring context.

25-42: Job execution logic is well-structured.

The implementation properly checks the `ARCHIVE_ENABLED` flag before processing, logs start/end times with duration, and uses `StatefulJob` to prevent concurrent executions. The early return pattern keeps the code clean.

src/org/scada_lts/archive/ArchiveConfig.java (1)

19-82: ArchiveConfig looks solid (Jackson-friendly + password masking in `toString()`).

src/org/scada_lts/archive/ArchiveService.java (1)
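The password-masking-in-`toString()` pattern praised for ArchiveConfig can be sketched as follows; the field names here are assumptions for illustration, not copied from the actual class:

```java
// Sketch of credential masking in toString(); field names are assumed,
// not taken from the real ArchiveConfig.
public class ConfigMaskDemo {
    public static final class ArchiveTarget {
        final String url;
        final String username;
        final String password;

        public ArchiveTarget(String url, String username, String password) {
            this.url = url;
            this.username = username;
            this.password = password;
        }

        @Override
        public String toString() {
            // mask the password so logs and debug dumps never leak credentials
            String masked = (password == null || password.isEmpty()) ? "" : "***";
            return "ArchiveTarget{url='" + url + "', username='" + username
                    + "', password='" + masked + "'}";
        }
    }

    public static void main(String[] args) {
        System.out.println(new ArchiveTarget("jdbc:postgresql://host/db", "scada", "secret"));
    }
}
```

The design point is that masking lives in `toString()` itself, so every log statement that prints the config object is safe by default.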
533-570: No changes needed. The project explicitly targets Java 11 (sourceCompatibility = 1.11, targetCompatibility = 1.11 in build.gradle), which fully supports the `var` keyword introduced in Java 10. The `var` declarations are valid and will compile without issues.

Likely an incorrect or invalid review comment.
WebContent/WEB-INF/applicationContext.xml (1)
364-379: Cron expression is properly set at runtime; no startup risk.

The `archiveTrigger` bean intentionally omits the cron expression in the XML constructor arguments. `ArchiveUtils.init()` (invoked during startup at MangoContextListener.java:714) retrieves the cron expression from `ScadaConfig.getInstance().getArchiveCron()` and calls `scheduler.schedule(cronExpression)`, which internally invokes `trigger.setCronExpression()` before Quartz scheduling occurs. This pattern is consistent with other configuration-driven schedulers in the codebase (e.g., `DataSourcePointsCache`, `PointHierarchyCache`).

Likely an incorrect or invalid review comment.
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java (1)
218-224: LGTM!

The extension of `getSettings()` to include archive configuration settings is straightforward and follows the existing pattern.

WebContent/WEB-INF/jsp/systemSettings.jsp (1)
691-700: `groveLogging` is hardcoded to `true`.

The `groveLogging` parameter is hardcoded to `true` without any UI control, whereas the deprecated `saveDataRetentionSettings` accepted this as a configurable parameter. This removes user control over this setting.
src/org/scada_lts/archive/MySqlArchiveQueryProvider.java (2)
94-106: LGTM! Advisory locking implementation looks correct.

The use of MySQL's `GET_LOCK`/`RELEASE_LOCK` for non-blocking advisory locks is appropriate for preventing concurrent archiving across nodes. The lock name includes the table name for granularity.
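A minimal sketch of how such lock statements can be built per table; the `archive_` name prefix and the zero timeout are assumptions for illustration, not the provider's actual strings. With timeout 0, `GET_LOCK` returns 1 if the lock was acquired and 0 if another session already holds it, which is what makes the scheme non-blocking:

```java
// Hypothetical SQL builders for a non-blocking MySQL advisory lock
// around an archive pass; lock-name prefix and timeout are assumed.
public class AdvisoryLockSqlDemo {
    public static String tryLockSql(String tableName) {
        // timeout 0 => do not wait; caller skips the table if the result is 0
        return "SELECT GET_LOCK('archive_" + tableName + "', 0)";
    }

    public static String releaseLockSql(String tableName) {
        return "SELECT RELEASE_LOCK('archive_" + tableName + "')";
    }

    public static void main(String[] args) {
        System.out.println(tryLockSql("pointValues"));
        System.out.println(releaseLockSql("pointValues"));
    }
}
```

Including the table name in the lock name gives per-table granularity, so two nodes can archive different tables concurrently while never racing on the same one.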
118-134: The SQL injection concern is already mitigated by validation earlier in the call stack.

The `tableName` parameter is validated against an allowlist (`ALLOWED_TABLES` in ArchiveService) before any of these SQL builder methods are invoked. At line 71 of ArchiveService, `ensureTableAllowed(tableName)` is called immediately after extracting the table name from the task configuration and throws an `IllegalArgumentException` if the table is not in the set of allowed tables: {"pointValues", "events", "userEvents", "reportInstances"}. This validation occurs before `archiveTable()` or `deleteFromSourceIfInArchive()` are called (lines 113, 116), which subsequently invoke the SQL builder methods. The allowlist validation covers both bare table names and schema-qualified names (e.g., `schema.tableName`).

src/org/scada_lts/archive/PostgresArchiveQueryProvider.java (1)
78-114: The method signature correctly matches the interface pattern defined in `IArchiveQueryProvider`, which specifies `String showCreateTable(String tableName);` without a JdbcTemplate parameter. The MySqlArchiveQueryProvider also implements the same signature without JdbcTemplate. Methods like `getTimestampColumnName()` and `buildSelectBatchSql()` similarly only take tableName, not JdbcTemplate. Methods that do accept JdbcTemplate parameters (e.g., `tableExists()`, `listColumns()`) are separate interface methods with different responsibilities, not part of the same pattern.

The class provides appropriate mechanisms to handle the instance `jdbc` field through a constructor parameter and a setter method, with validation via `ensureJdbcFor()`. This is the intended design, not a violation of it.

Likely an incorrect or invalid review comment.
```java
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
        archiveEnabled);

validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_TYPE_DEFAULT, defaultPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIODS, futureDateLimitPeriods);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIOD_TYPE, futureDateLimitPeriodType);

if (!archiveEnabled) {
    validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIOD_TYPE, eventPurgePeriodType);
    validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIODS, eventPurgePeriods);
    validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIOD_TYPE, reportPurgePeriodType);
    validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIODS, reportPurgePeriods);
    validateNonNegative(response, SystemSettingsDAO.VALUES_LIMIT_FOR_PURGE, valuesLimitForPurge);
}

try {
    response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
} catch (Exception ex) {
    response.addContextualMessage("dataArchiveMessage", "emport.parseError");
    return response;
}

if (!response.getHasMessages()) {
    systemSettingsDAO = new SystemSettingsDAO();
    systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
} else {
    return response;
}
```
ARCHIVE_ENABLED is persisted before configuration validation completes.
The ARCHIVE_ENABLED flag is saved to the database at line 588-589 before the archive config JSON is validated (lines 604-617). If validation fails and returns early, the system remains in an inconsistent state where archiving is enabled but the configuration is invalid or unchanged.
Consider deferring all persistence until validation passes, or wrap the entire operation in a transaction.
♻️ Suggested fix: defer persistence until after validation
```diff
 public DwrResponseI18n saveDataRetentionAndArchiveConfig(boolean archiveEnabled, String configJson, int eventPurgePeriodType,
                                                          int eventPurgePeriods, int reportPurgePeriodType,
                                                          int reportPurgePeriods, boolean groveLogging,
                                                          int futureDateLimitPeriodType, int futureDateLimitPeriods,
                                                          int defaultPurgePeriod, int defaultPurgePeriodType, int valuesLimitForPurge) {
     Permissions.ensureAdmin();
     DwrResponseI18n response = new DwrResponseI18n();
     SystemSettingsDAO systemSettingsDAO = new SystemSettingsDAO();
-    systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
-            archiveEnabled);
-
     validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
     // ... other validations ...
     try {
         response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
     } catch (Exception ex) {
         response.addContextualMessage("dataArchiveMessage", "emport.parseError");
         return response;
     }
     if (!response.getHasMessages()) {
+        systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED, archiveEnabled);
         systemSettingsDAO = new SystemSettingsDAO();
         systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
     }
```
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
-        archiveEnabled);
 validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
 validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_TYPE_DEFAULT, defaultPurgePeriodType);
 validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIODS, futureDateLimitPeriods);
 validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIOD_TYPE, futureDateLimitPeriodType);
 if (!archiveEnabled) {
     validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIOD_TYPE, eventPurgePeriodType);
     validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIODS, eventPurgePeriods);
     validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIOD_TYPE, reportPurgePeriodType);
     validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIODS, reportPurgePeriods);
     validateNonNegative(response, SystemSettingsDAO.VALUES_LIMIT_FOR_PURGE, valuesLimitForPurge);
 }
 try {
     response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
 } catch (Exception ex) {
     response.addContextualMessage("dataArchiveMessage", "emport.parseError");
     return response;
 }
-if(!response.getHasMessages()) {
-    systemSettingsDAO = new SystemSettingsDAO();
-    systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
-}
-else {
-    return response;
-}
+if(!response.getHasMessages()) {
+    systemSettingsDAO = new SystemSettingsDAO();
+    systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED, archiveEnabled);
+    systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
+}
+else {
+    return response;
+}
```
🤖 Prompt for AI Agents
In @src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java around lines 588 -
617, The code currently calls
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled) before validating the archive config via
SystemSettingsUtils.validateArchiveConfig(configJson, response), which can leave
ARCHIVE_ENABLED persisted when validation fails; move or defer all persistence
(the setBooleanValue and later setValue(SystemSettingsDAO.ARCHIVE_CONFIG,
configJson)) until after validateArchiveConfig returns with no messages, or wrap
the whole sequence in a transaction so that if validateArchiveConfig throws or
response.hasMessages() is true you roll back and no changes are saved; ensure
the logic that creates a new SystemSettingsDAO and calls setValue only executes
after successful validation and that setBooleanValue is executed in the same
post-validation block or transaction.
```java
public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
    Key k = new Key(url, username);
    return cache.computeIfAbsent(k, kk -> {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        if (driverClassName != null && !driverClassName.isEmpty()) {
            ds.setDriverClassName(driverClassName);
        }
        ds.setUrl(url);
        ds.setUsername(username);
        ds.setPassword(password);
        return new JdbcTemplate(ds);
    });
}
```
Cache key excludes password, causing stale credential usage.
The cache key is built from (url, username) only, but the password parameter is used when creating the DataSource. If the archive configuration is updated with a new password while keeping the same URL and username, this method will return the previously cached JdbcTemplate configured with the old password, leading to authentication failures or security issues.
Either include the password in the cache key, or invalidate the cache entry when credentials change.
🔧 Suggested fix: Include password in the cache key
```diff
-public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
-    Key k = new Key(url, username);
+public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
+    Key k = new Key(url, username, password);
     return cache.computeIfAbsent(k, kk -> {
```

And update the Key class accordingly (see next comment).
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveDataSourceManager.java around lines 15 -
27, The cache key for getOrCreate currently omits the password causing stale
credentials; update the Key used by cache (class Key) to include password (or
alternatively implement cache invalidation when credentials change) and modify
getOrCreate to construct Key with (url, username, password) so
DriverManagerDataSource/JdbcTemplate are recreated for credential changes;
ensure Key's equals and hashCode incorporate the password field and adjust any
Key constructors/accessors accordingly.
```java
private static final class Key {
    final String url;
    final String user;

    Key(String url, String user) {
        this.url = Objects.requireNonNull(url, "url");
        this.user = Objects.requireNonNull(user, "user");
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        Key key = (Key) o;
        return url.equals(key.url) && user.equals(key.user);
    }
    @Override public int hashCode() { return Objects.hash(url, user); }
}
```
🛠️ Refactor suggestion | 🟠 Major
Include password in the Key class to match cache lookup semantics.
The Key class should include the password to ensure that credential changes result in a new DataSource being created rather than returning a stale cached instance.
♻️ Proposed fix
```diff
 private static final class Key {
     final String url;
     final String user;
+    final String password;

-    Key(String url, String user) {
+    Key(String url, String user, String password) {
         this.url = Objects.requireNonNull(url, "url");
         this.user = Objects.requireNonNull(user, "user");
+        this.password = password; // password may be null or empty
     }

     @Override public boolean equals(Object o) {
         if (this == o) return true;
         if (!(o instanceof Key)) return false;
         Key key = (Key) o;
-        return url.equals(key.url) && user.equals(key.user);
+        return url.equals(key.url) && user.equals(key.user)
+                && Objects.equals(password, key.password);
     }
-    @Override public int hashCode() { return Objects.hash(url, user); }
+    @Override public int hashCode() { return Objects.hash(url, user, password); }
 }
```

🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveDataSourceManager.java around lines 38 -
54, The Key class must include the password field so cache lookups reflect
credential changes: add a final String password field to class Key,
requireNonNull it in the Key(String url, String user, String password)
constructor (and update all call sites that construct Key to pass the password),
and update equals and hashCode to include password alongside url and user so
cached DataSource instances are keyed by url+user+password.
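The recommendation can be sketched as a standalone value class; the class and field names below are illustrative, not the actual SCADA-LTS code:

```java
import java.util.Objects;

// Standalone sketch of a cache key that includes the password, so a
// credential change maps to a new cache entry instead of reusing a
// stale DataSource. Names are illustrative.
public class KeyDemo {
    public static final class DataSourceKey {
        final String url;
        final String user;
        final String password; // may be null for passwordless connections

        public DataSourceKey(String url, String user, String password) {
            this.url = Objects.requireNonNull(url, "url");
            this.user = Objects.requireNonNull(user, "user");
            this.password = password;
        }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof DataSourceKey)) return false;
            DataSourceKey k = (DataSourceKey) o;
            return url.equals(k.url) && user.equals(k.user)
                    && Objects.equals(password, k.password);
        }

        @Override public int hashCode() { return Objects.hash(url, user, password); }
    }

    public static void main(String[] args) {
        DataSourceKey old = new DataSourceKey("jdbc:mysql://h/db", "u", "old");
        DataSourceKey rotated = new DataSourceKey("jdbc:mysql://h/db", "u", "new");
        System.out.println(old.equals(rotated)); // false: rotated password -> fresh entry
    }
}
```

`Objects.equals` is used for the password so a null password compares safely, and `hashCode` includes the same three fields so the key still honors the equals/hashCode contract.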
```java
public static IArchiveQueryProvider newInstance(JdbcTemplate targetJdbc) {
    return forTarget(targetJdbc, targetJdbc);
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Locate the factory file
find . -name "ArchiveQueryProviderFactory.java" -type f
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 123

🏁 Script executed:

```shell
# Locate the provider file
find . -name "PostgresArchiveQueryProvider.java" -type f
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 124

🏁 Script executed:

```shell
# Search for IArchiveQueryProvider interface
find . -name "IArchiveQueryProvider.java" -type f
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 117

🏁 Script executed:

```shell
# Read the factory file
cat -n src/org/scada_lts/archive/ArchiveQueryProviderFactory.java
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 2927

🏁 Script executed:

```shell
# Read the provider file to check setJdbcTemplate
cat -n src/org/scada_lts/archive/PostgresArchiveQueryProvider.java
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 15465

🏁 Script executed:

```shell
# Read the interface
cat -n src/org/scada_lts/archive/IArchiveQueryProvider.java
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 1758

🏁 Script executed:

```shell
# Search for usages of newInstance, forTarget, and get
rg -n --type=java "ArchiveQueryProviderFactory\.(newInstance|forTarget|get)\(" -B 2 -A 2
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 593

🏁 Script executed:

```shell
# Check if get() method is used and whether PostgresArchiveQueryProvider jdbc is set
rg -n --type=java "\.get\(\)" | grep -i archive || echo "No direct matches"
rg -n "factory.*get\|\.get\(\)" WebContent/WEB-INF/applicationContext.xml 2>/dev/null || echo "No XML config found"
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 240

🏁 Script executed:

```shell
# Search for showCreateTable usage to understand context
rg -n --type=java "showCreateTable" -B 2 -A 2
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 2482

🏁 Script executed:

```shell
# Verify if MySqlArchiveQueryProvider also has showCreateTable and whether it needs jdbc field
cat -n src/org/scada_lts/archive/MySqlArchiveQueryProvider.java | head -40
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 1493

🏁 Script executed:

```shell
# Verify if get() method is actually used anywhere in codebase
rg -n "ArchiveQueryProviderFactory.*get()" --type=java
rg -n "\.get\(\)" src/org/scada_lts/archive/ --type=java | grep -i provider || echo "No get() usage in archive"
```

Repository: SCADA-LTS/Scada-LTS
Length of output: 236
Factory API is misleading and inconsistent; `newInstance()` should be removed or clarified, and `get()` is incomplete for PostgreSQL.

- `newInstance(targetJdbc)` passes the same `JdbcTemplate` as both target and source metadata, implying archive DB ≠ source DB is unsupported. This contradicts the actual usage pattern (see `ArchiveService` line 61: `forTarget(archiveJdbc, jdbc)` with separate DBs).
- `get()` creates `PostgresArchiveQueryProvider` without setting the `JdbcTemplate` field, but `showCreateTable()` (line 79) calls `ensureJdbcFor()` and throws if `jdbc` is null. `MySqlArchiveQueryProvider.showCreateTable()` only generates SQL, so it doesn't have this issue. The `get()` method is unused, but this is a trap for future callers.
- API inconsistency: `get()` (instance-based, lines 24-31) vs `forTarget()` (static, lines 33-59) have different contracts for wiring `JdbcTemplate`.

Either remove `newInstance()` and `get()` entirely (if only `forTarget()` is needed), or refactor both to accept explicit `JdbcTemplate` parameters and set them consistently in `PostgresArchiveQueryProvider`.
🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveQueryProviderFactory.java around lines 20 -
22, Remove the confusing/newInstance() and instance get() API and standardize on
the static forTarget(JdbcTemplate targetJdbc, JdbcTemplate sourceJdbc) factory:
delete newInstance() and get(), update callers (e.g., ArchiveService usage which
already calls forTarget(archiveJdbc, jdbc)) to use forTarget, and ensure
forTarget fully wires the created PostgresArchiveQueryProvider by setting its
internal jdbc/source JdbcTemplate fields so showCreateTable() and
ensureJdbcFor() cannot see a null jdbc; alternatively if you prefer to keep
instance APIs, refactor both newInstance/get and forTarget to accept explicit
JdbcTemplate parameters and consistently set the
PostgresArchiveQueryProvider.jdbc field before returning.
```java
private static final int MAX_BATCHES_PER_TASK = 10_000;

private static final Set<String> ALLOWED_TABLES = Set.of(
        "pointValues",
        "events",
        "userEvents",
        "reportInstances"
);

private final IArchiveQueryProvider queries;

public ArchiveService(IArchiveQueryProvider queries) {
    this.queries = queries;
}
```
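The `ALLOWED_TABLES` allowlist check discussed in this review can be sketched in isolation; the helper name `isAllowed` and the schema-stripping rule are assumptions for illustration:

```java
import java.util.Set;

// Minimal sketch of an allowlist check for archive table names; the helper
// name and the schema-qualified handling are illustrative assumptions.
public class TableAllowlistDemo {
    private static final Set<String> ALLOWED_TABLES =
            Set.of("pointValues", "events", "userEvents", "reportInstances");

    public static boolean isAllowed(String tableName) {
        // accept schema-qualified names such as "archive.pointValues"
        int dot = tableName.lastIndexOf('.');
        String bare = dot >= 0 ? tableName.substring(dot + 1) : tableName;
        return ALLOWED_TABLES.contains(bare);
    }

    public static void ensureTableAllowed(String tableName) {
        if (!isAllowed(tableName)) {
            throw new IllegalArgumentException("Table not allowed for archiving: " + tableName);
        }
    }

    public static void main(String[] args) {
        ensureTableAllowed("pointValues");      // passes
        ensureTableAllowed("archive.events");   // passes, schema-qualified
        System.out.println(isAllowed("users")); // false: would be rejected
    }
}
```

Because only names from the fixed set ever reach the SQL builders, concatenating the table name into queries cannot be abused for injection, which is the mitigation the review relies on.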
🧩 Analysis chain

🏁 Scripts executed (repository: SCADA-LTS/Scada-LTS):

```shell
# Check Java version configuration in build files
fd -t f "\.(xml|gradle|properties)$" --max-results 20 | head -20

# Check for Java version configuration in pom.xml
rg -i "maven.compiler|sourceCompatibility|targetCompatibility|<source>|<target>" --type xml -A 2 -B 2

# Read the ArchiveService.java file to verify all claims
cat -n src/org/scada_lts/archive/ArchiveService.java | head -100

# Check lines around the ones mentioned (24-37, 39-65, 451, 484-529, 536-537, 565-566)
sed -n '24,70p' src/org/scada_lts/archive/ArchiveService.java

# Check gradle.properties for Java version
cat gradle.properties

# Read lines 451, 484-529, 536-537, 565-566 for var, String.isBlank(), and lock helpers
sed -n '445,475p;484,530p;536,540p;565,570p' src/org/scada_lts/archive/ArchiveService.java

# Check if ArchiveConfig has driverClassName field
fd -t f "ArchiveConfig.java" | head -1 | xargs -I {} sh -c 'echo "File: {}"; cat -n {}'

# Search for usage of this.queries in ArchiveService to confirm it's unused
rg -n "this\.queries" src/org/scada_lts/archive/ArchiveService.java

# Find build.gradle and check for Java version configuration
fd -t f "build.gradle" | head -5 | xargs -I {} sh -c 'echo "File: {}"; head -100 {}'

# Check if this.queries is used anywhere else in ArchiveService.java
rg -n "queries" src/org/scada_lts/archive/ArchiveService.java | head -20

# Check if tryLockTable and releaseLockTable methods are actually called
rg -n "tryLockTable|releaseLockTable" src/org/scada_lts/archive/ArchiveService.java

# Verify if driverClassName is actually used anywhere in ArchiveService
rg -n "driverClassName|getDriverClassName" src/org/scada_lts/archive/ArchiveService.java
```
Remove unused injected dependency and apply driverClassName to DataSource.
The ArchiveService constructor accepts IArchiveQueryProvider queries, but runArchive() creates a new provider via ArchiveQueryProviderFactory.forTarget(...) at line 61 and never uses the injected this.queries. Either remove the injected dependency and fix Spring wiring, or inject a factory and use it consistently.
Additionally, ArchiveConfig provides driverClassName (line 17), but it is not applied when creating DriverManagerDataSource at lines 54-58. Set archiveDataSource.setDriverClassName(config.getDriverClassName()) if the driver class is configured.
🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveService.java around lines 24 - 37,
ArchiveService currently accepts an injected IArchiveQueryProvider (field
queries) but runArchive() ignores it and constructs a new provider via
ArchiveQueryProviderFactory.forTarget(...); change this by either removing the
unused constructor parameter and field or (preferred) inject a factory/provider
creator and use it inside runArchive() instead of calling
ArchiveQueryProviderFactory.forTarget(...) so the injected dependency is
consistently used (e.g., replace direct factory call with
injectedArchiveQueryProviderFactory.forTarget(...)). Also ensure the
ArchiveConfig-provided driver class is applied when building the JDBC source:
after creating DriverManagerDataSource archiveDataSource set
archiveDataSource.setDriverClassName(config.getDriverClassName()) when
config.getDriverClassName() is non-null/non-empty so the configured driver is
used.
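The preferred fix can be sketched as follows. This is an illustrative sketch only: Spring's JdbcTemplate is replaced by a String URL stand-in so the example stays self-contained, and the type names mirror the PR but are not the project's final API. The point is that runArchive() consumes the injected factory instead of a static forTarget(...) call.

```java
// Hypothetical stand-in for IArchiveQueryProvider.
interface QueryProvider {
    String target();
}

// Hypothetical stand-in for ArchiveQueryProviderFactory, injected instead of called statically.
interface QueryProviderFactory {
    QueryProvider forTarget(String targetUrl);
}

final class ArchiveServiceSketch {
    private final QueryProviderFactory factory; // injected and actually used

    ArchiveServiceSketch(QueryProviderFactory factory) {
        this.factory = factory;
    }

    QueryProvider runArchive(String targetUrl) {
        // Provider now comes from the injected factory, not a static factory call.
        return factory.forTarget(targetUrl);
    }
}
```

With this shape, the Spring wiring and the runtime behavior agree: whatever the container injects is what the job uses.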
```java
public interface IArchiveQueryProvider {

    String showCreateTable(String tableName);
    String getTimestampColumnName(String tableName);
    boolean tableExists(JdbcTemplate jdbc, String tableName);
    List<String> listColumns(JdbcTemplate jdbc, String tableName);
    boolean tryLockTable(JdbcTemplate archiveJdbc, String tableName);
    void releaseLockTable(JdbcTemplate archiveJdbc, String tableName);
    void applyPerTransactionTimeout(JdbcTemplate jdbc, int seconds);
    void clearPerTransactionTimeout(JdbcTemplate jdbc);
    String buildSelectBatchSql(String tableName, String selectColumns, String tsColumn, boolean haveCursor, int limit);
    List<Object> bindSelectBatchArgs(long beforeTsMillis, boolean haveCursor, long lastTs, long lastId);
    String buildVerifyIdsSql(String tableName, int idCount);
    String buildDeleteChunkSql(String tableName, int idCount);
    String buildCountIdsSql(String tableName, int idCount);
    Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName);
    String buildAddColumnSql(String tableName, String columnName, String columnType);
    String archivedAtColumnName();
    String archivedAtSqlType();
    String buildInsertWithExtraColumnSql(String tableName, String selectColumns, String extraColumn);
    boolean hasColumn(JdbcTemplate jdbc, String tableName, String columnName);
```
Harden identifier inputs + define preconditions for idCount/limit + clarify lock contract.
Because this interface builds SQL from identifiers/fragments (tableName, columnName, selectColumns, extraColumn), it’s critical that callers only pass allowlisted values (and implementations quote identifiers correctly). Also please define/guard preconditions like idCount > 0 and limit > 0 to avoid generating invalid SQL. Finally, tryLockTable(JdbcTemplate archiveJdbc, ...) should explicitly document whether the lock is taken on the main DB, archive DB, or both (given the “global lock across nodes” goal).
🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/IArchiveQueryProvider.java around lines 8 - 29,
The interface methods that accept identifier fragments (showCreateTable,
buildSelectBatchSql, buildVerifyIdsSql, buildDeleteChunkSql, buildCountIdsSql,
buildAddColumnSql, buildInsertWithExtraColumnSql, hasColumn, columnTypes and
related) must validate or document that callers supply allowlisted/validated
identifiers and implementations must quote identifiers safely; add or reference
a helper (e.g., validateIdentifier/quoteIdentifier) and throw
IllegalArgumentException on invalid input. Add precondition checks in
bindSelectBatchArgs/buildSelectBatchSql/buildVerifyIdsSql/buildDeleteChunkSql/buildCountIdsSql
to require idCount > 0 and limit > 0 (or explicitly document allowed zero
behavior) and throw IllegalArgumentException when violated. Clarify the lock
contract for tryLockTable/releaseLockTable in the JavaDoc (which DB the lock is
intended for) and ensure implementations use the provided archiveJdbc
consistently to acquire a global lock across nodes (or change signature if main
DB locking is required). Finally, add short JavaDoc to
archivedAtColumnName/archivedAtSqlType indicating they are safe constants and
any caller constraints.
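A minimal sketch of such a guard could look like this. The helper names (validateTable, quote, requirePositive) are hypothetical; the allowlist mirrors ALLOWED_TABLES from ArchiveService, and the quoting shown is MySQL-style backticks, so a PostgreSQL implementation would use double quotes instead.

```java
import java.util.Set;
import java.util.regex.Pattern;

// Hypothetical helper sketching the allowlist + quoting + precondition guard.
final class SqlIdentifiers {
    private static final Set<String> ALLOWED_TABLES =
            Set.of("pointValues", "events", "userEvents", "reportInstances");
    private static final Pattern IDENT = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    // Reject any table name not on the allowlist before it reaches SQL building.
    static String validateTable(String tableName) {
        if (tableName == null || !ALLOWED_TABLES.contains(tableName))
            throw new IllegalArgumentException("Table not allowlisted: " + tableName);
        return tableName;
    }

    // Validate and quote a column identifier (MySQL-style; use "\"...\"" for PostgreSQL).
    static String quote(String identifier) {
        if (identifier == null || !IDENT.matcher(identifier).matches())
            throw new IllegalArgumentException("Invalid identifier: " + identifier);
        return "`" + identifier + "`";
    }

    // Guard for idCount / limit preconditions so invalid SQL is never generated.
    static void requirePositive(int value, String name) {
        if (value <= 0)
            throw new IllegalArgumentException(name + " must be > 0, got " + value);
    }
}
```

Implementations of buildSelectBatchSql and the ID-list builders would call these guards first, turning a bad caller argument into an immediate IllegalArgumentException rather than malformed or injectable SQL.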
```java
//Data archiving
public static final String ARCHIVE_CONFIG = "archiveConfig";
public static final String ARCHIVE_ENABLED = "archiveEnabled";
```
Don’t crash on invalid ARCHIVE_CONFIG JSON; fail closed and log.
getArchiveConfig() throws a RuntimeException on parse errors, which can break unrelated flows. Consider logging and returning an empty ArchiveConfig (so the job effectively no-ops), or throwing a typed exception that callers explicitly handle.
Proposed diff:

```diff
 public class SystemSettingsDAO {
+    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@@
     public static ArchiveConfig getArchiveConfig() {
         String json = getValue(ARCHIVE_CONFIG, null);
         if (json == null || json.trim().isEmpty())
             return new ArchiveConfig();
         try {
-            return new ObjectMapper().readValue(json, ArchiveConfig.class);
+            return OBJECT_MAPPER.readValue(json, ArchiveConfig.class);
         } catch (Exception e) {
-            throw new RuntimeException("Cannot parse ARCHIVE_CONFIG JSON: " + e.getMessage(), e);
+            LOG.error("Cannot parse ARCHIVE_CONFIG JSON; archiving will be disabled until fixed. Value length="
+                    + json.length(), e);
+            return new ArchiveConfig();
         }
     }
 }
```

Also applies to: 438-440, 506-515
🤖 Prompt for AI Agents
In @src/org/scada_lts/dao/SystemSettingsDAO.java around lines 178 - 181,
getArchiveConfig() in SystemSettingsDAO currently lets JSON parse errors
propagate as RuntimeException; change it to catch JSON parsing/IO exceptions
(e.g., JsonProcessingException/IOException), log an error with context
(including the bad JSON and key ARCHIVE_CONFIG) via the class logger, and return
a safe default new ArchiveConfig() (so the archive job no-ops) instead of
throwing; apply the same pattern to the other parsing sites noted (the other
getArchiveConfig-like blocks around the file) so none of them crash unrelated
flows and all log the invalid payloads.
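The same fail-closed idea can be sketched independently of Jackson with a generic helper. The name parseOrDefault is hypothetical; the real fix would apply this shape inside getArchiveConfig(), with ObjectMapper::readValue as the parser and a fresh ArchiveConfig as the fallback.

```java
import java.util.function.Function;
import java.util.logging.Logger;

// Generic "fail closed" parse helper: on any parse failure, log with context
// and return a safe default instead of propagating a RuntimeException.
final class SafeParse {
    private static final Logger LOG = Logger.getLogger(SafeParse.class.getName());

    static <T> T parseOrDefault(String raw, Function<String, T> parser, T fallback) {
        if (raw == null || raw.trim().isEmpty()) return fallback;
        try {
            return parser.apply(raw);
        } catch (Exception e) {
            // Log length rather than the raw value to avoid leaking credentials
            // that may be embedded in the config JSON.
            LOG.severe("Cannot parse value; using safe default. length="
                    + raw.length() + " cause=" + e);
            return fallback;
        }
    }
}
```

With this pattern a corrupted ARCHIVE_CONFIG value disables archiving (the fallback config is empty) but never breaks unrelated system-settings reads.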
```java
import static org.scada_lts.dao.SystemSettingsDAO.ARCHIVE_ENABLED;
import static org.scada_lts.utils.CreateObjectUtils.parseObjects;
```
ARCHIVE_ENABLED is read using the wrong property key (likely always defaulting).
getArchiveEnabled() reads ScadaConfig with ARCHIVE_ENABLED imported from SystemSettingsDAO ("archiveEnabled"). If the intended env key is systemsettings.archive.enabled, this will never pick up the configured value.
Proposed diff:

```diff
-import static org.scada_lts.dao.SystemSettingsDAO.ARCHIVE_ENABLED;
+// Env/property key (not the DB systemSettings key)
+private static final String ARCHIVE_ENABLED_PROP = "systemsettings.archive.enabled";
@@
 public static boolean getArchiveEnabled() {
     boolean defaultValue = false;
     try {
-        String value = ScadaConfig.getInstance().getConf().getProperty(ARCHIVE_ENABLED, String.valueOf(defaultValue));
+        String value = ScadaConfig.getInstance().getConf().getProperty(ARCHIVE_ENABLED_PROP, String.valueOf(defaultValue));
         return Boolean.parseBoolean(value);
     } catch (Exception e) {
         LOG.error("Error reading archive.config", e);
         return defaultValue;
     }
 }
```

Also applies to: 898-907
🤖 Prompt for AI Agents
In @src/org/scada_lts/utils/SystemSettingsUtils.java around lines 24 - 25, The
code in SystemSettingsUtils.getArchiveEnabled() (and the similar block around
lines 898-907) uses the ARCHIVE_ENABLED constant from SystemSettingsDAO
("archiveEnabled") which is the wrong property key and causes defaults to be
used; update those calls to read ScadaConfig with the correct key
"systemsettings.archive.enabled" (or a new/appropriate constant named e.g.
SYSTEMSETTINGS_ARCHIVE_ENABLED) instead of ARCHIVE_ENABLED so the configured
environment value is picked up; ensure you change all occurrences in
getArchiveEnabled() and the repeated block at 898-907 to use the new key.
```properties
scadalts.datapoint.unreliable.execute-in-pool-if-tasks-exceeds=2

systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts","dbUsername":"root","dbPassword":"root","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
systemsettings.archive.enabled=false
```
Avoid shipping default DB creds / potentially dangerous “archive-to-primary” example config.
The example config hardcodes root/root and uses a DB name that looks like the primary (scadalts)—easy to copy-paste into production and/or misconfigure archiving to the same DB. Also, having "enabled" inside JSON plus systemsettings.archive.enabled invites drift if both are read.
Safer example (placeholders + explicit archive DB name):

```diff
-systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts","dbUsername":"root","dbPassword":"root","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
+systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts_archive","dbUsername":"CHANGE_ME","dbPassword":"CHANGE_ME","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
 systemsettings.archive.enabled=false
```

📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```properties
scadalts.datapoint.unreliable.execute-in-pool-if-tasks-exceeds=2
systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts_archive","dbUsername":"CHANGE_ME","dbPassword":"CHANGE_ME","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
systemsettings.archive.enabled=false
```
🤖 Prompt for AI Agents
In @webapp-resources/env.properties around lines 219 - 222, Replace the unsafe
example archive config: remove hard-coded credentials and primary DB name in
systemsettings.archive.config (and avoid duplicating enabled flags), replace the
"dbUrl", "dbUsername", "dbPassword" values with clearly-marked placeholders
(e.g. ARCHIVE_DB_URL, ARCHIVE_DB_USER, ARCHIVE_DB_PASS) and use an explicit
archive-specific database name (not "scadalts"); ensure only one source of truth
for enabling (either the "enabled" inside systemsettings.archive.config or the
separate systemsettings.archive.enabled) and set the default to false so the
shipped defaults cannot accidentally archive to the primary DB.
```xml
<bean id="archiveQueryProviderFactory"
      class="org.scada_lts.archive.ArchiveQueryProviderFactory">
    <constructor-arg ref="databaseAccess"/>
</bean>

<bean id="archiveQueryProvider"
      factory-bean="archiveQueryProviderFactory"
      factory-method="get"/>

<bean id="archiveService" class="org.scada_lts.archive.ArchiveService">
    <constructor-arg ref="archiveQueryProvider"/>
</bean>
```
Spring wiring likely injects a provider that ArchiveService doesn’t use.
archiveService is constructed with archiveQueryProvider, but ArchiveService.runArchive() currently creates a provider via ArchiveQueryProviderFactory.forTarget(...) instead. Either:
- inject a factory (or datasource manager) into the service and use that, or
- remove the constructor arg + bean wiring to avoid confusion.
🤖 Prompt for AI Agents
In @WebContent/WEB-INF/applicationContext.xml around lines 381 - 392, The bean
wiring injects archiveQueryProvider into ArchiveService but
ArchiveService.runArchive() ignores it and calls
ArchiveQueryProviderFactory.forTarget(...); update the service to use the
injected provider (remove the internal call to
ArchiveQueryProviderFactory.forTarget in ArchiveService.runArchive and use the
constructor-injected ArchiveQueryProvider instance) or alternatively change the
wiring to inject the factory instead (replace the constructor-arg on the
archiveService bean with a reference to archiveQueryProviderFactory and update
ArchiveService constructor/signature to accept and use
ArchiveQueryProviderFactory.forTarget), and remove the unused
archiveQueryProvider bean if you choose the factory injection route to avoid
conflicting/unused beans.