Conversation

@Patrykb0802 Patrykb0802 commented Jul 18, 2025

Summary by CodeRabbit

  • New Features

    • Introduced data archiving functionality enabling automated archival or deletion of historical data to an external database.
    • Supports configurable archiving rules with scheduled execution and batch processing.
    • Archive configuration management integrated into system settings with multi-database support.
  • Documentation

    • Added comprehensive multi-language user documentation for the data archiving feature.


@Patrykb0802 Patrykb0802 requested review from Limraj and SoftQ as code owners July 18, 2025 14:06

github-actions bot commented Jul 18, 2025

JavaScript Mocha Unit Test Results

268 tests  ±0   268 ✅ ±0   3s ⏱️ ±0s
 70 suites ±0     0 💤 ±0 
  1 files   ±0     0 ❌ ±0 

Results for commit 997c788. ± Comparison against base commit 86f326a.

♻️ This comment has been updated with latest results.


github-actions bot commented Jul 18, 2025

Java JUnit Test Results

3 220 tests  ±0   3 220 ✅ ±0   49s ⏱️ ±0s
  128 suites ±0       0 💤 ±0 
  128 files   ±0       0 ❌ ±0 

Results for commit 997c788. ± Comparison against base commit 86f326a.

♻️ This comment has been updated with latest results.


@Limraj Limraj left a comment


  1. Use the word archive consistently in all archive-related contexts;

  2. Use TimePeriods together with a number of periods; minutes could also be added here for testing;

  3. Keep the entire Archiver configuration in a single JSON file, saved in a single archiveSettings parameter;

  4. It's good that you added validation, but from what I can see it isn't actually implemented there. You could create a static method for this in Utils that accepts the entire object, runs all the validations, and returns messages. BatchSize can be used to limit the number of values in the query; it's good that you noticed this problem;

  5. We don't hardcode anything in JavaScript except parameter names; that shouldn't be there;

  6. Move the js functions from common.js into systemSettings.jsp, or create a separate js file;
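Point 4 could be sketched as a static helper. This is a hypothetical illustration, not the final API: the class name, method name, and the batch-size bound (mirroring the 1..32766 range discussed later in this PR) are all assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical static validator, per review point 4: accept the whole object's
// fields, run all validations, and return human-readable messages.
public final class ArchiveConfigValidator {

    private static final int MAX_BATCH_SIZE = 32766; // assumed upper bound

    private ArchiveConfigValidator() {}

    public static List<String> validate(String dbUrl, String cron, int batchSize) {
        List<String> messages = new ArrayList<>();
        if (dbUrl == null || dbUrl.trim().isEmpty())
            messages.add("Archive database URL must not be empty.");
        if (cron == null || cron.trim().isEmpty())
            messages.add("Cron expression must not be empty.");
        if (batchSize <= 0 || batchSize > MAX_BATCH_SIZE)
            messages.add("Batch size must be between 1 and " + MAX_BATCH_SIZE + ".");
        return messages;
    }
}
```

An empty returned list means the configuration may be persisted; any entries are shown to the user instead of saving.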

@Limraj Limraj added this to the 2.9.0 milestone Jul 31, 2025
@Patrykb0802 Patrykb0802 requested a review from Limraj August 4, 2025 10:25
@Patrykb0802
Contributor Author

Data Archiver – Implemented Features

1. Core Service and Scheduling

  • Introduced a new ArchiveService responsible for executing archiving tasks:
    • Copying data into an archive database.
    • Deleting old records from the main database.
  • Added a Quartz-based ArchiveJob that runs automatically using a configurable cron expression.
  • ArchiveUtils manages the scheduling lifecycle (enable/disable, rescheduling when cron changes).

2. Configuration

  • New configuration model: ArchiveConfig
    Stores:
    • Archive database connection (URL, username, password).
    • Batch size (number of records per batch).
    • Cron expression for scheduling.
    • List of archiving rules (ArchiveTask).
  • Integrated with SystemSettingsDAO for persisting:
    • ARCHIVE_CONFIG (JSON with rules and connection details).
    • ARCHIVE_ENABLED (global toggle).
  • Frontend + backend support for validation and saving settings.
  • Added localized documentation/help texts in multiple languages (EN, DE, ES, FR, PT, RU, PL, etc.).
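The configuration model described above can be sketched as a plain POJO. Field names follow the summary (dbUrl, dbUsername, dbPassword, batchSize, cron, tasks); the real classes are Jackson-annotated for JSON serialization, which is omitted here, and the default batch size is an assumption.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of ArchiveConfig and its nested archiving rules.
public class ArchiveConfig {
    private String dbUrl;                    // archive database connection URL
    private String dbUsername;
    private String dbPassword;
    private int batchSize = 1000;            // records processed per batch (assumed default)
    private String cron;                     // Quartz cron expression for scheduling
    private List<ArchiveTask> tasks = new ArrayList<>();

    public int getBatchSize() { return batchSize; }
    public void setBatchSize(int batchSize) { this.batchSize = batchSize; }
    public List<ArchiveTask> getTasks() { return tasks; }

    /** One archiving rule: how old, in what unit, which action, which table. */
    public static class ArchiveTask {
        public int ageValue;                 // age threshold
        public String ageUnit;               // days, weeks, months, years
        public String function;              // COPY_TO_ARCHIVE / DELETE_IF_IN_ARCHIVE
        public String table;                 // pointValues, events, reportInstances
    }
}
```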

3. Archiving Rules

  • Introduced ArchiveTask definition:
    • Age threshold (how old data must be to archive/delete).
    • Time unit (days, weeks, months, years).
    • Action (Archive / Delete).
    • Target table (e.g., pointValues, events, reportInstances).
  • Supported operations are enumerated in ArchiveFunction:
    • COPY_TO_ARCHIVE
    • DELETE_IF_IN_ARCHIVE
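The two operations above map directly onto a plain enum. The helper method below is an illustration added for this sketch, not part of the PR's API:

```java
// Sketch of the ArchiveFunction enum named in this PR.
public enum ArchiveFunction {
    COPY_TO_ARCHIVE,       // copy matching rows into the archive database
    DELETE_IF_IN_ARCHIVE;  // delete from the main DB only rows already archived

    /** True for the variant that removes data from the source database. */
    public boolean deletesFromSource() {
        return this == DELETE_IF_IN_ARCHIVE;
    }
}
```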

4. Database Vendor Support

  • Abstracted via IArchiveQueryProvider and ArchiveQueryProviderFactory.
  • Supports MySQL/MariaDB and PostgreSQL with vendor-specific SQL:
    • Table creation / schema checks.
    • Ensuring presence of archived_at column.
  • Automatically copies table structure from source DB into archive DB if missing.
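The vendor abstraction can be sketched as below. This is not the PR's actual factory (which selects a provider from the detected database); here selection keys off the JDBC URL prefix, and the lock SQL only mirrors the GET_LOCK / pg_try_advisory_lock calls named elsewhere in this PR — exact statements and the lock-name prefix are assumptions.

```java
// Hedged sketch of IArchiveQueryProvider selection per database vendor.
public final class ArchiveQueryProviderSketch {

    public interface QueryProvider {
        String tryLockTableSql(String table);
    }

    public static QueryProvider forUrl(String jdbcUrl) {
        if (jdbcUrl.startsWith("jdbc:mysql:") || jdbcUrl.startsWith("jdbc:mariadb:"))
            // MySQL/MariaDB: named server-side lock, zero timeout
            return table -> "SELECT GET_LOCK('archive_" + table + "', 0)";
        if (jdbcUrl.startsWith("jdbc:postgresql:"))
            // PostgreSQL: advisory lock keyed by a hash of the lock name
            return table -> "SELECT pg_try_advisory_lock(hashtext('archive_" + table + "'))";
        throw new IllegalArgumentException("Unsupported archive database: " + jdbcUrl);
    }
}
```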

5. Safety and Reliability

  • Introduced an allowlist of supported tables for archiving.
  • Implemented global DB-level locks to prevent concurrent archiving of the same table across multiple nodes.
  • Added detailed logging:
    • Start and completion of archiving jobs.
    • Skipped tasks.
    • Number of rows archived/deleted.
    • Table creation actions.
  • Validation mechanisms prevent invalid or incomplete configuration from being saved.
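The allowlist check is the simplest of these safeguards and can be sketched in a few lines. The three table names match the rules listed earlier in this description; the real set may contain more entries.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch of the allowlist of tables eligible for archiving.
public final class ArchiveTableAllowlist {
    private static final Set<String> ALLOWED = Collections.unmodifiableSet(
            new HashSet<>(Arrays.asList("pointValues", "events", "reportInstances")));

    private ArchiveTableAllowlist() {}

    public static boolean isAllowed(String table) {
        return table != null && ALLOWED.contains(table);
    }
}
```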

@Patrykb0802
Contributor Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Jan 9, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai

coderabbitai bot commented Jan 9, 2026

📝 Walkthrough

Walkthrough

A comprehensive data archiving subsystem has been introduced, enabling scheduled archiving of point values and events to external databases. The system includes Quartz-based job scheduling, database-specific query providers, configuration management, and multi-language UI support. New Java classes implement archive operations with PostgreSQL and MySQL support, while JSP and DWR layers provide configuration and retrieval endpoints.

Changes

Cohort / File(s) Summary
Spring Bean Configuration
WebContent/WEB-INF/applicationContext.xml
Added systemSettingsDAO bean and a complete Quartz-based archiving subsystem: archiveTrigger, archiveJobDetail, archiveScheduler, archiveQueryProviderFactory, archiveQueryProvider (via factory), and archiveService with lifecycle management.
Archive Data Models
src/org/scada_lts/archive/ArchiveConfig.java, ArchiveTask.java, ArchiveFunction.java
New POJO classes for archive configuration (dbUrl, dbUsername, dbPassword, batchSize, cron, tasks), archive rules (age value, unit, function, table), and archive operations (COPY_TO_ARCHIVE, DELETE_IF_IN_ARCHIVE). Jackson-annotated for JSON serialization.
Time Period Support
src/com/serotonin/mango/vo/TimePeriodFactory.java, TimeLongPeriodType.java, TimePeriodDeserializer.java
New factory and deserializer to support TimePeriod handling in archive task configurations; added DEFAULT constant to TimeLongPeriodType enum; enables flexible time unit specification (DAYS, HOURS, etc.).
Archive Service & Scheduling
src/org/scada_lts/archive/ArchiveService.java, ArchiveJob.java, ArchiveUtils.java
Core archive execution logic with table locking, batching, transactional safety; Quartz job wrapper checking ARCHIVE_ENABLED; utility class for initializing and refreshing archive scheduler based on config changes.
Database Query Providers
src/org/scada_lts/archive/IArchiveQueryProvider.java, MySqlArchiveQueryProvider.java, PostgresArchiveQueryProvider.java, ArchiveQueryProviderFactory.java, ArchiveDataSourceManager.java
Interface and database-specific implementations for schema inspection, table/column operations, advisory locking (MySQL GET_LOCK / PostgreSQL pg_try_advisory_lock), batch query construction, and metadata management. Factory pattern selects provider based on detected database. Data source manager caches JDBC templates per connection tuple.
Backend Integration
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java, src/org/scada_lts/dao/SystemSettingsDAO.java, src/org/scada_lts/utils/SystemSettingsUtils.java
New DWR methods saveDataRetentionAndArchiveConfig and getArchiveOptions; DAO adds ARCHIVE_ENABLED and ARCHIVE_CONFIG settings with JSON parsing; utilities provide validation for archive config (tasks, cron, batch size) and convenience accessors.
Initialization
src/com/serotonin/mango/MangoContextListener.java
Added ArchiveUtils.init() call during application startup to initialize archive scheduler after EverySecond and EverySecondTool.
Configuration
src/org/scada_lts/config/ScadaConfig.java, webapp-resources/env.properties
New getArchiveCron() method to retrieve cron expression from ArchiveConfig or fallback to property; default env.properties includes archive.config JSON and archive.enabled=false.
Frontend & DWR
WebContent/WEB-INF/jsp/systemSettings.jsp, WebContent/WEB-INF/dwr.xml
JSP adds archive configuration UI (enable toggle, DB connection fields, batch size, cron, rules table with add/remove); dynamic rule rendering, save/load functions; DWR converts ArchiveConfig, ArchiveTask, ArchiveFunction, and ChronoUnit for client marshaling.
Documentation
WebContent/WEB-INF/dox/manifest.xml, WebContent/WEB-INF/dox/{de,en,es,fr,pt,ru}/dataArchiveSettings.htm
Manifest entry for dataArchiveSettings; multi-language HTML documentation describing overview, settings, archiving rules, usage example, and additional notes (schema parity, auto-table creation, recoverability).
Localization
webapp-resources/messages_{de,en,es,fi,fr,lu,nl,pl,pt,ru,zh}.properties
Added keys for archive settings UI labels, validation messages (duplicate tasks, invalid config), and Data Retention and Archiving Settings section header across all language files.

Sequence Diagram

sequenceDiagram
    participant App as Application Startup
    participant MGL as MangoContextListener
    participant AU as ArchiveUtils
    participant Sched as Archive Scheduler
    participant AJ as ArchiveJob
    participant AS as ArchiveService
    participant AQPF as ArchiveQueryProviderFactory
    participant DAQP as DB Query Provider
    participant SDB as Source Database
    participant ADB as Archive Database

    App->>MGL: initSchedule()
    MGL->>AU: init()
    AU->>Sched: schedule(trigger, jobDetail)
    Note over Sched: Waits for cron trigger

    Sched-->>AJ: execute() [on schedule]
    AJ->>AS: runArchive()
    
    AS->>AQPF: get()
    AQPF-->>AS: MySqlArchiveQueryProvider<br/>or PostgresArchiveQueryProvider
    
    AS->>DAQP: tryLockTable()
    DAQP->>ADB: GET_LOCK / pg_advisory_lock
    ADB-->>DAQP: lock acquired
    
    AS->>DAQP: listColumns(), tableExists()
    DAQP->>SDB: metadata queries
    SDB-->>DAQP: schema info
    
    AS->>SDB: SELECT batch (paginated)
    SDB-->>AS: data rows
    
    AS->>ADB: INSERT/DELETE (per function)
    ADB-->>AS: rows affected
    
    AS->>DAQP: releaseLockTable()
    DAQP->>ADB: RELEASE_LOCK / pg_advisory_unlock

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 Archive dreams in quarterly hops,
Quartz triggers while time never stops,
Old data flows, the database hums,
MySQL and Postgres dance as one,
A system of schedules, locks, and locks!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 6.41%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title clearly identifies the main feature: systematic data archiving. It directly relates to the substantial archiving subsystem implemented across the changeset, including ArchiveService, ArchiveJob, configuration models, database vendor support, and UI components.



@Patrykb0802
Contributor Author

@coderabbitai review

@coderabbitai

coderabbitai bot commented Jan 9, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

🤖 Fix all issues with AI agents
In @src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java:
- Around line 588-617: The code currently calls
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled) before validating the archive config via
SystemSettingsUtils.validateArchiveConfig(configJson, response), which can leave
ARCHIVE_ENABLED persisted when validation fails; move or defer all persistence
(the setBooleanValue and later setValue(SystemSettingsDAO.ARCHIVE_CONFIG,
configJson)) until after validateArchiveConfig returns with no messages, or wrap
the whole sequence in a transaction so that if validateArchiveConfig throws or
response.hasMessages() is true you roll back and no changes are saved; ensure
the logic that creates a new SystemSettingsDAO and calls setValue only executes
after successful validation and that setBooleanValue is executed in the same
post-validation block or transaction.

In @src/org/scada_lts/archive/ArchiveDataSourceManager.java:
- Around line 15-27: The cache key for getOrCreate currently omits the password
causing stale credentials; update the Key used by cache (class Key) to include
password (or alternatively implement cache invalidation when credentials change)
and modify getOrCreate to construct Key with (url, username, password) so
DriverManagerDataSource/JdbcTemplate are recreated for credential changes;
ensure Key's equals and hashCode incorporate the password field and adjust any
Key constructors/accessors accordingly.
- Around line 38-54: The Key class must include the password field so cache
lookups reflect credential changes: add a final String password field to class
Key, requireNonNull it in the Key(String url, String user, String password)
constructor (and update all call sites that construct Key to pass the password),
and update equals and hashCode to include password alongside url and user so
cached DataSource instances are keyed by url+user+password.
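The suggested Key class can be sketched as follows — a minimal version assuming the fields named in the comment (url, user, password), with null checks in the constructor and credential-aware equals/hashCode:

```java
import java.util.Objects;

// Cache key for pooled JDBC templates: including the password ensures a
// DataSource is recreated when credentials change, not just url/user.
final class Key {
    private final String url;
    private final String user;
    private final String password;

    Key(String url, String user, String password) {
        this.url = Objects.requireNonNull(url);
        this.user = Objects.requireNonNull(user);
        this.password = Objects.requireNonNull(password);
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Key)) return false;
        Key k = (Key) o;
        return url.equals(k.url) && user.equals(k.user) && password.equals(k.password);
    }

    @Override public int hashCode() {
        return Objects.hash(url, user, password);
    }
}
```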

In @src/org/scada_lts/archive/ArchiveQueryProviderFactory.java:
- Around line 20-22: Remove the confusing/newInstance() and instance get() API
and standardize on the static forTarget(JdbcTemplate targetJdbc, JdbcTemplate
sourceJdbc) factory: delete newInstance() and get(), update callers (e.g.,
ArchiveService usage which already calls forTarget(archiveJdbc, jdbc)) to use
forTarget, and ensure forTarget fully wires the created
PostgresArchiveQueryProvider by setting its internal jdbc/source JdbcTemplate
fields so showCreateTable() and ensureJdbcFor() cannot see a null jdbc;
alternatively if you prefer to keep instance APIs, refactor both newInstance/get
and forTarget to accept explicit JdbcTemplate parameters and consistently set
the PostgresArchiveQueryProvider.jdbc field before returning.

In @src/org/scada_lts/archive/ArchiveService.java:
- Around line 66-133: runArchive() already uses the provider's locking API
(provider.tryLockTable(...) and provider.releaseLockTable(...)), so remove the
duplicate private lock helper methods in this class (the unused service-level
lock helpers and the Postgres-specific SQL string they reference) to avoid dual
ownership and potential drift; ensure only the provider's
tryLockTable/releaseLockTable are used and delete any private methods/fields
related to table locking and their SQL constants so the class has a single clear
owner of lock semantics.
- Around line 39-65: The ArchiveConfig.driverClassName is never applied to the
DriverManagerDataSource so the wrong JDBC driver may be chosen; in runArchive
retrieve the driverClassName from ArchiveConfig and call
archiveDataSource.setDriverClassName(driverClassName) (use
ArchiveConfig.getDriverClassName()), and if provided optionally attempt
Class.forName(driverClassName) to fail fast on invalid drivers before creating
the JdbcTemplate; ensure you guard null/empty driver names and only set/load
when non-blank, referencing runArchive, archiveDataSource, and ArchiveConfig.
- Around line 24-37: ArchiveService currently accepts an injected
IArchiveQueryProvider (field queries) but runArchive() ignores it and constructs
a new provider via ArchiveQueryProviderFactory.forTarget(...); change this by
either removing the unused constructor parameter and field or (preferred) inject
a factory/provider creator and use it inside runArchive() instead of calling
ArchiveQueryProviderFactory.forTarget(...) so the injected dependency is
consistently used (e.g., replace direct factory call with
injectedArchiveQueryProviderFactory.forTarget(...)). Also ensure the
ArchiveConfig-provided driver class is applied when building the JDBC source:
after creating DriverManagerDataSource archiveDataSource set
archiveDataSource.setDriverClassName(config.getDriverClassName()) when
config.getDriverClassName() is non-null/non-empty so the configured driver is
used.

In @src/org/scada_lts/archive/ArchiveUtils.java:
- Around line 35-37: The current early-return in ArchiveUtils checks
cronExpression.equals(lastCronApplied) which can throw a NullPointerException if
cronExpression is null; change the comparison to a null-safe check such as
Objects.equals(cronExpression, lastCronApplied) (or an equivalent null-safe
comparison) and add the necessary import for java.util.Objects, so the
conditional becomes if (enabled == lastEnabled && Objects.equals(cronExpression,
lastCronApplied)) return; to prevent NPEs when
ScadaConfig.getInstance().getArchiveCron() returns null.
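The null-safe early return being requested can be sketched like this (a simplified stand-in for ArchiveUtils; the field names follow the comment and the return value signalling "reschedule needed" is an illustration):

```java
import java.util.Objects;

public class ArchiveUtilsSketch {
    private static boolean lastEnabled;
    private static String lastCronApplied;

    /** Returns false (no-op) when neither flag nor cron changed; a null cron is safe. */
    public static boolean refresh(boolean enabled, String cronExpression) {
        // Objects.equals handles null on either side without throwing an NPE.
        if (enabled == lastEnabled && Objects.equals(cronExpression, lastCronApplied))
            return false;
        lastEnabled = enabled;
        lastCronApplied = cronExpression;
        return true; // caller would (re)schedule the Quartz job here
    }
}
```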

In @src/org/scada_lts/archive/IArchiveQueryProvider.java:
- Around line 8-29: The interface methods that accept identifier fragments
(showCreateTable, buildSelectBatchSql, buildVerifyIdsSql, buildDeleteChunkSql,
buildCountIdsSql, buildAddColumnSql, buildInsertWithExtraColumnSql, hasColumn,
columnTypes and related) must validate or document that callers supply
allowlisted/validated identifiers and implementations must quote identifiers
safely; add or reference a helper (e.g., validateIdentifier/quoteIdentifier) and
throw IllegalArgumentException on invalid input. Add precondition checks in
bindSelectBatchArgs/buildSelectBatchSql/buildVerifyIdsSql/buildDeleteChunkSql/buildCountIdsSql
to require idCount > 0 and limit > 0 (or explicitly document allowed zero
behavior) and throw IllegalArgumentException when violated. Clarify the lock
contract for tryLockTable/releaseLockTable in the JavaDoc (which DB the lock is
intended for) and ensure implementations use the provided archiveJdbc
consistently to acquire a global lock across nodes (or change signature if main
DB locking is required). Finally, add short JavaDoc to
archivedAtColumnName/archivedAtSqlType indicating they are safe constants and
any caller constraints.

In @src/org/scada_lts/dao/SystemSettingsDAO.java:
- Around line 178-181: getArchiveConfig() in SystemSettingsDAO currently lets
JSON parse errors propagate as RuntimeException; change it to catch JSON
parsing/IO exceptions (e.g., JsonProcessingException/IOException), log an error
with context (including the bad JSON and key ARCHIVE_CONFIG) via the class
logger, and return a safe default new ArchiveConfig() (so the archive job
no-ops) instead of throwing; apply the same pattern to the other parsing sites
noted (the other getArchiveConfig-like blocks around the file) so none of them
crash unrelated flows and all log the invalid payloads.
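The generic shape of that fix can be sketched as below. Jackson specifics are omitted (the real DAO would catch JsonProcessingException/IOException around ObjectMapper); here the parser is passed in as a function and only runtime failures are handled, which is an assumption of this sketch.

```java
import java.util.function.Function;
import java.util.logging.Logger;

// Parse a persisted setting; on failure, log the key and the bad payload and
// return a safe default so the caller no-ops instead of crashing.
public final class SafeSettingParser {
    private static final Logger LOG = Logger.getLogger(SafeSettingParser.class.getName());

    private SafeSettingParser() {}

    public static <T> T parseOrDefault(String key, String raw,
                                       Function<String, T> parser, T fallback) {
        try {
            return parser.apply(raw);
        } catch (RuntimeException e) {
            LOG.severe("Invalid value for " + key + ": " + raw + " (" + e.getMessage() + ")");
            return fallback;
        }
    }
}
```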

In @src/org/scada_lts/utils/SystemSettingsUtils.java:
- Around line 24-25: The code in SystemSettingsUtils.getArchiveEnabled() (and
the similar block around lines 898-907) uses the ARCHIVE_ENABLED constant from
SystemSettingsDAO ("archiveEnabled") which is the wrong property key and causes
defaults to be used; update those calls to read ScadaConfig with the correct key
"systemsettings.archive.enabled" (or a new/appropriate constant named e.g.
SYSTEMSETTINGS_ARCHIVE_ENABLED) instead of ARCHIVE_ENABLED so the configured
environment value is picked up; ensure you change all occurrences in
getArchiveEnabled() and the repeated block at 898-907 to use the new key.

In @webapp-resources/env.properties:
- Around line 219-222: Replace the unsafe example archive config: remove
hard-coded credentials and primary DB name in systemsettings.archive.config (and
avoid duplicating enabled flags), replace the "dbUrl", "dbUsername",
"dbPassword" values with clearly-marked placeholders (e.g. ARCHIVE_DB_URL,
ARCHIVE_DB_USER, ARCHIVE_DB_PASS) and use an explicit archive-specific database
name (not "scadalts"); ensure only one source of truth for enabling (either the
"enabled" inside systemsettings.archive.config or the separate
systemsettings.archive.enabled) and set the default to false so the shipped
defaults cannot accidentally archive to the primary DB.
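Following that guidance, the shipped defaults could look roughly like this. The key names come from the walkthrough; the JSON field names mirror ArchiveConfig, the uppercase values are the placeholders suggested above (not working settings), and the exact JSON shape is an assumption:

```properties
# Archiving off by default; single source of truth for enabling.
systemsettings.archive.enabled=false
# Placeholder credentials and an explicit archive-specific database name.
systemsettings.archive.config={"dbUrl":"ARCHIVE_DB_URL","dbUsername":"ARCHIVE_DB_USER","dbPassword":"ARCHIVE_DB_PASS","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[]}
```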

In @WebContent/WEB-INF/applicationContext.xml:
- Around line 381-392: The bean wiring injects archiveQueryProvider into
ArchiveService but ArchiveService.runArchive() ignores it and calls
ArchiveQueryProviderFactory.forTarget(...); update the service to use the
injected provider (remove the internal call to
ArchiveQueryProviderFactory.forTarget in ArchiveService.runArchive and use the
constructor-injected ArchiveQueryProvider instance) or alternatively change the
wiring to inject the factory instead (replace the constructor-arg on the
archiveService bean with a reference to archiveQueryProviderFactory and update
ArchiveService constructor/signature to accept and use
ArchiveQueryProviderFactory.forTarget), and remove the unused
archiveQueryProvider bean if you choose the factory injection route to avoid
conflicting/unused beans.
🟡 Minor comments (12)
WebContent/WEB-INF/dox/fr/dataArchiveSettings.htm-6-67 (1)

6-67: FR locale doc appears English + “move” wording risks implying deletion.

Since this is under dox/fr/, the content should likely be French (or moved to the correct locale). Also consider changing “move records” (Line 58) to “copy records” to match the “Archive (copy)” action and avoid implying data removal.

WebContent/WEB-INF/dox/de/dataArchiveSettings.htm-6-67 (1)

6-67: DE locale doc appears English + “move” wording risks implying deletion.

Since this is under dox/de/, it should likely be German (or moved). Also consider changing “move records” (Line 58) to “copy records” to match “Archive (copy)”.

src/org/scada_lts/config/ScadaConfig.java-433-440 (1)

433-440: Consider the null return case from this method.

When config.getCron() is null/empty AND the ARCHIVE_CRON property is not set in the configuration file, conf.getProperty(ARCHIVE_CRON) will return null. This could cause issues in callers like ArchiveUtils.init() where cronExpression.equals(lastCronApplied) would throw an NPE.

Suggested defensive fix
 public String getArchiveCron() {
     ArchiveConfig config = SystemSettingsDAO.getArchiveConfig();

     if (config != null && config.getCron() != null && !config.getCron().isEmpty()) {
         return config.getCron();
     }
-    return conf.getProperty(ARCHIVE_CRON);
+    String fallback = conf.getProperty(ARCHIVE_CRON);
+    return fallback != null ? fallback : "";
 }
webapp-resources/messages_en.properties-3461-3481 (1)

3461-3481: Align batch size error text with backend validation, and tighten wording.

systemSettings.archive.invalidBatchSize says “between 0 and 32767”, but validation rejects <= 0 and >= 32767 (so effectively 1..32766). Also consider renaming “archiver config” → “archiving config” for consistency. Based on learnings, focus on EN only.

Proposed diff
-systemSettings.archive.invalidBatchSize=Batch size must be a positive integer between 0 and 32767.
+systemSettings.archive.invalidBatchSize=Batch size must be a positive integer between 1 and 32766.
-systemSettings.archive.invalidConfig=Invalid archiver config
-systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
+systemSettings.archive.invalidConfig=Invalid archiving config
+systemSettings.archive.duplicateTask=Invalid archiving config - duplicate tasks
src/org/scada_lts/archive/MySqlArchiveQueryProvider.java-204-211 (1)

204-211: hasColumn also doesn't handle schema-qualified table names.

Similar to columnTypes, this method should use splitTable to handle schema-qualified names consistently.

src/org/scada_lts/archive/PostgresArchiveQueryProvider.java-242-252 (1)

242-252: hasColumn has the same schema handling issue.

Same as columnTypes - doesn't use splitTable for schema-qualified names.

WebContent/WEB-INF/jsp/systemSettings.jsp-686-688 (1)

686-688: parseInt without fallback may produce NaN.

If the archiveBatchSize input is empty or contains non-numeric text, parseInt returns NaN, which will be serialized and may cause backend validation or processing errors.

♻️ Suggested fix
-        batchSize: parseInt($get("archiveBatchSize")),
+        batchSize: parseInt($get("archiveBatchSize"), 10) || 1000,
WebContent/WEB-INF/jsp/systemSettings.jsp-768-770 (1)

768-770: Duplicate element IDs will be generated.

Each row creates an img with id="removeArchiveRuleRowImg", resulting in duplicate IDs in the DOM when multiple rules exist. This is invalid HTML and may cause issues with element selection.

♻️ Suggested fix: remove the id or make it unique
-    tdDel.innerHTML = `<img id="removeArchiveRuleRowImg" src="images/delete.png" alt="Remove" title="Remove" onclick="removeArchiveRuleRow(${idx})" border="0"/>`;
+    tdDel.innerHTML = `<img src="images/delete.png" alt="Remove" title="Remove" onclick="removeArchiveRuleRow(${idx})" style="cursor:pointer"/>`;
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java-667-681 (1)

667-681: Hardcoded English labels should use i18n message keys.

The labelKey values for tables are hardcoded English strings ("Point values", "Events", "Report instances") rather than message keys. This breaks the i18n pattern used for archiveFunctions and will not translate properly in non-English locales.

♻️ Suggested fix: use message keys
     List<Map<String, String>> table = new ArrayList<>();
     Map<String, String> pv = new HashMap<>();
     pv.put("value", "pointValues");
-    pv.put("labelKey", "Point values");
+    pv.put("labelKey", "systemSettings.archive.table.pointValues");
     table.add(pv);

     Map<String, String> events = new HashMap<>();
     events.put("value", "events");
-    events.put("labelKey", "Events");
+    events.put("labelKey", "systemSettings.archive.table.events");
     table.add(events);

     Map<String, String> reportInstances = new HashMap<>();
     reportInstances.put("value", "reportInstances");
-    reportInstances.put("labelKey", "Report instances");
+    reportInstances.put("labelKey", "systemSettings.archive.table.reportInstances");
     table.add(reportInstances);
src/org/scada_lts/archive/PostgresArchiveQueryProvider.java-197-216 (1)

197-216: columnTypes doesn't handle schema-qualified table names consistently.

The query filters by current_schema() but doesn't extract the schema from a qualified table name. If archive.pointValues is passed, it will search for a table literally named archive.pointValues in the current schema rather than pointValues in the archive schema.

♻️ Suggested fix
 @Override
 public Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName) {
+    final String[] parts = splitTable(tableName);
+    final String schema = parts[0];
+    final String table = parts[1];
     final String sql =
             "SELECT a.attname AS column_name, format_type(a.atttypid, a.atttypmod) AS column_type " +
                     "FROM pg_attribute a " +
                     "JOIN pg_class c ON a.attrelid = c.oid " +
                     "JOIN pg_namespace n ON c.relnamespace = n.oid " +
                     "WHERE c.relkind = 'r' " +
-                    "  AND (n.nspname = current_schema() OR n.nspname = ANY (current_schemas(true))) " +
+                    "  AND n.nspname = ? " +
                     "  AND c.relname = ? " +
                     "  AND a.attnum > 0 AND NOT a.attisdropped " +
                     "ORDER BY a.attnum";
-    List<Map<String,Object>> rows = jdbc.queryForList(sql, tableName);
+    List<Map<String,Object>> rows = jdbc.queryForList(sql, schema, table);

Committable suggestion skipped: line range outside the PR's diff.

WebContent/WEB-INF/jsp/systemSettings.jsp-1396-1402 (1)

1396-1402: Invalid HTML: <div> inside <tbody>.

The archiveAgeFieldTemplate div is placed directly inside the <tbody> element, which is invalid HTML. A <tbody> can only contain <tr> elements. This may cause rendering issues in some browsers. Move the template outside the table.

♻️ Suggested fix
-    <tbody id="archiveBody">
+    </tbody>
+    </table>
+    <div id="archiveAgeFieldTemplate" style="display:none">
+      <input type="number" name="ageValue" value="30" min="1" class="formShort" />
+      <select name="ageUnit">
+        <tag:timePeriodOptions h="true" d="true" w="true" mon="true" y="true"/>
+      </select>
+    </div>
+    <table>
+    <tbody id="archiveBody">
       ...
-      <div id="archiveAgeFieldTemplate" style="display:none">
-        <input type="number" name="ageValue" value="30" min="1" class="formShort" />
-        <select name="ageUnit">
-          <tag:timePeriodOptions h="true" d="true" w="true" mon="true" y="true"/>
-        </select>
-      </div>
-    </tbody>

Committable suggestion skipped: line range outside the PR's diff.

src/org/scada_lts/archive/MySqlArchiveQueryProvider.java-166-179 (1)

166-179: columnTypes doesn't handle schema-qualified table names.

Unlike tableExists and listColumns, this method passes tableName directly to the query without splitting schema from table name. If a schema-qualified name like archive.pointValues is passed, the query will fail to find columns.

♻️ Suggested fix for consistency
 @Override
 public Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName) {
+    final String[] parts = splitTable(tableName);
+    final String table = parts[1];
     final String sql =
             "SELECT COLUMN_NAME, COLUMN_TYPE " +
                     "FROM information_schema.columns " +
                     "WHERE table_schema = DATABASE() AND table_name = ? " +
                     "ORDER BY ORDINAL_POSITION";
-    List<Map<String,Object>> rows = jdbc.queryForList(sql, tableName);
+    List<Map<String,Object>> rows = jdbc.queryForList(sql, table);
🧹 Nitpick comments (15)
webapp-resources/messages_zh.properties (1)

3560-3562: Three archive-related keys lack Chinese translation.

Lines 3560–3562 add new keys for the data archiving feature, but the values are in English instead of Chinese:

  • validate.unsupportedDataType=Data type {0} is not supported
  • systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
  • systemSettings.dataRetentionAndArchiveSettings=Data Retention and Archiving Settings

All other entries in this file use Chinese characters (e.g., 名称必须, 不能超过40个字符).

Per learnings, SCADA-LTS maintains translations only for Polish and English; Chinese is not an actively maintained locale, and AI translation may be used in the future. These untranslated entries should not block the PR. However, if translations are available elsewhere in the codebase (e.g., messages_en.properties or messages_pl.properties), consider adding proper Chinese translations to maintain consistency.

src/org/scada_lts/archive/ArchiveDataSourceManager.java (1)

17-26: Consider using a connection pool instead of DriverManagerDataSource.

DriverManagerDataSource creates a new physical connection for every getConnection() call, which can be inefficient for batch archiving operations that may execute many queries. Consider using a pooled DataSource like HikariCP for better performance and connection reuse.

This is acceptable for low-frequency archive jobs, but worth noting for future optimization if archiving becomes more intensive.

WebContent/WEB-INF/dox/pt/dataArchiveSettings.htm (1)

1-67: English documentation in non-English locale directories.

This Portuguese locale documentation file contains English text. The same pattern appears in the Russian locale file (WebContent/WEB-INF/dox/ru/dataArchiveSettings.htm). Based on project learnings, SCADA-LTS actively maintains only Polish and English translations, so English content in non-maintained locales is acceptable.

The documentation content itself is well-structured and clearly describes the Data Archiving module features. However, consider adding a note in the manifest or a comment in these locale-specific files indicating that they currently contain English text and may be translated in the future.

src/com/serotonin/mango/vo/TimePeriodFactory.java (1)

5-17: Unify fallback semantics between getByCode() and getByName().

getByCode() (line 16) returns TimeLongPeriodType.DEFAULT, while getByName(null) (line 20) returns TimePeriodType.DEFAULT; these differing defaults are inconsistent. Consider unifying them behind a single constant for clarity. The codes are unique across both enums, so the resolution order is not an issue.

Proposed fix (consistent default + basic normalization)
 public class TimePeriodFactory {

+    private static final TimePeriod DEFAULT = TimeLongPeriodType.DEFAULT;
+
     public static TimePeriod getByCode(int code) {
         for (TimePeriodType tpt : TimePeriodType.values()) {
             if (tpt.getCode() == code)
                 return tpt;
         }
@@ -11,17 +15,19 @@ public class TimePeriodFactory {
         for (TimeLongPeriodType tlpt : TimeLongPeriodType.values()) {
             if (tlpt.getCode() == code)
                 return tlpt;
         }

-        return TimeLongPeriodType.DEFAULT;
+        return DEFAULT;
     }

     public static TimePeriod getByName(String name) {
-        if (name == null) return TimePeriodType.DEFAULT;
+        if (name == null) return DEFAULT;
+        name = name.trim();
+        if (name.isEmpty()) return DEFAULT;
@@ -28,7 +34,7 @@ public class TimePeriodFactory {
         for (TimeLongPeriodType tlpt : TimeLongPeriodType.values()) {
             if (tlpt.name().equalsIgnoreCase(name))
                 return tlpt;
         }

-        return TimeLongPeriodType.DEFAULT;
+        return DEFAULT;
     }
 }
WebContent/WEB-INF/dox/manifest.xml (1)

443-443: Remove extra space in attribute for consistency.

The attribute has an extra space: id ="dataArchiveSettings" should be id="dataArchiveSettings" (without the space after id). All other items in this manifest follow the id="value" format without a space.

✨ Proposed fix
-  <item id ="dataArchiveSettings"/>
+  <item id="dataArchiveSettings"/>
webapp-resources/messages_ru.properties (1)

3608-3610: Minor wording/consistency: “archiver” vs “archiving/archive”

Given RU isn’t actively maintained, English text here is fine, but “Invalid archiver config” reads a bit off compared to the feature naming (“archiving”). Consider adjusting for consistency.

Proposed tweak
-systemSettings.archive.duplicateTask=Invalid archiver config - duplicate tasks
+systemSettings.archive.duplicateTask=Invalid archiving config - duplicate tasks
src/com/serotonin/mango/vo/TimePeriodDeserializer.java (1)

6-7: Remove redundant imports from the same package.

These imports are unnecessary since TimePeriod and TimePeriodFactory are in the same package (com.serotonin.mango.vo) as this class.

Suggested fix
 import com.fasterxml.jackson.core.JsonParser;
 import com.fasterxml.jackson.databind.DeserializationContext;
 import com.fasterxml.jackson.databind.JsonDeserializer;
-import com.serotonin.mango.vo.TimePeriod;
-import com.serotonin.mango.vo.TimePeriodFactory;
 
 import java.io.IOException;
src/org/scada_lts/archive/ArchiveUtils.java (1)

39-49: Consider validating the cron expression before scheduling.

If cronExpression is null or empty when archiving is enabled, scheduler.schedule(cronExpression) may fail with an unclear error. Consider adding an explicit check and logging a warning if the cron expression is missing.

Suggested validation
         try {
+            if (cronExpression == null || cronExpression.trim().isEmpty()) {
+                LOG.warn("[ARCHIVER] Cannot schedule: cron expression is missing or empty");
+                scheduler.stop();
+                lastCronApplied = null;
+                lastEnabled = false;
+                return;
+            }
             LOG.info("[ARCHIVER] (Re)Scheduling archiver with cron: " + cronExpression);
             scheduler.schedule(cronExpression);
src/org/scada_lts/archive/ArchiveService.java (1)

366-380: DDL “sanitize via regex” is fragile; expect syntax edge cases.

santizeDDL() can easily produce invalid SQL (dangling commas, MySQL version-specific clauses, etc.). If you need simplified schema creation, it is safer to ship a provider-built CREATE TABLE statement per supported table, or to introspect the columns and generate the DDL, than to regex-transform the original.
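To make the fragility concrete, here is a self-contained sketch (the method and table names are invented for illustration, not taken from ArchiveService) showing how a regex that strips index clauses silently misses a reordered definition:

```java
public class DdlRegexPitfall {
    // Naive regex "sanitizer" that tries to strip secondary index definitions.
    static String stripKeys(String ddl) {
        return ddl.replaceAll(",\\s*KEY\\s+\\w+\\s*\\([^)]*\\)", "");
    }

    public static void main(String[] args) {
        // Works on the expected layout: the index clause is removed cleanly.
        System.out.println(stripKeys("CREATE TABLE pv (id BIGINT, ts BIGINT, KEY idx_ts (ts))"));

        // The same clause in a different position escapes the pattern entirely,
        // because the comma now precedes a column, not the KEY clause.
        System.out.println(stripKeys("CREATE TABLE pv (KEY idx_ts (ts), id BIGINT)"));
    }
}
```

Every MySQL version and layout variant adds another case like the second one, which is why per-table provider-built DDL is the safer route.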

src/org/scada_lts/utils/SystemSettingsUtils.java (1)

825-885: Validation: treat whitespace-only password as missing; align batchSize bounds with message.

  • isEmpty(config.getDbPassword()) allows " "; consider isBlank/trim for password.
  • Batch size check is <=0 || >=32767; ensure UI message matches (see i18n comment).
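A two-line check illustrates the isEmpty/isBlank difference (the helper name is hypothetical):

```java
public class PasswordCheck {
    // Treats null and whitespace-only passwords as missing; isEmpty() alone would not.
    static boolean isMissing(String password) {
        return password == null || password.trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println("   ".isEmpty());   // false: a whitespace password passes isEmpty()
        System.out.println(isMissing("   "));  // true: the trim-based check catches it
    }
}
```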
src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java (1)

649-654: Consider logging or preserving exception context.

Wrapping IOException in a RuntimeException obscures the original cause and may hinder debugging. Consider logging the exception before re-throwing or using a more specific runtime exception.

♻️ Suggested improvement
     try {
         ArchiveUtils.init();
     } catch (IOException e) {
+        LOG.error("Failed to initialize archive utilities", e);
         throw new RuntimeException(e);
     }
src/org/scada_lts/archive/ArchiveTask.java (1)

8-26: Consider adding field validation.

The ArchiveTask class accepts any values without validation. Negative ageValue, null function, or empty table strings could cause issues downstream. Consider adding validation either via Bean Validation annotations or in the constructor.

♻️ Suggested improvement with validation
+import javax.validation.constraints.Min;
+import javax.validation.constraints.NotBlank;
+import javax.validation.constraints.NotNull;

 public class ArchiveTask {

+    @Min(1)
     private int ageValue;
     @JsonDeserialize(using = com.serotonin.mango.vo.TimePeriodDeserializer.class)
+    @NotNull
     private TimePeriod ageUnit;
+    @NotNull
     private ArchiveFunction function;
+    @NotBlank
     private String table;
WebContent/WEB-INF/jsp/systemSettings.jsp (2)

727-728: Remove debug console.log statement.

Debug logging should be removed before merging to release branch.

-        console.log("Inside: loadArchiveConfig");

817-829: Missing null checks and input validation in getArchiveRulesFromTable.

The function assumes all query selectors return valid elements and that parseInt will succeed. If the DOM is malformed or inputs are empty, this could produce unexpected results or errors.

♻️ Suggested defensive improvement
     rows.forEach(row => {
-      const ageValue = parseInt(row.querySelector('input[name="ageValue"]').value);
+      const ageInput = row.querySelector('input[name="ageValue"]');
+      const ageValue = ageInput ? parseInt(ageInput.value, 10) || 30 : 30;
       const ageUnit = row.querySelector('select[name="ageUnit"]').value;
       const func = row.querySelector('select[name="function"]').value;
       const table = row.querySelector('select[name="table"]').value;
src/org/scada_lts/archive/PostgresArchiveQueryProvider.java (1)

10-21: Inconsistent state management pattern.

This class has an instance-level jdbc field and setter, but most methods accept JdbcTemplate as a parameter. Only showCreateTable uses the instance field. This mixed approach is confusing and error-prone - callers might not realize they need to set the instance field for showCreateTable to work.

Consider either:

  1. Make all methods use the parameter (stateless), or
  2. Make all methods use the instance field (stateful)
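A minimal sketch of option 1, with a hand-rolled Jdbc stand-in replacing Spring's JdbcTemplate so the example is self-contained (names are illustrative, not the project's code):

```java
public class StatelessProvider {
    // Stand-in for JdbcTemplate so the sketch compiles on its own.
    interface Jdbc { String queryForString(String sql); }

    // Stateless: the template is always an explicit argument, so callers
    // cannot forget to wire a hidden instance field before calling.
    static String showCreateTable(Jdbc jdbc, String tableName) {
        return jdbc.queryForString("SHOW CREATE TABLE " + tableName);
    }

    public static void main(String[] args) {
        Jdbc fake = sql -> "ran: " + sql;
        System.out.println(showCreateTable(fake, "pointValues"));
    }
}
```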
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86f326a and 997c788.

📒 Files selected for processing (41)
  • WebContent/WEB-INF/applicationContext.xml
  • WebContent/WEB-INF/dox/de/dataArchiveSettings.htm
  • WebContent/WEB-INF/dox/en/dataArchiveSettings.htm
  • WebContent/WEB-INF/dox/es/dataArchiveSettings.htm
  • WebContent/WEB-INF/dox/fr/dataArchiveSettings.htm
  • WebContent/WEB-INF/dox/manifest.xml
  • WebContent/WEB-INF/dox/pt/dataArchiveSettings.htm
  • WebContent/WEB-INF/dox/ru/dataArchiveSettings.htm
  • WebContent/WEB-INF/dwr.xml
  • WebContent/WEB-INF/jsp/systemSettings.jsp
  • src/com/serotonin/mango/MangoContextListener.java
  • src/com/serotonin/mango/vo/TimeLongPeriodType.java
  • src/com/serotonin/mango/vo/TimePeriodDeserializer.java
  • src/com/serotonin/mango/vo/TimePeriodFactory.java
  • src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java
  • src/org/scada_lts/archive/ArchiveConfig.java
  • src/org/scada_lts/archive/ArchiveDataSourceManager.java
  • src/org/scada_lts/archive/ArchiveFunction.java
  • src/org/scada_lts/archive/ArchiveJob.java
  • src/org/scada_lts/archive/ArchiveQueryProviderFactory.java
  • src/org/scada_lts/archive/ArchiveService.java
  • src/org/scada_lts/archive/ArchiveTask.java
  • src/org/scada_lts/archive/ArchiveUtils.java
  • src/org/scada_lts/archive/IArchiveQueryProvider.java
  • src/org/scada_lts/archive/MySqlArchiveQueryProvider.java
  • src/org/scada_lts/archive/PostgresArchiveQueryProvider.java
  • src/org/scada_lts/config/ScadaConfig.java
  • src/org/scada_lts/dao/SystemSettingsDAO.java
  • src/org/scada_lts/utils/SystemSettingsUtils.java
  • webapp-resources/env.properties
  • webapp-resources/messages_de.properties
  • webapp-resources/messages_en.properties
  • webapp-resources/messages_es.properties
  • webapp-resources/messages_fi.properties
  • webapp-resources/messages_fr.properties
  • webapp-resources/messages_lu.properties
  • webapp-resources/messages_nl.properties
  • webapp-resources/messages_pl.properties
  • webapp-resources/messages_pt.properties
  • webapp-resources/messages_ru.properties
  • webapp-resources/messages_zh.properties
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-01-05T14:39:51.606Z
Learnt from: Limraj
Repo: SCADA-LTS/Scada-LTS PR: 3239
File: webapp-resources/messages_fi.properties:3565-3565
Timestamp: 2026-01-05T14:39:51.606Z
Learning: SCADA-LTS maintains translations only for Polish and English. Do not invest in or expect active maintenance for other languages (e.g., messages_fi.properties, messages_de.properties). If a messages_*.properties file contains English text and is not maintained, it can be left as-is for now. AI translation may be used in the future. During reviews, prioritize Polish and English keys/text; non-maintained locales should not block changes.

Applied to files:

  • webapp-resources/messages_pt.properties
  • webapp-resources/messages_pl.properties
  • webapp-resources/messages_es.properties
  • webapp-resources/messages_de.properties
  • webapp-resources/messages_fr.properties
  • webapp-resources/messages_fi.properties
  • webapp-resources/messages_nl.properties
  • webapp-resources/messages_lu.properties
  • webapp-resources/messages_ru.properties
  • webapp-resources/messages_zh.properties
  • webapp-resources/messages_en.properties
🧬 Code graph analysis (5)
src/com/serotonin/mango/MangoContextListener.java (1)
src/org/scada_lts/archive/ArchiveUtils.java (1)
  • ArchiveUtils (12-59)
src/org/scada_lts/config/ScadaConfig.java (2)
src/org/scada_lts/archive/ArchiveConfig.java (1)
  • ArchiveConfig (8-83)
src/org/scada_lts/dao/SystemSettingsDAO.java (1)
  • SystemSettingsDAO (52-516)
src/org/scada_lts/archive/ArchiveUtils.java (4)
src/org/scada_lts/config/ScadaConfig.java (1)
  • ScadaConfig (43-441)
src/org/scada_lts/dao/SystemSettingsDAO.java (1)
  • SystemSettingsDAO (52-516)
src/org/scada_lts/quartz/CronTriggerScheduler.java (1)
  • CronTriggerScheduler (7-38)
src/org/scada_lts/web/beans/ApplicationBeans.java (1)
  • ApplicationBeans (31-206)
src/org/scada_lts/dao/SystemSettingsDAO.java (3)
src/org/scada_lts/archive/ArchiveConfig.java (1)
  • ArchiveConfig (8-83)
src/org/scada_lts/utils/SystemSettingsUtils.java (1)
  • SystemSettingsUtils (29-915)
scadalts-ui/build-config.js (1)
  • json (17-17)
src/org/scada_lts/utils/SystemSettingsUtils.java (3)
src/org/scada_lts/archive/ArchiveConfig.java (1)
  • ArchiveConfig (8-83)
src/org/scada_lts/archive/ArchiveTask.java (1)
  • ArchiveTask (8-61)
src/org/scada_lts/dao/SystemSettingsDAO.java (1)
  • SystemSettingsDAO (52-516)
🔇 Additional comments (30)
webapp-resources/messages_fi.properties (1)

3585-3587: New archive-related keys are correctly positioned in non-maintained locale.

The two new keys (systemSettings.archive.duplicateTask and systemSettings.dataRetentionAndArchiveSettings) are appropriately left in English. Per documented practice, SCADA-LTS maintains active translations only for Polish and English; Finnish and other language files are non-maintained and English text is acceptable for future translation. This approach aligns with the project's localization strategy.

webapp-resources/messages_lu.properties (1)

3473-3475: Archive feature localization entries added; no translation provided for Luxembourgish.

The three new entries for the archiving feature (validate.unsupportedDataType, systemSettings.archive.duplicateTask, systemSettings.dataRetentionAndArchiveSettings) are in English. Based on learnings, SCADA-LTS maintains translations only for Polish and English, and non-maintained locales such as Luxembourgish can remain in English as placeholder text for now. These entries follow the correct naming convention and are syntactically sound.

src/org/scada_lts/archive/ArchiveDataSourceManager.java (2)

33-36: getDataSource will break if password is added to Key.

If the cache key is updated to include the password (as recommended above), this method would need to accept the password parameter as well to perform the lookup. Consider updating the signature or providing an alternative approach such as storing the DataSource directly in the cache rather than the JdbcTemplate.


29-31: LGTM!

The clear() method appropriately clears the cache. Since DriverManagerDataSource doesn't hold pooled connections, there are no resources to explicitly close here. If connection pooling is added later, this method should be updated to close the underlying pools.

webapp-resources/messages_pt.properties (1)

3613-3614: Portuguese translations missing for new archive keys.

Lines 3613-3614 contain English text instead of Portuguese translations:

  • systemSettings.archive.duplicateTask: "Invalid archiver config - duplicate tasks"
  • systemSettings.dataRetentionAndArchiveSettings: "Data Retention and Archiving Settings"

While per learnings, Portuguese (messages_pt.properties) is not actively maintained and English fallback is acceptable, it would be inconsistent with the file's purpose. Consider providing Portuguese translations if possible, or accept English as a temporary fallback pending future AI-assisted translation.

⛔ Skipped due to learnings
Learnt from: Limraj
Repo: SCADA-LTS/Scada-LTS PR: 3239
File: webapp-resources/messages_fi.properties:3565-3565
Timestamp: 2026-01-05T14:39:59.665Z
Learning: The Scada-LTS project's translation policy focuses only on Polish and English translations. Other language files (such as messages_fi.properties, messages_de.properties, etc.) may contain English text and are not actively maintained per project guidelines. AI translation may be used for these files in the future.
webapp-resources/messages_nl.properties (1)

3575-3577: Non-maintained locale with English text is acceptable.

The Dutch (nl) locale is not actively maintained for this project. Per project learnings, English text in non-maintained locale files is acceptable and can be left as-is. These new archive-related keys correctly support the Data Archiving feature.

Based on learnings, SCADA-LTS maintains translations only for Polish and English. Non-maintained locales should not block changes.

src/com/serotonin/mango/vo/TimeLongPeriodType.java (1)

36-37: LGTM! Consistent default value.

The DEFAULT constant provides a public, reusable default value that aligns with the existing fallback behavior in both getType() methods (lines 70 and 77), which already default to DAYS. This improves maintainability and provides a single source of truth for the default time period type.

WebContent/WEB-INF/dox/en/dataArchiveSettings.htm (1)

1-67: LGTM! Clear and comprehensive documentation.

The English documentation for the Data Archiving module is well-structured and provides clear guidance on configuration, rules, and usage. The inclusion of a practical example and additional notes about schema compatibility and backup recommendations is helpful for users.

src/com/serotonin/mango/MangoContextListener.java (2)

66-66: LGTM: Import addition is appropriate.

The import is correctly placed and will be used in the initialization block.


712-718: Archive subsystem initialization looks good.

The initialization block follows the established pattern for scheduled jobs (EverySecond, EverySecondTool) with appropriate exception handling and logging. The ArchiveUtils.init() method handles the enable/disable state and cron configuration internally.

WebContent/WEB-INF/dwr.xml (1)

248-253: LGTM: DWR converter mappings are correctly configured.

The archive-related converters (ArchiveConfig, ArchiveTask, ArchiveFunction, ChronoUnit) are properly declared to enable marshaling/unmarshaling of these types for DWR calls. The inclusion of java.time.temporal.ChronoUnit is appropriate for handling time unit enumerations in the archive configuration.

webapp-resources/messages_pl.properties (1)

3597-3599: LGTM: Archive-related translation keys added.

The new Polish translation entries for archive validation and settings are correctly added:

  • systemSettings.archive.duplicateTask for configuration validation
  • systemSettings.dataRetentionAndArchiveSettings for the UI section label

These align with the archive feature additions described in the PR objectives.

Based on learnings, SCADA-LTS maintains translations for Polish and English as priority languages.

webapp-resources/messages_es.properties (1)

3500-3502: Translation keys contain English text in Spanish locale file.

The three new keys added for the data archiving feature contain English text rather than Spanish translations. Based on learnings, this is acceptable since SCADA-LTS actively maintains only Polish and English translations—other locales like Spanish may use English as placeholder text for now.

webapp-resources/messages_de.properties (1)

3457-3459: German localization file contains English values — acceptable per maintenance policy.

The three new entries supporting the archiving feature contain English text rather than German translations. While this creates an inconsistency with the German locale file's purpose, this aligns with the documented practice for non-maintained languages in SCADA-LTS.

Based on learnings, German translations are not actively maintained; AI-assisted translation may be applied in the future. These entries should not block the PR.

webapp-resources/messages_fr.properties (1)

3454-3456: New archive-related localization keys added.

The new keys are in English rather than French, which aligns with the project policy that only Polish and English translations are actively maintained. Based on learnings, non-maintained locales like French can retain English text until AI translation is considered in the future.

WebContent/WEB-INF/dox/es/dataArchiveSettings.htm (1)

1-67: Documentation file for Data Archiving feature looks good.

The documentation is well-structured, covering overview, settings, archiving rules, usage examples, and important notes. The content is in English despite being in the Spanish locale folder (es/), which is consistent with how other non-maintained locale documentation is handled in this project.

src/com/serotonin/mango/vo/TimePeriodDeserializer.java (1)

11-22: Deserialization logic is clean and handles both numeric and string inputs correctly.

The implementation gracefully handles both integer codes and named identifiers by falling back to getByName when parsing fails. Since TimePeriodFactory methods return sensible defaults (TimeLongPeriodType.DEFAULT), this deserializer will never return null.
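The code-or-name fallback reduces to a few lines; the Period enum below is a stand-in for the real TimePeriod types, not the project's code:

```java
public class PeriodParse {
    enum Period {
        MINUTES(1), HOURS(2), DAYS(3); // DAYS acts as the default
        final int code;
        Period(int c) { code = c; }
        static final Period DEFAULT = DAYS;

        static Period byCode(int code) {
            for (Period p : values()) if (p.code == code) return p;
            return DEFAULT;
        }

        static Period byName(String name) {
            if (name == null) return DEFAULT;
            for (Period p : values()) if (p.name().equalsIgnoreCase(name.trim())) return p;
            return DEFAULT;
        }
    }

    // Mirrors the deserializer's fallback: numeric code first, then name, never null.
    static Period parse(String raw) {
        try { return Period.byCode(Integer.parseInt(raw.trim())); }
        catch (NumberFormatException e) { return Period.byName(raw); }
    }

    public static void main(String[] args) {
        System.out.println(parse("2"));       // HOURS, resolved by numeric code
        System.out.println(parse("minutes")); // MINUTES, resolved by name
        System.out.println(parse("bogus"));   // DAYS, the default fallback
    }
}
```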

src/org/scada_lts/archive/ArchiveFunction.java (1)

1-16: Clean enum design for archiving operations.

The enum provides a type-safe representation of archiving actions with associated UI labels. The implementation is straightforward and follows Java conventions.

src/org/scada_lts/archive/ArchiveUtils.java (1)

16-17: Volatile fields provide visibility but not atomicity for the compound check.

The combination of lastEnabled and lastCronApplied are checked together on line 35, but since they are updated separately, there's a theoretical race window. For the current usage pattern (startup/config refresh), this is acceptable, but be aware of this limitation if concurrent calls become more common.
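If atomicity ever matters, the usual fix is to snapshot both values in one immutable object behind an AtomicReference; a sketch with invented class and method names:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SchedulerState {
    // Immutable snapshot so enabled + cron are always read and written together.
    static final class State {
        final boolean enabled;
        final String cron;
        State(boolean enabled, String cron) { this.enabled = enabled; this.cron = cron; }
    }

    private final AtomicReference<State> state = new AtomicReference<>(new State(false, null));

    boolean needsReschedule(boolean enabled, String cron) {
        State s = state.get();                 // one atomic read of both fields
        return s.enabled != enabled || (cron != null && !cron.equals(s.cron));
    }

    void apply(boolean enabled, String cron) {
        state.set(new State(enabled, cron));   // one atomic write of both fields
    }

    public static void main(String[] args) {
        SchedulerState s = new SchedulerState();
        System.out.println(s.needsReschedule(true, "0 0 0 * * ?")); // true: differs from initial state
        s.apply(true, "0 0 0 * * ?");
        System.out.println(s.needsReschedule(true, "0 0 0 * * ?")); // false: snapshot now matches
    }
}
```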

src/org/scada_lts/archive/ArchiveJob.java (3)

38-41: Catching Throwable is intentionally broad for job robustness.

Catching Throwable ensures that even Error types (like OutOfMemoryError) are logged before being rethrown as JobExecutionException. This prevents silent failures in the Quartz scheduler. The approach is acceptable for a background job, though be aware that some Error types may indicate unrecoverable states.


11-23: Good design with testable constructor injection.

The dual constructor pattern—default for production (using ApplicationBeans) and parameterized for testing—enables clean unit testing of the archive job without requiring the Spring context.


25-42: Job execution logic is well-structured.

The implementation properly checks the ARCHIVE_ENABLED flag before processing, logs start/end times with duration, and uses StatefulJob to prevent concurrent executions. The early return pattern keeps the code clean.

src/org/scada_lts/archive/ArchiveConfig.java (1)

19-82: ArchiveConfig looks solid (Jackson-friendly + password masking in toString()).

src/org/scada_lts/archive/ArchiveService.java (1)

533-570: No changes needed. The project explicitly targets Java 11 (sourceCompatibility = 1.11, targetCompatibility = 1.11 in build.gradle), which fully supports the var keyword introduced in Java 10. The var declarations are valid and will compile without issues.

Likely an incorrect or invalid review comment.

WebContent/WEB-INF/applicationContext.xml (1)

364-379: Cron expression is properly set at runtime; no startup risk

The archiveTrigger bean intentionally omits the cron expression in the XML constructor arguments. ArchiveUtils.init() (invoked during startup at MangoContextListener.java:714) retrieves the cron expression from ScadaConfig.getInstance().getArchiveCron() and calls scheduler.schedule(cronExpression), which internally invokes trigger.setCronExpression() before Quartz scheduling occurs. This pattern is consistent with other configuration-driven schedulers in the codebase (e.g., DataSourcePointsCache, PointHierarchyCache).

Likely an incorrect or invalid review comment.

src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java (1)

218-224: LGTM!

The extension of getSettings() to include archive configuration settings is straightforward and follows the existing pattern.

WebContent/WEB-INF/jsp/systemSettings.jsp (1)

691-700: groveLogging is hardcoded to true.

The groveLogging parameter is hardcoded to true without any UI control, whereas the deprecated saveDataRetentionSettings accepted this as a configurable parameter. This removes user control over this setting.

Is this intentional, or should a checkbox be added to the UI for grove logging configuration?

src/org/scada_lts/archive/MySqlArchiveQueryProvider.java (2)

94-106: LGTM! Advisory locking implementation looks correct.

The use of MySQL's GET_LOCK/RELEASE_LOCK for non-blocking advisory locks is appropriate for preventing concurrent archiving across nodes. The lock name includes the table name for granularity.


118-134: The SQL injection concern is already mitigated by validation earlier in the call stack.

The tableName parameter is validated against an allowlist (ALLOWED_TABLES in ArchiveService) before any of these SQL builder methods are invoked. At line 71 of ArchiveService, ensureTableAllowed(tableName) is called immediately after extracting the table name from the task configuration and throws an IllegalArgumentException if the table is not in the set of allowed tables: {"pointValues", "events", "userEvents", "reportInstances"}. This validation occurs before archiveTable() or deleteFromSourceIfInArchive() are called (lines 113, 116), which subsequently invoke the SQL builder methods. The allowlist validation covers both bare table names and schema-qualified names (e.g., schema.tableName).

src/org/scada_lts/archive/PostgresArchiveQueryProvider.java (1)

78-114: The method signature correctly matches the interface pattern defined in IArchiveQueryProvider, which specifies String showCreateTable(String tableName); without a JdbcTemplate parameter. The MySqlArchiveQueryProvider also implements the same signature without JdbcTemplate. Methods like getTimestampColumnName() and buildSelectBatchSql() similarly only take tableName, not JdbcTemplate. Methods that do accept JdbcTemplate parameters (e.g., tableExists(), listColumns()) are separate interface methods with different responsibilities, not part of the same pattern.

The class provides appropriate mechanisms to handle the instance jdbc field through a constructor parameter and a setter method, with validation via ensureJdbcFor(). This is the intended design—not a violation of it.

Likely an incorrect or invalid review comment.

Comment on lines +588 to +617
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled);

validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_TYPE_DEFAULT, defaultPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIODS, futureDateLimitPeriods);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIOD_TYPE, futureDateLimitPeriodType);

if (!archiveEnabled) {
validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIOD_TYPE, eventPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIODS, eventPurgePeriods);
validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIOD_TYPE, reportPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIODS, reportPurgePeriods);
validateNonNegative(response, SystemSettingsDAO.VALUES_LIMIT_FOR_PURGE, valuesLimitForPurge);
}

try {
response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
} catch (Exception ex) {
response.addContextualMessage("dataArchiveMessage", "emport.parseError");
return response;
}

if(!response.getHasMessages()) {
systemSettingsDAO = new SystemSettingsDAO();
systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
}
else {
return response;
}

⚠️ Potential issue | 🟠 Major

ARCHIVE_ENABLED is persisted before configuration validation completes.

The ARCHIVE_ENABLED flag is saved to the database at line 588-589 before the archive config JSON is validated (lines 604-617). If validation fails and returns early, the system remains in an inconsistent state where archiving is enabled but the configuration is invalid or unchanged.

Consider deferring all persistence until validation passes, or wrap the entire operation in a transaction.

♻️ Suggested fix: defer persistence until after validation
 public DwrResponseI18n saveDataRetentionAndArchiveConfig(boolean archiveEnabled, String configJson, int eventPurgePeriodType,
                                                          int eventPurgePeriods, int reportPurgePeriodType,
                                                          int reportPurgePeriods, boolean groveLogging,
                                                          int futureDateLimitPeriodType, int futureDateLimitPeriods,
                                                          int defaultPurgePeriod, int defaultPurgePeriodType, int valuesLimitForPurge) {
     Permissions.ensureAdmin();
     DwrResponseI18n response = new DwrResponseI18n();
     SystemSettingsDAO systemSettingsDAO = new SystemSettingsDAO();

-    systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
-            archiveEnabled);
-
     validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
     // ... other validations ...

     try {
         response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
     } catch (Exception ex) {
         response.addContextualMessage("dataArchiveMessage", "emport.parseError");
         return response;
     }

     if(!response.getHasMessages()) {
+        systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED, archiveEnabled);
         systemSettingsDAO = new SystemSettingsDAO();
         systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled);
validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_DEFAULT, defaultPurgePeriod);
validateNonNegative(response, SystemSettingsDAO.PURGE_POINT_VALUES_PERIOD_TYPE_DEFAULT, defaultPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIODS, futureDateLimitPeriods);
validateNonNegative(response, SystemSettingsDAO.FUTURE_DATE_LIMIT_PERIOD_TYPE, futureDateLimitPeriodType);
if (!archiveEnabled) {
validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIOD_TYPE, eventPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.EVENT_PURGE_PERIODS, eventPurgePeriods);
validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIOD_TYPE, reportPurgePeriodType);
validateNonNegative(response, SystemSettingsDAO.REPORT_PURGE_PERIODS, reportPurgePeriods);
validateNonNegative(response, SystemSettingsDAO.VALUES_LIMIT_FOR_PURGE, valuesLimitForPurge);
}
try {
response = SystemSettingsUtils.validateArchiveConfig(configJson, response);
} catch (Exception ex) {
response.addContextualMessage("dataArchiveMessage", "emport.parseError");
return response;
}
if(!response.getHasMessages()) {
systemSettingsDAO = new SystemSettingsDAO();
systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
}
else {
return response;
}
if(!response.getHasMessages()) {
systemSettingsDAO = new SystemSettingsDAO();
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED, archiveEnabled);
systemSettingsDAO.setValue(SystemSettingsDAO.ARCHIVE_CONFIG, configJson);
}
🤖 Prompt for AI Agents
In @src/com/serotonin/mango/web/dwr/SystemSettingsDwr.java around lines 588 -
617, The code currently calls
systemSettingsDAO.setBooleanValue(SystemSettingsDAO.ARCHIVE_ENABLED,
archiveEnabled) before validating the archive config via
SystemSettingsUtils.validateArchiveConfig(configJson, response), which can leave
ARCHIVE_ENABLED persisted when validation fails; move or defer all persistence
(the setBooleanValue and later setValue(SystemSettingsDAO.ARCHIVE_CONFIG,
configJson)) until after validateArchiveConfig returns with no messages, or wrap
the whole sequence in a transaction so that if validateArchiveConfig throws or
response.hasMessages() is true you roll back and no changes are saved; ensure
the logic that creates a new SystemSettingsDAO and calls setValue only executes
after successful validation and that setBooleanValue is executed in the same
post-validation block or transaction.

Comment on lines +15 to +27
public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
Key k = new Key(url, username);
return cache.computeIfAbsent(k, kk -> {
DriverManagerDataSource ds = new DriverManagerDataSource();
if (driverClassName != null && !driverClassName.isEmpty()) {
ds.setDriverClassName(driverClassName);
}
ds.setUrl(url);
ds.setUsername(username);
ds.setPassword(password);
return new JdbcTemplate(ds);
});
}

⚠️ Potential issue | 🟠 Major

Cache key excludes password, causing stale credential usage.

The cache key is built from (url, username) only, but the password parameter is used when creating the DataSource. If the archive configuration is updated with a new password while keeping the same URL and username, this method will return the previously cached JdbcTemplate configured with the old password, leading to authentication failures or security issues.

Either include the password in the cache key, or invalidate the cache entry when credentials change.

🔧 Suggested fix: Include password in the cache key
-    public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
-        Key k = new Key(url, username);
+    public JdbcTemplate getOrCreate(String driverClassName, String url, String username, String password) {
+        Key k = new Key(url, username, password);
         return cache.computeIfAbsent(k, kk -> {

And update the Key class accordingly (see next comment).

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveDataSourceManager.java around lines 15 -
27, The cache key for getOrCreate currently omits the password causing stale
credentials; update the Key used by cache (class Key) to include password (or
alternatively implement cache invalidation when credentials change) and modify
getOrCreate to construct Key with (url, username, password) so
DriverManagerDataSource/JdbcTemplate are recreated for credential changes;
ensure Key's equals and hashCode incorporate the password field and adjust any
Key constructors/accessors accordingly.

Comment on lines +38 to +54
private static final class Key {
final String url;
final String user;

Key(String url, String user) {
this.url = Objects.requireNonNull(url, "url");
this.user = Objects.requireNonNull(user, "user");
}

@Override public boolean equals(Object o) {
if (this == o) return true;
if (!(o instanceof Key)) return false;
Key key = (Key) o;
return url.equals(key.url) && user.equals(key.user);
}
@Override public int hashCode() { return Objects.hash(url, user); }
}

🛠️ Refactor suggestion | 🟠 Major

Include password in the Key class to match cache lookup semantics.

The Key class should include the password to ensure that credential changes result in a new DataSource being created rather than returning a stale cached instance.

♻️ Proposed fix
     private static final class Key {
         final String url;
         final String user;
+        final String password;

-        Key(String url, String user) {
+        Key(String url, String user, String password) {
             this.url = Objects.requireNonNull(url, "url");
             this.user = Objects.requireNonNull(user, "user");
+            this.password = password; // password may be null or empty
         }

         @Override public boolean equals(Object o) {
             if (this == o) return true;
             if (!(o instanceof Key)) return false;
             Key key = (Key) o;
-            return url.equals(key.url) && user.equals(key.user);
+            return url.equals(key.url) && user.equals(key.user) 
+                && Objects.equals(password, key.password);
         }
-        @Override public int hashCode() { return Objects.hash(url, user); }
+        @Override public int hashCode() { return Objects.hash(url, user, password); }
     }
🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveDataSourceManager.java around lines 38 -
54, The Key class must include the password field so cache lookups reflect
credential changes: add a final String password field to class Key,
requireNonNull it in the Key(String url, String user, String password)
constructor (and update all call sites that construct Key to pass the password),
and update equals and hashCode to include password alongside url and user so
cached DataSource instances are keyed by url+user+password.

Comment on lines +20 to +22
public static IArchiveQueryProvider newInstance(JdbcTemplate targetJdbc) {
return forTarget(targetJdbc, targetJdbc);
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Locate the factory file
find . -name "ArchiveQueryProviderFactory.java" -type f

Repository: SCADA-LTS/Scada-LTS

Length of output: 123


🏁 Script executed:

# Locate the provider file
find . -name "PostgresArchiveQueryProvider.java" -type f



🏁 Script executed:

# Search for IArchiveQueryProvider interface
find . -name "IArchiveQueryProvider.java" -type f



🏁 Script executed:

# Read the factory file
cat -n src/org/scada_lts/archive/ArchiveQueryProviderFactory.java



🏁 Script executed:

# Read the provider file to check setJdbcTemplate
cat -n src/org/scada_lts/archive/PostgresArchiveQueryProvider.java



🏁 Script executed:

# Read the interface
cat -n src/org/scada_lts/archive/IArchiveQueryProvider.java



🏁 Script executed:

# Search for usages of newInstance, forTarget, and get
rg -n --type=java "ArchiveQueryProviderFactory\.(newInstance|forTarget|get)\(" -B 2 -A 2



🏁 Script executed:

# Check if get() method is used and whether PostgresArchiveQueryProvider jdbc is set
rg -n --type=java "\.get\(\)" | grep -i archive || echo "No direct matches"
rg -n "factory.*get\|\.get\(\)" WebContent/WEB-INF/applicationContext.xml 2>/dev/null || echo "No XML config found"



🏁 Script executed:

# Search for showCreateTable usage to understand context
rg -n --type=java "showCreateTable" -B 2 -A 2



🏁 Script executed:

# Verify if MySqlArchiveQueryProvider also has showCreateTable and whether it needs jdbc field
cat -n src/org/scada_lts/archive/MySqlArchiveQueryProvider.java | head -40



🏁 Script executed:

# Verify if get() method is actually used anywhere in codebase
rg -n "ArchiveQueryProviderFactory.*get()" --type=java
rg -n "\.get\(\)" src/org/scada_lts/archive/ --type=java | grep -i provider || echo "No get() usage in archive"



Factory API is misleading and inconsistent; newInstance() should be removed or clarified, and get() is incomplete for PostgreSQL.

  • newInstance(targetJdbc) passes the same JdbcTemplate as both target and source metadata, implying archive DB ≠ source DB is unsupported. This contradicts the actual usage pattern (see ArchiveService line 61: forTarget(archiveJdbc, jdbc) with separate DBs).
  • get() creates PostgresArchiveQueryProvider without setting the JdbcTemplate field, but showCreateTable() (line 79) calls ensureJdbcFor() and throws if jdbc is null. MySqlArchiveQueryProvider.showCreateTable() only generates SQL, so it doesn't have this issue. The get() method is unused, but this is a trap for future callers.
  • API inconsistency: get() (instance-based, line 24–31) vs forTarget() (static, line 33–59) have different contracts for wiring JdbcTemplate.

Either remove newInstance() and get() entirely (if only forTarget() is needed), or refactor both to accept explicit JdbcTemplate parameters and set them consistently in PostgresArchiveQueryProvider.
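The second option could be sketched as follows. JdbcTemplate is stubbed here so the snippet stands alone (the real class is Spring's org.springframework.jdbc.core.JdbcTemplate), and the constructor shape of PostgresArchiveQueryProvider is an assumption for illustration, not the class's current API:

```java
import java.util.Objects;

// Stand-in stub so the sketch compiles without Spring on the classpath.
class JdbcTemplate { }

interface IArchiveQueryProvider { }

class PostgresArchiveQueryProvider implements IArchiveQueryProvider {
    final JdbcTemplate jdbc;       // archive/target connection, used by showCreateTable()
    final JdbcTemplate sourceJdbc; // source-side metadata lookups

    PostgresArchiveQueryProvider(JdbcTemplate jdbc, JdbcTemplate sourceJdbc) {
        // Failing fast here means ensureJdbcFor() can never observe a null jdbc.
        this.jdbc = Objects.requireNonNull(jdbc, "jdbc");
        this.sourceJdbc = Objects.requireNonNull(sourceJdbc, "sourceJdbc");
    }
}

final class ArchiveQueryProviderFactory {
    private ArchiveQueryProviderFactory() { }

    // Single entry point: both templates are explicit, so archive DB != source DB
    // is supported and the provider is always fully wired.
    static IArchiveQueryProvider forTarget(JdbcTemplate targetJdbc, JdbcTemplate sourceJdbc) {
        return new PostgresArchiveQueryProvider(targetJdbc, sourceJdbc);
    }
}
```

With only this static method, the newInstance()/get() ambiguity disappears and a half-initialized provider cannot be constructed.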

🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveQueryProviderFactory.java around lines 20 -
22, Remove the confusing/newInstance() and instance get() API and standardize on
the static forTarget(JdbcTemplate targetJdbc, JdbcTemplate sourceJdbc) factory:
delete newInstance() and get(), update callers (e.g., ArchiveService usage which
already calls forTarget(archiveJdbc, jdbc)) to use forTarget, and ensure
forTarget fully wires the created PostgresArchiveQueryProvider by setting its
internal jdbc/source JdbcTemplate fields so showCreateTable() and
ensureJdbcFor() cannot see a null jdbc; alternatively if you prefer to keep
instance APIs, refactor both newInstance/get and forTarget to accept explicit
JdbcTemplate parameters and consistently set the
PostgresArchiveQueryProvider.jdbc field before returning.

Comment on lines +24 to +37
private static final int MAX_BATCHES_PER_TASK = 10_000;

private static final Set<String> ALLOWED_TABLES = Set.of(
"pointValues",
"events",
"userEvents",
"reportInstances"
);

private final IArchiveQueryProvider queries;

public ArchiveService(IArchiveQueryProvider queries) {
this.queries = queries;
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check Java version configuration in build files
fd -t f "\.(xml|gradle|properties)$" --max-results 20 | head -20



🏁 Script executed:

# Check for Java version configuration in pom.xml
rg -i "maven.compiler|sourceCompatibility|targetCompatibility|<source>|<target>" --type xml -A 2 -B 2



🏁 Script executed:

# Read the ArchiveService.java file to verify all claims
cat -n src/org/scada_lts/archive/ArchiveService.java | head -100



🏁 Script executed:

# Check lines around the ones mentioned (24-37, 39-65, 451, 484-529, 536-537, 565-566)
sed -n '24,70p' src/org/scada_lts/archive/ArchiveService.java



🏁 Script executed:

# Check gradle.properties for Java version
cat gradle.properties



🏁 Script executed:

# Read lines 451, 484-529, 536-537, 565-566 for var, String.isBlank(), and lock helpers
sed -n '445,475p;484,530p;536,540p;565,570p' src/org/scada_lts/archive/ArchiveService.java



🏁 Script executed:

# Check if ArchiveConfig has driverClassName field
fd -t f "ArchiveConfig.java" | head -1 | xargs -I {} sh -c 'echo "File: {}"; cat -n {}'



🏁 Script executed:

# Search for usage of this.queries in ArchiveService to confirm it's unused
rg -n "this\.queries" src/org/scada_lts/archive/ArchiveService.java



🏁 Script executed:

# Find build.gradle and check for Java version configuration
fd -t f "build.gradle" | head -5 | xargs -I {} sh -c 'echo "File: {}"; head -100 {}'



🏁 Script executed:

# Check if this.queries is used anywhere else in ArchiveService.java
rg -n "queries" src/org/scada_lts/archive/ArchiveService.java | head -20



🏁 Script executed:

# Check if tryLockTable and releaseLockTable methods are actually called
rg -n "tryLockTable|releaseLockTable" src/org/scada_lts/archive/ArchiveService.java



🏁 Script executed:

# Verify if driverClassName is actually used anywhere in ArchiveService
rg -n "driverClassName|getDriverClassName" src/org/scada_lts/archive/ArchiveService.java



Remove unused injected dependency and apply driverClassName to DataSource.

The ArchiveService constructor accepts IArchiveQueryProvider queries, but runArchive() creates a new provider via ArchiveQueryProviderFactory.forTarget(...) at line 61 and never uses the injected this.queries. Either remove the injected dependency and fix Spring wiring, or inject a factory and use it consistently.

Additionally, ArchiveConfig provides driverClassName (line 17), but it is not applied when creating DriverManagerDataSource at lines 54-58. Set archiveDataSource.setDriverClassName(config.getDriverClassName()) if the driver class is configured.
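A minimal sketch of that guard; DriverManagerDataSource is stubbed here as a stand-in for Spring's org.springframework.jdbc.datasource.DriverManagerDataSource, and the builder method is hypothetical:

```java
// Stub mirroring the setters used by ArchiveService, so the sketch is self-contained.
class DriverManagerDataSource {
    private String driverClassName;
    private String url;
    void setDriverClassName(String name) { this.driverClassName = name; }
    void setUrl(String u) { this.url = u; }
    String getDriverClassName() { return driverClassName; }
    String getUrl() { return url; }
}

final class ArchiveDataSourceBuilder {
    // Apply the configured driver class only when one is actually set;
    // otherwise rely on JDBC driver auto-detection from the URL.
    static DriverManagerDataSource build(String driverClassName, String url) {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl(url);
        if (driverClassName != null && !driverClassName.trim().isEmpty()) {
            ds.setDriverClassName(driverClassName);
        }
        return ds;
    }
}
```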

🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/ArchiveService.java around lines 24 - 37,
ArchiveService currently accepts an injected IArchiveQueryProvider (field
queries) but runArchive() ignores it and constructs a new provider via
ArchiveQueryProviderFactory.forTarget(...); change this by either removing the
unused constructor parameter and field or (preferred) inject a factory/provider
creator and use it inside runArchive() instead of calling
ArchiveQueryProviderFactory.forTarget(...) so the injected dependency is
consistently used (e.g., replace direct factory call with
injectedArchiveQueryProviderFactory.forTarget(...)). Also ensure the
ArchiveConfig-provided driver class is applied when building the JDBC source:
after creating DriverManagerDataSource archiveDataSource set
archiveDataSource.setDriverClassName(config.getDriverClassName()) when
config.getDriverClassName() is non-null/non-empty so the configured driver is
used.

Comment on lines +8 to +29
public interface IArchiveQueryProvider {

String showCreateTable(String tableName);
String getTimestampColumnName(String tableName);
boolean tableExists(JdbcTemplate jdbc, String tableName);
List<String> listColumns(JdbcTemplate jdbc, String tableName);
boolean tryLockTable(JdbcTemplate archiveJdbc, String tableName);
void releaseLockTable(JdbcTemplate archiveJdbc, String tableName);
void applyPerTransactionTimeout(JdbcTemplate jdbc, int seconds);
void clearPerTransactionTimeout(JdbcTemplate jdbc);
String buildSelectBatchSql(String tableName, String selectColumns, String tsColumn, boolean haveCursor, int limit);
List<Object> bindSelectBatchArgs(long beforeTsMillis, boolean haveCursor, long lastTs, long lastId);
String buildVerifyIdsSql(String tableName, int idCount);
String buildDeleteChunkSql(String tableName, int idCount);
String buildCountIdsSql(String tableName, int idCount);
Map<String, String> columnTypes(JdbcTemplate jdbc, String tableName);
String buildAddColumnSql(String tableName, String columnName, String columnType);
String archivedAtColumnName();
String archivedAtSqlType();
String buildInsertWithExtraColumnSql(String tableName, String selectColumns, String extraColumn);
boolean hasColumn(JdbcTemplate jdbc, String tableName, String columnName);


⚠️ Potential issue | 🟠 Major

Harden identifier inputs + define preconditions for idCount/limit + clarify lock contract.

Because this interface builds SQL from identifiers/fragments (tableName, columnName, selectColumns, extraColumn), it’s critical that callers only pass allowlisted values (and implementations quote identifiers correctly). Also please define/guard preconditions like idCount > 0 and limit > 0 to avoid generating invalid SQL. Finally, tryLockTable(JdbcTemplate archiveJdbc, ...) should explicitly document whether the lock is taken on the main DB, archive DB, or both (given the “global lock across nodes” goal).
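Such guards might look like the sketch below. The table allowlist mirrors ALLOWED_TABLES from ArchiveService; the helper names (validateTable, validateIdentifier, buildDeleteChunkSql) are illustrative, not the interface's final API:

```java
import java.util.Set;
import java.util.regex.Pattern;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

final class ArchiveSqlGuards {
    // Only these tables may ever appear in generated SQL.
    private static final Set<String> ALLOWED_TABLES =
            Set.of("pointValues", "events", "userEvents", "reportInstances");
    // Conservative identifier shape: letters, digits, underscore, no leading digit.
    private static final Pattern IDENTIFIER = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    static String validateTable(String tableName) {
        if (tableName == null || !ALLOWED_TABLES.contains(tableName)) {
            throw new IllegalArgumentException("Table not allowlisted: " + tableName);
        }
        return tableName;
    }

    static String validateIdentifier(String name) {
        if (name == null || !IDENTIFIER.matcher(name).matches()) {
            throw new IllegalArgumentException("Invalid SQL identifier: " + name);
        }
        return name;
    }

    // Precondition idCount > 0 keeps invalid "IN ()" SQL from being generated.
    static String buildDeleteChunkSql(String tableName, int idCount) {
        validateTable(tableName);
        if (idCount <= 0) {
            throw new IllegalArgumentException("idCount must be > 0, got " + idCount);
        }
        String placeholders = IntStream.range(0, idCount)
                .mapToObj(i -> "?")
                .collect(Collectors.joining(","));
        return "DELETE FROM " + tableName + " WHERE id IN (" + placeholders + ")";
    }
}
```

The same validate-then-build pattern would apply to buildVerifyIdsSql, buildCountIdsSql, and the column-altering methods.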

🤖 Prompt for AI Agents
In @src/org/scada_lts/archive/IArchiveQueryProvider.java around lines 8 - 29,
The interface methods that accept identifier fragments (showCreateTable,
buildSelectBatchSql, buildVerifyIdsSql, buildDeleteChunkSql, buildCountIdsSql,
buildAddColumnSql, buildInsertWithExtraColumnSql, hasColumn, columnTypes and
related) must validate or document that callers supply allowlisted/validated
identifiers and implementations must quote identifiers safely; add or reference
a helper (e.g., validateIdentifier/quoteIdentifier) and throw
IllegalArgumentException on invalid input. Add precondition checks in
bindSelectBatchArgs/buildSelectBatchSql/buildVerifyIdsSql/buildDeleteChunkSql/buildCountIdsSql
to require idCount > 0 and limit > 0 (or explicitly document allowed zero
behavior) and throw IllegalArgumentException when violated. Clarify the lock
contract for tryLockTable/releaseLockTable in the JavaDoc (which DB the lock is
intended for) and ensure implementations use the provided archiveJdbc
consistently to acquire a global lock across nodes (or change signature if main
DB locking is required). Finally, add short JavaDoc to
archivedAtColumnName/archivedAtSqlType indicating they are safe constants and
any caller constraints.

Comment on lines +178 to +181
//Data archiving
public static final String ARCHIVE_CONFIG = "archiveConfig";
public static final String ARCHIVE_ENABLED = "archiveEnabled";


⚠️ Potential issue | 🟠 Major

Don’t crash on invalid ARCHIVE_CONFIG JSON; fail closed and log.

getArchiveConfig() throws a RuntimeException on parse errors, which can break unrelated flows. Consider logging and returning an empty ArchiveConfig (so the job effectively no-ops), or throwing a typed exception that callers explicitly handle.

Proposed diff
 public class SystemSettingsDAO {
+    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();
@@
 	public static ArchiveConfig getArchiveConfig() {
 		String json = getValue(ARCHIVE_CONFIG, null);
 		if (json == null || json.trim().isEmpty())
 			return new ArchiveConfig();
 		try {
-			return new ObjectMapper().readValue(json, ArchiveConfig.class);
+			return OBJECT_MAPPER.readValue(json, ArchiveConfig.class);
 		} catch (Exception e) {
-			throw new RuntimeException("Cannot parse ARCHIVE_CONFIG JSON: " + e.getMessage(), e);
+			LOG.error("Cannot parse ARCHIVE_CONFIG JSON; archiving will be disabled until fixed. Value length="
+					+ json.length(), e);
+			return new ArchiveConfig();
 		}
 	}
 }

Also applies to: 438-440, 506-515

🤖 Prompt for AI Agents
In @src/org/scada_lts/dao/SystemSettingsDAO.java around lines 178 - 181,
getArchiveConfig() in SystemSettingsDAO currently lets JSON parse errors
propagate as RuntimeException; change it to catch JSON parsing/IO exceptions
(e.g., JsonProcessingException/IOException), log an error with context
(including the bad JSON and key ARCHIVE_CONFIG) via the class logger, and return
a safe default new ArchiveConfig() (so the archive job no-ops) instead of
throwing; apply the same pattern to the other parsing sites noted (the other
getArchiveConfig-like blocks around the file) so none of them crash unrelated
flows and all log the invalid payloads.

Comment on lines +24 to 25
import static org.scada_lts.dao.SystemSettingsDAO.ARCHIVE_ENABLED;
import static org.scada_lts.utils.CreateObjectUtils.parseObjects;

⚠️ Potential issue | 🔴 Critical

ARCHIVE_ENABLED is read using the wrong property key (likely always defaulting).

getArchiveEnabled() reads ScadaConfig with ARCHIVE_ENABLED imported from SystemSettingsDAO ("archiveEnabled"). If the intended env key is systemsettings.archive.enabled, this will never pick up the configured value.

Proposed diff
-import static org.scada_lts.dao.SystemSettingsDAO.ARCHIVE_ENABLED;
+// Env/property key (not the DB systemSettings key)
+private static final String ARCHIVE_ENABLED_PROP = "systemsettings.archive.enabled";
@@
     public static boolean getArchiveEnabled() {
         boolean defaultValue = false;
         try {
-            String value = ScadaConfig.getInstance().getConf().getProperty(ARCHIVE_ENABLED, String.valueOf(defaultValue));
+            String value = ScadaConfig.getInstance().getConf().getProperty(ARCHIVE_ENABLED_PROP, String.valueOf(defaultValue));
             return Boolean.parseBoolean(value);
         } catch (Exception e) {
             LOG.error("Error reading archive.config", e);
             return defaultValue;
         }
     }

Also applies to: 898-907

🤖 Prompt for AI Agents
In @src/org/scada_lts/utils/SystemSettingsUtils.java around lines 24 - 25, The
code in SystemSettingsUtils.getArchiveEnabled() (and the similar block around
lines 898-907) uses the ARCHIVE_ENABLED constant from SystemSettingsDAO
("archiveEnabled") which is the wrong property key and causes defaults to be
used; update those calls to read ScadaConfig with the correct key
"systemsettings.archive.enabled" (or a new/appropriate constant named e.g.
SYSTEMSETTINGS_ARCHIVE_ENABLED) instead of ARCHIVE_ENABLED so the configured
environment value is picked up; ensure you change all occurrences in
getArchiveEnabled() and the repeated block at 898-907 to use the new key.

Comment on lines +219 to +222
scadalts.datapoint.unreliable.execute-in-pool-if-tasks-exceeds=2

systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts","dbUsername":"root","dbPassword":"root","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
systemsettings.archive.enabled=false

⚠️ Potential issue | 🟠 Major

Avoid shipping default DB creds / potentially dangerous “archive-to-primary” example config.

The example config hardcodes root/root and uses a DB name that looks like the primary (scadalts)—easy to copy-paste into production and/or misconfigure archiving to the same DB. Also, having "enabled" inside JSON plus systemsettings.archive.enabled invites drift if both are read.

Safer example (placeholders + explicit archive DB name)
-systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts","dbUsername":"root","dbPassword":"root","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
+systemsettings.archive.config={"enabled":false,"dbUrl":"jdbc:mysql://localhost:3308/scadalts_archive","dbUsername":"CHANGE_ME","dbPassword":"CHANGE_ME","batchSize":1000,"cron":"0 0 2 * * ?","tasks":[{"ageValue":30,"ageUnit":"4","function":"COPY_TO_ARCHIVE","table":"pointValues"}]}
 systemsettings.archive.enabled=false
🤖 Prompt for AI Agents
In @webapp-resources/env.properties around lines 219 - 222, Replace the unsafe
example archive config: remove hard-coded credentials and primary DB name in
systemsettings.archive.config (and avoid duplicating enabled flags), replace the
"dbUrl", "dbUsername", "dbPassword" values with clearly-marked placeholders
(e.g. ARCHIVE_DB_URL, ARCHIVE_DB_USER, ARCHIVE_DB_PASS) and use an explicit
archive-specific database name (not "scadalts"); ensure only one source of truth
for enabling (either the "enabled" inside systemsettings.archive.config or the
separate systemsettings.archive.enabled) and set the default to false so the
shipped defaults cannot accidentally archive to the primary DB.

Comment on lines +381 to +392
<bean id="archiveQueryProviderFactory"
class="org.scada_lts.archive.ArchiveQueryProviderFactory">
<constructor-arg ref="databaseAccess"/>
</bean>

<bean id="archiveQueryProvider"
factory-bean="archiveQueryProviderFactory"
factory-method="get"/>

<bean id="archiveService" class="org.scada_lts.archive.ArchiveService">
<constructor-arg ref="archiveQueryProvider"/>
</bean>

⚠️ Potential issue | 🟠 Major

Spring wiring likely injects a provider that ArchiveService doesn’t use.

archiveService is constructed with archiveQueryProvider, but ArchiveService.runArchive() currently creates a provider via ArchiveQueryProviderFactory.forTarget(...) instead. Either:

  • inject a factory (or datasource manager) into the service and use that, or
  • remove the constructor arg + bean wiring to avoid confusion.
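If the factory-injection route is taken, the wiring could look like this (bean ids from this PR; the changed ArchiveService constructor signature is assumed, not yet present in the code):

```xml
<bean id="archiveQueryProviderFactory"
      class="org.scada_lts.archive.ArchiveQueryProviderFactory">
    <constructor-arg ref="databaseAccess"/>
</bean>

<!-- Inject the factory itself; runArchive() then calls forTarget(...) on it.
     The intermediate archiveQueryProvider bean is dropped entirely. -->
<bean id="archiveService" class="org.scada_lts.archive.ArchiveService">
    <constructor-arg ref="archiveQueryProviderFactory"/>
</bean>
```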
🤖 Prompt for AI Agents
In @WebContent/WEB-INF/applicationContext.xml around lines 381 - 392, The bean
wiring injects archiveQueryProvider into ArchiveService but
ArchiveService.runArchive() ignores it and calls
ArchiveQueryProviderFactory.forTarget(...); update the service to use the
injected provider (remove the internal call to
ArchiveQueryProviderFactory.forTarget in ArchiveService.runArchive and use the
constructor-injected ArchiveQueryProvider instance) or alternatively change the
wiring to inject the factory instead (replace the constructor-arg on the
archiveService bean with a reference to archiveQueryProviderFactory and update
ArchiveService constructor/signature to accept and use
ArchiveQueryProviderFactory.forTarget), and remove the unused
archiveQueryProvider bean if you choose the factory injection route to avoid
conflicting/unused beans.
