Version: 1.0
Date: 2025-12-12
Status: [Pending Approval]
Project Duration: 4 weeks
Estimated Effort: 13 developer-days
Budget: €6,500 (@ €500/day)
This roadmap provides a phased implementation plan to remediate 7 CRITICAL and 12 HIGH severity issues identified in the architectural review of the Cloudempiere Search Index plugin.
Phase 0 (Week 1):
- ✅ Zero SQL injection vulnerabilities
- ✅ Multi-tenant data integrity verified
- ✅ All syntax errors fixed
Phase 1 (Week 2):
- ✅ Transaction isolation implemented
- ✅ Database migration tested
- ✅ Zero data corruption incidents
Phase 2 (Week 3):
- ✅ Thread-safe event processing
- ✅ Cache invalidation working
- ✅ RBAC enforced
Phase 3 (Week 4):
- ✅ Search performance <100ms
- ✅ Code coverage >80%
- ✅ Production-ready release
- Investment: €6,500 (13 days)
- Avoided Costs: €60,000+/year (incidents + support + churn)
- Break-even: 1 month
- Payback: 9× return in first year
Goal: Stop data corruption and security vulnerabilities
Duration: 3 days Effort: 3 developer-days Cost: €1,500
Priority: P0 - CRITICAL Effort: 4 hours Owner: [Developer Name]
Issue: Finding 5.2 - Multi-tenant data corruption risk
Files to Change:
com.cloudempiere.searchindex/src/com/cloudempiere/searchindex/indexprovider/pgtextsearch/PGTextSearchIndexProvider.java:115
Changes:
// Line 115
- .append("ON CONFLICT (ad_table_id, record_id) DO UPDATE " +
+ .append("ON CONFLICT (ad_client_id, ad_table_id, record_id) DO UPDATE " +
Database Migration:
# Create migration script
touch postgresql/migration/202512120001_fix_multitenant_unique.sql
# See ADR-006 for complete migration script
Testing:
# Unit test
mvn test -Dtest=PGTextSearchIndexProviderTest#testMultiTenantIsolation
# Integration test (manual)
# 1. Create index entries for Client A and B with same record_id
# 2. Update Client B's entry
# 3. Verify Client A's entry unchanged
Success Criteria:
- ✅ UNIQUE constraint includes ad_client_id
- ✅ Multi-client test passes
- ✅ No duplicate entries in searchindex_* tables
References:
- ADR-006: Multi-Tenant Data Integrity
- ARCHITECTURAL-ANALYSIS-2025.md:238-255
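ADR-006 holds the authoritative migration script. Purely to illustrate the shape of the change, a hedged sketch (the index name and the BEGIN/COMMIT wrapping are assumptions, not taken from ADR-006; verify against the actual schema before use):

```sql
-- Sketch only; the real script lives in ADR-006.
BEGIN;
-- Drop the old two-column unique index (name here is hypothetical)
DROP INDEX IF EXISTS searchindex_product_uq;
-- Recreate it with ad_client_id so tenants cannot collide on record_id
CREATE UNIQUE INDEX searchindex_product_uq
    ON searchindex_product (ad_client_id, ad_table_id, record_id);
COMMIT;
```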
Priority: P0 - CRITICAL Effort: 1 hour Owner: [Developer Name]
Issue: Finding 5.1 - Broken getAD_SearchIndexProvider_ID() method
Files to Change:
com.cloudempiere.searchindex/src/com/cloudempiere/searchindex/model/MSearchIndex.java:211
Changes:
// Line 211
- String sql = "... WHERE AD_Client_ID = IN(0, ?)";
+ String sql = "... WHERE AD_Client_ID IN (0, ?)";
Testing:
@Test
public void testGetAD_SearchIndexProvider_ID() {
int providerId = MSearchIndex.getAD_SearchIndexProvider_ID(
"product_idx", Env.getAD_Client_ID(getCtx())
);
assertTrue(providerId > 0);
}
Success Criteria:
- ✅ SQL syntax valid
- ✅ Method returns correct provider ID
- ✅ Unit test passes
Priority: P0 - CRITICAL Effort: 2 days Owner: [Developer Name]
Issue: Findings 1.1, 1.2, 1.3 - SQL injection vulnerabilities
New Files to Create:
com.cloudempiere.searchindex/src/com/cloudempiere/searchindex/util/SearchIndexSecurityValidator.java
Files to Change:
PGTextSearchIndexProvider.java:148-151, 167-170, 144, 163, 361
SearchIndexConfigBuilder.java:320-326
Implementation:
// Create SearchIndexSecurityValidator class
public class SearchIndexSecurityValidator {
// Method 1: validateWhereClause(String)
// Method 2: validateTableName(String, String)
// Method 3: validateColumnName(String, String, String)
}
// Apply to all SQL concatenation points
Testing:
// Security test suite
@Test(expected = AdempiereException.class)
public void testSQLInjection_DropTable() {
SearchIndexSecurityValidator.validateWhereClause(
"IsActive='Y'; DROP TABLE M_Product; --"
);
}
@Test
public void testValidWhereClause() {
SearchIndexSecurityValidator.validateWhereClause(
"IsActive='Y' AND Created > '2025-01-01'"
);
}
// Run security scan
mvn sonar:sonar -Dsonar.security.hotspots=true
Success Criteria:
- ✅ Zero SQL injection vulnerabilities (SonarQube scan)
- ✅ All malicious inputs rejected
- ✅ Valid inputs accepted
- ✅ Security tests pass
References:
- ADR-002: SQL Injection Prevention
- ARCHITECTURAL-ANALYSIS-2025.md:138-176
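The class body above is only an outline. As a runnable illustration of one whitelist-style approach, here is a self-contained sketch; the regex, the forbidden-token list, and the use of IllegalArgumentException in place of AdempiereException are all assumptions made so the example compiles on its own, not the plugin's actual implementation:

```java
import java.util.regex.Pattern;

/**
 * Sketch of a whitelist-based validator. Real code would throw
 * AdempiereException; IllegalArgumentException keeps this self-contained.
 */
public class SearchIndexSecurityValidator {

    // iDempiere-style identifiers: a letter, then letters/digits/underscores
    private static final Pattern IDENTIFIER =
            Pattern.compile("^[A-Za-z][A-Za-z0-9_]*$");

    // Tokens that have no business in a user-supplied WHERE clause
    private static final String[] FORBIDDEN = { ";", "--", "/*", "*/" };

    public static void validateTableName(String tableName) {
        if (tableName == null || !IDENTIFIER.matcher(tableName).matches())
            throw new IllegalArgumentException("Invalid table name: " + tableName);
    }

    public static void validateWhereClause(String whereClause) {
        if (whereClause == null)
            return; // no restriction supplied
        for (String token : FORBIDDEN) {
            if (whereClause.contains(token))
                throw new IllegalArgumentException(
                        "Disallowed token in WHERE clause: " + token);
        }
    }
}
```

A whitelist (accept only known-good shapes) is generally preferred over a blacklist for identifiers; the token blacklist for WHERE clauses is a stopgap, since parameterized SQL remains the real defense.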
Priority: P0 - CRITICAL Effort: 2 hours Owner: [Release Manager]
Activities:
- Build hotfix bundle
cd com.cloudempiere.searchindex
mvn clean install
- Create release tag
git tag -a v1.0.1-hotfix -m "Hotfix: SQL injection + multi-tenant bug"
git push origin v1.0.1-hotfix
- Deploy to staging
# Copy to staging environment
scp target/com.cloudempiere.searchindex-1.0.1.jar staging:/opt/idempiere/plugins/
# Restart OSGi bundle
ssh staging "cd /opt/idempiere && ./restart-plugin.sh searchindex"
- Run database migration
psql -U adempiere -d idempiere -f postgresql/migration/202512120001_fix_multitenant_unique.sql
- Validation tests
# Test multi-tenant isolation
# Test SQL injection protection
# Test search functionality
- Deploy to production (after 24h on staging)
# Same process as staging
Success Criteria:
- ✅ Hotfix deployed to staging successfully
- ✅ All tests pass on staging
- ✅ Zero critical issues in 24h monitoring
- ✅ Production deployment completed
Goal: Ensure transactional consistency and multi-tenant isolation
Duration: 1 week Effort: 3 developer-days Cost: €1,500
Priority: P0 - CRITICAL Effort: 2 days Owner: [Developer Name]
Issue: Finding 2.1 - Transaction boundary violations
Files to Change:
com.cloudempiere.searchindex/src/com/cloudempiere/searchindex/event/SearchIndexEventHandler.java:62-65, 107, 197, 203, 215, 221, 225
Changes:
// Remove instance variables (thread safety issue)
- private Properties ctx = null;
- private String trxName = null;
// Use local variables in doHandleEvent()
@Override
protected void doHandleEvent(Event event) {
PO eventPO = getPO(event);
Properties ctx = Env.getCtx(); // Local variable
// Use separate transaction for index operations
Trx indexTrx = Trx.get(Trx.createTrxName("SearchIdx"), true);
try {
String indexTrxName = indexTrx.getTrxName();
provider.createIndex(ctx, builder.build().getData(false), indexTrxName);
indexTrx.commit();
} catch (Exception e) {
indexTrx.rollback();
log.log(Level.SEVERE, "Failed to update search index", e);
} finally {
indexTrx.close();
}
}
Testing:
@Test
public void testTransactionIsolation() {
// Create PO with transaction
Trx poTrx = Trx.get("TestPO", true);
MProduct product = new MProduct(ctx, 0, poTrx.getTrxName());
product.setName("Test Product");
product.saveEx();
// Rollback PO transaction
poTrx.rollback();
// Verify: Index still updated (separate transaction)
int count = DB.getSQLValue(null,
"SELECT COUNT(*) FROM searchindex_product WHERE record_id=?",
product.getM_Product_ID()
);
assertEquals(1, count); // Index persisted despite rollback
}
@Test
public void testIndexFailureDoesNotBlockPO() {
// Simulate index failure (e.g., Elasticsearch down)
// Verify: PO save still succeeds
}
Success Criteria:
- ✅ Event handler uses separate transactions
- ✅ PO save succeeds even if index update fails
- ✅ Index consistency >99.9%
- ✅ Zero transaction deadlocks
References:
- ADR-001: Transaction Isolation Strategy
- ARCHITECTURAL-ANALYSIS-2025.md:178-195
Priority: P1 - HIGH Effort: 1 day Owner: [DBA]
Activities:
- Test migration on copy of production data
# Create test database
createdb idempiere_test
pg_dump idempiere_prod | psql idempiere_test
# Run migration
psql idempiere_test -f postgresql/migration/202512120001_fix_multitenant_unique.sql
# Validate
psql idempiere_test -c "
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename LIKE 'searchindex_%'
AND indexdef LIKE '%ad_client_id%'
"
- Performance testing
-- Measure migration time
\timing on
-- Run migration script
-- Expected: <5 minutes for 100K records
- Rollback testing
# Test rollback procedure
# Ensure data is recoverable if migration fails
Success Criteria:
- ✅ Migration completes in <10 minutes
- ✅ Zero data loss
- ✅ All indexes valid after migration
- ✅ Rollback procedure tested and working
Priority: P1 - HIGH Effort: 0.5 days Owner: [QA Team]
Test Scenarios:
- Multi-client search isolation
Given: Client A and Client B both have M_Product record_id=500
When: Client A searches for "product"
Then: Only Client A's product returned
And: Client B's product not visible
- Transaction rollback
Given: Product save in transaction T1
When: Index update succeeds in separate transaction T2
And: T1 rolls back
Then: Product not saved
But: Index entry remains (manual cleanup job will fix)
- Concurrent updates
Given: 100 concurrent product updates
When: All updates complete
Then: All index entries correct
And: Zero duplicate entries
And: Zero missing entries
Success Criteria:
- ✅ All test scenarios pass
- ✅ Zero data corruption incidents
- ✅ Performance acceptable (<500ms per transaction)
Goal: Thread-safe operations and security compliance
Duration: 1 week Effort: 4 developer-days Cost: €2,000
Priority: P1 - HIGH Effort: 2 days Owner: [Developer Name]
Issue: Finding 3.1, 3.2 - Thread safety violations
Files to Change:
SearchIndexEventHandler.java:62-65 (instance variables)
PGTextSearchIndexProvider.java:79 (HashMap → ConcurrentHashMap)
Changes:
// SearchIndexEventHandler.java
- private Properties ctx = null; // REMOVE
- private String trxName = null; // REMOVE
- private Map<Integer, Set<IndexedTable>> indexedTablesByClient = null;
+ private final ConcurrentHashMap<Integer, Set<IndexedTable>> indexedTablesByClient = new ConcurrentHashMap<>();
// PGTextSearchIndexProvider.java:79
- private HashMap<Integer, String> indexQuery = new HashMap<>();
+ private final ConcurrentHashMap<Integer, String> indexQuery = new ConcurrentHashMap<>();
Testing:
@Test
public void testConcurrentEventProcessing() {
// Create 100 concurrent events
ExecutorService executor = Executors.newFixedThreadPool(10);
List<Future<Boolean>> futures = new ArrayList<>();
for (int i = 0; i < 100; i++) {
int clientId = (i % 2 == 0) ? 1000 : 1001; // Alternate clients
futures.add(executor.submit(() -> {
// Trigger event
MProduct product = new MProduct(ctx, 0, null);
product.setName("Product " + System.currentTimeMillis());
product.saveEx();
return true;
}));
}
// Wait for all to complete
for (Future<Boolean> future : futures) {
assertTrue(future.get());
}
// Verify: No ConcurrentModificationException
// Verify: All products indexed correctly
}
Success Criteria:
- ✅ Zero instance variables in event handler
- ✅ All collections thread-safe
- ✅ 100+ concurrent events processed without errors
- ✅ No race conditions detected
References:
- ADR-003: Thread Safety Model (to be created)
- ARCHITECTURAL-ANALYSIS-2025.md:196-218
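For intuition on why this swap matters, a minimal self-contained demo (class, method, and key names are illustrative, not taken from the plugin): ConcurrentHashMap.computeIfAbsent is atomic per key, so many event threads can populate a shared query cache concurrently without external locking and without ConcurrentModificationException.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentCacheDemo {

    // Populate a shared per-table query cache from many threads at once.
    static Map<Integer, String> buildCache(int entries) throws InterruptedException {
        Map<Integer, String> indexQuery = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        CountDownLatch done = new CountDownLatch(entries);
        for (int i = 0; i < entries; i++) {
            final int tableId = i;
            pool.submit(() -> {
                // computeIfAbsent is atomic per key: the mapping function
                // runs at most once per key, with no external locking.
                indexQuery.computeIfAbsent(tableId,
                        id -> "/* cached SQL for table " + id + " */");
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return indexQuery;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(buildCache(100).size()); // prints 100
    }
}
```

A plain HashMap under the same load can lose entries or corrupt its internal structure, which is exactly the failure mode the finding describes.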
Priority: P1 - HIGH Effort: 1 day Owner: [Developer Name]
Issue: Finding 4.1 - Cache never invalidated
Files to Change:
SearchIndexConfigBuilder.java:39, 256
SearchIndexEventHandler.java:331-333
Changes:
// SearchIndexConfigBuilder.java
+ /**
+ * Resets configuration cache for specific index or all indexes
+ * @param searchIndexId Index ID to reset, or -1 for all
+ */
+ public static void resetCache(int searchIndexId) {
+ if (searchIndexId > 0) {
+ searchIndexConfigCache.remove(searchIndexId);
+ } else {
+ searchIndexConfigCache.clear();
+ }
+ }
// SearchIndexEventHandler.java:331-333
private void handleSearchIndexConfigChange(PO po) {
int searchIndexId = getSearchIndexId(po);
+ // Invalidate cache
+ SearchIndexConfigBuilder.resetCache(searchIndexId);
// Mark index as invalid
String sql = "UPDATE AD_SearchIndex SET IsValid='N' WHERE AD_SearchIndex_ID=?";
DB.executeUpdateEx(sql, new Object[] {searchIndexId}, trxName);
// Re-register events
unbindEventManager(eventManager);
bindEventManager(eventManager);
}
Testing:
@Test
public void testCacheInvalidation() {
// Load config (populates cache)
List<SearchIndexConfig> config1 = SearchIndexConfigBuilder.build(ctx, 1000);
// Modify AD_SearchIndexColumn
DB.executeUpdate("UPDATE AD_SearchIndexColumn SET IsIndexed='N' WHERE ...", null);
// Without cache invalidation:
List<SearchIndexConfig> config2 = SearchIndexConfigBuilder.build(ctx, 1000);
// assertEquals(config1, config2); // WRONG (stale cache)
// With cache invalidation:
SearchIndexConfigBuilder.resetCache(1000);
List<SearchIndexConfig> config3 = SearchIndexConfigBuilder.build(ctx, 1000);
// assertNotEquals(config1, config3); // CORRECT (fresh data)
}
Success Criteria:
- ✅ Cache invalidated on config changes
- ✅ Fresh config loaded after invalidation
- ✅ No bundle restart required for config changes
References:
- ADR-004: Cache Invalidation Strategy (to be created)
- ARCHITECTURAL-ANALYSIS-2025.md:219-237
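To make the invalidation contract concrete, here is a tiny self-contained model of the cache (the class name and the version counter are invented for illustration; the real cache holds configuration objects, not strings):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class ConfigCacheModel {
    private static final Map<Integer, String> cache = new ConcurrentHashMap<>();
    private static final AtomicInteger loads = new AtomicInteger();

    // Load config, serving from cache when present.
    static String getConfig(int searchIndexId) {
        return cache.computeIfAbsent(searchIndexId,
                id -> "config-v" + loads.incrementAndGet());
    }

    // Mirrors the proposed resetCache(): one index, or everything with -1.
    static void resetCache(int searchIndexId) {
        if (searchIndexId > 0)
            cache.remove(searchIndexId);
        else
            cache.clear();
    }

    public static void main(String[] args) {
        String a = getConfig(1000);
        String b = getConfig(1000);   // cache hit: same value as a
        resetCache(1000);             // config changed, so invalidate
        String c = getConfig(1000);   // rebuilt fresh after invalidation
        System.out.println(a.equals(b)); // true
        System.out.println(a.equals(c)); // false
    }
}
```

Without the resetCache call, the third load would keep returning the stale first version, which is the "bundle restart required" symptom the finding describes.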
Priority: P1 - HIGH Effort: 1 day Owner: [Developer Name]
Issue: Finding 10.1 - RBAC commented out (security vulnerability)
Files to Change:
PGTextSearchIndexProvider.java:274-280
Changes:
// Line 274-280 (UN-COMMENT AND ENABLE)
- // FIXME: uncomment and discuss
- // int AD_Window_ID = Env.getZoomWindowID(AD_Table_ID, recordID);
- //
- // if (AD_Window_ID > 0 && role.getWindowAccess(AD_Window_ID) == null)
- // continue;
- // if (AD_Window_ID > 0 && !role.isRecordAccess(AD_Table_ID, recordID, true))
- // continue;
+ // Enforce role-based access control
+ int AD_Window_ID = Env.getZoomWindowID(AD_Table_ID, recordID);
+
+ if (AD_Window_ID > 0) {
+ // Check window access
+ if (role.getWindowAccess(AD_Window_ID) == null) {
+ if (log.isLoggable(Level.FINE))
+ log.fine("Filtered result - no window access: " + AD_Window_ID);
+ continue;
+ }
+
+ // Check record-level access
+ if (!role.isRecordAccess(AD_Table_ID, recordID, true)) {
+ if (log.isLoggable(Level.FINE))
+ log.fine("Filtered result - no record access: " + recordID);
+ continue;
+ }
+ }
Performance Optimization:
// Option: SQL-based filtering (faster than Java loop)
sql.append(" AND EXISTS (")
.append(" SELECT 1 FROM AD_Window_Access wa ")
.append(" JOIN AD_Window w ON wa.AD_Window_ID = w.AD_Window_ID ")
.append(" WHERE wa.AD_Role_ID = ? AND w.AD_Table_ID = ad_table_id")
.append(") ");
params.add(role.getAD_Role_ID());
Testing:
@Test
public void testRBACEnforcement() {
// Create restricted role (no access to M_Product window)
MRole restrictedRole = createRestrictedRole();
// Search as restricted user
Env.setContext(ctx, "#AD_Role_ID", restrictedRole.getAD_Role_ID());
List<ISearchResult> results = provider.getSearchResults(...);
// Verify: No results (access denied)
assertEquals(0, results.size());
// Search as admin
Env.setContext(ctx, "#AD_Role_ID", MRole.getDefault(ctx).getAD_Role_ID());
results = provider.getSearchResults(...);
// Verify: Results returned
assertTrue(results.size() > 0);
}
Success Criteria:
- ✅ RBAC enforced on all searches
- ✅ Users only see records they have access to
- ✅ Performance acceptable (<200ms overhead)
- ✅ Security audit passed
References:
- ADR-007: RBAC Enforcement (to be created)
- ARCHITECTURAL-ANALYSIS-2025.md:347-375
Goal: Optimize performance and improve maintainability
Duration: 1 week Effort: 3 developer-days Cost: €1,500
Priority: P1 - HIGH Effort: 1 day Owner: [Developer Name]
Issue: Known Issue - SearchType.POSITION performance (100× slower)
Files to Change:
ZkSearchIndexUI.java:189
DefaultQueryConverter.java:689 (cloudempiere-rest repo)
ProductAttributeQueryConverter.java:505 (cloudempiere-rest repo)
Implementation:
- Create Slovak text search configuration
-- Run on database
CREATE EXTENSION IF NOT EXISTS unaccent;
CREATE TEXT SEARCH CONFIGURATION sk_unaccent (COPY = simple);
ALTER TEXT SEARCH CONFIGURATION sk_unaccent
  ALTER MAPPING FOR word WITH unaccent, simple;
- Update getTSConfig() method
private String getTSConfig(Properties ctx) {
    String language = Env.getContext(ctx, "#AD_Language");
    if ("sk_SK".equals(language) || "cs_CZ".equals(language)) {
        return "sk_unaccent"; // Slovak/Czech
    }
    return switch (language) {
        case "en_US" -> "english";
        case "de_DE" -> "german";
        default -> "simple";
    };
}
- Change default SearchType
// ZkSearchIndexUI.java:189
- SearchType.POSITION
+ SearchType.TS_RANK
// cloudempiere-rest DefaultQueryConverter.java:689
- SearchType.POSITION
+ SearchType.TS_RANK
// cloudempiere-rest ProductAttributeQueryConverter.java:505
- SearchType.POSITION
+ SearchType.TS_RANK
Testing:
# Performance benchmark
psql -U adempiere -d idempiere -c "
-- Before (POSITION): ~5000ms
SELECT * FROM searchindex_product
WHERE idx_tsvector::text ~ '.*ruza.*'
LIMIT 10;
-- After (TS_RANK): ~50ms
SELECT * FROM searchindex_product
WHERE idx_tsvector @@ to_tsquery('sk_unaccent', 'ruza')
ORDER BY ts_rank(idx_tsvector, to_tsquery('sk_unaccent', 'ruza')) DESC
LIMIT 10;
"
# Slovak diacritics test
# Search "ruza" (no diacritics) should find "ruža" (with diacritics)
Success Criteria:
- ✅ Search performance <100ms for 10K records
- ✅ 100× faster than POSITION
- ✅ Slovak diacritics handled correctly
- ✅ All languages supported
References:
- ADR-005: SearchType Migration
- CLAUDE.md:55-124 (Slovak Language Analysis)
- LOW-COST-SLOVAK-ECOMMERCE-SEARCH.md
Priority: P2 - MEDIUM Effort: 1 day Owner: [Developer Name]
Issue: Finding 7.1 - Generic exception catching
Files to Change:
- Multiple locations with catch (Exception e)
Changes:
// BEFORE (bad)
} catch (Exception e) {
log.log(Level.SEVERE, sql.toString(), e);
}
// No re-throw, no user feedback
// AFTER (good)
} catch (SQLException e) {
log.log(Level.SEVERE, "Search query failed: " + sql.toString(), e);
throw new AdempiereException("Search failed: " + e.getMessage(), e);
} catch (Exception e) {
log.log(Level.SEVERE, "Unexpected error in search", e);
throw new AdempiereException("Search failed unexpectedly", e);
}
Success Criteria:
- ✅ No generic catch (Exception) blocks
- ✅ Specific exception types caught
- ✅ All exceptions logged with context
- ✅ User-friendly error messages
Priority: P2 - MEDIUM Effort: 1 day Owner: [Developer Name]
Issue: Finding 6.1 - Missing @Activate/@Deactivate
Files to Change:
SearchIndexEventHandler.java
Activator.java
Changes:
// SearchIndexEventHandler.java
@Component(reference = @Reference(...))
public class SearchIndexEventHandler extends AbstractEventHandler {
private volatile boolean active = false;
+ @Activate
+ protected void activate(ComponentContext context) {
+ log.info("Activating SearchIndexEventHandler");
+ active = true;
+ initialize();
+ }
+ @Deactivate
+ protected void deactivate(ComponentContext context) {
+ log.info("Deactivating SearchIndexEventHandler");
+ active = false;
+ if (eventManager != null) {
+ unregisterAllEvents();
+ }
+ indexedTablesByClient.clear();
+ }
@Override
protected void doHandleEvent(Event event) {
+ if (!active) {
+ log.warning("Event handler not active, ignoring event");
+ return;
+ }
// ... normal processing
}
}
// Activator.java
@Override
public void start(BundleContext context) throws Exception {
+ super.start(context); // Parent first
Core.getMappedModelFactory().scan(context, "com.cloudempiere.searchindex.model");
- super.start(context);
}
+ @Override
+ public void stop(BundleContext context) throws Exception {
+ SearchIndexConfigBuilder.resetCache(-1);
+ super.stop(context);
+ }
Success Criteria:
- ✅ Clean startup/shutdown lifecycle
- ✅ No resource leaks on bundle stop
- ✅ Proper event unregistration
Priority: P2 - MEDIUM Effort: 1 day Owner: [QA Team + Tech Writer]
Activities:
- Comprehensive testing
# Unit tests
mvn clean test
# Integration tests
mvn verify -Pintegration-tests
# Code coverage
mvn jacoco:report
# Target: >80% coverage
- Performance benchmarks
# Run performance test suite
# Document results
- Security audit
# SonarQube scan
mvn sonar:sonar
# OWASP dependency check
mvn org.owasp:dependency-check-maven:check
# Target: Zero critical/high issues
- Update documentation
  - Update CLAUDE.md with ADR references
  - Create migration guide
  - Update README.md
  - Create release notes
Success Criteria:
- ✅ All tests pass
- ✅ Code coverage >80%
- ✅ Zero critical/high security issues
- ✅ Documentation complete
Priority: P2 - MEDIUM Effort: 0.5 days Owner: [Release Manager]
Activities:
- Build production bundle
mvn clean install -Prelease
- Create release tag
git tag -a v1.3.0 -m "Production-ready release"
git push origin v1.3.0
- Deployment plan
  - Deploy to staging (48h soak test)
  - Monitor for issues
  - Deploy to production (phased rollout: 10% → 50% → 100% over 3 days)
- Rollback plan
  - Keep v1.0.0 bundle available
  - Document rollback procedure
  - Test rollback on staging
Success Criteria:
- ✅ Release deployed successfully
- ✅ Zero critical issues in 48h
- ✅ Performance metrics met
- ✅ User acceptance confirmed
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| Database migration fails | Medium | HIGH | Test on copy of production, have rollback script |
| Performance regression | Low | MEDIUM | Benchmark before/after, load testing |
| Breaking changes in REST API | Medium | HIGH | Version REST API endpoints, deprecation period |
| Thread safety issues | Low | HIGH | Comprehensive concurrency testing |
| Security bypass discovered | Low | CRITICAL | Security audit, penetration testing |
If migration fails:
- Rollback database changes
- Deploy previous bundle version
- Investigate root cause
- Fix and retry on staging
If performance regresses:
- Identify regression (profiling)
- Optimize critical path
- Add caching if needed
- Consider rollback if unfixable
If security issue found:
- Emergency hotfix
- Security advisory
- Forced upgrade for all clients
- 1× Senior Developer (full-time, 4 weeks)
  - Code changes
  - Architecture decisions
  - Technical leadership
- 1× DBA (part-time, 1 week)
  - Database migrations
  - Performance tuning
  - Backup/restore procedures
- 1× QA Engineer (part-time, 2 weeks)
  - Test planning
  - Integration testing
  - Performance testing
- 1× Release Manager (part-time, 3 days)
  - Build/release process
  - Deployment coordination
  - Rollback procedures
| Role | Effort (days) | Rate (€/day) | Cost |
|---|---|---|---|
| Senior Developer | 13 | 500 | €6,500 |
| DBA | 3 | 400 | €1,200 |
| QA Engineer | 5 | 350 | €1,750 |
| Release Manager | 1.5 | 400 | €600 |
| Total | 22.5 | | €10,050 |
Note: Initial budget €6,500 covers critical development only. Full implementation with QA/DBA support requires €10,050.
| Metric | Baseline | Target | Measurement |
|---|---|---|---|
| Code coverage | 45% | >80% | JaCoCo report |
| SonarQube issues | 25 critical | 0 critical | SonarQube scan |
| Security vulnerabilities | 3 critical | 0 | OWASP check |
| Technical debt | 15 days | <5 days | SonarQube |
| Metric | Baseline (POSITION) | Target (TS_RANK) | Improvement |
|---|---|---|---|
| Search latency (p50) | 2500ms | <50ms | 50× |
| Search latency (p95) | 8000ms | <150ms | 53× |
| Throughput (req/sec) | 2 | 200 | 100× |
| CPU usage | 80% | <20% | 4× better |
| Metric | Target | Measurement |
|---|---|---|
| Zero data corruption incidents | 0/month | Monitoring + alerts |
| User satisfaction | >90% | Post-deployment survey |
| Support tickets | <5/week | Ticket system |
| System uptime | >99.9% | Monitoring |
Week 1 (Phase 0): Emergency Hotfixes
├── Mon-Tue: SQL injection prevention
├── Wed: Multi-tenant fix + SQL syntax fix
├── Thu: Testing & validation
└── Fri: Hotfix release 1.0.1
✅ Milestone: Production-safe (no data corruption)
Week 2 (Phase 1): Data Integrity
├── Mon-Tue: Transaction isolation refactoring
├── Wed: Database migration validation
├── Thu-Fri: Integration testing
└── Fri: Release 1.1.0
✅ Milestone: Transactional consistency guaranteed
Week 3 (Phase 2): Concurrency & Security
├── Mon-Tue: Thread safety refactoring
├── Wed: Cache invalidation
├── Thu: RBAC enforcement
└── Fri: Release 1.2.0
✅ Milestone: Thread-safe & secure
Week 4 (Phase 3): Performance & Quality
├── Mon: SearchType migration
├── Tue: Exception handling + OSGi lifecycle
├── Wed: Comprehensive testing
├── Thu: Security audit + documentation
└── Fri: Release 1.3.0 PRODUCTION-READY
✅ Milestone: Performance optimized, production-ready
- Week 5: Production Soak
  - Monitor metrics
  - Collect user feedback
  - Fix minor issues
- Week 6-8: Continuous Improvement
  - Performance tuning
  - Feature enhancements
  - Documentation improvements
- Month 2+: Maintenance Mode
  - Monthly security updates
  - Quarterly performance reviews
  - User training sessions
This implementation roadmap provides a structured, phased approach to remediate all critical issues in the Cloudempiere Search Index plugin. The plan is realistic (4 weeks), affordable (€6,500-€10,050), and delivers high ROI (€60,000+ saved in first year).
Recommended Action: Approve Phase 0 (Week 1) immediately to stop data corruption and security vulnerabilities. Approve full 4-phase plan for complete production readiness.
Document Version: 1.0 Approval Status: [Pending] Approved By: [Name] Approval Date: [Date] Implementation Start: [Date]
Related Documents: