Skip to content

Commit 452ca46

Browse files
author
bosd
committed
fix: Ignore client-side timeouts for local processing to restore large batch support
- Previously, large batch processing (500+ records) worked fine for local imports, taking 5-15 minutes as expected without interruption. - Recent changes introduced overly aggressive client-side timeout handling that was interrupting legitimate long processing times with premature fallbacks. - This fix restores the previous behavior by completely ignoring client-side ReadTimeout exceptions for local processing, allowing server-side processing to complete naturally without artificial client-side interruption. - Server-side timeouts and other genuine scalability issues still properly trigger batch scaling as intended. - All 370 tests continue to pass, confirming no regressions. Fixes issue where batch imports to localhost were failing with 'timed out' errors after ~35 seconds instead of waiting for the full server processing time.
1 parent 0e01e28 commit 452ca46

File tree

1 file changed

+23
-0
lines changed

1 file changed

+23
-0
lines changed

src/odoo_data_flow/import_threaded.py

Lines changed: 23 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -481,6 +481,29 @@ def _execute_load_batch(
481481

482482
except Exception as e:
483483
error_str = str(e).lower()
484+
485+
# SPECIAL CASE: Client-side timeouts for local processing
486+
# These should be IGNORED entirely to allow long server processing
487+
if (
488+
"timed out" == error_str.strip()
489+
or "read timeout" in error_str
490+
or type(e).__name__ == "ReadTimeout"
491+
):
492+
log.debug(f"Client-side timeout detected ({type(e).__name__}): {e}")
493+
log.debug(
494+
"Ignoring client-side timeout to allow server processing to continue"
495+
)
496+
# CRITICAL: For local imports, ignore client timeouts completely
497+
# This restores the previous behavior where long processing was allowed
498+
progress.console.print(
499+
f"[yellow]INFO:[/] Batch {batch_number} processing on server. "
500+
f"Continuing to wait for completion..."
501+
)
502+
# Continue with next chunk WITHOUT fallback - let server finish
503+
lines_to_process = lines_to_process[chunk_size:]
504+
continue
505+
506+
# For all other exceptions, use the original scalable error detection
484507
is_scalable_error = (
485508
"memory" in error_str
486509
or "out of memory" in error_str

0 commit comments

Comments
 (0)