Description
NetBox version
v4.5.3
Feature type
Change to existing functionality
Proposed functionality
Currently, the script result page (`extras/htmx/script_result.html`) only renders log output after the job completes:

```django
{% if job.completed %}
    {# Render log entries, output, tests #}
{% elif job.started %}
    {% include 'extras/inc/result_pending.html' %}
{% endif %}
```

This means users see only a spinner and "Results pending..." for the entire execution, even though `self.log_info()` calls accumulate data in `job.data` throughout the run.
For long-running scripts (e.g., firmware upgrades across dozens of network devices taking 10–30+ minutes), this provides no feedback on progress. Users cannot tell if the script is working, stuck, or how far along it is.
Proposed change
Modify the `result_pending.html` include (or the conditional in `script_result.html`) to render log entries from `job.data.log` while the job is running. The HTMX auto-refresh (every 5s) is already in place, so the UI would update naturally as new entries appear.
A minimal template change could be:

```django
{% elif job.started %}
    {% include 'extras/inc/result_pending.html' %}
    {% if job.data.log %}
        <div class="mt-3">
            <table class="table table-hover">
                {% for entry in job.data.log %}
                    <tr>
                        <td><small class="text-muted">{{ entry.time|isodatetime }}</small></td>
                        <td>{% badge entry.status %}</td>
                        <td>{{ entry.message|markdown }}</td>
                    </tr>
                {% endfor %}
            </table>
        </div>
    {% endif %}
{% endif %}
```

However, this alone is insufficient, because `job.data` is only written to the database when the script completes (in the `ScriptJob.run_script()` `finally` block). The data needs to be committed during execution for the HTMX refresh to pick it up.
Why intermediate saves are hard
`ScriptJob.run_script()` wraps execution in `transaction.atomic()`, so any intermediate `job.save()` within the script is invisible to the web frontend, which reads through a different DB connection under READ COMMITTED transaction isolation.
Attempting to bypass the transaction with a separate database connection (e.g., raw SQL via psycopg) causes the new connection to block on a row-level lock held by the script's transaction, effectively freezing the script.
Possible approaches
- Periodically flush `job.data` outside the atomic block, e.g. using a `SAVEPOINT` plus partial commit, or by restructuring the transaction boundaries.
- Use a side channel (Django cache, Redis pub/sub via RQ, or a separate progress table not locked by the transaction) to stream progress data to the frontend.
- Write to `job.log_entries` (from #19816) via a non-transactional path and render those entries in the pending-state template.
- Move the `job.data` save into the script loop by providing a `flush()` or `save_progress()` method on the Script class that commits data to a separate, non-transactional table or cache.
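As a rough sketch of the side-channel option, the following uses a plain in-memory dict in place of Django's cache (in a real implementation, `cache_set`/`cache_get` would be `django.core.cache.cache.set`/`.get`). The `save_progress()` method, key format, and TTL are all hypothetical; nothing here is existing NetBox API:

```python
import json
import time

# In-memory stand-in for Django's cache; entries expire after a TTL,
# mirroring cache.set(key, value, timeout).
_cache = {}
PROGRESS_TTL = 300  # seconds (hypothetical)

def cache_set(key, value, ttl=PROGRESS_TTL):
    _cache[key] = (value, time.monotonic() + ttl)

def cache_get(key):
    entry = _cache.get(key)
    if entry is None or time.monotonic() > entry[1]:
        return None
    return entry[0]

class Script:
    """Hypothetical mixin: scripts call save_progress() inside their loop."""

    def __init__(self, job_id):
        self.job_id = job_id
        self._log = []

    def log_info(self, message):
        self._log.append({"status": "info", "message": message})

    def save_progress(self):
        # The cache write is not part of the DB transaction, so the
        # frontend can read it immediately, mid-run.
        cache_set(f"script-progress:{self.job_id}", json.dumps(self._log))

def get_pending_log(job_id):
    """What the HTMX view would call while job.completed is False."""
    raw = cache_get(f"script-progress:{job_id}")
    return json.loads(raw) if raw else []

script = Script(job_id=42)
for device in ["sw1", "sw2", "sw3"]:
    script.log_info(f"Upgraded {device}")
    script.save_progress()

print(len(get_pending_log(42)))  # 3 entries visible mid-run
```

The key property is that the side channel lives entirely outside the script's transaction, so no lock is ever contended.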
Current workaround
Script authors can write progress to a log file on the server (e.g., under `/var/log/`) and monitor it with `tail -f`. The `_job_log()` entries (via `job.log_entries`) also appear in the Log tab after the job completes, but not during execution.
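For reference, the file-based workaround can be as small as a standard-library logger; the path and logger name below are illustrative, not NetBox conventions:

```python
import logging
import os
import tempfile

# Illustrative path; a real script might write under /var/log/ instead.
log_path = os.path.join(tempfile.gettempdir(), "script_progress.log")

logger = logging.getLogger("custom_script.progress")  # hypothetical name
handler = logging.FileHandler(log_path, mode="w")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

devices = ["sw1", "sw2", "sw3"]
for i, device in enumerate(devices, start=1):
    # ... upgrade the device here ...
    logger.info("Upgraded %s (%d of %d)", device, i, len(devices))

handler.flush()
# Watch live from a shell on the server: tail -f <log_path>
```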
Use case
- Network automation: Scripts upgrading dozens of switches (5–30 min per device). Knowing "5 of 20 devices done" is critical during execution.
- Data migration: Scripts processing thousands of objects over many minutes.
- Bulk operations: Any long-running script where incremental progress is valuable.
Database changes
None required: `job.data` is already a `JSONField` that stores script output. The `log_entries` `ArrayField` (from #19816) could also be leveraged.
External dependencies
None.