feat: Replace simulavr implementation with Linux Host Implementation #67
mspi92 wants to merge 1 commit into mainsail-crew:master
Conversation
📝 Walkthrough

Adds a Linux-based printer hardware simulator: an LD_PRELOAD C hook for GPIO/ADC, a Python fake hardware server and components (axes, extruder, heatbed, renderer), Docker and supervisor refactoring to run the simulator, tooling and config updates, and updated example printer configs to gpiochip/analog naming.
Sequence Diagram(s)

sequenceDiagram
participant Klipper as Klipper Process
participant Hook as hardware_bridge_hook.c
participant FakeServer as FakeHardwareServer
participant Hardware as Hardware Components
Klipper->>Hook: open("/dev/gpiochipN", ...)
Hook->>Hook: allocate fake FD, record type
Hook-->>Klipper: return fake FD
Klipper->>Hook: ioctl(fd, GPIO_GET_LINEHANDLE, ...)
Hook->>FakeServer: "GET chipN_gpioM"
FakeServer->>Hardware: get(chipN_gpioM)
Hardware-->>FakeServer: value
FakeServer-->>Hook: response
Hook-->>Klipper: return handle
Klipper->>Hook: ioctl(fd, GPIOHANDLE_SET_LINE_VALUES, ...)
Hook->>FakeServer: "SET chipN_gpioM <value>"
FakeServer->>Hardware: set(chipN_gpioM, value)
Hardware->>Hardware: update state
FakeServer-->>Hook: ACK
Hook-->>Klipper: success
Klipper->>Hook: pread(adc_fd, buf, ...)
Hook->>FakeServer: "GET analogX"
FakeServer->>Hardware: get(analogX)
Hardware-->>FakeServer: ADC value
FakeServer-->>Hook: ADC data
Hook-->>Klipper: return data
sequenceDiagram
participant Webcam as webcam_renderer.py
participant StateFile as State File (/home/printer/printer_data/printer_state.json)
participant Renderer as PrinterRenderer
participant Simulator as FakeHardwareServer
loop every 1s
Webcam->>StateFile: read_state()
StateFile-->>Webcam: axes x,y,z
Webcam->>Webcam: detect position change?
alt changed
Webcam->>Renderer: render()
Renderer->>StateFile: save image
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 13
🧹 Nitpick comments (11)
printer_simulator/Makefile (2)
10-10: Consider adding a `test` target placeholder. Static analysis flagged the missing `test` phony target. While tests may be run via a different mechanism, having a placeholder target (even if it just prints a message or runs Python tests) maintains Makefile convention consistency.

🔧 Suggested placeholder
-.PHONY: all clean lint format typecheck
+.PHONY: all clean lint format typecheck test
+test:
+	@echo "Run tests via pytest or other test runner"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/Makefile` at line 10, Add a placeholder Makefile target named test and include it in the .PHONY list so the file follows Makefile conventions; update the existing .PHONY line (which currently lists all, clean, lint, format, typecheck) to include test and add a test: target that either echoes a short message or invokes the test runner (e.g., pytest) so CI/static analysis won't flag the missing phony target; reference the .PHONY declaration and the new test target when making the change.
4-6: Consider adding compiler warning flags for better code quality. Adding standard warning flags helps catch potential issues during development.
🔧 Suggested improvement
 CC = gcc
-CFLAGS = -shared -fPIC
+CFLAGS = -shared -fPIC -Wall -Wextra -Werror
 LDFLAGS = -ldl

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/Makefile` around lines 4 - 6, The Makefile currently sets CC, CFLAGS, and LDFLAGS but lacks compiler warning flags; update the CFLAGS variable (used when invoking gcc) to include standard warning flags such as -Wall -Wextra and optionally -Werror or -Wpedantic to surface potential issues during build, e.g., augment the existing CFLAGS definition so all compiled objects get these warnings enabled while keeping -shared -fPIC and preserving LDFLAGS.

config/linux.config (1)
50-50: Consider adding a trailing newline. The file is missing a newline at the end. While not functionally impactful, POSIX-compliant text files should end with a newline, and some tools may warn about this.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/linux.config` at line 50, Add a POSIX trailing newline to this config file by ensuring the last line "CONFIG_HAVE_GPIO_HARD_PWM=y" is terminated with a newline character (i.e., add an empty line break after CONFIG_HAVE_GPIO_HARD_PWM=y) so the file ends with a newline.

scripts/service_control.sh (1)
79-86: Service ordering may matter for start/stop operations. When starting, `printer_simulator` should ideally start before `klipper_mcu` (which depends on the simulator's socket). When stopping, the reverse order is safer. Supervisorctl processes services in the order listed, so the current order (`printer_simulator` first) is correct for `start`. However, for `stop`, you might want to reverse the order to ensure `klipper_mcu` stops before its dependencies.

💡 Consider explicit ordering for stop command
 if [ "$UNIT" = "klipper" ]; then
-    sudo /usr/bin/supervisorctl "$COMMAND" printer_simulator webcam_renderer klipper_mcu klipper_klippy
+    if [ "$COMMAND" = "stop" ]; then
+        sudo /usr/bin/supervisorctl "$COMMAND" klipper_klippy klipper_mcu webcam_renderer printer_simulator
+    else
+        sudo /usr/bin/supervisorctl "$COMMAND" printer_simulator webcam_renderer klipper_mcu klipper_klippy
+    fi

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/service_control.sh` around lines 79 - 86, The current loop always calls supervisorctl for services in the same order using UNIT/UNITS and COMMAND; change it so when COMMAND is "stop" you iterate UNITS in reverse order (or explicitly reverse the list for that branch) so klipper_mcu is stopped before printer_simulator, while preserving the existing forward order for "start" and "restart"; modify the block that handles start|stop|restart to detect "$COMMAND" = "stop" and reverse the iteration before calling /usr/bin/supervisorctl "$COMMAND" "$UNIT" (and keep the special-case when UNIT = "klipper" to call the multi-service supervisorctl invocation).

config/supervisord.conf (1)
38-46: Hardcoded sleep for service synchronization may be fragile. The `sleep 10` delay assumes `printer_simulator` will be ready within 10 seconds. If the simulator takes longer (e.g., under heavy load) or fails to start, `klipper_mcu` may encounter connection issues. Consider a more robust synchronization mechanism, such as waiting for the socket file to exist.

💡 Alternative approach using socket readiness check
-command=bash -c 'sleep 10 && /home/printer/klipper/out/klipper.elf'
+command=bash -c 'while [ ! -S /tmp/printer_hook.sock ]; do sleep 1; done && /home/printer/klipper/out/klipper.elf'

This waits for the actual socket to exist rather than relying on a fixed delay. You may want to add a timeout to prevent infinite waiting if the simulator fails entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/supervisord.conf` around lines 38 - 46, Replace the fragile fixed delay in the program:klipper_mcu startup command with a readiness loop that waits for the FAKE_HARDWARE_SOCKET to exist before launching /home/printer/klipper/out/klipper.elf; specifically, modify the command (the current bash -c 'sleep 10 && /home/printer/klipper/out/klipper.elf') so it checks for the socket path defined by FAKE_HARDWARE_SOCKET or waits with a short polling loop and a sensible timeout, then starts klipper.elf as the printer user with the same environment (LD_PRELOAD) and preserve autostart/autorestart/redirect_stderr settings.

README.md (1)
46-47: Minor formatting: Add blank line before horizontal rule. The alternative command runs directly into the horizontal rule, which may cause rendering issues in some Markdown parsers and reduces readability.
📝 Suggested fix
 Alternatively you can execute `docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
+
 ---

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 46 - 47, Add a blank line between the alternative command line ("docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build") and the horizontal rule marker (---) so the command isn't directly adjacent to the HR; update the README content around the command and the '---' separator to insert one empty line before the '---'.

printer_simulator/requirements-dev.txt (1)
1-4: Consider pinning dev dependency versions for reproducibility. The dev dependencies (`mypy`, `pre-commit`, `ruff`) are unpinned, which could lead to inconsistent behavior across environments or CI runs as new versions are released. For production dependencies this would be critical; for dev tools it's a lower priority but still worth considering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/requirements-dev.txt` around lines 1 - 4, The dev dependencies in requirements-dev.txt (entries "mypy", "pre-commit", "ruff") are unpinned; update that file to pin each tool to a specific version or tight constraint (e.g., exact versions or ~= constraints) to ensure reproducible installs—choose current stable versions for "mypy", "pre-commit", and "ruff", replace the bare names with those pinned strings, and consider adding a comment indicating the chosen versions and date or using a generated constraints file if you prefer automated updates.

printer_simulator/printer_hardware/renderer.py (2)
23-23: Remove or implement `_stop_render_thread` to avoid dead state. This field is currently unused in this class.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/renderer.py` at line 23, The field _stop_render_thread is declared but never used; either remove it or wire it into the renderer's thread control. If you remove it, delete the self._stop_render_thread assignment and any unused references; otherwise implement cooperative shutdown by checking self._stop_render_thread inside the rendering loop (e.g., in _render_loop or similar), set the flag to True in a new stop_render_thread/stop method and join the thread in that method to avoid dead state, and ensure start_render_thread (or the thread starter) clears the flag before launching.
1-2: Pin matplotlib to the `Agg` backend for headless rendering reliability. The code only uses `plt.savefig()` for file-based output in a headless container environment. Explicitly setting the `Agg` backend before importing pyplot prevents environment-dependent fallbacks and potential warnings:

Proposed patch

+import matplotlib
+matplotlib.use("Agg")
 import matplotlib.pyplot as plt
 import numpy as np

As a minor note, the `_stop_render_thread` field (line 23) is set but never used; consider removing it if threading is not planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/renderer.py` around lines 1 - 2, The renderer currently imports matplotlib.pyplot directly which can pick an interactive backend in headless environments; before importing pyplot in printer_simulator/printer_hardware/renderer.py set the matplotlib backend to "Agg" by importing matplotlib, calling matplotlib.use("Agg"), then import matplotlib.pyplot as plt so savefig works reliably in containers; also remove the unused _stop_render_thread attribute (or use it) in the Renderer class to avoid dead code.

Dockerfile (1)
11-13: Drop leftover simulavr-only packages from builder image. `rst2pdf`, `help2man`, and `texinfo` look unused in the new Linux-host flow and add build time/image size.

🧹 Minimal cleanup
-    ### simulavr \
-    g++ make rst2pdf help2man texinfo \
+    ### native builds \
+    g++ make \

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 11 - 13, Remove the leftover simulavr-only packages from the builder image by deleting the entries "rst2pdf", "help2man", and "texinfo" (the simulavr block/commented install list) from the Dockerfile package install section so the builder no longer installs these unused tools; update the package list where the simulavr block appears (the commented lines containing "simulavr \", "g++ make rst2pdf help2man texinfo \", and the closing "### \") to omit those three package names or remove the simulavr-only block entirely.

printer_simulator/printer_hardware/axis.py (1)
77-78: Normalize direction input to binary state. Restricting `direction` to 0/1 avoids accidental non-binary values affecting motion semantics.

🔧 Small hardening
 elif key == self.dir_pin:
-    self.direction = value
+    self.direction = 1 if value else 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/axis.py` around lines 77 - 78, The branch that assigns direction when key == self.dir_pin should coerce the incoming value to a binary state to prevent non-binary values from affecting motion; change the assignment in the Axis handling to normalize value into 0 or 1 (e.g., convert via bool/int or explicit comparison) so that self.direction is always 0 or 1, using the existing symbols self.dir_pin and self.direction to locate the assignment to update.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Dockerfile`:
- Around line 26-27: The Dockerfile uses an unpinned git clone (RUN git clone
--depth 1 ${KLIPPER_REPO} /home/printer/klipper) which makes images
non-reproducible; update the clone to accept and use an immutable ref (e.g., ARG
KLIPPER_REF or KLIPPER_COMMIT) and either clone with --branch ${KLIPPER_REF}
--depth 1 or clone then run git -C /home/printer/klipper checkout
${KLIPPER_COMMIT} && git -C /home/printer/klipper reset --hard
${KLIPPER_COMMIT}; apply the same change to the other clone occurrences (lines
referenced in the comment) so builds are deterministic and reproducible.
In `@example-configs/addons/dual_extruder_stepper.cfg`:
- Around line 18-19: Configs use inverted pin notation ("!") which breaks
simulator pin routing because axis.py and single_extruder.py perform exact
string matches (e.g., if key == self.enable_pin) and do not normalize prefixes;
remove the leading "!" from any dir_pin and enable_pin entries in the affected
example config files so the pin strings match exactly (e.g., change dir_pin:
!gpiochip4/gpio2 to dir_pin: gpiochip4/gpio2 and enable_pin: gpiochip4/gpio3 to
enable_pin: gpiochip4/gpio3), ensuring all occurrences listed in the review are
updated.
In `@example-configs/addons/temp_sensors.cfg`:
- Line 16: The config references sensor_pin "analog3" but there is no simulator
component registering that pin, causing startup failure; either implement a
TemperatureFan simulator class that registers "analog3" as a readable analog pin
and models chamber heater behavior (use existing Heatbed and Extruder simulator
classes as patterns for methods, registration, and thermal update logic), or
update the addon documentation to state that the temp_sensors addon requires a
manual simulator extension and remove or replace sensor_pin: analog3 with a
supported/simulated pin; ensure the new TemperatureFan class name and the
sensor_pin "analog3" are consistently used where pins are registered and read.
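If the simulator-extension route is taken, a minimal sketch of such a component might look like the following. The class name `TemperatureFan`, the `registered_pins`/`get` method names, and the linear temperature-to-ADC mapping are illustrative assumptions modeled on the Heatbed/Extruder pattern described above, not the project's actual API:

```python
class TemperatureFan:
    """Hypothetical chamber-temperature component that registers "analog3"
    as a readable pin. Names and mapping are assumptions for illustration."""

    def __init__(self, sensor_pin: str = "analog3", ambient_temp: float = 25.0):
        self.sensor_pin = sensor_pin
        self._temperature = ambient_temp

    def registered_pins(self) -> list:
        # The FakeHardwareServer would route GET requests for these pins here.
        return [self.sensor_pin]

    def get(self, key: str) -> int:
        if key != self.sensor_pin:
            raise KeyError(key)
        # Placeholder linear mapping of 0..500 degC onto a 12-bit ADC range,
        # clamped so out-of-range temperatures stay within 0..4095.
        adc = int(4095 * (1.0 - self._temperature / 500.0))
        return max(0, min(4095, adc))
```

A thermal-update step (mirroring the existing Heatbed/Extruder logic) would still need to be wired in for the fan to have any cooling effect.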
In `@printer_simulator/hardware_bridge_hook.c`:
- Around line 308-309: The handler that currently ignores unknown ioctls on fake
fds by returning 0 (see the comment "Unknown ioctls on fake fds are simply
ignored" and the trailing return 0 in hardware_bridge_hook.c) should instead set
errno = ENOTTY and return -1 to follow POSIX semantics; update the
ioctl-handling function to include <errno.h> (or ensure errno is available),
replace the return 0 with code that sets errno = ENOTTY and returns -1, and keep
the descriptive comment so callers receive an appropriate error for unsupported
ioctls.
- Around line 65-89: The fake_fds array access must be synchronized: add a
pthread mutex (e.g., pthread_mutex_t fake_fds_lock) that is initialized before
use and use it to guard all accesses and mutations of fake_fds; wrap the body of
find_fake_fd, alloc_fake_fd (including the memset and field assignments), and
free_fake_fd in pthread_mutex_lock/unlock (or equivalent) and ensure any other
hooks that read/write fake_fds (open, ioctl, pread, close) also lock around
their fake_fds interactions to prevent races and corruption.
- Around line 158-194: The varargs handling in open() and open64() is unsafe:
open() always consumes a mode_t from va_list even when flags don't include
O_CREAT or O_TMPFILE, and open64() drops varargs entirely. Change open() to only
va_start/va_arg and obtain mode when (flags & (O_CREAT|O_TMPFILE)) is true (use
a default mode_t like 0 otherwise) and pass that mode to real_open(path, flags,
mode); update all early-return branches to avoid touching varargs when not
needed. Reimplement open64() to accept varargs like open64(const char *path, int
flags, ...) and forward the mode to open() (or call real_open) only when the
flags indicate a mode argument, using the same conditional va_list logic as
open(). Ensure va_start/va_end are paired correctly and reference the functions
open, open64, real_open, flags, O_CREAT, O_TMPFILE, and mode_t when making
changes.
- Around line 267-281: In the GPIO_GET_LINEHANDLE_IOCTL handling, check for
failures from make_dummy_fd() and alloc_fake_fd() and return a failure code
instead of proceeding: if make_dummy_fd() returns a negative value (e.g.,
line_fd < 0) or alloc_fake_fd() returns NULL, do not set req->fd or use the fake
fd, free any partial state if needed, and return a negative errno (e.g., -ENOMEM
for allocation failure / -ENFILE or a suitable negative error for fd creation
failure) so the caller sees the ioctl error; update the block around
make_dummy_fd(), alloc_fake_fd(), lf, and req->fd accordingly.
In `@printer_simulator/printer_hardware/axis.py`:
- Around line 26-53: The constructor __init__ currently accepts
full_steps_per_rotation (and distance_mm_per_rotation) without validation which
can cause a division-by-zero when computing step_size later; update __init__ in
class Axis to validate that full_steps_per_rotation is an int > 0 (and
distance_mm_per_rotation > 0) and raise a ValueError with a clear message if
not, and ensure any code that computes step_size (the step_size calculation
referenced around lines 74-75) uses the validated attributes so the
division-by-zero cannot occur at runtime.
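A minimal sketch of the suggested constructor guard, assuming the parameter names from the finding above (the rest of the `Axis` class is elided):

```python
class Axis:
    """Validation sketch only; motion logic from the real class is omitted."""

    def __init__(self, full_steps_per_rotation: int, distance_mm_per_rotation: float):
        # Reject values that would later make step_size divide by zero.
        if not isinstance(full_steps_per_rotation, int) or full_steps_per_rotation <= 0:
            raise ValueError(
                f"full_steps_per_rotation must be a positive int, got {full_steps_per_rotation!r}"
            )
        if distance_mm_per_rotation <= 0:
            raise ValueError(
                f"distance_mm_per_rotation must be > 0, got {distance_mm_per_rotation!r}"
            )
        self.full_steps_per_rotation = full_steps_per_rotation
        self.distance_mm_per_rotation = distance_mm_per_rotation
        # Safe now: the denominator is guaranteed non-zero.
        self.step_size = distance_mm_per_rotation / full_steps_per_rotation
```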
In `@printer_simulator/printer_hardware/server.py`:
- Around line 75-85: The handler currently assumes data has tokens and silently
ignores malformed input; update the connection handling (the block that calls
conn.recv, splits into parts and calls self.get/self.set) to detect
empty/whitespace data and immediately send a clear error like "ERR Empty
command\n", validate parts before indexing and respond "ERR Invalid command\n"
when parts length or command name is unsupported, and wrap the int conversion
for SET (where the code uses int(parts[2])) in a try/except to send "ERR Invalid
value\n" on ValueError; ensure unsupported verbs return "ERR Unsupported
command\n" so clients always receive an explicit error instead of silent
failure.
In `@printer_simulator/printer_hardware/thermals.py`:
- Around line 39-53: The constructor for the thermals class does not validate
inputs, so downstream calculation of heat_capacity (derived from mass_kg) can be
zero or invalid; update __init__ to validate and raise ValueError for invalid
parameters: ensure mass_kg > 0, cooling_coefficient > 0, and (optionally)
heating_power_watts >= 0 and that ambient_temp/initial_temp are finite numbers;
reference the __init__ method and the instance attributes _mass_kg,
_cooling_coefficient, _heating_power_watts, _ambient_temp, and _temperature when
adding these guards so heat_capacity calculation later cannot divide by zero or
operate on invalid values.
- Around line 70-73: The heater state is being changed before integrating
temperature, causing the elapsed-time update to use the new state; in set_heater
you should first call __update_temperature() to advance temperature using the
current _is_active and elapsed dt, then assign self._is_active = on to change
the heater; ensure __update_temperature() consumes the elapsed time since the
last update (using its existing time-tracking) so the state transition
integration is correct.
- Around line 21-24: The calc_adc_value() implementation can produce values
outside 0..4095; modify it so after computing adc_value (using adc_max, voltage,
etc.) you clamp the result to the 12-bit range by ensuring adc_value is at least
0 and at most adc_max (then return that clamped int). Keep the existing
conversion logic but apply the clamp before returning to guarantee bounds safety
for temperatures <0°C or >500°C.
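For the constructor-validation finding above, a standalone sketch of the suggested guards; the parameter names follow the finding, and the exact set of checks is a judgment call:

```python
import math


def validate_thermal_params(
    mass_kg: float,
    cooling_coefficient: float,
    heating_power_watts: float,
    ambient_temp: float,
) -> None:
    """Guards the real __init__ could run before assigning _mass_kg etc."""
    if not (mass_kg > 0):  # also rejects NaN
        raise ValueError(f"mass_kg must be > 0, got {mass_kg!r}")
    if not (cooling_coefficient > 0):
        raise ValueError(f"cooling_coefficient must be > 0, got {cooling_coefficient!r}")
    if heating_power_watts < 0:
        raise ValueError(f"heating_power_watts must be >= 0, got {heating_power_watts!r}")
    if not math.isfinite(ambient_temp):
        raise ValueError(f"ambient_temp must be finite, got {ambient_temp!r}")
```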
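For the heater-ordering finding above, a toy model showing why the elapsed interval must be integrated under the old heater state before the state flips; the heating/cooling rates and the injectable clock are illustrative only, not the real thermals.py API:

```python
import time


class HeaterModel:
    """Minimal ordering demo: integrate first, then switch the heater."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._is_active = False
        self._temperature = 25.0
        self._last_update = clock()

    def _update_temperature(self) -> None:
        now = self._clock()
        dt = now - self._last_update
        self._last_update = now
        # Toy rates: +10 degC/s while heating, -0.5 degC/s while cooling.
        rate = 10.0 if self._is_active else -0.5
        self._temperature += rate * dt

    def set_heater(self, on: bool) -> None:
        # Integrate the elapsed interval under the *current* state first...
        self._update_temperature()
        # ...then apply the state transition.
        self._is_active = on
```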
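For the ADC-clamp finding above, a sketch of the bounds guard; the conversion itself is a placeholder linear curve, and only the final clamp mirrors the suggested fix:

```python
def calc_adc_value(temperature_c: float, adc_max: int = 4095) -> int:
    """Convert a temperature to a 12-bit ADC reading, clamped to 0..adc_max."""
    # Placeholder conversion: the real sensor curve lives in thermals.py.
    raw = int(adc_max * (1.0 - temperature_c / 500.0))
    # The clamp is the point of the fix: out-of-range temperatures
    # (< 0 degC or > 500 degC) can no longer escape the 12-bit range.
    return max(0, min(adc_max, raw))
```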
In `@printer_simulator/webcam_renderer.py`:
- Around line 26-35: load_axes assumes each axis dict has position, min_pos and
max_pos which can raise if missing; update load_axes to validate each axis
payload before calling create_axis_state (or make create_axis_state tolerant) by
checking that axes["x"], axes["y"], axes["z"] are mappings and contain the keys
"position", "min_pos", "max_pos" (or supply safe defaults) and return None or
skip rendering if any axis is malformed; reference the load_axes function and
create_axis_state so you either add explicit key checks/try/except around
create_axis_state calls or add defensive defaults/validation inside
create_axis_state to prevent exceptions from missing fields.
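A defensive variant of `load_axes` along the lines suggested above; the key names follow the finding, and returning `None` on malformed payloads (so the caller skips rendering) is one of the options mentioned:

```python
def load_axes(state: dict):
    """Return validated x/y/z axis data, or None if the payload is malformed."""
    axes = state.get("axes")
    if not isinstance(axes, dict):
        return None
    result = {}
    for name in ("x", "y", "z"):
        axis = axes.get(name)
        if not isinstance(axis, dict):
            return None
        try:
            # Require all three fields and coerce them to floats.
            result[name] = {
                "position": float(axis["position"]),
                "min_pos": float(axis["min_pos"]),
                "max_pos": float(axis["max_pos"]),
            }
        except (KeyError, TypeError, ValueError):
            return None
    return result
```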
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
- .dockerignore
- .gitignore
- .pre-commit-config.yaml
- Dockerfile
- PINS.md
- README.md
- config/linux.config
- config/simulavr.config
- config/supervisord.conf
- docker-compose.build.yml
- docker-compose.yml
- example-configs/addons/basic_cartesian_kinematics.cfg
- example-configs/addons/dual_extruder.cfg
- example-configs/addons/dual_extruder_stepper.cfg
- example-configs/addons/heater_bed.cfg
- example-configs/addons/led_neopixel.cfg
- example-configs/addons/miscellaneous.cfg
- example-configs/addons/single_extruder.cfg
- example-configs/addons/temp_sensors.cfg
- example-configs/printer.cfg
- printer_simulator/.gitignore
- printer_simulator/Makefile
- printer_simulator/hardware_bridge_hook.c
- printer_simulator/printer_hardware/__init__.py
- printer_simulator/printer_hardware/axis.py
- printer_simulator/printer_hardware/heatbed.py
- printer_simulator/printer_hardware/input_pin.py
- printer_simulator/printer_hardware/output_pin.py
- printer_simulator/printer_hardware/pin_handler.py
- printer_simulator/printer_hardware/renderer.py
- printer_simulator/printer_hardware/server.py
- printer_simulator/printer_hardware/single_extruder.py
- printer_simulator/printer_hardware/state_io.py
- printer_simulator/printer_hardware/thermals.py
- printer_simulator/printer_simulator.py
- printer_simulator/pyproject.toml
- printer_simulator/requirements-dev.txt
- printer_simulator/requirements.txt
- printer_simulator/webcam_renderer.py
- scripts/fix_venvs.sh
- scripts/service_control.sh
💤 Files with no reviewable changes (2)
- scripts/fix_venvs.sh
- config/simulavr.config
RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper \
    && virtualenv -p /usr/local/bin/python3 /home/printer/python-env \
Pin cloned repos to immutable refs to keep images reproducible.
git clone --depth 1 without a pinned tag/commit makes rebuilds drift over time and can break CI/runtime unexpectedly.
🔧 Suggested hardening
ARG KLIPPER_REPO=https://github.com/Klipper3d/klipper.git
+ARG KLIPPER_REF=v0.12.0
ENV KLIPPER_REPO=${KLIPPER_REPO}
-RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper \
+RUN git clone --depth 1 --branch ${KLIPPER_REF} ${KLIPPER_REPO} /home/printer/klipper \
&& virtualenv -p /usr/local/bin/python3 /home/printer/python-env \
&& /home/printer/python-env/bin/pip install --no-cache-dir -r /home/printer/klipper/scripts/klippy-requirements.txt
@@
-RUN git clone --depth 1 https://github.com/Arksine/moonraker \
+ARG MOONRAKER_REPO=https://github.com/Arksine/moonraker.git
+ARG MOONRAKER_REF=v0.9.0
+RUN git clone --depth 1 --branch ${MOONRAKER_REF} ${MOONRAKER_REPO} /home/printer/moonraker \
+    && /home/printer/python-env/bin/pip install --no-cache-dir -r /home/printer/moonraker/scripts/moonraker-requirements.txt

Also applies to: 42-43
static FakeFd fake_fds[MAX_FAKE_FDS];

static FakeFd *find_fake_fd(int fd) {
    for (int i = 0; i < MAX_FAKE_FDS; i++)
        if (fake_fds[i].type != FD_NONE && fake_fds[i].fd == fd)
            return &fake_fds[i];
    return NULL;
}

static FakeFd *alloc_fake_fd(int fd, FdType type) {
    for (int i = 0; i < MAX_FAKE_FDS; i++) {
        if (fake_fds[i].type == FD_NONE) {
            memset(&fake_fds[i], 0, sizeof(FakeFd));
            fake_fds[i].fd = fd;
            fake_fds[i].type = type;
            return &fake_fds[i];
        }
    }
    return NULL;
}

static void free_fake_fd(int fd) {
    FakeFd *f = find_fake_fd(fd);
    if (f) f->type = FD_NONE;
}
🧩 Analysis chain
🏁 Scripts executed (repository: mainsail-crew/virtual-klipper-printer):

wc -l printer_simulator/hardware_bridge_hook.c
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '1,100p'
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '200,250p'
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '310,330p'
rg -i "pthread|mutex|lock|atomic|thread|concurrent" printer_simulator/hardware_bridge_hook.c
cat printer_simulator/hardware_bridge_hook.c
grep -n "pthread\|mutex\|lock\|thread\|Thread\|THREAD\|volatile\|atomic" printer_simulator/hardware_bridge_hook.c
rg -i "klipper|multi.*thread|concurrent|race" printer_simulator/ --type c --type h
Guard fake FD bookkeeping with synchronization.
fake_fds is mutated and read from multiple hooks (open, ioctl, pread, close) with no lock. Concurrent access can race, corrupting the FD mapping. For example, alloc_fake_fd may find a free slot and write to it while another thread is iterating through the array in find_fake_fd, or free_fake_fd may clear an entry while another thread is reading from it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 65 - 89, The fake_fds
array access must be synchronized: add a pthread mutex (e.g., pthread_mutex_t
fake_fds_lock) that is initialized before use and use it to guard all accesses
and mutations of fake_fds; wrap the body of find_fake_fd, alloc_fake_fd
(including the memset and field assignments), and free_fake_fd in
pthread_mutex_lock/unlock (or equivalent) and ensure any other hooks that
read/write fake_fds (open, ioctl, pread, close) also lock around their fake_fds
interactions to prevent races and corruption.
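The lock-guarded slot-table pattern the prompt describes can be sketched in Python for illustration (the actual fix would use a `pthread_mutex_t` in the C hook; `FdTable` and its method names are hypothetical, not code from this PR):

```python
import threading

class FdTable:
    """Fixed-size FD bookkeeping table guarded by one lock (mirrors fake_fds)."""
    def __init__(self, size=64):
        self._slots = [None] * size   # None plays the role of FD_NONE
        self._lock = threading.Lock()

    def alloc(self, fd, fd_type):
        # Find a free slot and claim it atomically with respect to find/free.
        with self._lock:
            for i, slot in enumerate(self._slots):
                if slot is None:
                    self._slots[i] = (fd, fd_type)
                    return i
            return None

    def find(self, fd):
        with self._lock:
            for slot in self._slots:
                if slot is not None and slot[0] == fd:
                    return slot
            return None

    def free(self, fd):
        with self._lock:
            for i, slot in enumerate(self._slots):
                if slot is not None and slot[0] == fd:
                    self._slots[i] = None
```

Holding one lock across all three operations means a concurrent `free` can never clear a slot while `find` is iterating, which is exactly the race called out above.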
data = conn.recv(64).decode().strip()
parts = data.split()

if parts[0] == "GET" and len(parts) == 2:
    value = self.get(parts[1])
    conn.send(f"{value}\n".encode())

elif parts[0] == "SET" and len(parts) == 3:
    key, value = parts[1], int(parts[2])
    self.set(key, value)
    conn.send(b"OK\n")
Return explicit errors for empty/invalid commands.
The handler assumes parts[0] exists and silently drops unsupported commands; this makes client behavior brittle on malformed input.
🛠️ Proposed fix
def _handle_client(self, conn):
try:
data = conn.recv(64).decode().strip()
parts = data.split()
+ if not parts:
+ conn.send(b"ERR empty command\n")
+ return
if parts[0] == "GET" and len(parts) == 2:
value = self.get(parts[1])
conn.send(f"{value}\n".encode())
elif parts[0] == "SET" and len(parts) == 3:
- key, value = parts[1], int(parts[2])
+ try:
+ key, value = parts[1], int(parts[2])
+ except ValueError:
+ conn.send(b"ERR invalid value\n")
+ return
self.set(key, value)
conn.send(b"OK\n")
+ else:
+ conn.send(b"ERR invalid command\n")
except Exception as e:
print(f"[server] error: {e}")
finally:
            conn.close()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/server.py` around lines 75 - 85, The
handler currently assumes data has tokens and silently ignores malformed input;
update the connection handling (the block that calls conn.recv, splits into
parts and calls self.get/self.set) to detect empty/whitespace data and
immediately send a clear error like "ERR Empty command\n", validate parts before
indexing and respond "ERR Invalid command\n" when parts length or command name
is unsupported, and wrap the int conversion for SET (where the code uses
int(parts[2])) in a try/except to send "ERR Invalid value\n" on ValueError;
ensure unsupported verbs return "ERR Unsupported command\n" so clients always
receive an explicit error instead of silent failure.
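The validated dispatch described above can be factored into a pure function, which makes each error path trivially unit-testable (a sketch only; `handle_command` and the dict-backed store are illustrative, not the server's actual API):

```python
def handle_command(data: str, store: dict) -> bytes:
    """Parse one 'GET <key>' / 'SET <key> <int>' command against a dict store."""
    parts = data.strip().split()
    if not parts:
        return b"ERR empty command\n"
    if parts[0] == "GET" and len(parts) == 2:
        # Unknown keys default to 0 here; the real server may differ.
        return f"{store.get(parts[1], 0)}\n".encode()
    if parts[0] == "SET" and len(parts) == 3:
        try:
            store[parts[1]] = int(parts[2])
        except ValueError:
            return b"ERR invalid value\n"
        return b"OK\n"
    return b"ERR invalid command\n"
```

Keeping the socket I/O in `_handle_client` and the parsing in a function like this also shrinks the amount of code that runs inside the connection's try/except.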
def __init__(
    self,
    mass_kg: float,
    heating_power_watts: float,
    ambient_temp: float = 20.0,
    cooling_coefficient: float = 0.01,  # k, higher = faster cooling
    initial_temp: float = 20.0,
):
    self._mass_kg = mass_kg
    self._ambient_temp = ambient_temp
    self._cooling_coefficient = cooling_coefficient
    self._temperature = initial_temp
    self._heating_power_watts = heating_power_watts
    self._is_active = False
    self._time = time.monotonic()
Validate thermal parameters to prevent runtime faults.
At Line 59, heat_capacity can become zero/invalid if mass_kg is non-positive.
🛡️ Constructor guards
def __init__(
self,
mass_kg: float,
heating_power_watts: float,
@@
):
+ if mass_kg <= 0:
+ raise ValueError("mass_kg must be > 0")
+ if cooling_coefficient < 0:
+ raise ValueError("cooling_coefficient must be >= 0")
    self._mass_kg = mass_kg

Also applies to: 59-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/thermals.py` around lines 39 - 53, The
constructor for the thermals class does not validate inputs, so downstream
calculation of heat_capacity (derived from mass_kg) can be zero or invalid;
update __init__ to validate and raise ValueError for invalid parameters: ensure
mass_kg > 0, cooling_coefficient > 0, and (optionally) heating_power_watts >= 0
and that ambient_temp/initial_temp are finite numbers; reference the __init__
method and the instance attributes _mass_kg, _cooling_coefficient,
_heating_power_watts, _ambient_temp, and _temperature when adding these guards
so heat_capacity calculation later cannot divide by zero or operate on invalid
values.
def set_heater(self, on: bool) -> None:
    self._is_active = on
    self.__update_temperature()
Apply elapsed-time update before changing heater state.
At Line 71, toggling _is_active before __update_temperature() attributes past dt to the new state, which skews temperature evolution.
✅ Fix state-transition integration
def set_heater(self, on: bool) -> None:
- self._is_active = on
- self.__update_temperature()
+ self.__update_temperature()
+ self._is_active = on📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def set_heater(self, on: bool) -> None:
    self.__update_temperature()
    self._is_active = on
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/thermals.py` around lines 70 - 73, The
heater state is being changed before integrating temperature, causing the
elapsed-time update to use the new state; in set_heater you should first call
__update_temperature() to advance temperature using the current _is_active and
elapsed dt, then assign self._is_active = on to change the heater; ensure
__update_temperature() consumes the elapsed time since the last update (using
its existing time-tracking) so the state transition integration is correct.
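Why the ordering matters is easiest to see in a minimal Newton's-law-of-cooling model (a sketch with an explicit `dt` instead of `time.monotonic()` so the behavior is deterministic; the specific-heat constant is an assumed placeholder, not a value from this PR):

```python
SPECIFIC_HEAT = 4186.0  # J/(kg*K), assumed placeholder

class MiniThermalMass:
    def __init__(self, mass_kg: float, heating_power_watts: float,
                 ambient_temp: float = 20.0, cooling_coefficient: float = 0.01,
                 initial_temp: float = 20.0):
        if mass_kg <= 0:
            raise ValueError("mass_kg must be > 0")
        self._mass_kg = mass_kg
        self._heating_power_watts = heating_power_watts
        self._ambient_temp = ambient_temp
        self._cooling_coefficient = cooling_coefficient
        self.temperature = initial_temp
        self._is_active = False

    def advance(self, dt: float) -> None:
        # dT/dt = P/(m*c) while heating, minus k*(T - T_ambient) always
        heating = (self._heating_power_watts / (self._mass_kg * SPECIFIC_HEAT)
                   if self._is_active else 0.0)
        cooling = self._cooling_coefficient * (self.temperature - self._ambient_temp)
        self.temperature += (heating - cooling) * dt

    def set_heater(self, on: bool, dt_since_last: float = 0.0) -> None:
        # Integrate the elapsed interval under the OLD state first,
        # then flip the flag -- the ordering the review asks for.
        self.advance(dt_since_last)
        self._is_active = on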
def load_axes(state_payload: dict) -> tuple[Axis, Axis, Axis] | None:
    axes = state_payload.get("axes", {})
    required_keys = ("x", "y", "z")
    if any(key not in axes for key in required_keys):
        return None
    return (
        create_axis_state("x", axes["x"]),
        create_axis_state("y", axes["y"]),
        create_axis_state("z", axes["z"]),
    )
Handle malformed axis payloads defensively.
load_axes() currently assumes each axis object has position, min_pos, and max_pos; missing fields can raise and stop rendering.
🛠️ Proposed fix
def load_axes(state_payload: dict) -> tuple[Axis, Axis, Axis] | None:
axes = state_payload.get("axes", {})
required_keys = ("x", "y", "z")
if any(key not in axes for key in required_keys):
return None
- return (
- create_axis_state("x", axes["x"]),
- create_axis_state("y", axes["y"]),
- create_axis_state("z", axes["z"]),
- )
+ try:
+ return (
+ create_axis_state("x", axes["x"]),
+ create_axis_state("y", axes["y"]),
+ create_axis_state("z", axes["z"]),
+ )
+ except (KeyError, TypeError):
+        return None

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/webcam_renderer.py` around lines 26 - 35, load_axes assumes
each axis dict has position, min_pos and max_pos which can raise if missing;
update load_axes to validate each axis payload before calling create_axis_state
(or make create_axis_state tolerant) by checking that axes["x"], axes["y"],
axes["z"] are mappings and contain the keys "position", "min_pos", "max_pos" (or
supply safe defaults) and return None or skip rendering if any axis is
malformed; reference the load_axes function and create_axis_state so you either
add explicit key checks/try/except around create_axis_state calls or add
defensive defaults/validation inside create_axis_state to prevent exceptions
from missing fields.
Add printer hardware simulation components and server

- Implemented Axis, Heatbed, InputPin, OutputPin, and Extruder classes for simulating printer hardware.
- Created PinHandler abstract base class to standardize pin handling across components.
- Developed FakeHardwareServer to manage communication between hardware components and external requests.
- Added state management functionality to persist printer state in JSON format.
- Introduced thermal simulation for heatbed and extruder using ThermalMass class.
- Integrated rendering capabilities for visualizing printer state using matplotlib.
- Updated project structure with necessary configuration files and dependencies.
- Removed obsolete scripts and adjusted service control for new components.

Signed-off-by: Marcel Spitz <git@marcel-spitz.de>
Actionable comments posted: 8
♻️ Duplicate comments (5)
Dockerfile (1)
25-25: ⚠️ Potential issue | 🟠 Major — Pin external git clones to immutable refs to keep builds reproducible.
Line 25 (and similarly Lines 41, 45, 48) still clones moving default branches/tips, so rebuilds can drift unexpectedly.
Suggested hardening
ARG KLIPPER_REPO=https://github.com/Klipper3d/klipper.git
+ARG KLIPPER_REF=<immutable-tag-or-commit>
ENV KLIPPER_REPO=${KLIPPER_REPO}
-RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper \
+RUN git clone --depth 1 --branch ${KLIPPER_REF} ${KLIPPER_REPO} /home/printer/klipper \
@@
-RUN git clone --depth 1 https://github.com/Arksine/moonraker \
+ARG MOONRAKER_REPO=https://github.com/Arksine/moonraker.git
+ARG MOONRAKER_REF=<immutable-tag-or-commit>
+RUN git clone --depth 1 --branch ${MOONRAKER_REF} ${MOONRAKER_REPO} /home/printer/moonraker \
@@
-RUN git clone https://github.com/mainsail-crew/moonraker-timelapse
+ARG MOONRAKER_TIMELAPSE_REPO=https://github.com/mainsail-crew/moonraker-timelapse.git
+ARG MOONRAKER_TIMELAPSE_REF=<immutable-tag-or-commit>
+RUN git clone --depth 1 --branch ${MOONRAKER_TIMELAPSE_REF} ${MOONRAKER_TIMELAPSE_REPO} /home/printer/moonraker-timelapse
@@
-RUN git clone --depth 1 https://github.com/jacksonliam/mjpg-streamer \
+ARG MJPG_STREAMER_REPO=https://github.com/jacksonliam/mjpg-streamer.git
+ARG MJPG_STREAMER_REF=<immutable-tag-or-commit>
+RUN git clone --depth 1 --branch ${MJPG_STREAMER_REF} ${MJPG_STREAMER_REPO} /home/printer/mjpg-streamer \

Also applies to: 41-41, 45-45, 48-48
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` at line 25, The Dockerfile's ephemeral git clones (the RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper and the other similar RUN git clone lines) must be pinned to immutable refs/tags/commit SHAs to make builds reproducible; update each clone invocation to fetch a specific tag or commit specified via build-arg (e.g., KLIPPER_REF) and then checkout that ref (or use git clone --branch <ref> --depth 1 <repo>) so the image builds always use the same commit instead of the moving default branch.

printer_simulator/printer_hardware/axis.py (1)
26-39: ⚠️ Potential issue | 🟠 Major — Validate step configuration in constructor.
At Line 74, division uses full_steps_per_rotation without constructor guards. Invalid config (<=0) can crash or corrupt motion behavior.
Suggested patch
def __init__(
@@
):
+    if full_steps_per_rotation <= 0:
+        raise ValueError("full_steps_per_rotation must be > 0")
+    if distance_mm_per_rotation <= 0:
+        raise ValueError("distance_mm_per_rotation must be > 0")
    self.name = name

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/axis.py` around lines 26 - 39, The constructor for Axis should validate the step configuration to prevent division by zero/invalid motion—check that full_steps_per_rotation is an int > 0 (and optionally that distance_mm_per_rotation > 0) inside __init__ and raise a clear ValueError if not; update Axis.__init__ to perform these guards before any computation that uses full_steps_per_rotation (e.g., where it’s later used in a division) so invalid configs are rejected early with an explanatory error message.

printer_simulator/printer_hardware/thermals.py (2)
70-73: ⚠️ Potential issue | 🟠 Major — Integrate elapsed time before toggling heater state.
At Line 71, assigning _is_active before __update_temperature() attributes the prior dt to the new state. Update temperature first, then flip the heater flag.
Suggested patch
def set_heater(self, on: bool) -> None:
-    self._is_active = on
-    self.__update_temperature()
+    self.__update_temperature()
+    self._is_active = on

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/thermals.py` around lines 70 - 73, The current set_heater method sets self._is_active before calling __update_temperature(), causing the temperature update to use the new heater state for the elapsed interval; change the order so __update_temperature() is called first to advance temperature using the previous state, then assign self._is_active = on. Update the method body in set_heater to call self.__update_temperature() before flipping the _is_active flag (refer to set_heater and __update_temperature).
39-53: ⚠️ Potential issue | 🟠 Major — Validate thermal parameters at construction.
At Line 59, heat_capacity depends on mass_kg; non-positive/invalid inputs can lead to runtime faults or unstable math. Add constructor guards for mass_kg, cooling_coefficient, and numeric finiteness.
Suggested patch
+import math
import time
@@
def __init__(
@@
):
+    if mass_kg <= 0:
+        raise ValueError("mass_kg must be > 0")
+    if cooling_coefficient <= 0:
+        raise ValueError("cooling_coefficient must be > 0")
+    if heating_power_watts < 0:
+        raise ValueError("heating_power_watts must be >= 0")
+    for n, v in {
+        "ambient_temp": ambient_temp,
+        "initial_temp": initial_temp,
+    }.items():
+        if not math.isfinite(v):
+            raise ValueError(f"{n} must be finite")
    self._mass_kg = mass_kg

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/thermals.py` around lines 39 - 53, The constructor (__init__) must validate thermal inputs to avoid runtime errors in heat_capacity and other math: check that mass_kg and cooling_coefficient are positive (>0) and finite (use math.isfinite), and ensure heating_power_watts, ambient_temp, and initial_temp are finite numbers; raise ValueError with clear message if any check fails. Add these guards before assigning to self._mass_kg, self._cooling_coefficient, self._heating_power_watts, self._ambient_temp, and self._temperature so downstream uses (e.g., heat_capacity and any thermal update methods) always receive valid numeric inputs.

printer_simulator/printer_hardware/server.py (1)
75-89: ⚠️ Potential issue | 🟡 Minor — Guard empty/malformed commands before indexing and parsing.
At Line 78, parts[0] can raise on empty input; at Line 83, int(parts[2]) can throw and currently returns no explicit protocol error. Please keep responses explicit for invalid input paths (ERR empty command, ERR invalid value, etc.).
Suggested patch
def _handle_client(self, conn):
    try:
        data = conn.recv(64).decode().strip()
        parts = data.split()
+        if not parts:
+            conn.send(b"ERR empty command\n")
+            return
        if parts[0] == "GET" and len(parts) == 2:
            value = self.get(parts[1])
            conn.send(f"{value}\n".encode())
        elif parts[0] == "SET" and len(parts) == 3:
-            key, value = parts[1], int(parts[2])
+            try:
+                key, value = parts[1], int(parts[2])
+            except ValueError:
+                conn.send(b"ERR invalid value\n")
+                return
            self.set(key, value)
            conn.send(b"OK\n")
        else:
            conn.send(b"ERR malformed command\n")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/server.py` around lines 75 - 89, Guard against empty or malformed incoming commands before indexing: after reading data from conn in server.py, check if data (or parts after split) is empty and if so send "ERR empty command\n" and continue. Replace direct indexing of parts[0] with a check that parts has at least 1 element, and validate lengths for GET and SET cases before accessing parts[1]/parts[2]. When handling SET (where code calls int(parts[2]) and self.set), catch ValueError for non-integer values and respond with "ERR invalid value\n" rather than letting an exception propagate; also ensure IndexError falls back to "ERR malformed command\n". Keep using self.get and self.set but make all error paths send explicit protocol responses to conn.
🧹 Nitpick comments (3)
example-configs/addons/heater_bed.cfg (1)
1-6: Verify the linear ADC temperature mapping is intentional for simulator simplicity.
The linear voltage-to-temperature mapping (0V→0°C, 3.3V→500°C) is significantly simplified compared to real thermistor behavior. Real NTC thermistors like the replaced EPCOS 100K B57560G104F have non-linear exponential resistance curves.
For a simulator prioritizing predictability and ease of implementation, the linear mapping is reasonable, especially since thermal dynamics are likely modeled separately in the Python ThermalMass class. However, if realistic thermal sensor behavior is desired, consider using a Steinhart-Hart equation or thermistor table.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@example-configs/addons/heater_bed.cfg` around lines 1 - 6, The ADC temperature mapping in the [adc_temperature heater_bed_adc] section currently uses a linear mapping (temperature1/voltage1 and temperature2/voltage2) which is a simulator simplification; decide whether to keep this for predictability or replace it with realistic thermistor behavior by implementing a Steinhart-Hart mapping or lookup table and wire it into the same input path used by the ThermalMass class; update the heater_bed_adc config (temperature1/voltage1, temperature2/voltage2) or replace with a thermistor table/coefficients and adjust any parser that reads heater_bed_adc accordingly so the emulator uses the new mapping.printer_simulator/printer_hardware/output_pin.py (1)
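The two-point linear mapping in the config amounts to a simple proportional interpolation (a sketch; the endpoints 0 V -> 0 °C and 3.3 V -> 500 °C come from the comment above, and the function name is illustrative):

```python
def adc_voltage_to_temp(voltage: float, v_ref: float = 3.3,
                        t_max: float = 500.0) -> float:
    """Linear two-point ADC mapping: 0 V -> 0 C, v_ref -> t_max."""
    return voltage / v_ref * t_max
```

A Steinhart-Hart or table-based mapping would replace this single line with a curve fit, at the cost of making the simulated temperatures harder to reason about.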
27-32: Guard warning prints behind verbose to avoid log flooding.
Lines 27-32 always print on unknown/read attempts. In high-frequency paths this can add avoidable latency and noisy logs.
Suggested refactor
    else:
-        print(f"[{self.name}] unknown pin: {key}")
+        if self.verbose:
+            print(f"[{self.name}] unknown pin: {key}")

def get(self, key: str) -> int:
-    print(f"[{self.name}] cannot read pin {key} - pin is write-only")
+    if self.verbose:
+        print(f"[{self.name}] cannot read pin {key} - pin is write-only")
    return 0

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/output_pin.py` around lines 27 - 32, The prints in the unknown-pin branch and in get(self, key) unconditionally spam output; guard them behind a verbosity flag (e.g. self.verbose) or a logger check to avoid noise. Update the else branch that currently does print(f"[{self.name}] unknown pin: {key}") and the get(self, key) method that does print(f"[{self.name}] cannot read pin {key} - pin is write-only") to only emit those messages when self.verbose is truthy (or use an existing logger like self.logger.debug/info when available), leaving the existing return behavior intact.

config/supervisord.conf (1)
39-46: Replace fixed sleep with readiness check for hardware socket.
Using sleep 10 is brittle; startup can race if the simulator is slower/faster. Prefer waiting until /tmp/printer_hook.sock exists before launching klipper.elf.
Suggested patch
-[program:klipper_mcu]
-command=bash -c 'sleep 10 && /home/printer/klipper/out/klipper.elf'
+[program:klipper_mcu]
+command=bash -c 'until [ -S /tmp/printer_hook.sock ]; do sleep 0.2; done; exec /home/printer/klipper/out/klipper.elf'
user=printer
process_name=klipper_mcu
directory=/home/printer
environment=FAKE_HARDWARE_SOCKET="/tmp/printer_hook.sock",LD_PRELOAD="/home/printer/printer_simulator/hardware_bridge_hook.so"
autostart=true
autorestart=true
redirect_stderr=true

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/supervisord.conf` around lines 39 - 46, The supervisord command currently uses a fixed sleep ("sleep 10 && /home/printer/klipper/out/klipper.elf") which is brittle; change the startup command to poll/wait for the FAKE_HARDWARE_SOCKET path (/tmp/printer_hook.sock) to exist before executing /home/printer/klipper/out/klipper.elf so klipper_mcu won't race with the simulator. Update the command in supervisord.conf to run a small loop or a wait-for-file mechanism that checks for /tmp/printer_hook.sock (respecting environment variable FAKE_HARDWARE_SOCKET if used) and only execs the klipper.elf once the socket is present; keep user=printer, directory=/home/printer and existing environment and autostart/autorestart settings unchanged.
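The same wait-for-file idea can also be expressed as a small Python helper, e.g. for use from simulator startup code (a sketch; the function name and timeout defaults are assumptions, and the supervisord fix above uses plain shell instead):

```python
import os
import time

def wait_for_path(path: str, timeout: float = 30.0, interval: float = 0.2) -> bool:
    """Poll until `path` exists (e.g. /tmp/printer_hook.sock); True on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False
```

Polling with a deadline replaces the fixed `sleep 10` with a bound on worst-case wait while still starting as soon as the socket appears.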
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@PINS.md`:
- Line 54: Update the phrasing in the optional config sentence that currently
reads "optional addon configs" to the clearer wording "optional add-on configs"
(the sentence referencing pins in optional add-on configs and
printer_simulator.py); edit the line in PINS.md so the sentence uses the
hyphenated form and reads smoothly while preserving the original meaning.
In `@printer_simulator/hardware_bridge_hook.c`:
- Around line 237-239: Check for negative offsets at the start of the pread-like
routine and return an error instead of performing tmp + offset pointer
arithmetic; specifically, before the existing "if (offset >= len) return 0;" add
an explicit guard "if (offset < 0) { errno = EINVAL; return -1; }" (or the
project's canonical error return) so variables offset, len, count, buf, tmp and
the subsequent memcpy are never used with a negative offset.
- Around line 250-259: The ioctl wrapper unconditionally consumes a vararg
(va_arg) which is undefined for requests without a third argument; update ioctl
to inspect the request first and only call va_arg(args, void*) when the request
encodes an argument (use the standard ioctl direction/argument macros such as
_IOC_DIR(request) or equivalent from <sys/ioctl.h> to detect if the request
carries data), then proceed to call find_fake_fd(fd) and either handle the fake
FD or call real_ioctl(fd, request, arg) when an arg exists and real_ioctl(fd,
request) when it does not; ensure va_start/va_end are still used and args are
not consumed when unnecessary so find_fake_fd and real_ioctl behave correctly.
- Around line 169-187: The current broad strstr checks can match unrelated
paths; tighten them by matching explicit patterns: for ADC, replace the if
(strstr(path, "in_voltage") && strstr(path, "_raw")) with a robust parse that
uses sscanf(path, "%*[^in]in_voltage%d_raw", &channel) or better yet use
sscanf(path, "in_voltage%d_raw", &channel) after isolating the filename (e.g.
with strrchr(path,'/') to get basename) so you only accept true
"in_voltageN_raw" filenames before allocating the fake FD (keep using
make_dummy_fd and alloc_fake_fd, set f->channel). For GPIO, require the path to
start with "/dev/gpiochip" (use strncmp(path, "/dev/gpiochip",
strlen("/dev/gpiochip")) == 0) and then sscanf(path, "/dev/gpiochip%d", &chip)
to extract chip number before creating the dummy fd. Ensure you return -1 on
parse failure and preserve existing error handling around make_dummy_fd and
alloc_fake_fd.
In `@printer_simulator/printer_hardware/input_pin.py`:
- Around line 21-23: The set method always logs "pin is read-only" even when the
caller passed a key that doesn't refer to this pin, hiding routing mistakes;
update set(self, key: str, value: int) in input_pin.py to first check whether
the supplied key matches this pin's identifier (compare key to self.name or the
pin's canonical id used elsewhere), and if it does not match, log or raise an
"unknown pin" / "invalid key" error so callers know they addressed the wrong
pin; only when the key matches this pin should you keep the existing read-only
message for attempts to write to this input pin.
In `@printer_simulator/printer_hardware/single_extruder.py`:
- Around line 7-29: Update the module docstring in single_extruder.py to fix
typos and clarify wording: change "invertions" to "inversions", replace
"according cooling" with "corresponding cooling", and apply any small wording
tweaks for readability in the top-of-file docstring and sample config comments
so the description of temperature updates, cooling, and pin usage reads clearly.
In `@README.md`:
- Line 47: The README contains an inconsistent Docker Compose command using the
legacy form "docker-compose -f docker-compose.build.yml -f docker-compose.yml up
-d --build"; replace that instance with the plugin form "docker compose -f
docker-compose.build.yml -f docker-compose.yml up -d --build" to match the rest
of the document and ensure systems that only have the "docker compose" plugin
work; verify other occurrences of "docker-compose" in the README and make them
consistent to the chosen style.
- Around line 58-69: Replace the slash-delimited placeholder verbs with
distinct, runnable command examples: change occurrences like `docker container
start/restart/stop printer` to three separate commands (`docker container start
printer`, `docker container restart printer`, `docker container stop printer`)
and similarly expand `supervisorctl start/restart/stop all` and `supervisorctl
start/restart/stop <service_name>` into their individual forms (`supervisorctl
start all`, `supervisorctl restart all`, `supervisorctl stop all` and
`supervisorctl start <service_name>`, `supervisorctl restart <service_name>`,
`supervisorctl stop <service_name>`), ensuring the README shows each executable
command instead of slash-separated placeholders.
---
Duplicate comments:
In `@Dockerfile`:
- Line 25: The Dockerfile's ephemeral git clones (the RUN git clone --depth 1
${KLIPPER_REPO} /home/printer/klipper and the other similar RUN git clone lines)
must be pinned to immutable refs/tags/commit SHAs to make builds reproducible;
update each clone invocation to fetch a specific tag or commit specified via
build-arg (e.g., KLIPPER_REF) and then checkout that ref (or use git clone
--branch <ref> --depth 1 <repo>) so the image builds always use the same commit
instead of the moving default branch.
In `@printer_simulator/printer_hardware/axis.py`:
- Around line 26-39: The constructor for Axis should validate the step
configuration to prevent division by zero/invalid motion—check that
full_steps_per_rotation is an int > 0 (and optionally that
distance_mm_per_rotation > 0) inside __init__ and raise a clear ValueError if
not; update Axis.__init__ to perform these guards before any computation that
uses full_steps_per_rotation (e.g., where it’s later used in a division) so
invalid configs are rejected early with an explanatory error message.
In `@printer_simulator/printer_hardware/server.py`:
- Around line 75-89: Guard against empty or malformed incoming commands before
indexing: after reading data from conn in server.py, check if data (or parts
after split) is empty and if so send "ERR empty command\n" and continue. Replace
direct indexing of parts[0] with a check that parts has at least 1 element, and
validate lengths for GET and SET cases before accessing parts[1]/parts[2]. When
handling SET (where code calls int(parts[2]) and self.set), catch ValueError for
non-integer values and respond with "ERR invalid value\n" rather than letting an
exception propagate; also ensure IndexError falls back to "ERR malformed
command\n". Keep using self.get and self.set but make all error paths send
explicit protocol responses to conn.
In `@printer_simulator/printer_hardware/thermals.py`:
- Around line 70-73: The current set_heater method sets self._is_active before
calling __update_temperature(), causing the temperature update to use the new
heater state for the elapsed interval; change the order so
__update_temperature() is called first to advance temperature using the previous
state, then assign self._is_active = on. Update the method body in set_heater to
call self.__update_temperature() before flipping the _is_active flag (refer to
set_heater and __update_temperature).
- Around line 39-53: The constructor (__init__) must validate thermal inputs to
avoid runtime errors in heat_capacity and other math: check that mass_kg and
cooling_coefficient are positive (>0) and finite (use math.isfinite), and ensure
heating_power_watts, ambient_temp, and initial_temp are finite numbers; raise
ValueError with clear message if any check fails. Add these guards before
assigning to self._mass_kg, self._cooling_coefficient,
self._heating_power_watts, self._ambient_temp, and self._temperature so
downstream uses (e.g., heat_capacity and any thermal update methods) always
receive valid numeric inputs.
---
Nitpick comments:
In `@config/supervisord.conf`:
- Around line 39-46: The supervisord command currently uses a fixed sleep
("sleep 10 && /home/printer/klipper/out/klipper.elf") which is brittle; change
the startup command to poll/wait for the FAKE_HARDWARE_SOCKET path
(/tmp/printer_hook.sock) to exist before executing
/home/printer/klipper/out/klipper.elf so klipper_mcu won't race with the
simulator. Update the command in supervisord.conf to run a small loop or a
wait-for-file mechanism that checks for /tmp/printer_hook.sock (respecting
environment variable FAKE_HARDWARE_SOCKET if used) and only execs the
klipper.elf once the socket is present; keep user=printer,
directory=/home/printer and existing environment and autostart/autorestart
settings unchanged.
In `@example-configs/addons/heater_bed.cfg`:
- Around line 1-6: The ADC temperature mapping in the [adc_temperature
heater_bed_adc] section currently uses a linear mapping (temperature1/voltage1
and temperature2/voltage2) which is a simulator simplification; decide whether
to keep this for predictability or replace it with realistic thermistor behavior
by implementing a Steinhart-Hart mapping or lookup table and wire it into the
same input path used by the ThermalMass class; update the heater_bed_adc config
(temperature1/voltage1, temperature2/voltage2) or replace with a thermistor
table/coefficients and adjust any parser that reads heater_bed_adc accordingly
so the emulator uses the new mapping.
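If realistic thermistor behavior is preferred, a Beta-model curve (a common simplification of the full Steinhart-Hart equation) could replace the linear mapping; all constants below are illustrative, not taken from the repo:

```python
import math


def thermistor_voltage(temp_c: float,
                       beta: float = 3950.0,
                       r0: float = 100_000.0,
                       t0_c: float = 25.0,
                       pullup: float = 4_700.0,
                       vref: float = 3.3) -> float:
    """Beta-model NTC thermistor in a pullup voltage divider.
    All constants are hypothetical example values."""
    t_k = temp_c + 273.15
    t0_k = t0_c + 273.15
    # NTC resistance falls as temperature rises
    r = r0 * math.exp(beta * (1.0 / t_k - 1.0 / t0_k))
    # divider output seen by the ADC
    return vref * r / (r + pullup)
```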
In `@printer_simulator/printer_hardware/output_pin.py`:
- Around line 27-32: The prints in the unknown-pin branch and in get(self, key)
unconditionally spam output; guard them behind a verbosity flag (e.g.
self.verbose) or a logger check to avoid noise. Update the else branch that
currently does print(f"[{self.name}] unknown pin: {key}") and the get(self, key)
method that does print(f"[{self.name}] cannot read pin {key} - pin is
write-only") to only emit those messages when self.verbose is truthy (or use an
existing logger like self.logger.debug/info when available), leaving the
existing return behavior intact.
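One way the verbosity guard could look, with pin/name attributes mirroring the surrounding review comments (the class shape is a sketch, not the repo's actual `OutputPin`):

```python
class OutputPin:
    """Sketch of a verbosity-guarded output pin; attribute names follow
    the review comments, the rest is illustrative."""

    def __init__(self, name: str, pin: str, verbose: bool = False) -> None:
        self.name = name
        self.pin = pin
        self.verbose = verbose
        self.value = 0

    def _log(self, msg: str) -> None:
        if self.verbose:  # stays silent unless explicitly enabled
            print(f"[{self.name}] {msg}")

    def set(self, key: str, value: int) -> None:
        if key == self.pin:
            self.value = value
        else:
            self._log(f"unknown pin: {key}")

    def get(self, key: str):
        self._log(f"cannot read pin {key} - pin is write-only")
        return None
```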
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 97bfaa56-89b5-4d5e-9f9a-fc943bde82a2
📒 Files selected for processing (41)
- .dockerignore
- .gitignore
- .pre-commit-config.yaml
- Dockerfile
- PINS.md
- README.md
- config/linux.config
- config/simulavr.config
- config/supervisord.conf
- docker-compose.build.yml
- docker-compose.yml
- example-configs/addons/basic_cartesian_kinematics.cfg
- example-configs/addons/dual_extruder.cfg
- example-configs/addons/dual_extruder_stepper.cfg
- example-configs/addons/heater_bed.cfg
- example-configs/addons/led_neopixel.cfg
- example-configs/addons/miscellaneous.cfg
- example-configs/addons/single_extruder.cfg
- example-configs/addons/temp_sensors.cfg
- example-configs/printer.cfg
- printer_simulator/.gitignore
- printer_simulator/Makefile
- printer_simulator/hardware_bridge_hook.c
- printer_simulator/printer_hardware/__init__.py
- printer_simulator/printer_hardware/axis.py
- printer_simulator/printer_hardware/heatbed.py
- printer_simulator/printer_hardware/input_pin.py
- printer_simulator/printer_hardware/output_pin.py
- printer_simulator/printer_hardware/pin_handler.py
- printer_simulator/printer_hardware/renderer.py
- printer_simulator/printer_hardware/server.py
- printer_simulator/printer_hardware/single_extruder.py
- printer_simulator/printer_hardware/state_io.py
- printer_simulator/printer_hardware/thermals.py
- printer_simulator/printer_simulator.py
- printer_simulator/pyproject.toml
- printer_simulator/requirements-dev.txt
- printer_simulator/requirements.txt
- printer_simulator/webcam_renderer.py
- scripts/fix_venvs.sh
- scripts/service_control.sh
💤 Files with no reviewable changes (2)
- scripts/fix_venvs.sh
- config/simulavr.config
🚧 Files skipped from review as they are similar to previous changes (15)
- printer_simulator/pyproject.toml
- printer_simulator/webcam_renderer.py
- printer_simulator/printer_hardware/state_io.py
- example-configs/addons/miscellaneous.cfg
- docker-compose.build.yml
- printer_simulator/requirements-dev.txt
- printer_simulator/printer_simulator.py
- printer_simulator/.gitignore
- .gitignore
- docker-compose.yml
- example-configs/addons/dual_extruder.cfg
- example-configs/addons/temp_sensors.cfg
- printer_simulator/printer_hardware/renderer.py
- scripts/service_control.sh
- example-configs/addons/led_neopixel.cfg
## Optional Example Config Pins

The following pins appear in optional addon configs but are not instantiated by default in `printer_simulator.py`:
Polish wording in optional config section.
Line 54 reads a bit awkwardly; consider “optional add-on configs” for clearer wording.
🧰 Tools
🪛 LanguageTool
[grammar] ~54-~54: Ensure spelling is correct
Context: ... The following pins appear in optional addon configs but are not instantiated by def...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@PINS.md` at line 54, Update the phrasing in the optional config sentence that
currently reads "optional addon configs" to the clearer wording "optional add-on
configs" (the sentence referencing pins in optional add-on configs and
printer_simulator.py); edit the line in PINS.md so the sentence uses the
hyphenated form and reads smoothly while preserving the original meaning.
```c
if (strstr(path, "in_voltage") && strstr(path, "_raw")) {
    int channel = 0;
    const char *p = strstr(path, "in_voltage");
    if (p) channel = atoi(p + strlen("in_voltage"));

    int fd = make_dummy_fd();
    if (fd < 0) return -1;
    FakeFd *f = alloc_fake_fd(fd, FD_ADC);
    if (!f) { real_close(fd); errno = ENOMEM; return -1; }
    f->channel = channel;
    fprintf(stderr, "[fake_hw] open ADC channel %d -> fd %d\n", channel, fd);
    return fd;
}

// GPIO: /dev/gpiochipN
if (strstr(path, "/dev/gpiochip")) {
    int chip = 0;
    sscanf(path, "/dev/gpiochip%d", &chip);
    int fd = make_dummy_fd();
```
Tighten device path matching to avoid false interceptions.
Line 169 and Line 184 rely on broad strstr checks; this can hijack unrelated paths that merely contain those substrings.
🔧 Proposed fix

```diff
-    if (strstr(path, "in_voltage") && strstr(path, "_raw")) {
-        int channel = 0;
-        const char *p = strstr(path, "in_voltage");
-        if (p) channel = atoi(p + strlen("in_voltage"));
+    int channel = -1;
+    char tail = '\0';
+    if (path && sscanf(path,
+                       "/sys/bus/iio/devices/iio:device0/in_voltage%d_raw%c",
+                       &channel, &tail) == 1) {
@@
-    if (strstr(path, "/dev/gpiochip")) {
-        int chip = 0;
-        sscanf(path, "/dev/gpiochip%d", &chip);
+    int chip = -1;
+    tail = '\0';
+    if (path && sscanf(path, "/dev/gpiochip%d%c", &chip, &tail) == 1) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 169 - 187, The current
broad strstr checks can match unrelated paths; tighten them by matching explicit
patterns: for ADC, replace the if (strstr(path, "in_voltage") && strstr(path,
"_raw")) with a robust parse that uses sscanf(path, "%*[^in]in_voltage%d_raw",
&channel) or better yet use sscanf(path, "in_voltage%d_raw", &channel) after
isolating the filename (e.g. with strrchr(path,'/') to get basename) so you only
accept true "in_voltageN_raw" filenames before allocating the fake FD (keep
using make_dummy_fd and alloc_fake_fd, set f->channel). For GPIO, require the
path to start with "/dev/gpiochip" (use strncmp(path, "/dev/gpiochip",
strlen("/dev/gpiochip")) == 0) and then sscanf(path, "/dev/gpiochip%d", &chip)
to extract chip number before creating the dummy fd. Ensure you return -1 on
parse failure and preserve existing error handling around make_dummy_fd and
alloc_fake_fd.
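The strict matching described above can be checked in isolation. The helper names below are hypothetical; the ADC variant uses `%n` to verify the whole basename matched, since a plain `%c` tail check can falsely accept truncated names like `in_voltage3`:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// Hypothetical helpers showing strict device-path matching. Both return
// the parsed number on success and -1 when the path is not an exact match.

// Accepts only "/dev/gpiochipN": the trailing %c must fail (end of string),
// so exactly one conversion succeeds.
static int parse_gpiochip(const char *path) {
    int chip = -1;
    char tail = '\0';
    if (path && sscanf(path, "/dev/gpiochip%d%c", &chip, &tail) == 1)
        return chip;
    return -1;
}

// Accepts only a basename of exactly "in_voltageN_raw"; %n records how many
// characters were consumed, so truncated or extended names are rejected.
static int parse_adc_channel(const char *path) {
    if (!path) return -1;
    const char *base = strrchr(path, '/');   // isolate the filename
    base = base ? base + 1 : path;
    int channel = -1, consumed = -1;
    if (sscanf(base, "in_voltage%d_raw%n", &channel, &consumed) == 1 &&
        consumed == (int)strlen(base) && consumed > 0)
        return channel;
    return -1;
}
```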
```c
if (offset >= len) return 0;
size_t n = (size_t)(len - offset) < count ? (size_t)(len - offset) : count;
memcpy(buf, tmp + offset, n);
```
Handle negative pread offsets explicitly.
Line 238 computes len - offset without guarding offset < 0; a negative offset can lead to invalid pointer arithmetic on tmp + offset.
🛡️ Proposed fix

```diff
-    if (offset >= len) return 0;
+    if (offset < 0) { errno = EINVAL; return -1; }
+    if (offset >= len) return 0;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```c
if (offset < 0) { errno = EINVAL; return -1; }
if (offset >= len) return 0;
size_t n = (size_t)(len - offset) < count ? (size_t)(len - offset) : count;
memcpy(buf, tmp + offset, n);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 237 - 239, Check for
negative offsets at the start of the pread-like routine and return an error
instead of performing tmp + offset pointer arithmetic; specifically, before the
existing "if (offset >= len) return 0;" add an explicit guard "if (offset < 0) {
errno = EINVAL; return -1; }" (or the project's canonical error return) so
variables offset, len, count, buf, tmp and the subsequent memcpy are never used
with a negative offset.
```c
int ioctl(int fd, unsigned long request, ...) {
    init_real_funcs();
    va_list args;
    va_start(args, request);
    void *arg = va_arg(args, void*);
    va_end(args);

    FakeFd *f = find_fake_fd(fd);
    if (!f) return real_ioctl(fd, request, arg);
```
Fix ioctl varargs handling to avoid undefined behavior.
Line 254 unconditionally reads va_arg(args, void*). That is unsafe for ioctl requests that have no third argument.
🐛 Proposed fix

```diff
 int ioctl(int fd, unsigned long request, ...) {
     init_real_funcs();
-    va_list args;
-    va_start(args, request);
-    void *arg = va_arg(args, void*);
-    va_end(args);
+    void *arg = NULL;
+    int has_arg = (_IOC_DIR(request) != _IOC_NONE);
+    if (has_arg) {
+        va_list args;
+        va_start(args, request);
+        arg = va_arg(args, void*);
+        va_end(args);
+    }
     FakeFd *f = find_fake_fd(fd);
-    if (!f) return real_ioctl(fd, request, arg);
+    if (!f)
+        return has_arg ? real_ioctl(fd, request, arg)
+                       : real_ioctl(fd, request);
```

Verification script:

```shell
#!/bin/bash
set -euo pipefail
FILE="printer_simulator/hardware_bridge_hook.c"
echo "Inspect ioctl wrapper around varargs handling:"
nl -ba "$FILE" | sed -n '248,262p'
echo
echo "Locate unconditional va_arg usage in ioctl:"
rg -nP --type=c 'va_arg\s*\(\s*args\s*,\s*void\*\s*\)' "$FILE"
```

Expected result: the current code shows unconditional va_arg in ioctl, confirming the issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 250 - 259, The ioctl
wrapper unconditionally consumes a vararg (va_arg) which is undefined for
requests without a third argument; update ioctl to inspect the request first and
only call va_arg(args, void*) when the request encodes an argument (use the
standard ioctl direction/argument macros such as _IOC_DIR(request) or equivalent
from <sys/ioctl.h> to detect if the request carries data), then proceed to call
find_fake_fd(fd) and either handle the fake FD or call real_ioctl(fd, request,
arg) when an arg exists and real_ioctl(fd, request) when it does not; ensure
va_start/va_end are still used and args are not consumed when unnecessary so
find_fake_fd and real_ioctl behave correctly.
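The `_IOC_DIR` check suggested above can be demonstrated standalone; `ioctl_has_arg` is a hypothetical helper showing how Linux's ioctl number encoding distinguishes requests that carry no third argument:

```c
#include <assert.h>
#include <sys/ioctl.h>   // provides _IOC_DIR, _IOC_NONE, _IO, _IOR, ... on Linux

// Linux encodes an ioctl's data direction in bits of the request number.
// _IOC_NONE means the request carries no argument, so reading a vararg
// for it would be undefined behavior.
static int ioctl_has_arg(unsigned long request) {
    return _IOC_DIR(request) != _IOC_NONE;
}
```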
```python
def set(self, key: str, value: int) -> None:
    print(f"[{self.name}] cannot set pin {key} to {value} - pin is read-only")
```
Differentiate unknown key from read-only write attempts.
At Line 22, set() always reports read-only even when key is not this pin, which can hide routing mistakes during debugging.
Suggested fix

```diff
 def set(self, key: str, value: int) -> None:
-    print(f"[{self.name}] cannot set pin {key} to {value} - pin is read-only")
+    if key == self.pin:
+        print(f"[{self.name}] cannot set pin {key} to {value} - pin is read-only")
+    else:
+        print(f"[{self.name}] unknown pin: {key}")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
def set(self, key: str, value: int) -> None:
    if key == self.pin:
        print(f"[{self.name}] cannot set pin {key} to {value} - pin is read-only")
    else:
        print(f"[{self.name}] unknown pin: {key}")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/input_pin.py` around lines 21 - 23, The
set method always logs "pin is read-only" even when the caller passed a key that
doesn't refer to this pin, hiding routing mistakes; update set(self, key: str,
value: int) in input_pin.py to first check whether the supplied key matches this
pin's identifier (compare key to self.name or the pin's canonical id used
elsewhere), and if it does not match, log or raise an "unknown pin" / "invalid
key" error so callers know they addressed the wrong pin; only when the key
matches this pin should you keep the existing read-only message for attempts to
write to this input pin.
```python
"""
Simulates a single extruder with a thermal mass. Temperature is updated based on heater state and cooling over time.
Extrusion and according cooling isn't implemented

Sample Config:
[adc_temperature extruder_adc]
temperature1: 0
voltage1: 0
temperature2: 500
voltage2: 3.3

[extruder]
step_pin: gpiochip4/gpio0
dir_pin: gpiochip4/gpio1
enable_pin: gpiochip4/gpio2
heater_pin: gpiochip4/gpio3
sensor_pin: analog1
sensor_type: extruder_adc
adc_voltage: 3.3
control: watermark
microsteps: 1

Don't use any invertions for pins
"""
```
Fix docstring typos for clarity.
Small wording fixes would help readability (e.g., “inversions” instead of “invertions”, and “corresponding cooling” instead of “according cooling”).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/single_extruder.py` around lines 7 - 29,
Update the module docstring in single_extruder.py to fix typos and clarify
wording: change "invertions" to "inversions", replace "according cooling" with
"corresponding cooling", and apply any small wording tweaks for readability in
the top-of-file docstring and sample config comments so the description of
temperature updates, cooling, and pin usage reads clearly.
5. Run `docker compose up -d --build` to build the docker image and start the
   container in detached mode

Alternatively you can execute `docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
Use a consistent Docker Compose command form.
Line 47 uses docker-compose ... while the rest of the doc uses docker compose .... Keeping one style avoids setup confusion on systems that only have the plugin command.
Suggested doc fix

```diff
-Alternatively you can execute `docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
+Alternatively you can execute `docker compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```markdown
Alternatively you can execute `docker compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 47, The README contains an inconsistent Docker Compose
command using the legacy form "docker-compose -f docker-compose.build.yml -f
docker-compose.yml up -d --build"; replace that instance with the plugin form
"docker compose -f docker-compose.build.yml -f docker-compose.yml up -d --build"
to match the rest of the document and ensure systems that only have the "docker
compose" plugin work; verify other occurrences of "docker-compose" in the README
and make them consistent to the chosen style.
```diff
 * Get all container IDs and status: `docker ps -a`
 * Get only the ID of running containers: `docker ps`
-* Access a containers shell: `docker exec -it <CONTAINER ID> bash`
-* Start/Restart/Stop a container: `docker container start/restart/stop <CONTAINER ID>`
-* Remove a container: `docker container rm <CONTAINER ID>`
+* Access a containers shell: `docker exec -it printer bash`
+* Start/Restart/Stop a container: `docker container start/restart/stop printer`
+* Remove a container: `docker container rm printer`

 ## Using supervisorctl

 1. Access the containers shell: `docker exec -it printer bash`
 2. Use `supervisorctl status` to inspect service status
 3. Start/restart/stop all services `supervisorctl start/restart/stop all`
 4. Start/restart/stop a specific service `supervisorctl start/restart/stop <service_name>`
```
Replace slash-delimited placeholders with executable commands.
Lines 61-69 show combined verbs (start/restart/stop) that are not runnable commands as written.
Suggested doc fix

```diff
-* Start/Restart/Stop a container: `docker container start/restart/stop printer`
+* Start a container: `docker container start printer`
+* Restart a container: `docker container restart printer`
+* Stop a container: `docker container stop printer`
@@
-3. Start/restart/stop all services `supervisorctl start/restart/stop all`
-4. Start/restart/stop a specific service `supervisorctl start/restart/stop <service_name>`
+3. Start all services: `supervisorctl start all`
+4. Restart all services: `supervisorctl restart all`
+5. Stop all services: `supervisorctl stop all`
+6. Start one service: `supervisorctl start <service_name>`
+7. Restart one service: `supervisorctl restart <service_name>`
+8. Stop one service: `supervisorctl stop <service_name>`
```
+8. Stop one service: `supervisorctl stop <service_name>`📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| * Get all container IDs and status: `docker ps -a` | |
| * Get only the ID of running containers: `docker ps` | |
| * Access a containers shell: `docker exec -it <CONTAINER ID> bash` | |
| * Start/Restart/Stop a container: `docker container start/restart/stop <CONTAINER ID>` | |
| * Remove a container: `docker container rm <CONTAINER ID>` | |
| * Access a containers shell: `docker exec -it printer bash` | |
| * Start/Restart/Stop a container: `docker container start/restart/stop printer` | |
| * Remove a container: `docker container rm printer` | |
| ## Using supervisorctl | |
| 1. Access the containers shell: `docker exec -it printer bash` | |
| 2. Use `supervisorctl status` to inspect service status | |
| 3. Start/restart/stop all services `supervisorctl start/restart/stop all` | |
| 4. Start/restart/stop a specific service `supervisorctl start/restart/stop <service_name>` | |
| * Get all container IDs and status: `docker ps -a` | |
| * Get only the ID of running containers: `docker ps` | |
| * Access a containers shell: `docker exec -it printer bash` | |
| * Start a container: `docker container start printer` | |
| * Restart a container: `docker container restart printer` | |
| * Stop a container: `docker container stop printer` | |
| * Remove a container: `docker container rm printer` | |
| ## Using supervisorctl | |
| 1. Access the containers shell: `docker exec -it printer bash` | |
| 2. Use `supervisorctl status` to inspect service status | |
| 3. Start all services: `supervisorctl start all` | |
| 4. Restart all services: `supervisorctl restart all` | |
| 5. Stop all services: `supervisorctl stop all` | |
| 6. Start one service: `supervisorctl start <service_name>` | |
| 7. Restart one service: `supervisorctl restart <service_name>` | |
| 8. Stop one service: `supervisorctl stop <service_name>` |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 58 - 69, Replace the slash-delimited placeholder
verbs with distinct, runnable command examples: change occurrences like `docker
container start/restart/stop printer` to three separate commands (`docker
container start printer`, `docker container restart printer`, `docker container
stop printer`) and similarly expand `supervisorctl start/restart/stop all` and
`supervisorctl start/restart/stop <service_name>` into their individual forms
(`supervisorctl start all`, `supervisorctl restart all`, `supervisorctl stop
all` and `supervisorctl start <service_name>`, `supervisorctl restart
<service_name>`, `supervisorctl stop <service_name>`), ensuring the README shows
each executable command instead of slash-separated placeholders.
Disclaimer upfront:
This MR is big and completely changes how the virtual-klipper-printer works.
I completely understand if these changes are not accepted, but I see a real benefit in this alternative implementation and want to share it with you.
Implementation reasoning
The current implementation utilizes `simulavr`, emulating the AVR version of klipper. simulavr doesn't simulate in realtime; observations show slowdowns of at least factor 6 (target speed is 20 MHz, real execution speed is 3 MHz on a top-spec'd Macbook M4).
This leads to timing issues like #43.
Additionally, executing simulavr requires 100% CPU for one CPU core.
While attempting to fix the "Timer too close" issue, I stumbled over the possibility of compiling klipper with Linux as a build target (I guess intended for Raspberry Pi or other SoCs). The compiled output can be executed directly inside the Docker container. Because of missing GPIO/analog devices, this doesn't work out of the box though. On Linux, GPIO pins and analog inputs are mounted into the Linux directory tree and accessed via syscall functions, i.e. "open", "read", "write". Example: https://github.com/Klipper3d/klipper/blob/master/src/linux/analog.c#L30
With a shared object written in C and loaded via LD_PRELOAD, these syscalls can be intercepted; accesses to an analog channel or GPIO device are redirected to a printer simulator implemented in Python.
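A minimal sketch of that interception mechanism (not the PR's actual `hardware_bridge_hook.c`): the overriding `open` forwards unrelated paths via `dlsym(RTLD_NEXT, ...)` and diverts gpiochip paths, here to `/dev/null` as a stand-in for the simulator connection:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

typedef int (*open_fn)(const char *, int, ...);

// Shadows libc's open(): direct calls in this binary (or, when built as a
// .so and injected with LD_PRELOAD, calls from the whole process) land here.
int open(const char *path, int flags, ...) {
    open_fn real_open = (open_fn)dlsym(RTLD_NEXT, "open");
    mode_t mode = 0;
    if (flags & O_CREAT) {  // the third argument only exists for O_CREAT
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, unsigned int);
        va_end(ap);
    }
    if (path && strncmp(path, "/dev/gpiochip", strlen("/dev/gpiochip")) == 0) {
        fprintf(stderr, "[fake_hw] intercepted %s\n", path);
        // stand-in: hand back a harmless fd instead of touching real hardware
        return real_open("/dev/null", O_RDONLY);
    }
    return real_open(path, flags, mode);
}
```

For real use, the hook would be compiled as a shared object and injected, e.g. `gcc -shared -fPIC hook.c -o hook.so -ldl` followed by `LD_PRELOAD=./hook.so klipper.elf` (the actual build lives in the PR's Makefile).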
The printer simulator simulates the movement of the stepper motors/endstops, making e.g. a realistic axis homing possible. Additionally, heatbeds and extruders simulate heating/cooling. Information extracted from the printer simulator is used to generate a webcam image.
The complete system requires practically no CPU, as the extra emulation step isn't necessary.
See this mainsail screenshot showing the virtual klipper printer in action:

Alternative solutions considered/evaluated:
* Isn't fully implemented, and communication couldn't be established between klippy and klipper
* Complexity is significantly higher; Python is more intuitive to program in
* Introduces a maintenance dependency when the klipper source code changes
* Necessitates privileged docker container access, opening a potential security risk
Coding agent generated commit message:
Add printer hardware simulation components and server
Open todos:
* Tagging the image `:latest` isn't best practice. If a user is not on latest main but pulls a newly built docker image after this MR gets merged, this will break the setup