feat: motion-graphics video pipeline (HTML/GSAP → MP4 + GitTools + RenderBackendProtocol) #27
Conversation
📝 Walkthrough

Adds a new motion-graphics subsystem: a RenderBackend protocol and an HTML/GSAP Playwright+FFmpeg backend, safe GitTools, an agent factory and team preset, a bounded render retry loop, examples, tests, and optional video-motion extras.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Motion Graphics<br/>Agent
    participant RenderTools as RenderTools
    participant RenderIter as render_iterate
    participant Backend as HtmlRenderBackend
    participant Browser as Playwright<br/>Browser
    participant FFmpeg as FFmpeg
    Agent->>RenderTools: render_composition(output_name,fps,quality)
    RenderTools->>RenderIter: render_iterate(write,lint,render,patch,max_retries)
    RenderIter->>Backend: lint(workspace, strict)
    alt lint ok
        RenderIter->>Backend: render(workspace, opts)
        Backend->>Browser: launch headless Chromium<br/>load file://index.html<br/>wait for window.__timelines
        Browser->>Browser: compute duration<br/>seek timelines per frame<br/>capture screenshots
        Backend->>FFmpeg: encode frames to MP4 (ffmpeg subprocess)
        FFmpeg-->>Backend: return MP4 bytes + path
        Backend-->>RenderIter: RenderResult(ok=True,...)
    else lint/render failure
        RenderIter->>RenderIter: patch_fn(error) and retry if attempts remain
    end
    RenderIter-->>RenderTools: final RenderResult
    RenderTools-->>Agent: {ok, output_path, size_kb, bytes_}
```
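The retry flow in this diagram can be sketched roughly as follows; the function name `render_iterate` comes from the PR, but the signatures and dict-shaped results here are simplified assumptions, not the actual `_render_loop.py` implementation:

```python
import asyncio


async def render_iterate(write, lint, render, patch, max_retries=3):
    """Bounded write -> lint -> render -> patch loop (sketch of the diagram's
    flow; real callables and result types in _render_loop.py may differ)."""
    await write()
    last_error = None
    for attempt in range(max_retries + 1):
        lint_result = await lint()
        if not lint_result["ok"]:
            last_error = "Lint errors: " + "; ".join(lint_result["messages"])
        else:
            render_result = await render()
            if render_result["ok"]:
                return render_result
            last_error = render_result.get("stderr") or "render failed"
        # Only ask the patcher for a fix when another attempt remains
        if attempt < max_retries:
            await patch(last_error)
    return {"ok": False, "stderr": f"Exceeded retries. Last error: {last_error}"}
```

The loop is bounded by `max_retries + 1` total attempts, and the patch step is skipped on the final attempt so the last error surfaces unchanged.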
```mermaid
sequenceDiagram
    participant Coordinator as Coordinator<br/>Agent
    participant Researcher as Researcher<br/>Agent
    participant CodeExplorer as CodeExplorer<br/>Agent
    participant Animator as Animator<br/>Agent
    participant GitTools as GitTools
    participant Backend as HtmlRenderBackend
    participant SearchWeb as Search Web
    Coordinator->>Researcher: request web research
    Researcher->>SearchWeb: query
    SearchWeb-->>Researcher: results
    Researcher-->>Coordinator: findings
    Coordinator->>CodeExplorer: request code context
    CodeExplorer->>GitTools: clone_repo / read_file / git_log
    GitTools-->>CodeExplorer: repo contents
    CodeExplorer-->>Coordinator: code insights
    Coordinator->>Animator: create composition + instruct
    Animator->>Backend: lint + render (via RenderTools/render_iterate)
    Backend-->>Animator: RenderResult(ok, path, bytes)
    Coordinator->>Coordinator: validate concrete output path<br/>(leader guard)
    alt valid path
        Coordinator-->>User: approved render path + bytes
    else invalid / fabricated
        Coordinator-->>User: reject / surface error
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
@claude — PR is open. Local test run: 74/78 unit tests pass. Please fix the 4 remaining failures below (all minor test/impl coordination issues) and push to this same branch.

Failures to fix:
1.
Claude finished @MervinPraison's task — View job

Test Fixes Completed ✅

Fixed all 4 failing unit tests as requested. Changes have been pushed to the existing branch.

Fixes applied: Fix #1 - Fix #2 - Fix #3 - Fix #4 -

All fixes verified to work correctly. The 4 originally failing tests should now pass.

Commit:
Code Review
This pull request introduces a motion graphics video pipeline to the praisonai-tools library, enabling the generation of videos from natural language prompts using HTML/GSAP animations. The implementation includes a new GitTools utility for repository exploration, an agent-centric team structure, and a rendering backend powered by Playwright and FFmpeg. The review feedback identifies a critical path traversal vulnerability in the GitTools path validation logic and a security placeholder in the HTML backend that requires a functional implementation. Additionally, it is recommended to return file paths instead of full video bytes in the rendering tool to improve memory efficiency and prevent potential context window issues.
```python
try:
    from pathlib import Path
    resolved = Path(safe_path).resolve()
    # If it tries to go outside current dir, reject
    if str(resolved).startswith("/") and not str(resolved).startswith(str(Path.cwd())):
        # This is an absolute path that escapes - but we allow relative paths
        pass
except Exception:
    # If path resolution fails, reject
    raise ValueError("Invalid file path")
```
The additional safety check for path traversal is flawed. The pass statement on line 457 allows potentially malicious paths to be considered valid. If a path, after resolving symlinks, points outside the current working directory, it should be rejected by raising a ValueError. This could lead to a path traversal vulnerability.
```diff
- try:
-     from pathlib import Path
-     resolved = Path(safe_path).resolve()
-     # If it tries to go outside current dir, reject
-     if str(resolved).startswith("/") and not str(resolved).startswith(str(Path.cwd())):
-         # This is an absolute path that escapes - but we allow relative paths
-         pass
- except Exception:
-     # If path resolution fails, reject
-     raise ValueError("Invalid file path")
+ # Additional safety: resolve path and check it doesn't escape
+ try:
+     from pathlib import Path
+     # Resolve the path, which will handle symlinks
+     resolved_path = Path(safe_path).resolve()
+     # If the resolved path is not within the current working directory, reject it.
+     if not str(resolved_path).startswith(str(Path.cwd())):
+         raise ValueError("Invalid file path: potential path traversal")
+ except Exception:
+     # If path resolution fails for any reason, reject it.
+     raise ValueError("Invalid file path")
```
```python
def _is_safe_workspace(self, workspace: Path) -> bool:
    """Check if workspace path is safe (prevents path traversal)."""
    try:
        workspace_abs = workspace.resolve()
        # Basic check - workspace should be under a temp directory or user's project
        return True  # Add more sophisticated checks as needed
    except Exception:
        return False
```
The _is_safe_workspace method is a security placeholder and currently does not perform any validation, always returning True. This could allow rendering from sensitive directories on the filesystem. A proper implementation should validate that the workspace path is within an expected, safe location (e.g., a temporary directory or a user-configured project directory). For example, you could check if the resolved workspace path is a subdirectory of tempfile.gettempdir().
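A containment check along the lines the review suggests might look like this; the standalone function signature, the chosen allowed roots (system temp dir and current working directory), and the strict-resolve policy are illustrative assumptions:

```python
import tempfile
from pathlib import Path
from typing import Iterable, Optional


def is_safe_workspace(workspace: Path,
                      allowed_roots: Optional[Iterable[Path]] = None) -> bool:
    """Accept a workspace only if its resolved path sits under an allowed root."""
    roots = list(allowed_roots) if allowed_roots else [
        Path(tempfile.gettempdir()),  # assumption: temp dirs are acceptable
        Path.cwd(),                   # assumption: the project dir is acceptable
    ]
    try:
        # strict=True follows symlinks and fails on non-existent paths,
        # so a symlink escaping the root is caught by the containment test below
        resolved = workspace.resolve(strict=True)
    except (OSError, RuntimeError):
        return False
    return any(resolved.is_relative_to(root.resolve()) for root in roots)
```

`Path.is_relative_to` requires Python 3.9+; on older versions the same check can be done with `Path.relative_to` inside a try/except.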
```python
return {
    "ok": result.ok,
    "output_path": str(result.output_path) if result.output_path else None,
    "size_kb": result.size_kb,
    "stderr": result.stderr,
    "bytes": result.bytes_
}
```
The render_composition tool returns the entire video content as bytes. For large videos, this can lead to high memory consumption and potentially exceed the context window limits of the agent framework. It would be more memory-efficient to return only the output_path and have other tools read the file from disk if needed.
```python
return {
    "ok": result.ok,
    "output_path": str(result.output_path) if result.output_path else None,
    "size_kb": result.size_kb,
    "stderr": result.stderr
}
```
Pull request overview
Implements the new `praisonai_tools.video.motion_graphics` pipeline (HTML/GSAP → MP4 via Playwright+FFmpeg) plus a read-only `GitTools` toolkit, packaged behind a new optional `[video-motion]` extra.

Changes:
- Added motion-graphics rendering protocol + HTML render backend + agent/team presets under `praisonai_tools/video/motion_graphics/`.
- Added `GitTools` for safe-ish read-only git operations with on-demand clone/pull.
- Added unit/integration/smoke tests and examples; introduced `[video-motion]` optional dependencies in `pyproject.toml`.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 14 comments.
Show a summary per file
| File | Description |
|---|---|
| `pyproject.toml` | Adds `[video-motion]` optional extra (playwright, imageio-ffmpeg). |
| `praisonai_tools/video/__init__.py` | Exposes `motion_graphics` under the video namespace via lazy import. |
| `praisonai_tools/video/motion_graphics/__init__.py` | New package entrypoint with lazy exports for protocol/backend/agent/team. |
| `praisonai_tools/video/motion_graphics/protocols.py` | Defines `RenderBackendProtocol` + `RenderOpts`/`LintResult`/`RenderResult`. |
| `praisonai_tools/video/motion_graphics/backend_html.py` | Implements Playwright-driven frame capture + FFmpeg encoding backend. |
| `praisonai_tools/video/motion_graphics/agent.py` | Adds `create_motion_graphics_agent()` and `RenderTools` wrapper. |
| `praisonai_tools/video/motion_graphics/team.py` | Adds `motion_graphics_team()` preset wiring coordinator/research/code/animator agents. |
| `praisonai_tools/video/motion_graphics/skill.py` | Adds compact HTML/GSAP authoring skill text used by the animator agent. |
| `praisonai_tools/video/motion_graphics/_render_loop.py` | Adds bounded write→lint→render→patch retry helper. |
| `praisonai_tools/tools/git_tools.py` | Adds `GitTools` with clone/update + read-only git inspection utilities. |
| `tests/unit/video/test_motion_graphics_protocols.py` | Unit tests for protocol/dataclasses (currently assumes optional deps). |
| `tests/unit/video/test_motion_graphics_agent.py` | Unit tests for agent factory/backend resolution (currently assumes optional deps). |
| `tests/unit/video/test_html_backend.py` | Unit tests for HTML backend lint/render helpers (dependency-mocked). |
| `tests/unit/tools/test_git_tools.py` | Unit tests for `GitTools` parsing/safety/git operations. |
| `tests/integration/test_motion_graphics_team.py` | Integration tests for `motion_graphics_team()` wiring. |
| `tests/smoke/test_motion_graphics_smoke.py` | Smoke tests for imports/basic behaviors with dependency-based skips. |
| `examples/motion_graphics_example.py` | End-to-end usage examples for backend, agent/team, and `GitTools`. |
| `examples/motion_graphics_team.yaml` | YAML example showing how to configure a motion-graphics team. |
```python
# Check for absolute paths
if file_path.startswith("/"):
    raise ValueError("Invalid file path")

# Remove any leading slashes
safe_path = file_path.lstrip("/")
```
_validate_file_path() currently raises on leading "/" (treating it as absolute) but later tries to support stripping leading slashes. This contradicts the unit test expecting "/src/main.py" to be accepted and normalized. Consider stripping leading slashes first (or using a proper relative-path check via PurePosixPath/PurePath) and then rejecting truly-absolute paths / traversal segments.
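A strip-first validator of the kind suggested could be sketched like this (a hypothetical helper, not the PR's actual `_validate_file_path`):

```python
from pathlib import PurePosixPath


def validate_file_path(file_path: str) -> str:
    """Normalize leading slashes first, then reject traversal (illustrative)."""
    # Strip leading slashes so "/src/main.py" normalizes to "src/main.py",
    # matching the behavior the unit test expects
    candidate = PurePosixPath(file_path.lstrip("/"))
    # Reject anything still absolute or containing a traversal segment
    if candidate.is_absolute() or ".." in candidate.parts:
        raise ValueError("Invalid file path")
    return str(candidate)
```

Ordering matters: normalizing before the absolute-path check is what resolves the contradiction the comment describes.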
```python
# Build GitHub URL with token if available
if self.github_token:
    repo_url = f"https://{self.github_token}@github.com/{owner}/{repo}.git"
else:
    repo_url = f"https://github.com/{owner}/{repo}.git"
```
When building the clone URL from owner/repo, the token is embedded directly in the URL (https://<token>@github.com/...). This can leak credentials via process listings, logs, or error messages. Prefer passing credentials via GIT_ASKPASS / GIT_TERMINAL_PROMPT=0, git -c http.extraheader=..., or relying on an existing credential helper instead of putting the token in the URL.
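One way to keep the token out of the URL, as suggested, is to pass it via `http.extraHeader` with `GIT_TERMINAL_PROMPT=0`. This sketch assumes GitHub's `x-access-token` basic-auth convention; note the base64 value still appears in the process list, so a `GIT_ASKPASS` helper remains the stricter option:

```python
import base64
import os
import subprocess
from typing import List


def build_clone_cmd(owner: str, repo: str, token: str, dest: str) -> List[str]:
    """Build a git clone command that passes credentials via an HTTP header
    instead of embedding the raw token in the clone URL (sketch)."""
    url = f"https://github.com/{owner}/{repo}.git"
    # GitHub accepts basic auth with "x-access-token" as the username
    basic = base64.b64encode(f"x-access-token:{token}".encode()).decode()
    return [
        "git",
        "-c", f"http.extraHeader=Authorization: Basic {basic}",
        "clone", "--depth", "1", url, dest,
    ]


def run_clone(cmd: List[str]) -> None:
    # GIT_TERMINAL_PROMPT=0 stops git from interactively prompting for credentials
    subprocess.run(cmd, check=True, env={**os.environ, "GIT_TERMINAL_PROMPT": "0"})
```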
```python
def git_show(self, repo_name: str, commit: str, path: Optional[str] = None) -> str:
    """Show git commit or file content.

    Args:
        repo_name: Repository name
        commit: Commit hash
        path: Optional file path

    Returns:
        Commit or file content
    """
    repo_path = self._get_repo_path(repo_name)

    if path:
        safe_path = self._validate_file_path(path)
        ref = f"{commit}:{safe_path}"
    else:
        ref = commit

    try:
        return self._run_git_command(["show", ref], repo_path)
    except subprocess.CalledProcessError as e:
        raise RuntimeError(f"Failed to show git object: {e}")
```
Several methods accept user-controlled commit / commit1 / commit2 strings and pass them straight to git (e.g., git_show() builds ref and executes git show <ref>). A value starting with "-" (or other option-like patterns) can be interpreted as a git option rather than a revision, which is a form of argument injection. Please validate commit-ish inputs (e.g., reject leading '-' and restrict to an allowlist of characters/patterns such as hashes, HEAD, HEAD~N, refs/...) before constructing git commands.
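A commit-ish validator of the kind described might look like the following; the exact character allowlist is an assumption and should be tuned to the refs you actually need to support:

```python
import re

# Allow hashes, HEAD/HEAD~N/HEAD^, and simple ref names.
# The (?!-) lookahead rejects option-like tokens such as "--help" or "-n".
_COMMITISH_RE = re.compile(r"^(?!-)[A-Za-z0-9._/~^-]{1,100}$")


def validate_commitish(value: str) -> str:
    """Reject commit/ref strings that git could parse as options (sketch)."""
    if not _COMMITISH_RE.match(value):
        raise ValueError(f"Invalid commit-ish: {value!r}")
    return value
```

Calling this on `commit`, `commit1`, `commit2`, and `branch` before building the argument list closes the injection vector without changing any git behavior for legitimate refs.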
```python
import json
import shutil
import subprocess
import tempfile
from pathlib import Path
from typing import Optional
```
There are unused imports at the top of this module (json, subprocess, and Optional are never referenced). Please remove them to keep the backend lightweight and avoid lint warnings.
```diff
- import json
- import shutil
- import subprocess
- import tempfile
- from pathlib import Path
- from typing import Optional
+ import shutil
+ import tempfile
+ from pathlib import Path
```
```python
# Lint failed - try to patch
lint_error = "; ".join(lint_result.messages)
await patch_fn(f"Lint errors: {lint_error}")
continue
```
If linting fails, last_error is never updated, so the final failure message can end with an empty "Last error" even though lint errors occurred. Also, on the final attempt the code still calls patch_fn(...) and continue, even though no retries remain. Consider setting last_error on lint failures and only patching when attempt < max_retries.
```diff
- # Lint failed - try to patch
- lint_error = "; ".join(lint_result.messages)
- await patch_fn(f"Lint errors: {lint_error}")
- continue
+ # Lint failed - record error and retry if possible
+ lint_error = "; ".join(lint_result.messages)
+ last_error = f"Lint errors: {lint_error}"
+ if attempt < max_retries:
+     await patch_fn(last_error)
+ continue
```
```python
await page.route("**/*", self._handle_network_request)

# Load the HTML file
file_url = f"file://{workspace.absolute()}/index.html"
```
The file_url is constructed via string formatting (file://{workspace.absolute()}/index.html). This is not portable on Windows (backslashes) and can mis-handle special characters. Prefer using index_html.resolve().as_uri() to build a correct file:// URL.
```diff
- file_url = f"file://{workspace.absolute()}/index.html"
+ index_html = (workspace / "index.html").resolve()
+ file_url = index_html.as_uri()
```
```python
backend = _resolve_backend("html")

from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend
assert isinstance(backend, HtmlRenderBackend)
```
_resolve_backend("html") returns a real HtmlRenderBackend() instance, which requires optional Playwright/imageio-ffmpeg. These unit tests will fail in environments that don’t install the optional [video-motion] extra. Consider patching HtmlRenderBackend (or its imports) in praisonai_tools.video.motion_graphics.agent during the test, or using a stub backend instance for _resolve_backend coverage.
```diff
- backend = _resolve_backend("html")
- from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend
- assert isinstance(backend, HtmlRenderBackend)
+ with patch("praisonai_tools.video.motion_graphics.agent.HtmlRenderBackend") as mock_backend_cls:
+     backend = _resolve_backend("html")
+     assert backend is mock_backend_cls.return_value
+     mock_backend_cls.assert_called_once_with()
```
```python
# Check code explorer
code_explorer = team.agents[2]
assert code_explorer.name == "code_explorer"
assert "code exploration specialist" in researcher.instructions.lower()
```
In the default team configuration test, the assertion for the code explorer’s instructions is checking researcher.instructions instead of code_explorer.instructions, so it can pass/fail incorrectly. Update the assertion to validate the code_explorer agent’s instructions.
```diff
- assert "code exploration specialist" in researcher.instructions.lower()
+ assert "code exploration specialist" in code_explorer.instructions.lower()
```
```javascript
// Animate SVG paths
tl.fromTo("#path",
  { drawSVG: "0%" },
  { duration: 2, drawSVG: "100%", ease: "power2.inOut" }
```
The skill’s SVG example uses drawSVG, but the required HTML snippet only loads core gsap.min.js and does not include the (paid/club) DrawSVGPlugin. This guidance can cause the Animator to generate compositions that fail at runtime. Consider replacing the example with a plugin-free SVG stroke animation pattern (e.g., stroke-dasharray/stroke-dashoffset) or explicitly documenting the additional dependency (and allowlisting its source) if you intend to support it.
```diff
- // Animate SVG paths
- tl.fromTo("#path",
-   { drawSVG: "0%" },
-   { duration: 2, drawSVG: "100%", ease: "power2.inOut" }
+ // Animate SVG paths without extra plugins
+ // Set the SVG path's stroke-dasharray and stroke-dashoffset in markup/CSS
+ tl.fromTo("#path",
+   { strokeDasharray: 1000, strokeDashoffset: 1000 },
+   { duration: 2, strokeDashoffset: 0, ease: "power2.inOut" }
```
```python
browser = await p.chromium.launch(
    headless=True,
    args=[
        "--disable-dev-shm-usage",
        "--no-sandbox",
        "--disable-setuid-sandbox",
        "--disable-background-timer-throttling",
        "--disable-backgrounding-occluded-windows",
        "--disable-renderer-backgrounding"
```
Chromium is launched with --no-sandbox / --disable-setuid-sandbox unconditionally. This weakens the security boundary if untrusted HTML/JS is rendered. Consider enabling the sandbox by default and only adding these flags when required (e.g., when running as root in a container), or clearly documenting that callers must provide an external sandbox.
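A conditional alternative, keeping the sandbox on unless the process genuinely cannot use it, could be sketched as follows (the uid-0 heuristic is an assumption, not the backend's current behavior):

```python
import os
from typing import List, Optional


def chromium_launch_args(disable_sandbox: Optional[bool] = None) -> List[str]:
    """Return Chromium args, adding --no-sandbox only when required (sketch)."""
    args = [
        "--disable-dev-shm-usage",
        "--disable-background-timer-throttling",
        "--disable-backgrounding-occluded-windows",
        "--disable-renderer-backgrounding",
    ]
    if disable_sandbox is None:
        # Heuristic: the setuid sandbox generally cannot be used as uid 0
        # (e.g. root inside a container); elsewhere, keep the sandbox on
        disable_sandbox = hasattr(os, "geteuid") and os.geteuid() == 0
    if disable_sandbox:
        args += ["--no-sandbox", "--disable-setuid-sandbox"]
    return args
```

The returned list can then be passed as `p.chromium.launch(headless=True, args=chromium_launch_args())`.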
Actionable comments posted: 17
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
🟡 Minor comments (7)
praisonai_tools/tools/git_tools.py, lines 315-317: ⚠️ Potential issue | 🟡 Minor

Fix the Ruff E713 lint error: use `not in` directly here.

🧹 Proposed fix

```diff
- if line and not "origin/HEAD" in line:
+ if line and "origin/HEAD" not in line:
```

🤖 Prompt for AI Agents: Verify the finding against the current code and only fix it if needed. In `praisonai_tools/tools/git_tools.py` around lines 315-317, the conditional uses `not "origin/HEAD" in line`, which triggers Ruff E713; rewrite it as `if line and "origin/HEAD" not in line:`. The rest of the logic that strips `origin/` and assigns to `branch` stays unchanged.
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/tools/git_tools.py` around lines 315 - 317, The conditional in the loop uses the deprecated pattern 'not "origin/HEAD" in line' which triggers Ruff E713; update the check in the loop (inside the function that iterates over branch lines in git_tools.py) to use the proper operator order like: check that line is truthy and that "origin/HEAD" not in line (e.g., replace the existing if line and not "origin/HEAD" in line: with an equivalent that uses "origin/HEAD" not in line). Ensure the rest of the logic that strips "origin/" and assigns to branch stays unchanged.examples/motion_graphics_example.py-34-35 (1)
34-35:⚠️ Potential issue | 🟡 MinorDo not render the demo artifact into an auto-deleted directory.
The example prints the MP4 path as a successful output, but the file is removed as soon as the
TemporaryDirectoryblock exits. Use a persistent demo output directory, or explicitly warn that the artifact is temporary.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/motion_graphics_example.py` around lines 34 - 35, The example currently renders the MP4 into a TemporaryDirectory (tempfile.TemporaryDirectory()) assigned to workspace = Path(tmpdir), which is deleted when the context exits; change it to use a persistent demo output directory (e.g., create a "demo_outputs" folder with Path(...).mkdir(parents=True, exist_ok=True)) and assign workspace to that persistent Path, or if you must keep a temp directory, explicitly print/log a clear warning that the printed MP4 path is temporary and will be deleted when the context manager exits so callers aren’t misled.examples/motion_graphics_team.yaml-103-118 (1)
examples/motion_graphics_team.yaml, lines 103-118: ⚠️ Potential issue | 🟡 Minor

Use one render output workspace consistently. `RenderTools` writes to `./renders`, while the top-level output workspace says `./motion_graphics_output`; the task text also uses `renders/...` examples. Pick one location so coordinator validation checks paths that the renderer can actually produce.

Suggested alignment

```diff
  - name: "RenderTools"
    config:
      backend: "html"
      workspace: "./renders"
      max_retries: 3
@@
  output:
    format: "mp4"
    quality: "standard"
    fps: 30
-   workspace: "./motion_graphics_output"
+   workspace: "./renders"
```

🤖 Prompt for AI Agents: Verify the finding against the current code and only fix it if needed. The config uses two different output workspaces (`RenderTools.config.workspace = "./renders"` vs top-level `output.workspace = "./motion_graphics_output"`), which will break coordinator validation; pick a single canonical workspace and update all references (`RenderTools.config.workspace`, top-level `output.workspace`, and any task text paths) so the renderer, coordinator, and FileTools all point to the same directory.
tests/unit/video/test_html_backend.py, lines 230-239: ⚠️ Potential issue | 🟡 Minor

`test_is_safe_workspace` is tautological against the current stub. Both assertions pass only because `_is_safe_workspace` unconditionally returns `True` (see `backend_html.py` lines 128-135). Once the traversal check is actually implemented, this test will provide no signal — and, more importantly, it does not currently cover any negative case (e.g., a path with `..` escaping the allowed root, a symlink pointing outside, a non-existent workspace).

Please add negative cases once the containment check lands, for example:

```python
def test_is_safe_workspace_rejects_escape(self, tmp_path):
    outside = tmp_path / "a" / ".." / ".." / "etc"
    assert self.backend._is_safe_workspace(outside) is False

def test_is_safe_workspace_rejects_nonexistent(self):
    assert self.backend._is_safe_workspace(Path("/does/not/exist/xyz")) is False
```
praisonai_tools/video/motion_graphics/backend_html.py, lines 266-276: ⚠️ Potential issue | 🟡 Minor

The FFmpeg input pattern assumes frame-0 alignment and a single sequential parent dir. `-i frame_paths[0].parent / "frame_%06d.png"` relies on the frames having been written with contiguous zero-padded indices starting at `000000` in one directory. That is true for `_render_with_playwright` today, but it is an implicit coupling that will silently break if a caller (or future refactor) supplies a sparse or differently-named frame list: ffmpeg will stop at the first gap and produce a truncated video with returncode 0, so the caller won't notice. Consider either adding `-start_number 0` and asserting contiguity, or feeding frames via the `concat` demuxer / stdin pipe so the contract matches the `frame_paths` argument you actually receive.
praisonai_tools/video/motion_graphics/backend_html.py, lines 285-295: ⚠️ Potential issue | 🟡 Minor

The timeout path leaks the ffmpeg process and loses the `from err` chain. Two issues on the timeout branch:

- `process.kill()` is issued but the process is never awaited. On POSIX this leaves a zombie; more importantly, `communicate()` is abandoned mid-flight and its pipes are not drained, which can cause resource warnings and occasional hangs at interpreter shutdown.
- The `raise RuntimeError(...)` inside the `except` loses the original `TimeoutError` — also flagged by Ruff B904.

🛡️ Proposed fix

```diff
- try:
-     stdout, stderr = await asyncio.wait_for(
-         process.communicate(),
-         timeout=opts.timeout
-     )
- except asyncio.TimeoutError:
-     process.kill()
-     raise RuntimeError("FFmpeg encoding timed out")
+ try:
+     _stdout, stderr = await asyncio.wait_for(
+         process.communicate(),
+         timeout=opts.timeout,
+     )
+ except asyncio.TimeoutError as err:
+     process.kill()
+     try:
+         await process.wait()
+     except Exception:
+         pass
+     raise RuntimeError("FFmpeg encoding timed out") from err
```
praisonai_tools/video/motion_graphics/backend_html.py, lines 163-165: ⚠️ Potential issue | 🟡 Minor

File URL construction must use `Path.as_uri()` for cross-platform compatibility. The current code `f"file://{workspace.absolute()}/index.html"` produces malformed file URLs on Windows (backslashes and only two slashes) and fails to percent-encode special characters like spaces on any OS. Python's `Path.as_uri()` correctly generates RFC-compliant file URLs with proper escaping across all platforms.

♻️ Proposed fix

```diff
- # Load the HTML file
- file_url = f"file://{workspace.absolute()}/index.html"
- await page.goto(file_url, wait_until="networkidle")
+ # Load the HTML file
+ file_url = (workspace / "index.html").resolve().as_uri()
+ await page.goto(file_url, wait_until="networkidle")
```

Note: `route.abort()` terminates requests synchronously (counts as finished network activity), so `wait_until="networkidle"` works correctly even with aborted non-allowlisted resources.
🧹 Nitpick comments (2)
tests/unit/video/test_html_backend.py, lines 313-348:

`test_encode_frames_success` doesn't verify the `-i` pattern argument. The success test asserts presence of `-y`, `-framerate`, `-crf`, etc., but never checks that the `-i` value is the `frame_%06d.png` pattern under the correct parent directory. Given the implicit coupling noted on `backend_html.py` lines 266-276, asserting this here would catch regressions where a refactor changes the directory assumption or indexing.

```python
assert str(frame_dir / "frame_%06d.png") in call_args
```
297-311: Redundantif imageio_ffmpeg:guard.
__init__already raisesImportErrorwhenimageio_ffmpeg is None, so by the time_get_ffmpeg_pathruns the module is guaranteed non-None. Theif imageio_ffmpeg:test is dead code. Also, the silentexcept Exception: pass(Ruff S110) masks real failures fromget_ffmpeg_exe()(e.g., permission errors downloading the binary). Consider logging at debug level before falling through toshutil.which.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 297 - 311, Remove the redundant "if imageio_ffmpeg:" guard in _get_ffmpeg_path (imageio_ffmpeg is guaranteed non-None), call imageio_ffmpeg.get_ffmpeg_exe() directly, and stop silently swallowing all exceptions; instead catch the exception as e and emit a debug log containing the error before falling back to shutil.which("ffmpeg") so failures (e.g., download/permission errors from imageio_ffmpeg.get_ffmpeg_exe) are visible; keep the FileNotFoundError("FFmpeg not found") raise if neither path is available.
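A sketch of the de-duplicated lookup; `resolve_ffmpeg` and its injected module argument are illustrative names for testability, not the backend's actual API:

```python
import logging
import shutil

logger = logging.getLogger(__name__)


def resolve_ffmpeg(imageio_ffmpeg=None) -> str:
    """Prefer the bundled imageio-ffmpeg binary, logging (not hiding) failures."""
    if imageio_ffmpeg is not None:
        try:
            return imageio_ffmpeg.get_ffmpeg_exe()
        except Exception as exc:
            # Surface download/permission errors instead of a bare `pass`.
            logger.debug("imageio_ffmpeg.get_ffmpeg_exe() failed: %s", exc)
    path = shutil.which("ffmpeg")
    if path:
        return path
    raise FileNotFoundError("FFmpeg not found")
```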
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@praisonai_tools/tools/git_tools.py`:
- Around line 268-293: The loop currently treats any non-tab metadata line as a
blame header and calls int(parts[2]), which breaks on lines like "previous ...".
Update the parsing to only accept real porcelain header lines by matching the
header pattern before converting parts[2]: validate that parts[0] is a 40-char
hex commit and parts[2] is all digits (or use a regex like r'^[0-9a-fA-F]{40}
\d+ \d+') and skip the line if it doesn't match; then extract commit_hash and
line_num and continue building blame_data using the existing variables (lines,
i, parts, blame_data, content).
- Around line 186-242: Reject refs/branches that begin with a dash before
calling Git: add a small validator (e.g. _validate_ref_not_option or reuse
_validate_file_path pattern) that checks commit/branch strings do not start with
'-' and raise a clear ValueError/RuntimeError for invalid refs, then call it
from git_diff (validate commit1 and commit2), git_show (validate commit),
read_file (validate commit), _git_clone (validate branch) and _git_pull
(validate branch) so no option-like token is passed to _run_git_command.
In `@praisonai_tools/video/motion_graphics/_render_loop.py`:
- Around line 45-49: When lint fails inside the render loop (where lint_result
is checked), preserve the failure by assigning last_error = lint_error
(constructed from lint_result.messages) so the final stderr contains the
actionable cause, and only invoke patch_fn(f"Lint errors: {lint_error}") if
there will be another render attempt (i.e., do not call patch_fn on the last
iteration); apply the same preservation-and-no-patch-on-final-attempt logic to
the other similar failure block referenced around lines 72-78. Ensure you
reference and update the variables lint_result, lint_error, last_error, and
patch_fn in the render loop so final error output contains the last failure.
In `@praisonai_tools/video/motion_graphics/agent.py`:
- Line 16: The top-level import of HtmlRenderBackend causes backend_html (and
its heavy deps) to be imported on module load; remove the module-level "from
.backend_html import HtmlRenderBackend" and instead import HtmlRenderBackend
lazily inside the _resolve_backend() function's "html" branch where it is
instantiated (the branch that currently constructs HtmlRenderBackend). Update
_resolve_backend() to perform a local import (e.g., "from .backend_html import
HtmlRenderBackend") just before creating the HtmlRenderBackend instance so the
optional dependency is only loaded when "html" is requested.
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 186-215: The screenshots use page.screenshot(full_page=True) which
yields varying image sizes and breaks ffmpeg; change to viewport-only
screenshots by removing full_page=True (or supply an explicit clip matching the
viewport) when capturing frames in the loop that builds frame_paths (see
page.screenshot and temp_path/frame_path usage), and make frame_count
deterministic by computing frames = math.ceil(duration * opts.fps) (or round)
and when seeking use time = min(frame / opts.fps, duration) so the final frame
is included before calling self._encode_frames_to_mp4.
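The deterministic frame math above can be sketched independently of Playwright (the function name is illustrative; this is one way to include a final frame exactly at `duration`):

```python
import math


def frame_schedule(duration: float, fps: int) -> list[float]:
    """Seek times for capture: ceil(duration * fps) steps plus one final
    frame clamped to `duration`, so the timeline's end state is recorded."""
    frames = math.ceil(duration * fps)
    return [min(frame / fps, duration) for frame in range(frames + 1)]
```

Each time value would be seeked on the page's timelines before a viewport-only `page.screenshot(...)` (no `full_page=True`), keeping frame dimensions constant for FFmpeg.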
- Around line 86-89: The current strict-mode check uses a raw substring scan of
content (checking "visibility" in content or "display" in content) which yields
many false positives; update the check inside the same strict branch to only
match actual property keys in script/CSS contexts—either (preferred) detect GSAP
tween property objects by scanning for tl.to/tl.from/tl.fromTo(...) call
argument objects and check for keys "visibility" or "display" there, or at
minimum use a regex that enforces word boundaries and a CSS/JSON property
context (e.g., require surrounding quotes and a colon or object-key pattern)
before appending the message to messages; keep the check gated by strict and
operate on the existing content variable so other logic is unchanged.
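One way to tighten the check, sketched as a word-boundary, property-key regex (the helper name is made up for illustration):

```python
import re

# Match `visibility`/`display` only as a property key: an optional quote,
# the word itself, an optional quote, then a colon -- not arbitrary prose.
_HIDDEN_PROP = re.compile(r'["\']?\b(visibility|display)\b["\']?\s*:')


def flags_hidden_props(content: str) -> bool:
    return bool(_HIDDEN_PROP.search(content))
```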
- Around line 128-135: The _is_safe_workspace implementation currently always
returns True; replace it with a real containment and symlink check: resolve
workspace and the allowed root (use a caller-provided allowed_root or class
attribute), then verify os.path.commonpath([str(allowed_root.resolve()),
str(workspace.resolve())]) == str(allowed_root.resolve()) to ensure workspace is
inside the allowed root, and additionally walk the workspace path (and its
parents up to allowed_root) checking for unexpected symlinks (Path.is_symlink())
to reject escapes; update render() to pass/validate the allowed_root and adjust
test_is_safe_workspace to assert failures for paths that resolve outside or
through symlinks.
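A sketch of a real containment check along these lines (the two-argument signature is illustrative; the backend's method currently takes only `workspace`):

```python
import os
from pathlib import Path


def is_safe_workspace(workspace: Path, allowed_root: Path) -> bool:
    """Accept only paths that resolve inside `allowed_root`, rejecting
    symlinked components between the workspace and the root."""
    root = allowed_root.resolve()
    resolved = workspace.resolve()
    if os.path.commonpath([str(root), str(resolved)]) != str(root):
        return False
    # Walk the unresolved path up to the root, rejecting symlink hops.
    for part in (workspace, *workspace.parents):
        if part == allowed_root:
            break
        if part.is_symlink():
            return False
    return True
```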
- Around line 235-250: The GSAP allowlist in _handle_network_request is insecure
because it uses substring checks on request.url; parse the URL with
urllib.parse.urlparse(request.url) and validate parsed.hostname ==
"cdnjs.cloudflare.com" and the path prefix matches the CDN GSAP path (e.g.
parsed.path.startswith("/ajax/libs/gsap") or an appropriate exact path prefix)
before calling await route.continue_(), keep the existing file:// check for
local files, and call await route.abort() for all other requests (use the parsed
hostname/path checks on the request object referenced in the function).
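The parsed-URL allowlist can be sketched as a standalone predicate (the constants mirror the example host and path prefix above):

```python
from urllib.parse import urlparse

ALLOWED_HOST = "cdnjs.cloudflare.com"
ALLOWED_PATH_PREFIX = "/ajax/libs/gsap"


def is_allowed_url(url: str) -> bool:
    """Allow local files plus GSAP from cdnjs; substring tricks such as
    `https://evil.com/?q=cdnjs.cloudflare.com` no longer pass."""
    if url.startswith("file://"):
        return True
    parsed = urlparse(url)
    return (
        parsed.scheme == "https"
        and parsed.hostname == ALLOWED_HOST
        and parsed.path.startswith(ALLOWED_PATH_PREFIX)
    )
```

In `_handle_network_request` this predicate would decide between `await route.continue_()` and `await route.abort()`.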
In `@praisonai_tools/video/motion_graphics/protocols.py`:
- Around line 31-38: The RenderResult dataclass (class RenderResult) is missing
a MIME type field required by the new contract; add a new attribute (e.g.,
mime_type: str | None = None) to the dataclass so every backend returns the MIME
type alongside bytes_ and output_path, update its annotation and default to None
(or an empty string) to keep backwards compatibility, and ensure any code
constructing RenderResult (factories/return sites) is updated to pass the MIME
value.
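The extended dataclass could look like this (field names other than `mime_type` follow the comment above; treat the exact set as illustrative):

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional


@dataclass
class RenderResult:
    ok: bool
    output_path: Optional[Path] = None
    bytes_: Optional[bytes] = None
    # New field; defaulting to None keeps existing call sites working.
    mime_type: Optional[str] = None
```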
In `@praisonai_tools/video/motion_graphics/skill.py`:
- Around line 40-47: Update the HTML linter (the backend_html.py validation
logic that currently checks for window.__timelines, data-duration, Math.random,
and repeat: -1) to also detect uses of Date.now(), enforce that timelines are
created paused (look for timeline creation objects containing paused: true or {
paused: true }), verify the stage viewport is fixed to 1920x1080 (e.g.,
width/height attributes or style on the stage element), and ensure data-duration
is present and parseable as a number (reject non-numeric/empty values); add
explicit lint error messages for Date.now, missing/false paused flag, wrong
viewport size, and unparseable duration so the checks become as strict as the
comment requires.
- Around line 73-78: The snippet uses DrawSVGPlugin (drawSVG) which isn't loaded
by the template; replace it with a core-compatible stroke animation: compute the
target path element (selector "#path"), read its getTotalLength(), set
strokeDasharray and strokeDashoffset to that length, initialize strokeDashoffset
to the length, then animate strokeDashoffset to 0 with tl.to (or tl.fromTo)
using duration and ease to replicate the same timing; alternatively, explicitly
load and register DrawSVGPlugin before calling tl.fromTo if you prefer the
plugin approach.
In `@praisonai_tools/video/motion_graphics/team.py`:
- Around line 76-88: The coordinator's validation requires outputs under
"/renders/..." but the Animator writes to workspace / "animations", causing
valid renders to be rejected; update the code so both sides use the same render
workspace: either change the Animator's output path to write to render_workspace
(e.g., change Animator.render()/Animator.render_path to use workspace /
"renders") or change the coordinator's validation in
validate_render_path/Coordinator to accept workspace / "animations" (or read a
shared render_workspace variable). Find references to render_workspace,
validate_render_path, Coordinator, and Animator in team.py and make them use one
canonical path (preferably workspace / "renders"), and ensure the success check
only accepts concrete file paths produced by Animator.
In `@tests/integration/test_motion_graphics_team.py`:
- Around line 79-82: The test is asserting the wrong agent variable: after
selecting code_explorer = team.agents[2] it still checks
researcher.instructions; update the assertion to inspect code_explorer
instead—replace the assertion that references researcher.instructions with one
that checks code_explorer.instructions.lower() contains "code exploration
specialist" so the assertion matches the selected agent (look for the variables
code_explorer and researcher in the test to adjust the correct line).
In `@tests/unit/tools/test_git_tools.py`:
- Around line 164-167: The test for _validate_file_path should assert that an
absolute path is rejected rather than normalized; update
test_validate_file_path_removes_leading_slash to call
git_tools._validate_file_path("/src/main.py") and assert it indicates rejection
(e.g., returns None or False or raises the same validation exception used
elsewhere) — match the existing suite’s convention for invalid paths by
inspecting other tests for _validate_file_path and use the same expectation.
In `@tests/unit/video/test_motion_graphics_agent.py`:
- Around line 39-43: The mock render method in the test returns a fixed path
"test.mp4" causing the test to fail; update the async render(self, workspace,
opts) implementation to honor opts.output_name by returning
RenderResult(output_path=workspace / opts.output_name) when opts.output_name is
provided (falling back to "test.mp4" or ensuring a default filename if absent),
and keep bytes_ and ok unchanged; reference the render method and RenderResult
to locate and modify the mock.
In `@tests/unit/video/test_motion_graphics_protocols.py`:
- Around line 115-120: The test test_protocol_runtime_checkable should skip when
optional render dependencies are missing: before importing HtmlRenderBackend
call pytest.importorskip("playwright") and pytest.importorskip("imageio_ffmpeg")
(or the actual import names used by HtmlRenderBackend) so the test exits early
instead of importing/instantiating HtmlRenderBackend; ensure the import of
HtmlRenderBackend and the isinstance check against RenderBackendProtocol remain
after the importorskip calls.
---
Minor comments:
In `@examples/motion_graphics_example.py`:
- Around line 34-35: The example currently renders the MP4 into a
TemporaryDirectory (tempfile.TemporaryDirectory()) assigned to workspace =
Path(tmpdir), which is deleted when the context exits; change it to use a
persistent demo output directory (e.g., create a "demo_outputs" folder with
Path(...).mkdir(parents=True, exist_ok=True)) and assign workspace to that
persistent Path, or if you must keep a temp directory, explicitly print/log a
clear warning that the printed MP4 path is temporary and will be deleted when
the context manager exits so callers aren’t misled.
In `@examples/motion_graphics_team.yaml`:
- Around line 103-118: The config uses two different output workspaces
(RenderTools.config.workspace = "./renders" vs top-level output.workspace =
"./motion_graphics_output") which will break coordinator validation; pick a
single canonical workspace and update all references: change
RenderTools.config.workspace, the top-level output.workspace, and any task text
paths that reference "/renders/..." to the chosen path so the renderer,
coordinator, and FileTools (if it uses base_dir) all point to the same directory
(refer to RenderTools, FileTools, and the top-level output keys to locate where
to edit).
In `@praisonai_tools/tools/git_tools.py`:
- Around line 315-317: The conditional in the loop uses the deprecated pattern
'not "origin/HEAD" in line' which triggers Ruff E713; update the check in the
loop (inside the function that iterates over branch lines in git_tools.py) to
use the proper operator order like: check that line is truthy and that
"origin/HEAD" not in line (e.g., replace the existing if line and not
"origin/HEAD" in line: with an equivalent that uses "origin/HEAD" not in line).
Ensure the rest of the logic that strips "origin/" and assigns to branch stays
unchanged.
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 266-276: The ffmpeg command in cmd uses a numeric pattern built
from frame_paths (frame_%06d.png) which assumes contiguous, zero-padded frames
starting at 000000; update the code that builds cmd to (1) assert the
frame_paths list is contiguous and starts at index 0 by checking the sorted
filenames or extracted numeric indices from frame_paths, failing early with a
clear error if gaps or non-matching names are found, and (2) add the ffmpeg flag
"-start_number", "0" to cmd to explicitly set the start index; alternatively, if
you need to support sparse or arbitrary filenames, replace the pattern approach
with ffmpeg's concat demuxer (create a temporary list file from frame_paths and
use "-f", "concat", "-safe", "0", "-i", listfile) or feed frames via stdin —
implement the contiguity/assertion near where cmd is constructed (referencing
cmd, frame_paths, output_path and the surrounding function) so the behavior
matches the frame_paths contract.
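A minimal guard for the pattern-input contract (the helper name and the frame-name regex are assumptions based on the `frame_%06d.png` pattern):

```python
import re
from pathlib import Path

_FRAME_RE = re.compile(r"frame_(\d{6})\.png$")


def assert_contiguous_frames(frame_paths: list[Path]) -> None:
    """Fail early unless frames are frame_000000.png, frame_000001.png, ...
    with no gaps -- what `-i frame_%06d.png -start_number 0` expects."""
    indices = sorted(
        int(m.group(1)) for p in frame_paths if (m := _FRAME_RE.search(p.name))
    )
    if len(indices) != len(frame_paths) or indices != list(range(len(indices))):
        raise ValueError("frames must be zero-based, zero-padded, and contiguous")
```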
- Around line 285-295: The timeout branch currently calls process.kill() and
raises a new RuntimeError, which leaks the ffmpeg process and drops the original
exception chain; update the asyncio.wait_for except block around the call to
process.communicate() to (1) cancel/kill the process and then await the process
termination (e.g., await process.wait() or re-await process.communicate() to
drain pipes) to avoid zombie processes and resource leaks, and (2) re-raise a
RuntimeError using "raise ... from exc" to preserve the original
asyncio.TimeoutError; reference the existing use of
asyncio.wait_for(process.communicate()), process.kill(), and the subsequent
RuntimeError(f"FFmpeg encoding timed out") to locate and fix the code.
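The kill-reap-chain sequence can be sketched with a generic async process handle (the wrapper name is illustrative):

```python
import asyncio


async def communicate_with_timeout(process, timeout: float):
    """Drain a subprocess with a deadline; on timeout, kill it, reap it to
    avoid zombies, and chain the original TimeoutError for debuggability."""
    try:
        return await asyncio.wait_for(process.communicate(), timeout=timeout)
    except asyncio.TimeoutError as exc:
        process.kill()
        await process.wait()  # reap the process; avoids zombies and leaked pipes
        raise RuntimeError("FFmpeg encoding timed out") from exc
```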
- Around line 163-165: The file URL is built incorrectly using string formatting
which produces backslashes on Windows and doesn't percent-encode special
characters; replace the manual f-string with Path.as_uri() to generate a proper
RFC-compliant file URL (e.g., call as_uri() on the Path for the index.html file
built from the existing workspace Path) and then pass that URI to page.goto
(refer to the workspace variable and the page.goto(...) call).
In `@tests/unit/video/test_html_backend.py`:
- Around line 230-239: The current test only asserts True because
_is_safe_workspace is a stub; add negative cases to assert False so the test
will catch a real containment check: add tests that call
self.backend._is_safe_workspace with (1) a path that escapes via parent refs
(e.g., tmp_path / "a" / ".." / ".." / "etc"), (2) a non-existent path (e.g.,
Path("/does/not/exist/xyz") or tmp_path / "nope"), and (3) a symlink inside the
workspace that points outside (create a temp dir, create a symlink inside
pointing to outside target) and assert False for each; keep using the same test
file and the existing self.backend._is_safe_workspace symbol.
---
Nitpick comments:
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 297-311: Remove the redundant "if imageio_ffmpeg:" guard in
_get_ffmpeg_path (imageio_ffmpeg is guaranteed non-None), call
imageio_ffmpeg.get_ffmpeg_exe() directly, and stop silently swallowing all
exceptions; instead catch the exception as e and emit a debug log containing the
error before falling back to shutil.which("ffmpeg") so failures (e.g.,
download/permission errors from imageio_ffmpeg.get_ffmpeg_exe) are visible; keep
the FileNotFoundError("FFmpeg not found") raise if neither path is available.
In `@tests/unit/video/test_html_backend.py`:
- Around line 313-348: Update the test_encode_frames_success to assert that the
ffmpeg input pattern is the expected frame pattern in the correct parent
directory: after calling self.backend._encode_frames_to_mp4 and retrieving
mock_subprocess.call_args, check that the string form of (frame_dir /
"frame_%06d.png") appears as the value for the '-i' argument (using the same
call_args variable and mock_subprocess) so the test verifies the input pattern
used by the encoder.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: eac924ee-9bf4-428d-b00f-8b664bd2cf86
📒 Files selected for processing (18)
- examples/motion_graphics_example.py
- examples/motion_graphics_team.yaml
- praisonai_tools/tools/git_tools.py
- praisonai_tools/video/__init__.py
- praisonai_tools/video/motion_graphics/__init__.py
- praisonai_tools/video/motion_graphics/_render_loop.py
- praisonai_tools/video/motion_graphics/agent.py
- praisonai_tools/video/motion_graphics/backend_html.py
- praisonai_tools/video/motion_graphics/protocols.py
- praisonai_tools/video/motion_graphics/skill.py
- praisonai_tools/video/motion_graphics/team.py
- pyproject.toml
- tests/integration/test_motion_graphics_team.py
- tests/smoke/test_motion_graphics_smoke.py
- tests/unit/tools/test_git_tools.py
- tests/unit/video/test_html_backend.py
- tests/unit/video/test_motion_graphics_agent.py
- tests/unit/video/test_motion_graphics_protocols.py
def git_diff(
    self,
    repo_name: str,
    commit1: str,
    commit2: Optional[str] = None,
    path: Optional[str] = None
) -> str:
    """Get git diff between commits.

    Args:
        repo_name: Repository name
        commit1: First commit hash
        commit2: Second commit hash (if None, compares with working tree)
        path: Only diff for this path

    Returns:
        Diff output
    """
    repo_path = self._get_repo_path(repo_name)

    cmd = ["diff"]

    if commit2:
        cmd.extend([commit1, commit2])
    else:
        cmd.append(commit1)

    if path:
        safe_path = self._validate_file_path(path)
        cmd.extend(["--", safe_path])

    try:
        return self._run_git_command(cmd, repo_path)
    except subprocess.CalledProcessError as e:
        raise RuntimeError(f"Failed to get git diff: {e}")

def git_show(self, repo_name: str, commit: str, path: Optional[str] = None) -> str:
    """Show git commit or file content.

    Args:
        repo_name: Repository name
        commit: Commit hash
        path: Optional file path

    Returns:
        Commit or file content
    """
    repo_path = self._get_repo_path(repo_name)

    if path:
        safe_path = self._validate_file_path(path)
        ref = f"{commit}:{safe_path}"
    else:
        ref = commit

    try:
        return self._run_git_command(["show", ref], repo_path)
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "git_tools.py" | head -5
🏁 Script executed:
wc -l ./praisonai_tools/tools/git_tools.py
🏁 Script executed:
cat -n ./praisonai_tools/tools/git_tools.py
🏁 Script executed:
sed -n '186,220p' ./praisonai_tools/tools/git_tools.py
sed -n '222,244p' ./praisonai_tools/tools/git_tools.py
sed -n '464,485p' ./praisonai_tools/tools/git_tools.py
sed -n '325,337p' ./praisonai_tools/tools/git_tools.py
🏁 Script executed:
cat > /tmp/test_git_options.md << 'EOF'
# Git Option Injection Analysis
Methods affected:
1. git_diff(commit1, commit2) - lines 186-220
- Command: ["diff", commit1, commit2]
- Risk: git diff --no-index could access files outside repo
2. git_show(commit) - lines 222-244
- Command: ["show", ref]
- Risk: git show --output=file could redirect output
3. _git_clone(branch) - lines 464-476
- Command: ["git", "clone", "-b", branch]
- Risk: branch like "-v" could alter behavior
4. _git_pull(branch) - lines 478-485
- Command: ["checkout", branch]
- Risk: branch like "--force" could force operations
5. read_file(commit) - lines 325-337
- Passes commit to git_show, inherits vulnerability
All use subprocess.run with list (not shell=True), but that only prevents shell
injection, not git option parsing. Options starting with "-" are still parsed by git.
EOF
cat /tmp/test_git_options.md
🏁 Script executed:
rg "self\._run_git_command|subprocess\.run" ./praisonai_tools/tools/git_tools.py -A 2
Validate refs and branches before passing them to Git.
commit1, commit2, commit, and branch parameters can start with -, allowing Git to parse them as options instead of refs. For example, git diff --no-index or git checkout --force could bypass intended constraints. Reject option-like refs before building commands.
Affected methods:
- `git_diff()` (lines 189–191): validate `commit1` and `commit2`
- `git_show()` (line 222): validate `commit`
- `read_file()` (line 331): validate `commit` parameter
- `_git_clone()` (line 468): validate `branch`
- `_git_pull()` (line 480): validate `branch`
Proposed fix
+ def _validate_git_ref(self, ref: str) -> str:
+ """Validate a git ref-like argument cannot be parsed as an option."""
+ if not ref or ref.startswith("-") or any(ch in ref for ch in "\0\r\n"):
+ raise ValueError("Invalid git ref")
+ return ref
+
def git_diff(
self,
repo_name: str,
commit1: str,
commit2: Optional[str] = None,
@@
repo_path = self._get_repo_path(repo_name)
+ commit1 = self._validate_git_ref(commit1)
+ if commit2:
+ commit2 = self._validate_git_ref(commit2)
cmd = ["diff"]
@@
def git_show(self, repo_name: str, commit: str, path: Optional[str] = None) -> str:
@@
"""
repo_path = self._get_repo_path(repo_name)
+ commit = self._validate_git_ref(commit)
if path:
safe_path = self._validate_file_path(path)
ref = f"{commit}:{safe_path}"

def _git_clone(self, repo_url: str, repo_path: Path, branch: Optional[str] = None):
"""Clone repository."""
cmd = ["git", "clone"]
if branch:
+ branch = self._validate_git_ref(branch)
cmd.extend(["-b", branch])

def _git_pull(self, repo_path: Path, branch: Optional[str] = None):
"""Pull latest changes."""
if branch:
+ branch = self._validate_git_ref(branch)
# Checkout specific branch
self._run_git_command(["checkout", branch], repo_path)

🧰 Tools
🪛 Ruff (0.15.10)
[warning] 220-220: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@praisonai_tools/tools/git_tools.py` around lines 186 - 242, Reject
refs/branches that begin with a dash before calling Git: add a small validator
(e.g. _validate_ref_not_option or reuse _validate_file_path pattern) that checks
commit/branch strings do not start with '-' and raise a clear
ValueError/RuntimeError for invalid refs, then call it from git_diff (validate
commit1 and commit2), git_show (validate commit), read_file (validate commit),
_git_clone (validate branch) and _git_pull (validate branch) so no option-like
token is passed to _run_git_command.
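A standalone, runnable form of the validator proposed above, with the same rejection rules (the unbound function name is for illustration; in the tool it would be a method):

```python
def validate_git_ref(ref: str) -> str:
    """Reject empty refs, option-like refs (leading '-'), and refs with
    NUL/CR/LF bytes before they reach a git command line."""
    if not ref or ref.startswith("-") or any(ch in ref for ch in "\0\r\n"):
        raise ValueError(f"Invalid git ref: {ref!r}")
    return ref
```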
while i < len(lines):
    line = lines[i]
    if not line or line.startswith("\t"):
        i += 1
        continue

    # Parse blame entry
    parts = line.split(" ", 3)
    if len(parts) >= 3:
        commit_hash = parts[0]
        line_num = int(parts[2])

        # Find content line
        content = ""
        for j in range(i + 1, len(lines)):
            if lines[j].startswith("\t"):
                content = lines[j][1:]  # Remove leading tab
                break

        blame_data.append({
            "commit": commit_hash,
            "line_number": line_num,
            "content": content
        })

    i += 1
Parse only porcelain blame headers.
The loop treats every non-tab metadata line as a blame record. git blame --porcelain can include metadata like previous <sha> <filename>, which makes int(parts[2]) fail or emits bogus entries. Match only actual header lines.
🐛 Proposed fix
- parts = line.split(" ", 3)
- if len(parts) >= 3:
+ parts = line.split(" ", 3)
+ if len(parts) >= 3 and re.fullmatch(r"[0-9a-fA-F]{40}", parts[0]):
commit_hash = parts[0]
line_num = int(parts[2])

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@praisonai_tools/tools/git_tools.py` around lines 268 - 293, The loop
currently treats any non-tab metadata line as a blame header and calls
int(parts[2]), which breaks on lines like "previous ...". Update the parsing to
only accept real porcelain header lines by matching the header pattern before
converting parts[2]: validate that parts[0] is a 40-char hex commit and parts[2]
is all digits (or use a regex like r'^[0-9a-fA-F]{40} \d+ \d+') and skip the
line if it doesn't match; then extract commit_hash and line_num and continue
building blame_data using the existing variables (lines, i, parts, blame_data,
content).
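The header filter from the fix above, as a runnable sketch (the helper name is made up; it extracts only commit and final line number):

```python
import re

# `git blame --porcelain` header: <40-hex sha> <orig-line> <final-line> [count]
_HEADER = re.compile(r"^[0-9a-fA-F]{40} \d+ \d+")


def parse_porcelain_headers(lines: list[str]) -> list[tuple[str, int]]:
    """Extract (commit, final_line_number) pairs, ignoring metadata lines
    such as `previous <sha> <file>` and `author ...`."""
    entries = []
    for line in lines:
        if not _HEADER.match(line):
            continue
        parts = line.split(" ")
        entries.append((parts[0], int(parts[2])))
    return entries
```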
# Build GitHub URL with token if available
if self.github_token:
    repo_url = f"https://{self.github_token}@github.com/{owner}/{repo}.git"
else:
    repo_url = f"https://github.com/{owner}/{repo}.git"
Avoid persisting GitHub tokens in clone URLs.
Line 405 writes the PAT into the remote URL, which Git persists in .git/config; later repo_summary() can expose it via remote_url, and clone errors may echo it. Keep the remote URL token-free and pass auth through transient environment/config, with stderr redaction.
🔒 Proposed fix
- # Build GitHub URL with token if available
- if self.github_token:
-     repo_url = f"https://{self.github_token}@github.com/{owner}/{repo}.git"
- else:
-     repo_url = f"https://github.com/{owner}/{repo}.git"
+ repo_url = f"https://github.com/{owner}/{repo}.git"

def _git_clone(self, repo_url: str, repo_path: Path, branch: Optional[str] = None):
"""Clone repository."""
cmd = ["git", "clone"]
+ env = os.environ.copy()
+
+ if self.github_token and repo_url.startswith("https://github.com/"):
+ env.update({
+ "GIT_CONFIG_COUNT": "1",
+ "GIT_CONFIG_KEY_0": "http.https://github.com/.extraheader",
+ "GIT_CONFIG_VALUE_0": f"AUTHORIZATION: bearer {self.github_token}",
+ })
if branch:
cmd.extend(["-b", branch])
cmd.extend([repo_url, str(repo_path)])
try:
- subprocess.run(cmd, check=True, capture_output=True, text=True)
+ subprocess.run(cmd, check=True, capture_output=True, text=True, timeout=30, env=env)
except subprocess.CalledProcessError as e:
- raise RuntimeError(f"Failed to clone repository: {e.stderr}")
+ stderr = e.stderr or ""
+ if self.github_token:
+ stderr = stderr.replace(self.github_token, "***")
+ raise RuntimeError(f"Failed to clone repository: {stderr}") from e

Also applies to: 464-476
if not lint_result.ok:
    # Lint failed - try to patch
    lint_error = "; ".join(lint_result.messages)
    await patch_fn(f"Lint errors: {lint_error}")
    continue
Preserve lint failures and don’t patch after retries are exhausted.
If lint keeps failing, last_error remains empty and the final stderr loses the actionable cause. Also avoid calling patch_fn on the last attempt when there is no next render attempt.
🐛 Proposed fix
lint_result = await lint_fn()
if not lint_result.ok:
# Lint failed - try to patch
lint_error = "; ".join(lint_result.messages)
- await patch_fn(f"Lint errors: {lint_error}")
+ last_error = f"Lint errors: {lint_error}"
+ if attempt < max_retries:
+ await patch_fn(last_error)
continue

Also applies to: 72-78
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@praisonai_tools/video/motion_graphics/_render_loop.py` around lines 45 - 49,
When lint fails inside the render loop (where lint_result is checked), preserve
the failure by assigning last_error = lint_error (constructed from
lint_result.messages) so the final stderr contains the actionable cause, and
only invoke patch_fn(f"Lint errors: {lint_error}") if there will be another
render attempt (i.e., do not call patch_fn on the last iteration); apply the
same preservation-and-no-patch-on-final-attempt logic to the other similar
failure block referenced around lines 72-78. Ensure you reference and update the
variables lint_result, lint_error, last_error, and patch_fn in the render loop
so final error output contains the last failure.
FileTools = None

from .protocols import RenderBackendProtocol
from .backend_html import HtmlRenderBackend
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import ast
from pathlib import Path
agent_path = Path("praisonai_tools/video/motion_graphics/agent.py")
backend_path = Path("praisonai_tools/video/motion_graphics/backend_html.py")
for path in (agent_path, backend_path):
tree = ast.parse(path.read_text())
print(f"\n{path}: top-level imports")
for node in tree.body:
if isinstance(node, ast.Import):
print(" import", ", ".join(alias.name for alias in node.names))
elif isinstance(node, ast.ImportFrom):
print(" from", node.module, "import", ", ".join(alias.name for alias in node.names))
print("\nExpected: agent.py should not have a top-level `from .backend_html import HtmlRenderBackend`.")
PY
🏁 Script executed:
cat -n praisonai_tools/video/motion_graphics/agent.py | head -30
🏁 Script executed:
cat -n praisonai_tools/video/motion_graphics/backend_html.py | head -50
🏁 Script executed:
rg "(playwright|imageio|ffmpeg)" praisonai_tools/video/motion_graphics/
🏁 Script executed:
sed -n '150,165p' praisonai_tools/video/motion_graphics/agent.py
🏁 Script executed:
grep -n "_resolve_backend" praisonai_tools/video/motion_graphics/agent.py
Move the HTML backend import into the lazy-load path.
The HtmlRenderBackend import at line 16 is evaluated at module load time, causing agent.py to always import backend_html.py even when users only need the protocol or agent factory symbols. Since HtmlRenderBackend is instantiated only in the "html" branch of _resolve_backend() (lines 155–159), move the import there to preserve the optional-dependency boundary and avoid unnecessary loads of playwright and imageio-ffmpeg.
Proposed fix
from .protocols import RenderBackendProtocol
-from .backend_html import HtmlRenderBackend
from .skill import MOTION_GRAPHICS_SKILL

def _resolve_backend(backend: Union[RenderBackendProtocol, str]) -> RenderBackendProtocol:
"""Resolve backend specification to backend instance."""
if isinstance(backend, str):
if backend == "html":
+ from .backend_html import HtmlRenderBackend
+
return HtmlRenderBackend()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
from .protocols import RenderBackendProtocol
from .skill import MOTION_GRAPHICS_SKILL


def _resolve_backend(backend: Union[RenderBackendProtocol, str]) -> RenderBackendProtocol:
    """Resolve backend specification to backend instance."""
    if isinstance(backend, str):
        if backend == "html":
            from .backend_html import HtmlRenderBackend

            return HtmlRenderBackend()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@praisonai_tools/video/motion_graphics/agent.py` at line 16, The top-level
import of HtmlRenderBackend causes backend_html (and its heavy deps) to be
imported on module load; remove the module-level "from .backend_html import
HtmlRenderBackend" and instead import HtmlRenderBackend lazily inside the
_resolve_backend() function's "html" branch where it is instantiated (the branch
that currently constructs HtmlRenderBackend). Update _resolve_backend() to
perform a local import (e.g., "from .backend_html import HtmlRenderBackend")
just before creating the HtmlRenderBackend instance so the optional dependency
is only loaded when "html" is requested.
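The deferred-import pattern the prompt describes can be sketched in isolation. Here `html.parser` stands in for the heavy `HtmlRenderBackend` module, so the names below are illustrative, not the actual PraisonAI-Tools API:

```python
def resolve_backend(name: str):
    """Resolve a backend name, importing the heavy module only when requested."""
    if name == "html":
        # Deferred import: the module is loaded only when this branch runs,
        # so merely importing the defining module stays cheap.
        import html.parser as backend_module
        return backend_module.HTMLParser()
    raise ValueError(f"Unknown backend: {name}")
```

Calling `resolve_backend("html")` triggers the import; importing the module that defines `resolve_backend` does not.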
# Check code explorer
code_explorer = team.agents[2]
assert code_explorer.name == "code_explorer"
assert "code exploration specialist" in researcher.instructions.lower()
Assert against code_explorer, not researcher.
This is the current CI failure: the test selects code_explorer but checks researcher.instructions.
Proposed test fix
 # Check code explorer
 code_explorer = team.agents[2]
 assert code_explorer.name == "code_explorer"
-assert "code exploration specialist" in researcher.instructions.lower()
+assert "code exploration specialist" in code_explorer.instructions.lower()
📝 Committable suggestion
+ assert "code exploration specialist" in code_explorer.instructions.lower()📝 Committable suggestion
# Check code explorer
code_explorer = team.agents[2]
assert code_explorer.name == "code_explorer"
assert "code exploration specialist" in code_explorer.instructions.lower()
🧰 Tools
🪛 GitHub Actions: Python package
[error] 82-82: Pytest failure in TestMotionGraphicsTeam.test_team_default_configuration: AssertionError because expected string 'code exploration specialist' is not found in researcher.instructions.lower().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/integration/test_motion_graphics_team.py` around lines 79 - 82, The
test is asserting the wrong agent variable: after selecting code_explorer =
team.agents[2] it still checks researcher.instructions; update the assertion to
inspect code_explorer instead—replace the assertion that references
researcher.instructions with one that checks code_explorer.instructions.lower()
contains "code exploration specialist" so the assertion matches the selected
agent (look for the variables code_explorer and researcher in the test to adjust
the correct line).
def test_protocol_runtime_checkable(self):
    """Test that protocol is runtime checkable."""
    from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend

    backend = HtmlRenderBackend()
    assert isinstance(backend, RenderBackendProtocol)
🧩 Analysis chain
🏁 Script executed:
# Read the test file to confirm the code
cat -n tests/unit/video/test_motion_graphics_protocols.py | sed -n '110,125p'
Length of output: 805
🏁 Script executed:
# Find and read the HtmlRenderBackend implementation
fd HtmlRenderBackend --type f | head -5
Length of output: 55
🏁 Script executed:
# Check the project's setup configuration for optional extras
fd -e setup.py -e pyproject.toml --type f | head -5
Length of output: 55
🏁 Script executed:
# Search for existing pytest.importorskip or skip patterns in video-related tests
rg "pytest.importorskip|@pytest.mark.skip" tests/unit/video/ -A 2
Length of output: 55
🏁 Script executed:
# Search for HtmlRenderBackend class definition
rg "class HtmlRenderBackend" --type py
Length of output: 151
🏁 Script executed:
# Find setup files with broader search
find . -maxdepth 3 -name "setup.py" -o -name "pyproject.toml" -o -name "setup.cfg" 2>/dev/null
Length of output: 89
🏁 Script executed:
# Search for backend_html.py file
fd backend_html
Length of output: 126
🏁 Script executed:
# Look for the video motion_graphics module structure
fd -t d motion_graphics
Length of output: 111
🏁 Script executed:
# Read the HtmlRenderBackend implementation to check for ImportError
cat -n praisonai_tools/video/motion_graphics/backend_html.py | head -50
Length of output: 1863
🏁 Script executed:
# Read pyproject.toml to check optional extras
cat -n pyproject.toml | grep -A 20 "optional-dependencies\|extras"
Length of output: 507
🏁 Script executed:
# Check what imports happen in backend_html
rg "^import|^from" praisonai_tools/video/motion_graphics/backend_html.py
Length of output: 283
🏁 Script executed:
# Search pyproject.toml for video-motion or video extras
cat pyproject.toml | grep -A 5 -B 5 "video\|motion"
Length of output: 212
🏁 Script executed:
# Check all optional dependencies in pyproject.toml
rg "^\[project.optional-dependencies\]" -A 50 pyproject.toml
Length of output: 949
🏁 Script executed:
# Look for skip patterns or test conventions in video tests
rg "importorskip|@pytest.mark.skip|pytest.skip" tests/unit/video/ -B 2 -A 2
Length of output: 55
🏁 Script executed:
# Check if there are other tests that skip optional dependencies
rg "importorskip" tests/ -l
Length of output: 55
Gate backend protocol test with pytest.importorskip() for optional render dependencies.
The test instantiates HtmlRenderBackend, which raises ImportError in its __init__ if playwright or imageio-ffmpeg are missing. Since video-motion is an optional extra not included in the base install, the test will fail despite the intentional optional-dependency contract. Add pytest.importorskip() calls to skip the test gracefully when dependencies are absent.
🧪 Proposed fix
def test_protocol_runtime_checkable(self):
"""Test that protocol is runtime checkable."""
+ pytest.importorskip("playwright.async_api")
+ pytest.importorskip("imageio_ffmpeg")
from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend
backend = HtmlRenderBackend()
    assert isinstance(backend, RenderBackendProtocol)
📝 Committable suggestion
def test_protocol_runtime_checkable(self):
    """Test that protocol is runtime checkable."""
    pytest.importorskip("playwright.async_api")
    pytest.importorskip("imageio_ffmpeg")
    from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend

    backend = HtmlRenderBackend()
    assert isinstance(backend, RenderBackendProtocol)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/video/test_motion_graphics_protocols.py` around lines 115 - 120,
The test test_protocol_runtime_checkable should skip when optional render
dependencies are missing: before importing HtmlRenderBackend call
pytest.importorskip("playwright") and pytest.importorskip("imageio_ffmpeg") (or
the actual import names used by HtmlRenderBackend) so the test exits early
instead of importing/instantiating HtmlRenderBackend; ensure the import of
HtmlRenderBackend and the isinstance check against RenderBackendProtocol remain
after the importorskip calls.
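The `pytest.importorskip` gate can be seen working in a tiny standalone check; `json` below is a placeholder for the real optional dependency (`playwright.async_api`), since `json` is always present:

```python
import pytest

def optional_backend():
    """Return the optional module, or skip the calling test if it is absent."""
    # importorskip returns the imported module on success and raises pytest's
    # Skipped outcome when the import fails, so the test is skipped, not errored.
    return pytest.importorskip("json")  # stand-in for "playwright.async_api"
```

In a real test module the call sits at the top of the test function, before any import of the backend under test.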
Actionable comments posted: 3
♻️ Duplicate comments (4)
praisonai_tools/video/motion_graphics/backend_html.py (4)
235-250: ⚠️ Potential issue | 🔴 Critical: Parse URLs for the network allowlist.
The GSAP allowlist is substring-based, so attacker-controlled hosts or query strings containing cdnjs.cloudflare.com and gsap can pass. Match scheme, hostname, and path explicitly.
🛡️ Proposed allowlist hardening
+from urllib.parse import urlparse

 async def _handle_network_request(self, route, request):
     """Handle network requests with allowlist."""
-    url = request.url
+    parsed = urlparse(request.url)

-    # Allow GSAP CDN
-    if "cdnjs.cloudflare.com" in url and "gsap" in url:
+    # Allow local files
+    if parsed.scheme == "file":
         await route.continue_()
         return

-    # Allow local files
-    if url.startswith("file://"):
+    # Allow GSAP CDN
+    if (
+        parsed.scheme in {"http", "https"}
+        and parsed.hostname == "cdnjs.cloudflare.com"
+        and parsed.path.startswith("/ajax/libs/gsap/")
+    ):
         await route.continue_()
         return
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 235 - 250, The _handle_network_request function currently allows requests by substring match; update it to parse request.url (use urllib.parse.urlparse) and enforce scheme and hostname checks: require scheme == "https" and hostname == "cdnjs.cloudflare.com" before allowing GSAP, and check the parsed path (not query/hostname) for gsap (e.g., path segments like "/ajax/libs/gsap" or presence of "gsap" in the path); keep the existing file:// allow for local files (check parsed.scheme == "file"), and otherwise call route.abort(); apply these checks inside async def _handle_network_request(route, request) to replace the substring-based logic.
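The parsed-URL allowlist reduces to one pure function; the host and path prefix below mirror the review's suggestion and are assumptions about policy, not the shipped backend's code:

```python
from urllib.parse import urlparse

def is_allowed(url: str) -> bool:
    """Allow local files and the GSAP path on cdnjs; block everything else."""
    parsed = urlparse(url)
    if parsed.scheme == "file":
        return True
    # Match scheme, hostname, and path explicitly so query strings or
    # lookalike hosts containing "cdnjs.cloudflare.com" cannot sneak through.
    return (
        parsed.scheme in {"http", "https"}
        and parsed.hostname == "cdnjs.cloudflare.com"
        and parsed.path.startswith("/ajax/libs/gsap/")
    )
```

A substring check would pass the last two test URLs below; the parsed check rejects them.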
99-105: ⚠️ Potential issue | 🔴 Critical: Enforce workspace and output-path containment before rendering.
_is_safe_workspace() always returns True, and workspace / opts.output_name can still escape the workspace when output_name is absolute or contains "..". Because FFmpeg is invoked with -y, this can overwrite files outside the render workspace.
🛡️ Proposed containment check
+import os

     if not self._is_safe_workspace(workspace):
         return RenderResult(
             ok=False, output_path=None, bytes_=None, stderr="Unsafe workspace path"
         )

 def _is_safe_workspace(self, workspace: Path) -> bool:
     """Check if workspace path is safe (prevents path traversal)."""
     try:
-        workspace_abs = workspace.resolve()
-        # Basic check - workspace should be under a temp directory or user's project
-        return True  # Add more sophisticated checks as needed
-    except Exception:
+        workspace_abs = workspace.resolve(strict=True)
+        allowed_roots = [Path(tempfile.gettempdir()).resolve(), Path.cwd().resolve()]
+        return any(
+            os.path.commonpath([str(root), str(workspace_abs)]) == str(root)
+            for root in allowed_roots
+        )
+    except (OSError, ValueError):
         return False

 async def _render_with_playwright(self, workspace: Path, opts: RenderOpts) -> RenderResult:
     """Render using Playwright + FFmpeg."""
-    output_path = workspace / opts.output_name
+    output_path = self._resolve_output_path(workspace, opts.output_name)

+    def _resolve_output_path(self, workspace: Path, output_name: str) -> Path:
+        """Resolve output path while keeping it inside the workspace."""
+        workspace_abs = workspace.resolve()
+        output_path = (workspace_abs / output_name).resolve()
+        if os.path.commonpath([str(workspace_abs), str(output_path)]) != str(workspace_abs):
+            raise ValueError("Output path escapes workspace")
+        return output_path

Also applies to: 128-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 99 - 105, The workspace containment check is insufficient because _is_safe_workspace() currently always returns True and output paths like opts.output_name can be absolute or contain '..' and escape the workspace; update the code that builds the output path (where workspace and opts.output_name are combined and where RenderResult is returned) to: normalize and resolve the candidate output path using pathlib.Path.resolve(strict=False), reject any absolute or resolved paths that are not a child of the resolved workspace (use resolved_output_path.samefile/resolved_output_path.relative_to(resolved_workspace) semantics or an equivalent containment check), and return a failing RenderResult (ok=False, stderr message) if containment fails; apply this same containment check to the other render branch referenced around lines 128-139 so no ffmpeg output can overwrite files outside the workspace.
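The containment rule reduces to one helper that can be checked on its own; the function name and behavior here are illustrative, not the backend's actual method:

```python
import os
from pathlib import Path

def resolve_output_path(workspace: Path, output_name: str) -> Path:
    """Join output_name onto workspace and refuse any result that escapes it."""
    workspace_abs = workspace.resolve()
    # Path("/ws") / "/etc/passwd" yields Path("/etc/passwd"), so absolute
    # names and ".." segments are both caught by the commonpath comparison.
    candidate = (workspace_abs / output_name).resolve()
    if os.path.commonpath([str(workspace_abs), str(candidate)]) != str(workspace_abs):
        raise ValueError("Output path escapes workspace")
    return candidate
```

With a check like this in place, FFmpeg's `-y` can only overwrite files inside the render workspace.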
182-210: ⚠️ Potential issue | 🟠 Major: Capture viewport-sized frames consistently.
full_page=True can produce PNGs with changing dimensions as the page layout/animation changes, which can break FFmpeg's image-sequence encoding. Capture the fixed 1920×1080 viewport and avoid truncating fractional frame counts.
🐛 Proposed fix
+import math

     # Calculate frame count
-    frame_count = int(duration * opts.fps)
+    frame_count = math.ceil(duration * opts.fps)

     # Capture frames
     for frame in range(frame_count):
-        time = frame / opts.fps
+        time = min(frame / opts.fps, duration)

         # Capture frame
         frame_path = temp_path / f"frame_{frame:06d}.png"
-        await page.screenshot(path=str(frame_path), full_page=True)
+        await page.screenshot(path=str(frame_path), full_page=False)
         frame_paths.append(frame_path)
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 182 - 210, The current capture loop uses full_page=True and int(duration * opts.fps) which can produce variable-size PNGs and drop fractional frames; change frame_count calculation to use math.ceil(duration * opts.fps) (reference: frame_count, opts.fps) and ensure a fixed 1920x1080 capture by calling page.set_viewport_size({"width":1920,"height":1080}) before the loop and using page.screenshot with full_page=False and an explicit clip {'x':0,'y':0,'width':1920,'height':1080} (reference: page.screenshot, temp_path, frame_paths) so every frame PNG is 1920×1080 and FFmpeg-friendly.
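The ceil-plus-clamp arithmetic is easy to verify in isolation; this is a sketch of the suggested math, not the backend's code:

```python
import math

def frame_times(duration: float, fps: int) -> list:
    """Per-frame seek timestamps: ceil keeps the fractional tail frame,
    and min clamps the last timestamp to the timeline's end."""
    frame_count = math.ceil(duration * fps)
    return [min(frame / fps, duration) for frame in range(frame_count)]
```

With `int()`, a 1.02 s timeline at 30 fps would lose its trailing 0.02 s; `ceil` keeps a 31st frame clamped to the end of the timeline.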
86-89: ⚠️ Potential issue | 🟠 Major: Scope strict visibility/display linting to animated properties.
This still scans the full HTML document, so normal CSS like display: flex fails strict mode even when it is not animated. Limit this check to GSAP tween property objects.
♻️ Possible tightening
         if strict:
             # Additional strict checks
-            if "visibility" in content or "display" in content:
-                messages.append("Avoid animating visibility/display properties - use opacity instead")
+            import re
+
+            tween_calls = re.findall(
+                r"\.(?:to|from|fromTo|set)\s*\([^)]*\)",
+                content,
+                flags=re.DOTALL,
+            )
+            if any(re.search(r"\b(?:visibility|display)\s*:", call) for call in tween_calls):
+                messages.append("Avoid animating visibility/display properties - use opacity instead")
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 86 - 89, The current strict-mode check flags any occurrence of "visibility" or "display" in content, which catches non-animated CSS; change the condition so it only triggers when those tokens appear inside GSAP tween property objects (i.e., actual tween property argument objects), by testing that content matches a GSAP tween usage pattern (e.g. gsap.to/gsap.from/gsap.fromTo with a subsequent property object containing visibility or display) before appending the message; update the conditional in the strict block that references strict, content, and messages to require this GSAP-specific pattern match rather than a plain substring check.
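The tween-scoped check can be exercised standalone. The regexes below follow the review's sketch; they are not a JavaScript parser, so nested parentheses inside a tween body will defeat them:

```python
import re

def flags_animated_visibility(content: str) -> bool:
    """True only when visibility/display appears inside a GSAP tween call's arguments."""
    # Collect the argument text of .to/.from/.fromTo/.set calls.
    tween_calls = re.findall(
        r"\.(?:to|from|fromTo|set)\s*\([^)]*\)", content, flags=re.DOTALL
    )
    # Flag only property-style "visibility:" / "display:" inside those calls.
    return any(
        re.search(r"\b(?:visibility|display)\s*:", call) for call in tween_calls
    )
```

Plain stylesheet rules like `display: flex` no longer trip strict mode, while `gsap.to(..., { display: "none" })` still does.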
🧹 Nitpick comments (2)
tests/unit/tools/test_git_tools.py (1)
274-302: Globally patching Path.exists is overly broad.
patch.object(Path, 'exists', return_value=...) replaces Path.exists for all Path instances during the test. This happens to be safe here because _get_safe_repo_path is mocked, but it's a footgun if anything in the call path (e.g., logging, imports, or future code in clone_repo) inspects other paths. Consider mocking the specific path's exists or the .exists behavior on the returned Path("/tmp/owner_repo") instance only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/tools/test_git_tools.py` around lines 274 - 302, The tests globally patch Path.exists which affects all Path instances; instead, set the .exists return value on the specific Path object returned by the _get_safe_repo_path mock (mock_path.return_value.exists.return_value = False for test_clone_repo_new and = True for test_clone_repo_existing) and remove the with patch.object(Path, 'exists', ...) blocks so only the target Path used by GitTools.clone_repo is controlled; this references GitTools.clone_repo and the _get_safe_repo_path mock (mock_path) in the tests.praisonai_tools/video/motion_graphics/backend_html.py (1)
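One way to keep the patch scoped, sketched against a stand-in for clone_repo's existence check; `clone_needed` and the mocked getter are hypothetical, not the GitTools API:

```python
from pathlib import Path
from unittest.mock import MagicMock

def clone_needed(get_safe_repo_path) -> bool:
    """Stand-in for the clone decision: clone only when the repo path is absent."""
    repo_path = get_safe_repo_path("owner/repo")
    return not repo_path.exists()

# Mock only the one Path object the code under test sees;
# Path.exists stays untouched for every other Path instance.
mock_path = MagicMock(spec=Path)
mock_path.exists.return_value = False
getter = MagicMock(return_value=mock_path)
```

Flipping `mock_path.exists.return_value` drives both branches without affecting any other filesystem check in the process.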
44-47: Don't make the system-FFmpeg fallback unreachable.
Line 44 raises when imageio_ffmpeg is missing, so _get_ffmpeg_path() can never fall back to shutil.which("ffmpeg"). If system FFmpeg is supported, only require Playwright at construction and let _get_ffmpeg_path() decide.
♻️ Proposed adjustment
-        if imageio_ffmpeg is None:
-            raise ImportError(
-                "imageio-ffmpeg not installed. Install with: pip install imageio-ffmpeg"
-            )
+        # imageio-ffmpeg is optional here because _get_ffmpeg_path() can fall
+        # back to a system ffmpeg binary.

         raise FileNotFoundError("FFmpeg not found")

Also applies to: 297-311
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 44 - 47, The current import-time ImportError prevents falling back to system ffmpeg; instead, remove the hard raise and make _get_ffmpeg_path() try imageio_ffmpeg.get_ffmpeg_exe() when imageio_ffmpeg is available and otherwise use shutil.which("ffmpeg"); also ensure only Playwright is required at construction (e.g. in the class __init__ or wherever Playwright is validated) and update the duplicate block around lines 297-311 similarly so both occurrences use the same fallback logic in the _get_ffmpeg_path helper.
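The fallback order can be isolated behind injected lookups so it is testable without either dependency installed; `resolve_ffmpeg` is an illustrative helper, not the backend's method name:

```python
def resolve_ffmpeg(bundled, system_which) -> str:
    """Prefer the bundled binary (imageio-ffmpeg), else a system ffmpeg, else fail."""
    path = None
    try:
        path = bundled()  # e.g. imageio_ffmpeg.get_ffmpeg_exe, when installed
    except Exception:
        path = None  # missing or broken optional dependency is not fatal
    if path is None:
        path = system_which("ffmpeg")  # e.g. shutil.which
    if path is None:
        raise FileNotFoundError("FFmpeg not found")
    return path
```

In production code the two callables would be `imageio_ffmpeg.get_ffmpeg_exe` (guarded by the optional import) and `shutil.which`.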
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 285-292: On asyncio.TimeoutError in the asyncio.wait_for around
process.communicate() (the block using process.communicate(), asyncio.wait_for
and opts.timeout), after calling process.kill() also await process.wait() to
ensure the FFmpeg subprocess is reaped, and re-raise a RuntimeError("FFmpeg
encoding timed out") chaining the original TimeoutError (raise RuntimeError(...)
from e) so the timeout is preserved as the cause; update the except
asyncio.TimeoutError handler accordingly to await process.wait() and use
exception chaining.
In `@tests/unit/tools/test_git_tools.py`:
- Around line 178-182: The setup_method creates a TemporaryDirectory that is
deleted as soon as the context manager exits, leaving self.base_dir pointing at
a removed path; change setup_method to create a persistent per-test temp dir
(e.g., use tempfile.mkdtemp() or accept pytest's tmp_path fixture) and assign it
to self.base_dir before instantiating GitTools(base_dir=...), then add a
teardown_method that removes the directory (e.g., shutil.rmtree(self.base_dir))
to ensure cleanup; update references to setup_method, teardown_method, and
GitTools in the test class accordingly.
In `@tests/unit/video/test_motion_graphics_agent.py`:
- Around line 86-91: The test instantiates the real HtmlRenderBackend via
_resolve_backend in test_resolve_string_backend and other tests use
backend="html" when calling create_motion_graphics_agent(), pulling in
Playwright/imageio_ffmpeg; instead, patch or monkeypatch HtmlRenderBackend in
test_resolve_string_backend to return a lightweight MockBackend and update tests
in TestCreateMotionGraphicsAgent (all tests except
test_create_agent_custom_backend) to call
create_motion_graphics_agent(backend=MockBackend()) so they use the mock
backend; ensure the mock implements the same minimal interface used by the tests
and reference HtmlRenderBackend, _resolve_backend, create_motion_graphics_agent,
TestCreateMotionGraphicsAgent, test_resolve_string_backend,
test_create_agent_custom_backend, and MockBackend when locating the changes.
---
Duplicate comments:
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 235-250: The _handle_network_request function currently allows
requests by substring match; update it to parse request.url (use
urllib.parse.urlparse) and enforce scheme and hostname checks: require scheme ==
"https" and hostname == "cdnjs.cloudflare.com" before allowing GSAP, and check
the parsed path (not query/hostname) for gsap (e.g., path segments like
"/ajax/libs/gsap" or presence of "gsap" in the path); keep the existing file://
allow for local files (check parsed.scheme == "file"), and otherwise call
route.abort(); apply these checks inside async def
_handle_network_request(route, request) to replace the substring-based logic.
- Around line 99-105: The workspace containment check is insufficient because
_is_safe_workspace() currently always returns True and output paths like
opts.output_name can be absolute or contain '..' and escape the workspace;
update the code that builds the output path (where workspace and
opts.output_name are combined and where RenderResult is returned) to: normalize
and resolve the candidate output path using pathlib.Path.resolve(strict=False),
reject any absolute or resolved paths that are not a child of the resolved
workspace (use
resolved_output_path.samefile/resolved_output_path.relative_to(resolved_workspace)
semantics or an equivalent containment check), and return a failing RenderResult
(ok=False, stderr message) if containment fails; apply this same containment
check to the other render branch referenced around lines 128-139 so no ffmpeg
output can overwrite files outside the workspace.
- Around line 182-210: The current capture loop uses full_page=True and
int(duration * opts.fps) which can produce variable-size PNGs and drop
fractional frames; change frame_count calculation to use math.ceil(duration *
opts.fps) (reference: frame_count, opts.fps) and ensure a fixed 1920x1080
capture by calling page.set_viewport_size({"width":1920,"height":1080}) before
the loop and using page.screenshot with full_page=False and an explicit clip
{'x':0,'y':0,'width':1920,'height':1080} (reference: page.screenshot, temp_path,
frame_paths) so every frame PNG is 1920×1080 and FFmpeg-friendly.
- Around line 86-89: The current strict-mode check flags any occurrence of
"visibility" or "display" in content, which catches non-animated CSS; change the
condition so it only triggers when those tokens appear inside GSAP tween
property objects (i.e., actual tween property argument objects), by testing that
content matches a GSAP tween usage pattern (e.g. gsap.to/gsap.from/gsap.fromTo
with a subsequent property object containing visibility or display) before
appending the message; update the conditional in the strict block that
references strict, content, and messages to require this GSAP-specific pattern
match rather than a plain substring check.
---
Nitpick comments:
In `@praisonai_tools/video/motion_graphics/backend_html.py`:
- Around line 44-47: The current import-time ImportError prevents falling back
to system ffmpeg; instead, remove the hard raise and make _get_ffmpeg_path() try
imageio_ffmpeg.get_ffmpeg_exe() when imageio_ffmpeg is available and otherwise
use shutil.which("ffmpeg"); also ensure only Playwright is required at
construction (e.g. in the class __init__ or wherever Playwright is validated)
and update the duplicate block around lines 297-311 similarly so both
occurrences use the same fallback logic in the _get_ffmpeg_path helper.
In `@tests/unit/tools/test_git_tools.py`:
- Around line 274-302: The tests globally patch Path.exists which affects all
Path instances; instead, set the .exists return value on the specific Path
object returned by the _get_safe_repo_path mock
(mock_path.return_value.exists.return_value = False for test_clone_repo_new and
= True for test_clone_repo_existing) and remove the with patch.object(Path,
'exists', ...) blocks so only the target Path used by GitTools.clone_repo is
controlled; this references GitTools.clone_repo and the _get_safe_repo_path mock
(mock_path) in the tests.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 99c7aeb6-8163-42bd-9b97-95bfb9f625b1
📒 Files selected for processing (3)
praisonai_tools/video/motion_graphics/backend_html.py
tests/unit/tools/test_git_tools.py
tests/unit/video/test_motion_graphics_agent.py
try:
    stdout, stderr = await asyncio.wait_for(
        process.communicate(),
        timeout=opts.timeout
    )
except asyncio.TimeoutError:
    process.kill()
    raise RuntimeError("FFmpeg encoding timed out")
Wait after killing FFmpeg on timeout.
process.kill() without await process.wait() can leave a zombie process until the event loop reaps it. Also preserve the timeout as the cause.
🔧 Proposed cleanup
+import contextlib

-    except asyncio.TimeoutError:
+    except asyncio.TimeoutError as err:
         process.kill()
-        raise RuntimeError("FFmpeg encoding timed out")
+        with contextlib.suppress(ProcessLookupError):
+            await process.wait()
+        raise RuntimeError("FFmpeg encoding timed out") from err
🧰 Tools
🪛 Ruff (0.15.10)
[warning] 286-286: Unpacked variable stdout is never used
Prefix it with an underscore or any other dummy variable pattern
(RUF059)
[warning] 292-292: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@praisonai_tools/video/motion_graphics/backend_html.py` around lines 285 -
292, On asyncio.TimeoutError in the asyncio.wait_for around
process.communicate() (the block using process.communicate(), asyncio.wait_for
and opts.timeout), after calling process.kill() also await process.wait() to
ensure the FFmpeg subprocess is reaped, and re-raise a RuntimeError("FFmpeg
encoding timed out") chaining the original TimeoutError (raise RuntimeError(...)
from e) so the timeout is preserved as the cause; update the except
asyncio.TimeoutError handler accordingly to await process.wait() and use
exception chaining.
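The kill-then-reap-then-chain pattern works the same for any subprocess; a minimal sketch using `sleep` as the stand-in for FFmpeg:

```python
import asyncio

async def run_with_deadline(cmd, timeout: float) -> int:
    """Run a subprocess; on timeout, kill it, reap it, and chain the TimeoutError."""
    process = await asyncio.create_subprocess_exec(*cmd)
    try:
        await asyncio.wait_for(process.communicate(), timeout=timeout)
    except asyncio.TimeoutError as err:
        process.kill()
        await process.wait()  # reap the child so no zombie outlives the handler
        raise RuntimeError("encoding timed out") from err
    return process.returncode
```

`asyncio.run(run_with_deadline(["sleep", "5"], 0.2))` raises a RuntimeError whose `__cause__` is the original TimeoutError, so callers can still distinguish a deadline from other failures.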
def setup_method(self):
    """Set up test fixtures."""
    with tempfile.TemporaryDirectory() as tmpdir:
        self.base_dir = tmpdir
        self.git_tools = GitTools(base_dir=tmpdir)
TemporaryDirectory is deleted before tests run.
The with tempfile.TemporaryDirectory() context manager exits at the end of setup_method, so tmpdir is cleaned up before any test body executes. self.base_dir then points to a non-existent directory. This mostly works today only because subprocess.run is mocked in the operations tests, but GitTools(base_dir=tmpdir) may also recreate/reference a path that no longer exists, and any test that hits the filesystem will silently misbehave. Use a per-test fixture or tempfile.mkdtemp() with teardown cleanup.
Proposed fix
- def setup_method(self):
- """Set up test fixtures."""
- with tempfile.TemporaryDirectory() as tmpdir:
- self.base_dir = tmpdir
- self.git_tools = GitTools(base_dir=tmpdir)
+ def setup_method(self):
+ """Set up test fixtures."""
+ self._tmpdir = tempfile.mkdtemp()
+ self.base_dir = self._tmpdir
+ self.git_tools = GitTools(base_dir=self._tmpdir)
+
+ def teardown_method(self):
+ import shutil
+        shutil.rmtree(self._tmpdir, ignore_errors=True)
📝 Committable suggestion
def setup_method(self):
    """Set up test fixtures."""
    self._tmpdir = tempfile.mkdtemp()
    self.base_dir = self._tmpdir
    self.git_tools = GitTools(base_dir=self._tmpdir)

def teardown_method(self):
    import shutil
    shutil.rmtree(self._tmpdir, ignore_errors=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/tools/test_git_tools.py` around lines 178 - 182, The setup_method
creates a TemporaryDirectory that is deleted as soon as the context manager
exits, leaving self.base_dir pointing at a removed path; change setup_method to
create a persistent per-test temp dir (e.g., use tempfile.mkdtemp() or accept
pytest's tmp_path fixture) and assign it to self.base_dir before instantiating
GitTools(base_dir=...), then add a teardown_method that removes the directory
(e.g., shutil.rmtree(self.base_dir)) to ensure cleanup; update references to
setup_method, teardown_method, and GitTools in the test class accordingly.
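The lifetime bug is observable directly; `broken_setup` below is a minimal reproduction (not the actual test class) showing why mkdtemp plus explicit teardown is needed:

```python
import shutil
import tempfile
from pathlib import Path

def broken_setup() -> str:
    """Mimics the reviewed setup_method: the dir dies with the context manager."""
    with tempfile.TemporaryDirectory() as tmpdir:
        base_dir = tmpdir
    return base_dir  # already deleted by the time any caller uses it

def fixed_setup() -> str:
    """mkdtemp persists until a matching teardown removes it."""
    return tempfile.mkdtemp()
```

Pairing `fixed_setup()` with `shutil.rmtree(path, ignore_errors=True)` in `teardown_method` restores the cleanup that `TemporaryDirectory` was providing too early.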
def test_resolve_string_backend(self):
    """Test resolving string backend specification."""
    backend = _resolve_backend("html")

    from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend
    assert isinstance(backend, HtmlRenderBackend)
🧩 Analysis chain
🏁 Script executed:

```shell
# First, locate and examine the test file
find . -type f -name "test_motion_graphics_agent.py" | head -5
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 121

🏁 Script executed:

```shell
# Search for the test file and read relevant sections
fd -type f "test_motion_graphics_agent.py" -x wc -l {}
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 243

🏁 Script executed:

```shell
# Get an overview of the test structure
fd -type f "test_motion_graphics_agent.py" -x head -100 {}
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 243

🏁 Script executed:

```shell
cat -n ./tests/unit/video/test_motion_graphics_agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 10325

🏁 Script executed:

```shell
find . -type f -name "agent.py" | grep motion_graphics
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 121

🏁 Script executed:

```shell
cat -n ./praisonai_tools/video/motion_graphics/agent.py | head -150
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 5605

🏁 Script executed:

```shell
cat -n ./praisonai_tools/video/motion_graphics/agent.py | tail -30
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 1189

🏁 Script executed:

```shell
grep -n "_resolve_backend" ./praisonai_tools/video/motion_graphics/agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 218

🏁 Script executed:

```shell
cat -n ./praisonai_tools/video/motion_graphics/backend_html.py | head -100
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 4018

🏁 Script executed:

```shell
find . -name "setup.py" -o -name "pyproject.toml" | head -5
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 89

🏁 Script executed:

```shell
cat -n ./pyproject.toml | grep -A 50 "dependencies\|extras"
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 1448

🏁 Script executed:

```shell
grep -n "ImportError\|playwright\|imageio_ffmpeg" ./tests/unit/video/test_motion_graphics_agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 156

🏁 Script executed:

```shell
# Check if there are any pytest markers or skip decorators related to optional dependencies
grep -n "pytest.mark\|skipif\|importorskip" ./tests/unit/video/test_motion_graphics_agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 128

🏁 Script executed:

```shell
grep -rn "importorskip\|pytest.mark.skipif" ./tests/unit/video/ --include="*.py"
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 55

🏁 Script executed:

```shell
# Check if the test suite requires video-motion dependencies
cat ./pyproject.toml | grep -A 5 "test.*require"
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 55

🏁 Script executed:

```shell
# Check test configuration and if video-motion is included in test dependencies
cat ./pyproject.toml | head -80
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 1818

🏁 Script executed:

```shell
# Check which tests pass backend parameter or patch HtmlRenderBackend
grep -n "backend=" ./tests/unit/video/test_motion_graphics_agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 113

🏁 Script executed:

```shell
# Check if any tests mock or patch HtmlRenderBackend
grep -n "patch.*HtmlRenderBackend\|HtmlRenderBackend" ./tests/unit/video/test_motion_graphics_agent.py
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 221

🏁 Script executed:

```shell
# Trace the flow: create_motion_graphics_agent -> _resolve_backend -> HtmlRenderBackend()
grep -B 5 -A 10 "create_motion_graphics_agent\|_resolve_backend" ./praisonai_tools/video/motion_graphics/agent.py | head -40
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 1200

🏁 Script executed:

```shell
# Count how many tests in TestCreateMotionGraphicsAgent don't specify backend parameter
grep -n "def test_" ./tests/unit/video/test_motion_graphics_agent.py | awk -F: '$1 >= 111 && $1 <= 220 {print}'
```

Repository: MervinPraison/PraisonAI-Tools
Length of output: 547
Unit tests require optional motion graphics dependencies even when backend is not validated.
The test test_resolve_string_backend (lines 86–91) instantiates the real HtmlRenderBackend, which requires Playwright and imageio_ffmpeg. Similarly, most tests in TestCreateMotionGraphicsAgent (lines 111–220) use the default backend="html", triggering the same hard dependency. Since video-motion is optional and not included in dev dependencies, the test suite fails without explicit installation.
Fix by patching HtmlRenderBackend in test_resolve_string_backend and passing MockBackend() to create_motion_graphics_agent() in tests that don't specifically validate the real backend (all except test_create_agent_custom_backend).
Example isolation pattern:

```diff
 def test_resolve_string_backend(self):
     """Test resolving string backend specification."""
-    backend = _resolve_backend("html")
-
-    from praisonai_tools.video.motion_graphics.backend_html import HtmlRenderBackend
-    assert isinstance(backend, HtmlRenderBackend)
+    with patch(
+        "praisonai_tools.video.motion_graphics.agent.HtmlRenderBackend",
+        MockBackend,
+    ):
+        backend = _resolve_backend("html")
+
+    assert isinstance(backend, MockBackend)
```

For tests not asserting real backend resolution:

```diff
-agent = create_motion_graphics_agent(workspace=tmpdir)
+agent = create_motion_graphics_agent(workspace=tmpdir, backend=MockBackend())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/unit/video/test_motion_graphics_agent.py` around lines 86 - 91, The
test instantiates the real HtmlRenderBackend via _resolve_backend in
test_resolve_string_backend and other tests use backend="html" when calling
create_motion_graphics_agent(), pulling in Playwright/imageio_ffmpeg; instead,
patch or monkeypatch HtmlRenderBackend in test_resolve_string_backend to return
a lightweight MockBackend and update tests in TestCreateMotionGraphicsAgent (all
tests except test_create_agent_custom_backend) to call
create_motion_graphics_agent(backend=MockBackend()) so they use the mock
backend; ensure the mock implements the same minimal interface used by the tests
and reference HtmlRenderBackend, _resolve_backend, create_motion_graphics_agent,
TestCreateMotionGraphicsAgent, test_resolve_string_backend,
test_create_agent_custom_backend, and MockBackend when locating the changes.
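A minimal `MockBackend` of the kind the prompt asks for might look like the sketch below. The `lint`/`render` method names mirror the lint-then-render flow described in this PR; the exact `RenderBackendProtocol` signatures and return types are assumptions for illustration.

```python
# Hypothetical lightweight stand-in that avoids importing Playwright or
# imageio_ffmpeg. Method names follow the PR's lint -> render flow; the
# real protocol signatures may differ.
from pathlib import Path


class MockBackend:
    def lint(self, workspace, strict=True):
        return []  # pretend the composition is clean

    def render(self, workspace, opts=None):
        # Pretend an MP4 was produced without launching a browser or ffmpeg.
        return Path(workspace) / "renders" / "out.mp4"
```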
@claude — Tests pass, but deep integration review surfaced 3 runtime showstoppers the current test suite cannot catch (because it mocks `praisonaiagents`).

🔴 Blocker 1 —
Claude finished @MervinPraison's task — View job

Runtime Showstoppers Fixed ✅

All 3 critical runtime blockers have been resolved and pushed to the existing branch.

Todo List:
Fixes Implemented

🔴 Blocker 1 - AgentTeam instantiation:
🔴 Blocker 2 - Workspace safety:
🔴 Blocker 3 - Bounded retries:
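The bounded-retry wiring referenced in Blocker 3 can be sketched as a loop that lints, asks the agent to patch, re-lints, and only renders once clean, giving up after `max_retries`. The callback names and the signature below are assumptions based on the lint → patch → render sequence described in this PR, not the actual `render_iterate` API.

```python
# Illustrative bounded retry loop in the spirit of render_iterate.
def render_iterate(lint, render, patch, max_retries=3):
    last_errors = []
    for _attempt in range(max_retries):
        errors = lint()
        if errors:
            last_errors = errors
            patch(errors)  # let the agent repair the composition
            continue
        return render()  # lint is clean: render and return the artifact
    raise RuntimeError(f"render failed after {max_retries} attempts: {last_errors}")
```

The key property is that the loop is bounded: a composition that never lints clean raises after `max_retries` rather than spinning forever.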
🔧 Minor cleanups:
🧪 Added tests:
Verification Results

Since the test environment lacks
Commit Details

Commit:

The motion graphics pipeline should now instantiate without runtime errors and enforce proper security boundaries. The bounded retry system is wired and ready for use.
🔴 Fix AgentTeam instantiation error
- Change from leader= to process='hierarchical' with manager_llm=
- Remove unused protocols import in team.py

🔴 Implement proper workspace safety validation
- Add base_dir parameter to HtmlRenderBackend constructor
- Implement real containment check using Path.relative_to()
- Validate against base directory and temp directories
- Reject path traversal attempts

🔴 Wire render_iterate into bounded retry system
- Add max_retries parameter to RenderTools
- Implement retry logic in render_composition()
- Add render_with_bounded_retries() method exposing render_iterate
- Fix workspace path mismatch (coordinator expects /renders)

🔧 Minor cleanups
- Remove dead code in git_tools.py _validate_file_path
- Align render workspace paths between coordinator and animator

🧪 Add comprehensive unit tests
- test_backend_safety.py for workspace containment validation
- test_render_retries.py for bounded retry functionality

Fixes #27

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
@claude — All 3 showstoppers from round 2 are fixed correctly. Thank you. Test pass rate: 85 / 87. Two minor items before merge:

1. Two new test-design bugs (not impl bugs)

Your
Claude finished @MervinPraison's task — View job

Test Fixes and Merge Conflict Resolution ✅

Successfully completed both requested fixes and force-pushed to the existing branch.

Todo List:
Changes Made:

Fix #1 - test_backend_safety.py:44-52: Updated:

```python
temp_workspace = Path(tempfile.gettempdir()) / "test_workspace_mg"
temp_workspace.mkdir(exist_ok=True)
try:
    assert self.backend._is_safe_workspace(temp_workspace)
finally:
    temp_workspace.rmdir()
```

Fix #2 - test_html_backend.py:230-240: Updated:

```python
def test_is_safe_workspace(self):
    """Test workspace safety validation."""
    # Test with actual path that exists
    with tempfile.TemporaryDirectory() as tmpdir:
        workspace = Path(tmpdir)
        assert self.backend._is_safe_workspace(workspace) is True

        # Test with subdirectory that exists
        sub_workspace = Path(tmpdir) / "test_workspace"
        sub_workspace.mkdir()
        assert self.backend._is_safe_workspace(sub_workspace) is True
```

Merge Conflict Resolution: Rebased on latest main and resolved the

The implementation correctly preserves the strict security behavior of

Commit:
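The containment check these tests exercise can be sketched with `Path.relative_to()`, the mechanism the fix commit names: a workspace is allowed only inside a configured base directory or the system temp directory. The standalone helper below and its exact set of allowed roots are assumptions for illustration, not the actual `_is_safe_workspace` implementation.

```python
# Illustrative containment check: relative_to() raises ValueError when the
# resolved workspace is not under an allowed root.
import tempfile
from pathlib import Path


def is_safe_workspace(workspace, base_dir):
    resolved = Path(workspace).resolve()
    roots = (Path(base_dir).resolve(), Path(tempfile.gettempdir()).resolve())
    for root in roots:
        try:
            resolved.relative_to(root)  # raises ValueError when not contained
            return True
        except ValueError:
            continue
    return False
```

Resolving the path first means `..` traversal segments are collapsed before the containment test, which is what defeats path-traversal attempts.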
…ng (fixes #26)

This commit implements a comprehensive motion-graphics video pipeline that enables users to create deterministic MP4 videos from natural language prompts using HTML/CSS/GSAP animations, without relying on paid generative video APIs.

- **RenderBackendProtocol**: Extensible protocol for different rendering engines
- **HtmlRenderBackend**: Playwright + FFmpeg implementation for HTML/GSAP compositions
- **GitTools**: Safe read-only git operations with path validation
- **Motion Graphics Agent Factory**: Creates specialized agents for video authoring
- **Team Preset**: Coordinator + Researcher + CodeExplorer + Animator team

- `praisonai_tools/video/motion_graphics/` - Complete motion graphics pipeline
- `praisonai_tools/tools/git_tools.py` - Safe git repository operations
- Path traversal protection in GitTools
- Network request filtering in HtmlRenderBackend (only GSAP CDN allowed)
- Workspace-scoped file operations
- Bounded retry loops with timeout limits
- Deterministic animation enforcement (no Math.random, no infinite loops)
- 11 unit tests for protocols and dataclasses
- 35+ unit tests for HTML backend functionality
- 32 unit tests for GitTools safety and operations
- 20 smoke tests for end-to-end functionality
- Integration tests for team coordination
- Working motion graphics example with HTML/GSAP composition
- YAML team configuration example
- Comprehensive inline documentation
- Installation instructions for optional dependencies
- Added `video-motion` optional extra with `playwright>=1.40`, `imageio-ffmpeg>=0.5`
- Zero new dependencies for core package (lazy loading)
- Full backward compatibility maintained

```python
from praisonai_tools.video.motion_graphics import motion_graphics_team

team = motion_graphics_team()
team.start("Animate Dijkstra's algorithm on a small weighted graph, 30s.")
```

Fixes #26. Implements all requirements from the comprehensive specification.
🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
- Fix MockRenderBackend to honor opts.output_name
- Rename git tools test and fix path validation assertion
- Fix CalledProcessError constructor kwargs (output vs stdout)
- Improve lint error message to include flagged token

All 4 originally failing tests now fixed:
- test_render_composition
- test_validate_file_path_rejects_absolute_paths
- test_run_git_command_failure
- test_lint_problematic_patterns

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Mervin Praison <MervinPraison@users.noreply.github.com>
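The lint-message fix ("include flagged token") can be illustrated with a determinism check of the kind `test_lint_problematic_patterns` exercises. The token list and function name below are assumptions for this sketch, not the backend's actual lint implementation.

```python
# Illustrative determinism lint: flag tokens that make an HTML/GSAP
# composition non-deterministic and name the offending token in the error.
BANNED_TOKENS = ("Math.random", "repeat: -1", "setInterval")


def lint_composition(source):
    errors = []
    for token in BANNED_TOKENS:
        if token in source:
            errors.append(f"non-deterministic pattern not allowed: '{token}'")
    return errors
```

Including the token in the message is what lets a patching agent (or a human) locate and remove the offending pattern on the next retry.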
- test_workspace_in_temp_dir_allowed: create/cleanup temp dir
- test_is_safe_workspace: use existing temp dir pattern
Force-pushed 65728fa to 9f0cfe3 (Compare)
@claude create an issue in MervinPraison/PraisonAIDocs for docs covering all the features, mainly client-facing examples
Claude finished @MervinPraison's task — View job

Motion Graphics Pipeline Documentation Issue ✅

I've analyzed the merged motion graphics pipeline features and prepared a comprehensive documentation issue for PraisonAIDocs.

Todo List:
📋 Documentation Requirements Summary

The motion graphics pipeline includes these major components that need documentation:

Core Features
Client-Facing APIs
Installation Requirements

```shell
# Basic installation
pip install praisonai-tools

# With motion graphics support
pip install praisonai-tools[video-motion]

# Required browser setup
playwright install chromium
```

🎯 Recommended Documentation Structure

📝 Action Required

Due to GitHub API permissions, I cannot create the issue directly. Please create an issue in MervinPraison/PraisonAIDocs with:

Title:
Body:

I've prepared the complete issue content in

🔗 Key Examples for Documentation

The documentation should include these essential client-facing examples:
This will provide users with complete guidance from basic usage to advanced customization of the motion graphics pipeline.
Post-merge fixes for #27 caught via live end-to-end run of motion_graphics_team:

- agent.py / team.py: import FileTools from praisonaiagents.tools.file_tools (not praisonaiagents.tools). FileTools is not exported via the tools package __getattr__, so 'from praisonaiagents.tools import FileTools' raises ImportError in production and causes motion_graphics_team() and create_motion_graphics_agent() to both fail with the generic 'praisonaiagents not available' ImportError.
- agent.py: drop the base_dir kwarg when instantiating FileTools — the class takes no constructor args (methods are bound directly).
- team.py: also catch AttributeError alongside ImportError so lazy attribute failures fall back cleanly to None stubs.
- pyproject.toml: bump version 0.2.24 -> 0.2.25

Unit tests continue to pass because they mock out praisonaiagents. The bug only surfaces in live runs.

Follow-up: add one non-mocked smoke test that actually instantiates motion_graphics_team().
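The import pattern this fix settles on can be sketched as below: import `FileTools` from its concrete module and fall back to `None` when the optional SDK is absent, catching `AttributeError` alongside `ImportError` so lazy attribute lookups also fail cleanly. The `make_file_tools` helper is hypothetical, added only to show the fallback in use.

```python
# Sketch of the guarded concrete-module import described in the fix.
try:
    from praisonaiagents.tools.file_tools import FileTools  # concrete module, not the package
except (ImportError, AttributeError):
    FileTools = None  # callers must check before instantiating


def make_file_tools():
    if FileTools is None:
        raise ImportError("praisonaiagents not available")
    return FileTools()  # no constructor args: the class takes none
```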
….2.25) (#28)
Summary
Implements the programmatic motion-graphics video pipeline specified in #26.
- `praisonai_tools/video/motion_graphics/` — new submodule extending the existing `video/` namespace
- `praisonai_tools/tools/git_tools.py` — read-only `GitTools` toolkit with on-demand clone
- `[video-motion]` optional extra in `pyproject.toml` (`playwright`, `imageio-ffmpeg`)
- No changes to the `praisonaiagents` core SDK — verified via `git diff --name-only`

Closes #26.
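The read-only `GitTools` toolkit's path-traversal protection (exercised by `test_validate_file_path_rejects_absolute_paths` in this PR) can be sketched as rejecting absolute paths and any `..` segment before touching the repository. The helper name and `ValueError` convention below are assumptions for illustration.

```python
# Illustrative repo-relative path validation in the spirit of GitTools.
from pathlib import PurePosixPath


def validate_repo_file_path(path):
    parts = PurePosixPath(path)
    if parts.is_absolute():
        raise ValueError(f"absolute paths are not allowed: {path}")
    if ".." in parts.parts:
        raise ValueError(f"path traversal is not allowed: {path}")
    return str(parts)
```

Checking `parts.parts` (rather than substring-matching `".."`) avoids false positives on legitimate names like `notes..md` while still catching `a/../../b`.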
What's included
| Component | Location |
| --- | --- |
| `RenderBackendProtocol` + dataclasses | `praisonai_tools/video/motion_graphics/protocols.py` |
| `HtmlRenderBackend` (Playwright + ffmpeg) | `praisonai_tools/video/motion_graphics/backend_html.py` |
| `create_motion_graphics_agent()` factory | `praisonai_tools/video/motion_graphics/agent.py` |
| `motion_graphics_team()` preset | `praisonai_tools/video/motion_graphics/team.py` |
| Skill definition | `praisonai_tools/video/motion_graphics/skill.py` |
| Bounded render loop | `praisonai_tools/video/motion_graphics/_render_loop.py` |
| `GitTools` toolkit | `praisonai_tools/tools/git_tools.py` |
| Tests | `tests/unit/**`, `tests/integration/**`, `tests/smoke/**` |
| Examples | `examples/motion_graphics_example.py`, `examples/motion_graphics_team.yaml` |

Test results
74 / 78 unit tests pass (94.9%). Import time 486 ms (the module itself is lazy; the transitive cost is from `praisonaiagents`).

4 failures remain — all minor test/impl coordination issues, no architectural or security concerns. Listed in a follow-up comment for @claude to resolve.
Architecture compliance

- no `praisonaiagents.render` module
- no `Agent` subclass
- `[video-motion]` optional extra
- `HtmlRenderBackend` and `GitTools`
- `praisonaiagents`

Usage
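The end-to-end usage matches the example from this PR's commit message; the import below is guarded so the snippet degrades gracefully when the `[video-motion]` extras are not installed. Running it for real also assumes an LLM provider configured for the underlying agents.

```python
# Guarded version of the PR's usage example.
try:
    from praisonai_tools.video.motion_graphics import motion_graphics_team
except ImportError:  # praisonai-tools[video-motion] not installed
    motion_graphics_team = None

if motion_graphics_team is not None:
    team = motion_graphics_team()
    team.start("Animate Dijkstra's algorithm on a small weighted graph, 30s.")
else:
    print("pip install 'praisonai-tools[video-motion]' && playwright install chromium")
```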
Summary by CodeRabbit
New Features
Documentation
Tests