
fix(motion-graphics): agent tool wiring + ffmpeg frame clipping (v0.2.26)#29

Merged
MervinPraison merged 1 commit into main from hotfix/motion-graphics-agent-loop-0.2.26
Apr 19, 2026
Conversation

@MervinPraison
Owner

Hotfix — agent loop works end-to-end with real LLMs (v0.2.26)

Live-tested Example 03 (create_motion_graphics_agent) against gpt-4o-mini. It now produces a real MP4 from an LLM-authored HTML/GSAP composition.

3 real bugs found by live testing (all invisible to mocked unit tests)

| # | File | Bug | Fix |
|---|------|-----|-----|
| 1 | agent.py | Passing `FileTools()` / `RenderTools()` class instances as tools → OpenAI adapter logs `Tool ... not recognized` and skips tool calls | Expose bound methods individually: `read_file`, `write_file`, `list_files`, `lint_composition`, `render_composition` |
| 2 | agent.py | `lint_composition` / `render_composition` are async → sync path fails with `Object of type coroutine is not JSON serializable` | Wrap each in a sync function that uses `asyncio.run(...)`; also strip unserializable bytes from the render return |
| 3 | backend_html.py | `page.screenshot(full_page=True)` captured 1920×1167 when the SVG overflowed → libx264: `height not divisible by 2 (1920x1167)` | Clip explicitly: `clip={x: 0, y: 0, width: 1920, height: 1080}` |
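The tool-wiring fix for bugs 1 and 2 can be sketched as below. The `FileTools` / `RenderTools` bodies here are stand-ins for illustration; only the method names and the wrapper pattern match the PR:

```python
import asyncio

class FileTools:
    """Stand-in for the real FileTools; method names match the PR."""
    def read_file(self, path: str) -> str:
        return f"contents of {path}"
    def write_file(self, path: str, content: str) -> int:
        return len(content)
    def list_files(self) -> list:
        return ["index.html"]

class RenderTools:
    """Stand-in for the real RenderTools with async methods."""
    async def lint_composition(self, strict: bool = False) -> dict:
        return {"ok": True, "strict": strict}
    async def render_composition(self) -> dict:
        # the real tool also returns raw frame bytes, which are not
        # JSON serializable -- the sync wrapper strips them
        return {"output_path": "intro.mp4", "bytes": b"\x00\x01"}

file_tools = FileTools()
render_tools = RenderTools()

def lint_composition(strict: bool = False) -> dict:
    """Sync wrapper: the agent's sync path does not await coroutines."""
    return asyncio.run(render_tools.lint_composition(strict=strict))

def render_composition() -> dict:
    """Sync wrapper that also strips the unserializable bytes payload."""
    result = asyncio.run(render_tools.render_composition())
    result.pop("bytes", None)  # file already lives at output_path
    return result

# Five plain callables instead of two class instances:
tool_callables = [
    file_tools.read_file,
    file_tools.write_file,
    file_tools.list_files,
    lint_composition,
    render_composition,
]
```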

Live verification

PRAISONAI_AUTO_APPROVE=true MOTION_LLM=gpt-4o-mini \
  python examples/python/video/03_motion_graphics_agent_factory.py

Produces:

index.html   1031 bytes  (LLM-authored by gpt-4o-mini)
intro.mp4    13545 bytes

ffprobe on intro.mp4:

Duration: 00:00:01.50, start: 0.000000, bitrate: 72 kb/s
Stream #0:0: Video: h264 (High) yuv420p, 1920x1080, 64 kb/s, 30 fps

Tests

87 / 87 unit tests pass (was 87/87). Updates to test_motion_graphics_agent.py:

  • MockFileTools now exposes read_file / write_file / list_files methods so the factory can reference them as bound methods
  • Tool-count assertions updated from 2 (class instances) to 5 (callables)

Follow-ups that are not in this PR (keeps scope tight)

  • Example 04 (motion_graphics_team) still hits a separate SDK-level bug in hierarchical process ('Agent' object has no attribute 'execution') — needs a one-line fix in praisonaiagents/agent/chat_mixin.py (self.execution → getattr(self, 'execution', None)). Filed separately.
  • Add one non-mocked smoke test that actually runs create_motion_graphics_agent against a cheap real model with PRAISONAI_AUTO_APPROVE=true.
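The one-line chat_mixin fix described above amounts to a defensive attribute read. A minimal sketch, where the class and method names are hypothetical stand-ins for the real code in praisonaiagents/agent/chat_mixin.py:

```python
class ChatMixinSketch:
    """Illustrative stand-in for the real chat mixin."""
    def current_execution(self):
        # getattr with a default instead of bare self.execution, so
        # agents that never set `execution` return None instead of
        # raising AttributeError.
        return getattr(self, "execution", None)
```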

Version

0.2.25 → 0.2.26

….26)

End-to-end agent loop now works with real LLMs (OpenAI gpt-4o-mini
verified). Caught by running the Example 03 agent factory live against
gpt-4o-mini and capturing the full failure chain. Unit tests previously
mocked these paths so the bugs were invisible.

Three real bugs fixed:

1. Tool registration (agent.py)
   Passing class instances (FileTools(), RenderTools()) as tools caused the
   OpenAI adapter to log 'Tool ... not recognized' and the agent to skip
   tool calls entirely. Now we expose individual bound methods:
     - file_tools.read_file / write_file / list_files
     - lint_composition / render_composition  (sync wrappers, see #2)

2. Async tools in sync agent path (agent.py)
   RenderTools.lint_composition / render_composition are async but the
   Agent sync path does not await coroutines. Result:
     'Object of type coroutine is not JSON serializable'
   Fix: wrap each async tool with a local sync function that uses
   asyncio.run(...). Bytes are also stripped from the render_composition
   return (unserializable, and the file already lives at output_path).

3. Odd-height screenshots break libx264 (backend_html.py)
   page.screenshot(full_page=True) captured 1920x1167 for the LLM-authored
   composition (SVG overflowed viewport). libx264 rejects odd height:
     'height not divisible by 2 (1920x1167)'
   Fix: clip to the exact 1920x1080 viewport via
     page.screenshot(clip={x:0,y:0,width:1920,height:1080})
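A defensive variant of this clip fix is to round the viewport down to even dimensions instead of hard-coding the rect; this helper is illustrative and not part of the PR:

```python
VIEWPORT = {"width": 1920, "height": 1080}

def even_clip(viewport: dict) -> dict:
    """Build a Playwright screenshot clip rect whose width and height
    are rounded down to even numbers, since libx264 with yuv420p
    rejects odd dimensions (e.g. 'height not divisible by 2')."""
    return {
        "x": 0,
        "y": 0,
        "width": viewport["width"] - viewport["width"] % 2,
        "height": viewport["height"] - viewport["height"] % 2,
    }

# usage sketch:
#   await page.screenshot(path=str(frame_path), clip=even_clip(VIEWPORT))
```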

Tests updated (test_motion_graphics_agent.py):
  - MockFileTools now exposes read_file/write_file/list_files methods
  - tool-count assertions updated from 2 class instances to 5 callables

Verification (live, not mocked):

  PRAISONAI_AUTO_APPROVE=true MOTION_LLM=gpt-4o-mini \
    python examples/python/video/03_motion_graphics_agent_factory.py

  Produces:
    index.html  (LLM-authored, 1031 bytes)
    intro.mp4   (1920x1080 H.264 yuv420p, 30fps, 1.5s, 13.5KB)

87/87 unit tests pass. Version bump 0.2.25 -> 0.2.26.
Copilot AI review requested due to automatic review settings April 19, 2026 00:31
@MervinPraison MervinPraison merged commit eaa9292 into main Apr 19, 2026
0 of 3 checks passed

@greptile-apps greptile-apps bot left a comment


MervinPraison has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.

@coderabbitai

coderabbitai bot commented Apr 19, 2026

Warning

Rate limit exceeded

@MervinPraison has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 44 minutes and 45 seconds before requesting another review.



📥 Commits

Reviewing files that changed from the base of the PR and between ddc77bc and f950863.

📒 Files selected for processing (4)
  • praisonai_tools/video/motion_graphics/agent.py
  • praisonai_tools/video/motion_graphics/backend_html.py
  • pyproject.toml
  • tests/unit/video/test_motion_graphics_agent.py

@MervinPraison MervinPraison deleted the hotfix/motion-graphics-agent-loop-0.2.26 branch April 19, 2026 00:31
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the motion graphics agent to expose tools as individual callables, improving compatibility with various LLM adapters and resolving JSON serialization issues with async tools. It also addresses a video encoding failure by enforcing a fixed 1920x1080 viewport for screenshots to ensure dimensions are compatible with libx264. Feedback highlights potential RuntimeError risks when using asyncio.run in async environments, a potential TypeError due to duplicate tools arguments in the Agent constructor, and a suggestion to replace hardcoded dimensions with constants.

    # "Object of type coroutine is not JSON serializable".
    def lint_composition(strict: bool = False) -> dict:
        """Lint the motion graphics composition for common issues."""
        return asyncio.run(render_tools.lint_composition(strict=strict))
Contributor


medium

Using asyncio.run inside a tool function can cause a RuntimeError if the agent is executed within an existing event loop (e.g., in a Jupyter notebook, a FastAPI application, or any async environment). While this bridges the sync tool path to the async backend, it is a known pitfall for library users. Consider if the Agent class can support async tools directly or if a more robust sync-to-async bridge (like checking for a running loop) is needed. This also applies to the render_composition wrapper.
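A sketch of the more robust sync-to-async bridge this comment suggests, using only the standard library; `run_sync` is a hypothetical helper name, not part of the PR:

```python
import asyncio
import concurrent.futures

def run_sync(coro):
    """Run a coroutine from sync code, even when an event loop is
    already running (e.g. Jupyter, async web servers): fall back to
    a worker thread with its own loop instead of asyncio.run()."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop in this thread: the plain path is safe.
        return asyncio.run(coro)
    # A loop is already running here; asyncio.run() would raise.
    # Drive the coroutine on a dedicated thread with a fresh loop.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```

The wrappers in agent.py could then call `run_sync(render_tools.lint_composition(strict=strict))` and work in both sync and async host environments.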

    agent = Agent(
        instructions=base_instructions + "\n\n" + MOTION_GRAPHICS_SKILL,
    -   tools=[file_tools, render_tools],
    +   tools=tool_callables,
Contributor


medium

If agent_kwargs contains a tools key, this call will raise a TypeError because tools is already passed as a keyword argument. It is safer to extract and merge any tools provided in agent_kwargs to allow users to extend the agent's capabilities.

Suggested change:

    -   tools=tool_callables,
    +   tools=tool_callables + agent_kwargs.pop("tools", []),

    -   await page.screenshot(path=str(frame_path), full_page=True)
    +   await page.screenshot(
    +       path=str(frame_path),
    +       clip={"x": 0, "y": 0, "width": 1920, "height": 1080},
    +   )
Contributor


medium

The dimensions 1920x1080 are hardcoded here and in the viewport configuration. Consider defining these as constants (e.g., DEFAULT_WIDTH, DEFAULT_HEIGHT) at the module level to ensure consistency and make future adjustments easier, especially since the motion graphics skill guide also relies on these specific dimensions.
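A minimal sketch of the constants suggestion, assuming names of the reviewer's choosing (`DEFAULT_WIDTH`, `DEFAULT_HEIGHT`, `viewport_clip` are illustrative and not in the PR):

```python
# Module-level single source of truth for frame dimensions:
DEFAULT_WIDTH = 1920
DEFAULT_HEIGHT = 1080
VIEWPORT = {"width": DEFAULT_WIDTH, "height": DEFAULT_HEIGHT}

def viewport_clip(viewport: dict = VIEWPORT) -> dict:
    """Derive the screenshot clip rect from the viewport so the
    two values can never drift apart."""
    return {"x": 0, "y": 0, **viewport}
```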


Copilot AI left a comment


Pull request overview

Hotfix to make the motion-graphics agent loop work end-to-end with real LLM adapters and to prevent FFmpeg/libx264 failures caused by odd-height Playwright screenshots; bumps package version to 0.2.26.

Changes:

  • Expose tool callables as individual bound methods + sync wrappers in create_motion_graphics_agent (instead of passing tool class instances).
  • Clip Playwright screenshots to a fixed 1920×1080 region to avoid odd-height frames that libx264 rejects.
  • Update unit tests to reflect the new tool registration surface; bump project version.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
|------|-------------|
| praisonai_tools/video/motion_graphics/agent.py | Switch tools wiring to individual callables and add sync wrappers around async render/lint functions. |
| praisonai_tools/video/motion_graphics/backend_html.py | Force screenshot clipping to 1920×1080 to prevent odd-height PNGs and FFmpeg encode failures. |
| tests/unit/video/test_motion_graphics_agent.py | Adjust mocks and assertions for the new callable-based tool list. |
| pyproject.toml | Bump version 0.2.25 → 0.2.26. |


Comment on lines +204 to +210
    # Sync wrappers around the async render tools — the Agent's sync call path
    # does not await coroutines automatically and would otherwise fail with
    # "Object of type coroutine is not JSON serializable".
    def lint_composition(strict: bool = False) -> dict:
        """Lint the motion graphics composition for common issues."""
        return asyncio.run(render_tools.lint_composition(strict=strict))


Copilot AI Apr 19, 2026


Using asyncio.run(...) inside these tool wrappers will raise RuntimeError: asyncio.run() cannot be called from a running event loop when the agent is invoked from an environment that already has an active loop (e.g., Jupyter, async web servers, or if the agent framework runs tool calls in async contexts). Consider adding a small helper that detects an existing running loop and, in that case, runs the coroutine in a dedicated thread / separate event loop (or exposes async tools and ensures the agent awaits them) so tool calls work reliably in both sync and async runtimes.

Comment on lines +239 to +242
    await page.screenshot(
        path=str(frame_path),
        clip={"x": 0, "y": 0, "width": 1920, "height": 1080},
    )

Copilot AI Apr 19, 2026


The new clip values duplicate the viewport size set earlier (1920x1080). To avoid future drift if the viewport changes, consider referencing a single source of truth (e.g., store viewport = {"width": 1920, "height": 1080} and derive the clip from it) rather than repeating magic numbers in multiple places.

Comment on lines +131 to +137
    # Tools are now exposed as individual callables (bound methods +
    # sync render wrappers): read_file, write_file, list_files,
    # lint_composition, render_composition.
    assert len(agent.tools) == 5
    tool_names = {getattr(t, "__name__", "") for t in agent.tools}
    assert {"read_file", "write_file", "list_files",
            "lint_composition", "render_composition"} <= tool_names

Copilot AI Apr 19, 2026


These assertions validate tool registration by name/count, but they don't exercise the newly introduced sync wrappers' behavior (the core hotfix): that lint_composition/render_composition are callable from sync code and that render_composition strips the non-JSON-serializable bytes field. Adding a focused unit test that invokes the two wrapper callables from agent.tools and asserts the returned dict is JSON-serializable (and lacks the bytes key) would better protect this fix from regressions.
