
Conversation

@jezekra1
Collaborator

Implements: #1160

@gemini-code-assist
Contributor

Summary of Changes

Hello @jezekra1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request delivers a significant new feature: the ability to build provider images directly within the BeeAI platform from GitHub repositories. This enhancement simplifies the process of deploying agents by automating the image creation step. It includes updates across the CLI for initiating builds, the SDK for interacting with the new build APIs, and the server-side infrastructure (Kubernetes and Helm charts) to manage the entire build lifecycle, from cloning code to pushing the final image to a local registry.

Highlights

  • Server-Side Provider Image Builds: Introduced the capability to build provider images directly on the BeeAI platform from GitHub repositories, enabling a more integrated and streamlined development workflow.
  • New CLI Command: Added an experimental CLI command, beeai build server-side-build <github_url>, allowing users to trigger platform-managed builds of agents from specified GitHub URLs.
  • Enhanced GitHub Integration: Improved GitHub URL parsing and resolution to support both public and private GitHub instances, including authentication via GitHub Personal Access Tokens configured in the platform.
  • Local Docker Registry for Builds: Implemented a local Docker registry within the Kubernetes cluster to facilitate the build process, ensuring efficient image management and pushing of newly built provider images.
  • Improved CLI Error Handling: Refactored connection error handling in the CLI to provide more consistent and helpful troubleshooting hints to users.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant new feature: server-side building of provider images from GitHub repositories. The changes span the CLI, SDK, and server components, including new API endpoints, database models, Kubernetes job templates, and Helm chart configurations. The implementation is comprehensive, adding support for both public and private GitHub repositories and providing a good user experience through the CLI with log streaming. My review focuses on a few key areas to improve correctness and robustness: a critical issue in the SDK, a high-severity issue in the Kubernetes job template that would cause builds to hang, and a couple of medium-severity issues related to error handling and code clarity. Overall, this is a great addition to the platform.

Comment on lines 68 to 73
```python
async def delete(self: ProviderBuild | str, *, client: PlatformClient | None = None) -> None:
    # `self` has a weird type so that you can call both `instance.delete()` or `ProviderBuild.delete("123")`
    provider_id = self if isinstance(self, str) else self.id
    async with client or get_platform_client() as client:
        _ = (await client.delete(f"/api/v1/provider_builds/{provider_id}")).raise_for_status()
```

critical

There's a variable naming inconsistency in the delete method that could lead to confusion or bugs. The variable provider_id is being used to store a provider build ID. It should be renamed to provider_build_id for clarity and to match its actual purpose. This is especially important as it's used to construct the deletion URL.

Suggested change

```diff
 async def delete(self: ProviderBuild | str, *, client: PlatformClient | None = None) -> None:
     # `self` has a weird type so that you can call both `instance.delete()` or `ProviderBuild.delete("123")`
-    provider_id = self if isinstance(self, str) else self.id
+    provider_build_id = self if isinstance(self, str) else self.id
     async with client or get_platform_client() as client:
-        _ = (await client.delete(f"/api/v1/provider_builds/{provider_id}")).raise_for_status()
+        _ = (await client.delete(f"/api/v1/provider_builds/{provider_build_id}")).raise_for_status()
```
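The `self: ProviderBuild | str` annotation mentioned in the code comment enables an instance-or-ID calling convention. A minimal self-contained sketch of the same pattern, with hypothetical `Build`/`resolve_id` names:

```python
class Build:
    """Illustrates the `self: Build | str` trick used above."""

    def __init__(self, id: str):
        self.id = id

    def resolve_id(self: "Build | str") -> str:
        # Annotating `self` as a union lets callers write either
        # `build.resolve_id()` (instance) or `Build.resolve_id("123")` (plain ID).
        return self if isinstance(self, str) else self.id
```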

@jezekra1 force-pushed the feat-build-provider-image-from-github branch from 8caf268 to 6cb731f on September 24, 2025 07:27
@jezekra1 marked this pull request as draft on September 24, 2025 07:30
@jezekra1 force-pushed the feat-build-provider-image-from-github branch 2 times, most recently from da56d52 to 44454d7 on September 24, 2025 08:44
@jezekra1 force-pushed the feat-build-provider-image-from-github branch from 44454d7 to 8fbc212 on September 24, 2025 10:49
@jezekra1 marked this pull request as ready for review on September 24, 2025 10:50
@jezekra1 requested a review from pilartomas on September 24, 2025 11:28
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Radek Ježek <[email protected]>
Comment on lines +79 to +81

```yaml
# To change UID/GID, you need to rebuild the image
runAsUser: 1000
runAsGroup: 1000
```
Contributor

This will be problematic; a cluster might enforce a specific range of UIDs/GIDs that containers are allowed to run as.

Collaborator Author

This is taken directly from
https://github.com/moby/buildkit/blob/master/examples/kubernetes/job.rootless.yaml

There are not really many options, unfortunately :/ user namespaces seem to be a beta feature (likely disabled in existing clusters), and running the build privileged seems too risky.
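For context, the relevant portion of the linked rootless BuildKit example looks roughly like the following. This is an abridged, from-memory sketch; exact fields and annotations vary across BuildKit versions, so the upstream file is authoritative:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Rootless buildkitd needs the default apparmor/seccomp profiles relaxed
        container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
    spec:
      containers:
        - name: buildkitd
          image: moby/buildkit:rootless
          args: ["--oci-worker-no-process-sandbox"]
          securityContext:
            seccompProfile:
              type: Unconfined
            runAsUser: 1000
            runAsGroup: 1000
```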

```python
)


class SqlAlchemyProviderBuildRepository(IProviderBuildRepository):
```
Contributor

Personally, I'd try to use k8s itself to track the builds, leveraging object metadata, unless there's a good reason not to, but this is also fine.

Collaborator Author

I chose to duplicate the status in the database because we auto-delete the jobs after 10 minutes (otherwise we might run into maximum pod quotas).
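The 10-minute auto-deletion described here maps naturally onto the Job API's `ttlSecondsAfterFinished` field; a hedged sketch (the name and surrounding template are illustrative, only the TTL field is the point):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: provider-build-example   # illustrative name
spec:
  # Finished Jobs (and their pods) are garbage-collected after 10 minutes,
  # so completed builds do not count against namespace pod quotas.
  ttlSecondsAfterFinished: 600
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: build
          image: moby/buildkit:rootless   # illustrative
```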

@jezekra1 force-pushed the feat-build-provider-image-from-github branch 2 times, most recently from a577734 to 15c8d5e on September 25, 2025 10:12
Signed-off-by: Radek Ježek <[email protected]>
@jezekra1 force-pushed the feat-build-provider-image-from-github branch from 15c8d5e to 1f6c273 on September 25, 2025 10:44
@jezekra1 merged commit 78a301e into main on Sep 25, 2025
8 of 10 checks passed
@jezekra1 deleted the feat-build-provider-image-from-github branch on September 25, 2025 10:55
