fix(py): add static prompt fixtures and custom evaluator sample to match JS #4670

Draft
MengqinShen wants to merge 2 commits into main from elisa/fix/add-prompt

Conversation

@MengqinShen
Contributor

Ported static .prompt test fixtures from JavaScript to Python and created the framework-custom-evaluators sample. Refactored the model configuration to match JavaScript patterns (model-in-code rather than model-in-prompt) and ensured full Dev UI support.
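For context on the model-in-code pattern, here is a minimal sketch of what it can look like in Python Genkit. This is an illustration only: the plugin import, class name, model string, and the `ai.prompt` lookup are assumptions, not code copied from this PR.

```python
# Hedged sketch: the model is chosen in code, not in .prompt frontmatter.
# `GoogleAI`, the model string, and `ai.prompt` are assumed names and may
# differ from what the sample actually uses.
from genkit.ai import Genkit
from genkit.plugins.google_genai import GoogleAI

ai = Genkit(
    plugins=[GoogleAI()],
    model='googleai/gemini-2.5-flash',  # model-in-code: set once at init
)

# The .prompt file then carries only the template and schemas; model
# resolution happens here in code.
recipe_prompt = ai.prompt('recipe')
```

With model-in-prompt, each .prompt file pins its own model in frontmatter; model-in-code keeps that choice in one place, which is the pattern the JavaScript samples follow.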

@github-actions github-actions bot added the docs, python, config, and fix labels on Feb 14, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @MengqinShen, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the Python Genkit framework by integrating static prompt definitions and introducing a robust sample for custom evaluators. The changes aim to bring the Python implementation closer to its JavaScript counterpart, particularly in how models are configured and prompts are managed. This includes porting existing prompt test fixtures and establishing a new sample that showcases various evaluation techniques, all while ensuring full compatibility and visibility within the Genkit Dev UI.

Highlights

  • Static Prompt Fixtures Ported to Python: The pull request ports static .prompt test fixtures from JavaScript to Python, enabling the definition of prompts in external files and enhancing consistency across language implementations.
  • New Custom Evaluator Sample Introduced: A new framework-custom-evaluators sample has been created, showcasing both LLM-based (PII detection, funniness, deliciousness) and regex-based custom evaluators within the Genkit framework (a sketch of the regex evaluator idea follows this list).
  • Model Configuration Refactoring: Model configuration within the Python framework has been refactored to align with JavaScript's 'model-in-code' pattern, improving the clarity and maintainability of model definitions.
  • Enhanced Dev UI Support: Full Dev UI support has been ensured for the newly added static prompts and custom evaluators, providing a seamless development experience.
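To make the regex-based evaluator idea concrete, here is a hedged sketch of the scoring logic only. The function name, score shape, and URL pattern are illustrative assumptions; the sample's actual evaluators and their Genkit registration may look different.

```python
import re

# Assumed example pattern: does the model output contain a URL?
URL_PATTERN = re.compile(r'https?://\S+')

def evaluate_regex(output: str, pattern: re.Pattern) -> dict:
    """Score 1.0 when the output matches the pattern, 0.0 otherwise."""
    matched = bool(pattern.search(output))
    return {
        'score': 1.0 if matched else 0.0,
        'status': 'PASS' if matched else 'FAIL',
    }

print(evaluate_regex('Docs live at https://example.com', URL_PATTERN))
# -> {'score': 1.0, 'status': 'PASS'}
```

The LLM-based evaluators (PII detection, funniness, deliciousness) replace the regex match with a model call driven by the corresponding .prompt file, but return a score in the same spirit.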


Changelog
  • py/packages/genkit/tests/genkit/blocks/prompt_test.py
    • Added a new test case test_load_static_prompts to validate the loading and rendering of static prompt files, including those with configurations, tools, and subdirectories.
  • py/packages/genkit/tests/genkit/blocks/prompts/badSchemaRef.prompt
    • Added a static prompt file to test schema reference handling.
  • py/packages/genkit/tests/genkit/blocks/prompts/chat_preamble.prompt
    • Added a static prompt file for chat preamble testing.
  • py/packages/genkit/tests/genkit/blocks/prompts/kitchensink.prompt
    • Added a comprehensive static prompt file demonstrating various configurations like model, tools, output format, and templating (a sketch of this file format follows the changelog).
  • py/packages/genkit/tests/genkit/blocks/prompts/output.prompt
    • Added a static prompt file to test output schema.
  • py/packages/genkit/tests/genkit/blocks/prompts/schemaRef.prompt
    • Added a static prompt file to test schema references.
  • py/packages/genkit/tests/genkit/blocks/prompts/sub/test.prompt
    • Added a static prompt file within a subdirectory to test nested prompt loading.
  • py/packages/genkit/tests/genkit/blocks/prompts/test.prompt
    • Added a basic static prompt file for general testing.
  • py/packages/genkit/tests/genkit/blocks/prompts/test.variant.prompt
    • Added a static prompt file to test prompt variants.
  • py/packages/genkit/tests/genkit/blocks/prompts/toolPrompt.prompt
    • Added a static prompt file to test tool integration.
  • py/pyproject.toml
    • Registered the new framework-custom-evaluators sample as a workspace dependency.
  • py/samples/framework-custom-evaluators/README.md
    • Added documentation for the custom evaluators sample, detailing setup, evaluators, and testing instructions.
  • py/samples/framework-custom-evaluators/datasets/deliciousness_dataset.json
    • Added a dataset for testing the deliciousness evaluator.
  • py/samples/framework-custom-evaluators/datasets/funniness_dataset.json
    • Added a dataset for testing the funniness evaluator.
  • py/samples/framework-custom-evaluators/datasets/pii_detection_dataset.json
    • Added a dataset for testing the PII detection evaluator.
  • py/samples/framework-custom-evaluators/datasets/regex_dataset.json
    • Added a dataset for testing the regex evaluators.
  • py/samples/framework-custom-evaluators/local.env.example
    • Added an example environment file for local development.
  • py/samples/framework-custom-evaluators/prompts/deliciousness.prompt
    • Added a prompt definition for the LLM-based deliciousness evaluator.
  • py/samples/framework-custom-evaluators/prompts/funniness.prompt
    • Added a prompt definition for the LLM-based funniness evaluator.
  • py/samples/framework-custom-evaluators/prompts/pii_detection.prompt
    • Added a prompt definition for the LLM-based PII detection evaluator.
  • py/samples/framework-custom-evaluators/pyproject.toml
    • Defined project metadata and dependencies for the custom evaluators sample.
  • py/samples/framework-custom-evaluators/run.sh
    • Added a shell script to run the custom evaluators sample, including dependency installation and auto-restart.
  • py/samples/framework-custom-evaluators/src/__init__.py
    • Added package initialization for the custom evaluators sample.
  • py/samples/framework-custom-evaluators/src/constants.py
    • Added constants for regex patterns and permissive safety settings used by evaluators.
  • py/samples/framework-custom-evaluators/src/deliciousness_evaluator.py
    • Implemented the LLM-based deliciousness evaluator.
  • py/samples/framework-custom-evaluators/src/funniness_evaluator.py
    • Implemented the LLM-based funniness evaluator.
  • py/samples/framework-custom-evaluators/src/main.py
    • Implemented the main application logic for the custom evaluators sample, initializing Genkit and registering all evaluators.
  • py/samples/framework-custom-evaluators/src/pii_evaluator.py
    • Implemented the LLM-based PII detection evaluator.
  • py/samples/framework-custom-evaluators/src/regex_evaluator.py
    • Implemented a factory for regex-based evaluators.
  • py/samples/framework-prompt-demo/prompts/recipe.robot.prompt
    • Added a new prompt variant for robot-themed recipes.
  • py/samples/framework-prompt-demo/src/main.py
    • Updated the main application to add and demonstrate the new robot_chef_flow, and updated the main execution block to call it.
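For reviewers who have not worked with static prompts, here is a hedged sketch of what a .prompt file in this format generally looks like: YAML frontmatter for the model, config, and schemas, followed by a Handlebars template. The field values below are illustrative and are not copied from kitchensink.prompt or the evaluator prompts.

```
---
# Illustrative values only; the real fixtures set their own fields.
model: googleai/gemini-2.5-flash
config:
  temperature: 0.7
input:
  schema:
    food: string
output:
  format: json
---
Rate how delicious {{food}} sounds, on a scale of 1 to 5.
```

Files like sub/test.prompt exercise the same format from a nested directory, and test.variant.prompt exercises the name.variant.prompt naming convention.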

@MengqinShen requested a review from yesudeep on February 14, 2026 at 08:10
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request is a great addition, aligning the Python version of Genkit with the JavaScript version by adding static prompt fixtures and a new sample for custom evaluators. The new tests for static prompts are comprehensive. The custom evaluators sample is well-structured and provides clear examples of both LLM-based and non-LLM evaluators. I've found a few areas for improvement, mainly in the prompt files and configuration of the new sample, to enhance robustness and correctness. My detailed comments are below.

@github-actions github-actions bot added the js label on Feb 15, 2026