RAG: add OpenAI text embedding model #36081
base: master
Conversation
Summary of Changes
Hello @mohamedawnallah, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly expands Apache Beam's machine learning capabilities by integrating OpenAI's text embedding models into its RAG (Retrieval Augmented Generation) module. This allows users to seamlessly incorporate powerful, state-of-the-art text embeddings from OpenAI into their Beam pipelines for tasks like semantic search and information retrieval, enhancing the overall intelligence and performance of RAG applications built with Beam.
Highlights
- OpenAI Text Embedding Integration: Introduces `OpenAITextEmbeddings` to the Beam RAG module, enabling the use of OpenAI models for generating text embeddings within Apache Beam pipelines (a usage sketch follows this list).
- New Module and Tests: Adds a new Python module (`open_ai.py`) for the OpenAI embedding implementation and a corresponding test file (`open_ai_test.py`) to ensure functionality.
- Refactored Test Utility: Moves a common test utility function (`chunk_approximately_equals`) to a shared `test_utils` module, cleaning up existing test files like `huggingface_test.py` and `vertex_ai_test.py`.
- Dependency Management: Updates `setup.py` to include `openai` as a dependency, making it available via the `ml` extra and a new `openai` extra, ensuring proper installation.
- OpenAI API Parameter Fix: Corrects the way `dimensions` is passed to the OpenAI API in the existing `_OpenAITextEmbeddingHandler`, ensuring the parameter is sent as an integer.
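For orientation, here is a rough sketch of how the new transform might be wired into a pipeline, following the pattern of the existing RAG embedders (`HuggingfaceTextEmbeddings`, `VertexAITextEmbeddings`). The import path (`apache_beam.ml.rag.embeddings.open_ai`), the constructor arguments (`model_name`, `dimensions`), and reading credentials from the `OPENAI_API_KEY` environment variable are assumptions inferred from the summary above, not taken from the diff:

```python
# Minimal sketch only. Assumed: the import path for the new open_ai.py module,
# the OpenAITextEmbeddings constructor arguments, and that credentials come
# from the OPENAI_API_KEY environment variable.
import tempfile

import apache_beam as beam
from apache_beam.ml.rag.types import Chunk, Content
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.rag.embeddings.open_ai import OpenAITextEmbeddings  # assumed path

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create([
            Chunk(content=Content(
                text="Apache Beam is a unified model for batch and streaming."))
        ])
        # MLTransform computes the embedding and attaches it to each Chunk.
        | MLTransform(write_artifact_location=tempfile.mkdtemp()).with_transform(
            OpenAITextEmbeddings(
                model_name="text-embedding-3-small",  # illustrative OpenAI model
                dimensions=256,                       # assumed optional parameter
            ))
        | beam.Map(print))
```

If the new extra lands as described, the dependency would presumably be installed with `pip install "apache-beam[openai]"` (or pulled in via the `ml` extra).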
Force-pushed from caeb740 to 8a15186.
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.
Force-pushed from 9ecb52f to e9c45c2.
@damccorm – is an OpenAI API key available for CI? The only blocker for this PR, as far as I could see, is confirming the availability of that key.
I don't think this is enabled - @jrmccluskey have you looked into ways to enable this at all as part of the OpenAI work/reviews you've done?
No, at the moment we do not have an OpenAI API key for Beam. I was looking into what the options were for that; I think it comes down to needing an Apache billing account for OpenAI.
This pull request has been marked as stale due to 60 days of inactivity. It will be closed in 1 week if no further activity occurs. If you think that's incorrect or this pull request requires a review, please simply write any comment. If closed, you can revive the PR at any time and @mention a reviewer or discuss it on the dev@beam.apache.org list. Thank you for your contributions.
waiting on the author
Description
Closes #36083.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.