Conversation
Force-pushed 9c946f9 to 4e7045b
Force-pushed 4e7045b to 65180f4
extensions/HuggingFace/python/src/aiconfig_extension_hugging_face/__init__.py
| "HuggingFaceTextSummarizationTransformer", | ||
| "HuggingFaceTextTranslationTransformer", | ||
| "HuggingFaceText2SpeechTransformer", | ||
| "HuggingFaceAutomaticSpeechRecognition", |
cc @Ankush-lastmile you may have merge conflicts with your other PR?
nit: Can we also do these in alphabetical order?
        Returns:
            str: Serialized representation of the prompt and inference settings.
        """
        await ai_config.callback_manager.run_callbacks(
Can you add a TODO linking to #822 to fix later (and add automated testing)? I'll do this later.
what's broken?
We're not using the correct model_id so I need to pass this in so we can re-create the prompt
    inputs = validate_and_retrieve_image_from_attachments(prompt)

    completion_params["inputs"] = inputs
cc @Ankush-lastmile when this lands, can you link to the Attachment format/standardizing inputs issue we mentioned? Jonathan, you don't need to do any work; just making sure Ankush is aware.
...sions/HuggingFace/python/src/aiconfig_extension_hugging_face/local_inference/image_2_text.py
    completion_data = await self.deserialize(prompt, aiconfig, parameters)
    print(f"{completion_data=}")
    inputs = completion_data.pop("inputs")
    model = completion_data.pop("model")
This is never used again, so why would we have it in completion data in the first place? If it's never used, please prefix it with `_model`.
it has to be removed from completion data. Something is definitely off here, I just don't know exactly what. cc @Ankush-lastmile
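The shape being discussed here can be sketched as a small helper. This is a minimal illustration, not the extension's actual code: the helper name `split_completion_params` is made up, and the key names mirror the diff above. The point is that `model` must be popped so it never reaches the pipeline kwargs, but since it is otherwise unused it gets an underscore prefix.

```python
def split_completion_params(completion_data: dict) -> tuple:
    """Pop non-kwarg entries out of completion_data, returning (inputs, rest).

    "model" is popped so it does not leak into the pipeline call's kwargs,
    but it is deliberately unused here (hence the underscore prefix) because
    the pipeline was already constructed with the right model.
    """
    inputs = completion_data.pop("inputs")
    _model = completion_data.pop("model", None)  # removed from kwargs, intentionally unused
    return inputs, completion_data

# Everything left in the dict is safe to splat into the pipeline call.
inputs, params = split_completion_params(
    {"inputs": "fox.png", "model": "Salesforce/blip-image-captioning-base", "max_new_tokens": 20}
)
```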
    model = completion_data.pop("model")
    response = captioner(inputs, **completion_data)

    output = ExecuteResult(output_type="execute_result", data=response, metadata={})
Oh sweet, so response is just purely text? Nice! Also let's add `execution_count=0`.
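A hedged sketch of the suggested change, using a stand-in dataclass: the real `ExecuteResult` lives in aiconfig and may have a different signature, so this only illustrates the requested `execution_count` field.

```python
from dataclasses import dataclass, field
from typing import Any

# Stand-in for aiconfig's ExecuteResult, purely to illustrate the suggested
# execution_count field; the real class may differ.
@dataclass
class ExecuteResult:
    output_type: str
    data: Any
    execution_count: int = 0
    metadata: dict = field(default_factory=dict)

# response here is the plain-text caption returned by the pipeline
output = ExecuteResult(
    output_type="execute_result",
    data="red fox in the woods",
    execution_count=0,
    metadata={},
)
```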
        return prompt.outputs

    def get_output_text(self, response: dict[str, Any]) -> str:
        raise NotImplementedError("get_output_text is not implemented for HuggingFaceImage2TextTransformer")
Please update to match the others, like the ones in text_generation.py.
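What "matching text_generation.py" could look like, as a hedged sketch: a plain dict stands in for aiconfig's Output type here, and the `[{"generated_text": ...}]` shape is what HF image-to-text pipelines typically return. The real implementation in the extension may differ.

```python
from typing import Any

def get_output_text(output: dict[str, Any]) -> str:
    """Pull the text out of an execute_result output; return "" otherwise."""
    if output.get("output_type") == "execute_result":
        data = output.get("data")
        if isinstance(data, str):
            return data
        # image-to-text pipelines commonly return [{"generated_text": "..."}]
        if isinstance(data, list) and data and "generated_text" in data[0]:
            return data[0]["generated_text"]
    return ""

caption = get_output_text(
    {"output_type": "execute_result", "data": [{"generated_text": "red fox in the woods"}]}
)
```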
def validate_attachment_type_is_image(attachment: Attachment):
    if not hasattr(attachment, "mime_type"):
        raise ValueError(f"Attachment has no mime type. Specify the image mimetype in the aiconfig")
Nit: add the word "Please" before "Specify".
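A minimal sketch of the validator with the suggested wording. `Attachment` below is a tiny stand-in for aiconfig's Attachment type (the real one is a Pydantic model), and the `image/` prefix check is an assumed extension of the mime-type validation, not part of the diff.

```python
class Attachment:
    """Illustrative stand-in for aiconfig's Attachment type."""
    def __init__(self, mime_type=None, data=None):
        if mime_type is not None:
            self.mime_type = mime_type
        self.data = data

def validate_attachment_type_is_image(attachment) -> None:
    # Reviewer's wording: "Please" before "Specify"
    if not hasattr(attachment, "mime_type"):
        raise ValueError("Attachment has no mime type. Please specify the image mimetype in the aiconfig")
    # Assumed follow-up check: reject non-image mime types outright
    if not attachment.mime_type.startswith("image/"):
        raise ValueError(f"Invalid attachment mimetype {attachment.mime_type}. Please use a mimetype that starts with 'image/'")

validate_attachment_type_is_image(Attachment(mime_type="image/png"))  # passes silently
```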
        # See todo above, but for now only support uri's
        raise ValueError(f"Attachment #{i} data is not a uri. Please specify a uri for the image attachment in prompt {prompt.name}.")
Doesn't have to be in this diff, but please add support for base64 as well. This is important since, if we want to chain prompts, some of our models output in base64 format (ex: text_2_image).
At the very least, create an issue to track this.
    print(f"{completion_data=}")
    inputs = completion_data.pop("inputs")
    model = completion_data.pop("model")
    response = captioner(inputs, **completion_data)
Does the pipeline only support inputs as a URI, or does it also work with base64-encoded data? If not, please make a task that we need to convert from base64 --> image URI first.
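For the base64 question above: HF image-to-text pipelines accept URLs, local paths, or PIL images, so a base64 data URI would first need decoding. A stdlib-only sketch of that decoding step (the function name is made up; the PIL wrapping is left as a comment so the sketch stays self-contained):

```python
import base64

def data_uri_to_bytes(uri: str) -> bytes:
    """Decode a base64 data URI (e.g. "data:image/png;base64,...") into raw image bytes."""
    header, _, b64_payload = uri.partition(",")
    if not header.startswith("data:") or "base64" not in header:
        raise ValueError("Expected a base64-encoded data URI")
    return base64.b64decode(b64_payload)

# From here, something like PIL.Image.open(io.BytesIO(raw)) would give the
# pipeline a usable image (PIL import omitted to keep the sketch stdlib-only).
raw = data_uri_to_bytes("data:image/png;base64," + base64.b64encode(b"\x89PNG").decode())
```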
...sions/HuggingFace/python/src/aiconfig_extension_hugging_face/local_inference/image_2_text.py
Test: patch #816, then run
python extensions/HuggingFace/python/src/aiconfig_extension_hugging_face/local_inference/run_hf_example.py extensions/HuggingFace/python/src/aiconfig_extension_hugging_face/local_inference/hf_local_example.aiconfig.json
-> "red fox in the woods"
Force-pushed 65180f4 to 448d52a
HF transformers: Small fixes / nits

Small fixes from comments from Sarmad + me on these diffs: #854, #855, #821

Main things I did:
- rename `refine_chat_completion_params` --> `chat_completion_params`
- edit `get_text_output` to not check for `OutputDataWithValue`
- sorted the init file to be alphabetical
- fixed some typos/print statements
- made some error messages a bit more intuitive with prompt name
- sorted some imports
- fixed old class name `HuggingFaceAutomaticSpeechRecognition` --> `HuggingFaceAutomaticSpeechRecognitionTransformer`

## Test Plan
These are all small nits and shouldn't change functionality.
[AIC-py] hf image2text parser