59 changes: 59 additions & 0 deletions cookbooks/Gradio/README.md
@@ -0,0 +1,59 @@
# Gradio Workbook Editor

## Prompt IDE with LastMile AI

This cookbook demonstrates the capabilities of our AI Workbook editor. It can run inference against locally hosted or remote models from many inference providers, including Hugging Face, OpenAI, and others.

It supports text, image, and audio model formats, letting you chain them together in a single notebook session!

With `aiconfig`, you can save the notebook state in a single JSON config file that you can share with others. In addition to editing the `FILE_NAME.aiconfig.json` file through our Editor interface, you can use the AIConfig SDK to interact with it in application code, giving you a single interface for running inference across any model and modality (media format).
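To give a feel for the file format, here is a rough sketch using only Python's standard library (the real SDK provides `AIConfigRuntime` and richer helpers; the trimmed JSON below is illustrative, not exhaustive): an `*.aiconfig.json` file is plain JSON that can be inspected directly.

```python
import json

# A trimmed, hypothetical example of the aiconfig file format.
raw = """
{
  "name": "The Tale of the Quick Brown Fox",
  "schema_version": "latest",
  "metadata": {"parameters": {}, "models": {}},
  "prompts": [
    {
      "name": "Generate a story",
      "input": "I was walking in {{city}} when all of a sudden",
      "metadata": {"model": "TextGeneration", "parameters": {"city": "New York"}}
    }
  ]
}
"""

config = json.loads(raw)
# Each prompt carries its own model choice and template parameters.
prompt_names = [p["name"] for p in config["prompts"]]
print(prompt_names)  # ['Generate a story']
```

Because the state is just JSON, the same file can be versioned, diffed, and loaded by the Editor UI or by application code.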

## Tech Stack

What you see here is a "local editor": a React frontend and a Flask server that let you edit `.aiconfig.json` files in a notebook-like UI.

- Frontend code:

### Gradio custom component

The Gradio custom component is currently a work in progress.

**Note**: We already have the Gradio backend that corresponds to the Flask server in the [`gradio-workbook`](https://github.com/lastmile-ai/gradio-workbook) repo.

We are working on using `sveltris` to package our React frontend to work with Gradio. Once that works, the same experience you see in this cookbook will be possible inside a Gradio custom component.

## Getting Started

**Instructions**:

- Clone https://github.com/lastmile-ai/aiconfig
- Go to the top-level repo directory: `cd <aiconfig>`

- Set up an alias for the `aiconfig` command:

```bash
alias aiconfig="python -m 'aiconfig.scripts.aiconfig_cli'"
```

- `cd <aiconfig>/cookbooks/Gradio`

- `pip3 install -r requirements.txt`

- Install `python-aiconfig-test` package from `test-pypi`:

```bash
pip3 install --index-url https://test.pypi.org/simple --extra-index-url https://pypi.org/simple python-aiconfig-test==1.1.25 --force
```

Now run this command to start the AIConfig editor:

```bash
aiconfig edit --aiconfig-path=huggingface.aiconfig.json --parsers-module-path=hf_model_parsers.py
```

## TODO

- Publish new version of aiconfig_extension_hugging_face package
- Update huggingface.aiconfig.json with clean examples
- Add video demo
33 changes: 33 additions & 0 deletions cookbooks/Gradio/hf_model_parsers.py
@@ -0,0 +1,33 @@
from aiconfig_extension_hugging_face import (
HuggingFaceAutomaticSpeechRecognitionTransformer,
HuggingFaceImage2TextTransformer,
HuggingFaceTextSummarizationTransformer,
HuggingFaceText2ImageDiffusor,
HuggingFaceText2SpeechTransformer,
HuggingFaceTextGenerationTransformer,
HuggingFaceTextTranslationTransformer,
)
from aiconfig import AIConfigRuntime


def register_model_parsers() -> None:
"""Register model parsers for HuggingFace models."""
# Audio --> Text
AIConfigRuntime.register_model_parser(HuggingFaceAutomaticSpeechRecognitionTransformer(), "AutomaticSpeechRecognition")

# Image --> Text
AIConfigRuntime.register_model_parser(HuggingFaceImage2TextTransformer(), "Image2Text")

# Text --> Image
AIConfigRuntime.register_model_parser(HuggingFaceText2ImageDiffusor(), "Text2Image")

# Text --> Audio
AIConfigRuntime.register_model_parser(HuggingFaceText2SpeechTransformer(), "Text2Speech")

# Text --> Text
AIConfigRuntime.register_model_parser(HuggingFaceTextGenerationTransformer(), "TextGeneration")
AIConfigRuntime.register_model_parser(HuggingFaceTextSummarizationTransformer(), "TextSummarization")
AIConfigRuntime.register_model_parser(HuggingFaceTextTranslationTransformer(), "Translation")

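Each `register_model_parser` call above maps a task-style model name (e.g. `"TextGeneration"`) to a parser instance, so prompts can refer to a task rather than a concrete checkpoint. A toy stand-in for that registry (illustrative only; `MiniRuntime` and `FakeTextGenerationParser` are hypothetical, not the actual SDK internals):

```python
class MiniRuntime:
    """Toy stand-in for AIConfigRuntime's parser registry (illustrative only)."""
    _parsers: dict = {}

    @classmethod
    def register_model_parser(cls, parser, model_name: str) -> None:
        # Prompts whose metadata names `model_name` get routed to `parser`.
        cls._parsers[model_name] = parser

    @classmethod
    def get_model_parser(cls, model_name: str):
        return cls._parsers[model_name]


class FakeTextGenerationParser:
    """Hypothetical parser; a real one would call a Hugging Face pipeline."""
    def run(self, prompt: str) -> str:
        return f"generated: {prompt}"


MiniRuntime.register_model_parser(FakeTextGenerationParser(), "TextGeneration")
parser = MiniRuntime.get_model_parser("TextGeneration")
print(parser.run("hello"))  # generated: hello
```

The indirection is what lets a single config file drive many modalities: the runtime looks up the parser by name and delegates inference to it.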
Binary file added cookbooks/Gradio/hi.mp3
Binary file not shown.
137 changes: 137 additions & 0 deletions cookbooks/Gradio/huggingface.aiconfig.json
@@ -0,0 +1,137 @@
{
"name": "The Tale of the Quick Brown Fox",
"schema_version": "latest",
"metadata": {
"parameters": {},
"models": {
"AudioSpeechRecognition": {
"model": "openai/whisper-small"
},
"Image2Text": {
"model": "Salesforce/blip-image-captioning-base"
},
"Text2Speech": {
"model": "suno/bark"
},
"Text2Image": {
"model": "runwayml/stable-diffusion-v1-5"
},
"TextGeneration": {
"model": "stevhliu/my_awesome_billsum_model",
"min_length": 10,
"max_length": 30
},
"TextSummarization": {
"model": "facebook/bart-large-cnn"
},
"TextTranslation": {
"model": "translation_en_to_fr"
}
},
"default_model": "TextGeneration",
"model_parsers": {
"AudioSpeechRecognition": "HuggingFaceAutomaticSpeechRecognitionTransformer",
"Image2Text": "HuggingFaceImage2TextTransformer",
"Text2Speech": "HuggingFaceText2SpeechTransformer",
"Text2Image": "HuggingFaceText2ImageTransformer",
"TextGeneration": "HuggingFaceTextGenerationTransformer",
"TextSummarization": "HuggingFaceTextSummarizationTransformer",
"TextTranslation": "HuggingFaceTextTranslationTransformer"
}
},
"description": "The Tale of the Quick Brown Fox",
"prompts": [
{
"name": "Generate a story",
"input": "I was walking in {{city}} when all of a sudden",
"metadata": {
"model": {
"name": "TextGeneration",
"settings": {
"max_length": 100,
"min_length": 50
}
},
"parameters": {
"city": "New York"
}
}
},
{
"name": "translate_instruction",
"input": "Tell the tale of {{topic}}",
"metadata": {
"model": {
"name": "TextTranslation",
"settings": {
"min_length": "",
"max_new_tokens": 100
}
},
"parameters": {
"topic": "the quick brown fox"
}
}
},
{
"name": "summarize_story",
"input": "Once upon a time, in a lush and vibrant forest, there lived a magnificent creature known as the Quick Brown Fox. This fox was unlike any other, possessing incredible speed and agility that awed all the animals in the forest. With its fur as golden as the sun and its eyes as sharp as emeralds, the Quick Brown Fox was admired by everyone, from the tiniest hummingbird to the mightiest bear. The fox had a kind heart and would often lend a helping paw to those in need. The Quick Brown Fox had a particular fondness for games and challenges. It loved to test its skills against others, always seeking new adventures to satisfy its boundless curiosity. Its favorite game was called \"The Great Word Hunt,\" where it would embark on a quest to find hidden words scattered across the forest.",
"metadata": {
"model": {
"name": "TextSummarization",
"settings": {}
},
"parameters": {}
}
},
{
"name": "generate_audio_title",
"input": "The Quick Brown Fox was admired by all the animals in the forest.",
"metadata": {
"model": "Text2Speech",
"parameters": {}
}
},
{
"name": "generate_caption",
"input": {
"attachments": [
{
"data": "/Users/jonathan/Desktop/pic.png",
"mime_type": "image/png"
}
]
},
"metadata": {
"model": "Image2Text",
"parameters": {}
}
},
{
"name": "openai_gen_itinerary",
"input": "Generate an itinerary for a 2 day trip to NYC ordered by {{order_by}}.",
"metadata": {
"model": "gpt-4",
"parameters": {
"order_by": "geographic location"
}
}
},
{
"name": "Audio Speech Recognition",
"input": {
"attachments": [
{
"data": "./hi.mp3",
"mime_type": "audio/mpeg"
}
]
},
"metadata": {
"model": "AudioSpeechRecognition",
"parameters": {}
}
}
],
"$schema": "https://json.schemastore.org/aiconfig-1.0"
}
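The prompt inputs above use handlebars-style `{{parameter}}` placeholders (`{{city}}`, `{{topic}}`) filled from the prompt's `parameters` block. A rough sketch of that substitution, assuming simple identifier-style names (the SDK's actual resolution logic is more involved):

```python
import re

def resolve(template: str, params: dict) -> str:
    """Replace {{name}} placeholders with values from params (simplified sketch)."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(params.get(m.group(1), m.group(0))),  # leave unknown names as-is
        template,
    )

resolved = resolve(
    "I was walking in {{city}} when all of a sudden",
    {"city": "New York"},
)
print(resolved)  # I was walking in New York when all of a sudden
```

Keeping parameters separate from the template is what makes a saved config reusable: the same prompt can be re-run with different values without editing the prompt text.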
5 changes: 5 additions & 0 deletions cookbooks/Gradio/requirements.txt
@@ -0,0 +1,5 @@
# AIConfig
python-aiconfig

# Hugging Face Extension for AIConfig
aiconfig-extension-hugging-face
40 changes: 40 additions & 0 deletions cookbooks/Gradio/travel.aiconfig.json
@@ -0,0 +1,40 @@
{
"name": "NYC Trip Planner",
"schema_version": "latest",
"metadata": {
"parameters": {
"": ""
},
"models": {
"gpt-3.5-turbo": {
"model": "gpt-3.5-turbo",
"top_p": 1,
"temperature": 1
},
"gpt-4": {
"model": "gpt-4",
"max_tokens": 3000,
"system_prompt": "You are an expert travel coordinator with exquisite taste."
}
},
"default_model": "gpt-3.5-turbo"
},
"description": "Intrepid explorer with ChatGPT and AIConfig",
"prompts": [
{
"name": "get_activities",
"input": "Tell me 10 fun attractions to do in NYC."
},
{
"name": "gen_itinerary",
"input": "Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}.",
"metadata": {
"model": "gpt-4",
"parameters": {
"order_by": "geographic location"
}
}
}
],
"$schema": "https://json.schemastore.org/aiconfig-1.0"
}
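The `gen_itinerary` prompt references `{{get_activities.output}}`, chaining one prompt's output into the next. A simplified sketch of how such a chain can be executed in order (the SDK handles this internally; `run_chain` and the stand-in `run_model` below are hypothetical):

```python
import re

def run_chain(prompts, run_model):
    """Run prompts in order, substituting {{name.output}} with earlier outputs (simplified)."""
    outputs = {}
    for p in prompts:
        text = re.sub(
            r"\{\{(\w+)\.output\}\}",
            lambda m: outputs[m.group(1)],  # look up a previously run prompt by name
            p["input"],
        )
        outputs[p["name"]] = run_model(text)
    return outputs

prompts = [
    {"name": "get_activities", "input": "Tell me 10 fun attractions to do in NYC."},
    {"name": "gen_itinerary", "input": "Generate an itinerary for: {{get_activities.output}}."},
]

# `run_model` is a stand-in; the real calls would hit gpt-3.5-turbo / gpt-4.
outputs = run_chain(prompts, run_model=lambda text: f"<model reply to: {text}>")
print(outputs["gen_itinerary"])
```

This is why prompt order matters in the config: a prompt can only reference the output of prompts that ran before it.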
21 changes: 21 additions & 0 deletions extensions/HuggingFace/python/huggingface.aiconfig.json
@@ -0,0 +1,21 @@
{
"name": "",
"schema_version": "latest",
"metadata": {
"parameters": {},
"models": {}
},
"description": "",
"prompts": [
{
"name": "prompt_1",
"input": "",
"metadata": {
"model": "gpt-4",
"parameters": {}
},
"outputs": []
}
],
"$schema": "https://json.schemastore.org/aiconfig-1.0"
}
@@ -94,7 +94,12 @@ async def run_inference(self, prompt: Prompt, aiconfig: "AIConfigRuntime", optio

model_settings = self.get_model_settings(prompt, aiconfig)
[pipeline_creation_data, _] = refine_pipeline_creation_params(model_settings)
model_name = aiconfig.get_model_name(prompt)

model_name: str = aiconfig.get_model_name(prompt)
# TODO: Clean this up after we allow people in the AIConfig UI to specify their
# own model name for HuggingFace tasks. This isn't great but it works for now
if model_name == "TextTranslation":
model_name = self._get_default_model_name()

if isinstance(model_name, str) and model_name not in self.pipelines:
device = self._get_device()
@@ -139,6 +144,9 @@ def get_output_text(
if isinstance(output_data, str):
return output_data
return ""

def _get_default_model_name(self) -> str:
return "openai/whisper-small"


def validate_attachment_type_is_audio(attachment: Attachment):
@@ -289,6 +289,11 @@ async def run_inference(self, prompt: Prompt, aiconfig: "AIConfigRuntime", optio
print(pipeline_building_disclaimer_message)

model_name: str = aiconfig.get_model_name(prompt)
# TODO: Clean this up after we allow people in the AIConfig UI to specify their
# own model name for HuggingFace tasks. This isn't great but it works for now
if model_name == "Text2Image":
model_name = self._get_default_model_name()

# TODO (rossdanlm): Figure out a way to save model and re-use checkpoint
# Otherwise right now a lot of these models are taking 5 mins to load with 50
# num_inference_steps (default value). See here for more details:
@@ -364,6 +369,9 @@ def _get_device(self) -> str:
return "mps"
return "cpu"

def _get_default_model_name(self) -> str:
return "runwayml/stable-diffusion-v1-5"

def _refine_responses(
response_images: List[Image.Image],
nsfw_content_detected: List[bool],
@@ -192,6 +192,11 @@ async def run_inference(self, prompt: Prompt, aiconfig: "AIConfigRuntime", optio
[pipeline_creation_data, _] = refine_pipeline_creation_params(model_settings)

model_name: str = aiconfig.get_model_name(prompt)
# TODO: Clean this up after we allow people in the AIConfig UI to specify their
# own model name for HuggingFace tasks. This isn't great but it works for now
if model_name == "Text2Speech":
model_name = self._get_default_model_name()

if isinstance(model_name, str) and model_name not in self.synthesizers:
self.synthesizers[model_name] = pipeline("text-to-speech", model_name)
synthesizer = self.synthesizers[model_name]
@@ -229,3 +234,6 @@ def get_output_text(
elif isinstance(output.data, str):
return output.data
return ""

def _get_default_model_name(self) -> str:
return "suno/bark"
@@ -240,7 +240,12 @@ async def run_inference(
completion_data = await self.deserialize(prompt, aiconfig, options, parameters)
completion_data["text_inputs"] = completion_data.pop("prompt", None)

model_name : str = aiconfig.get_model_name(prompt)
model_name: str = aiconfig.get_model_name(prompt)
# TODO: Clean this up after we allow people in the AIConfig UI to specify their
# own model name for HuggingFace tasks. This isn't great but it works for now
if model_name == "TextGeneration":
model_name = self._get_default_model_name()

if isinstance(model_name, str) and model_name not in self.generators:
self.generators[model_name] = pipeline('text-generation', model=model_name)
generator = self.generators[model_name]
@@ -303,3 +308,6 @@ def get_output_text(
# calls so shouldn't get here, but just being safe
return json.dumps(output_data.value, indent=2)
return ""

def _get_default_model_name(self) -> str:
return "stevhliu/my_awesome_billsum_model"
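Each parser hunk in this PR repeats the same temporary fallback: when the prompt names a task (e.g. "TextGeneration") rather than a concrete checkpoint, a per-parser default is substituted. Condensed as a sketch, with the task-to-checkpoint pairs taken from the hunks above (`resolve_model_name` itself is hypothetical, not part of the PR):

```python
# Task name -> default checkpoint, as hard-coded in the parsers above.
DEFAULT_MODELS = {
    "TextGeneration": "stevhliu/my_awesome_billsum_model",
    "Text2Speech": "suno/bark",
    "Text2Image": "runwayml/stable-diffusion-v1-5",
}

def resolve_model_name(model_name: str) -> str:
    """If the name is a task label with a known default, swap in the checkpoint."""
    return DEFAULT_MODELS.get(model_name, model_name)

print(resolve_model_name("Text2Speech"))                      # suno/bark
print(resolve_model_name("runwayml/stable-diffusion-v1-5"))   # passes through unchanged
```

As the TODO comments note, this is a stopgap until the AIConfig UI lets users specify their own model names for Hugging Face tasks.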