Commit 9d1e9ce ("new")
1 parent 624452e
File tree

11 files changed: +132, -311 lines

docs/blog/posts/ai_agent_tutorial.md

Lines changed: 0 additions & 10 deletions
````diff
@@ -264,13 +264,3 @@ This makes it easy for AI Agents to recognize and use tools based on text input.
 By combining all these pieces, you can build smart AI Agents that think, act, and assist like pros!
 
 In the next tutorial we will discuss the AI Agents workflow.
-
-
-``` mermaid
-graph LR
-    A[Start] --> B{Error?};
-    B -->|Yes| C[Hmm...];
-    C --> D[Debug];
-    D --> B;
-    B ---->|No| E[Yay Dieo!];
-```
````

docs/blog/posts/huggingface.md

Lines changed: 87 additions & 0 deletions
---
title: "Deploy Machine Learning Models on the HuggingFace Hub using Gradio"
date: 2025-02-24
authors:
  - diego-init
categories:
  - AI Platforms
tags:
  - AI
  - agents
  - LLM
  - automation
  - HuggingFace
comments: true
---
# Deploy Machine Learning Models on the HuggingFace Hub using Gradio

In this tutorial, we will learn how to deploy models to Hugging Face Spaces, a platform designed to create, share, and deploy machine learning (ML) demo applications. Each Space also exposes an API, so you can call your app programmatically.

Each Spaces environment is limited to 16 GB of RAM, 2 CPU cores, and 50 GB of non-persistent disk space by default, which is available free of charge. If you require more resources, you can upgrade to better hardware.

The first step is to create an account on the [Hugging Face website](https://huggingface.co/).

![Hugging Face sign-up page](img/image4.png)
Next, we will create a new Space by clicking "Spaces" in the upper bar and then the "+ New Space" button. We can select the free tier for the Space hardware and set it up as a "Public" space.

As you can see, we selected Gradio as our web tool.

![Creating a new Space with Gradio selected](img/image5.png)

This is the result after creating the Space:

![The newly created Space](img/image6.png)

We still need to add two files to our project: <code>app.py</code> and <code>requirements.txt</code>.

Let's create the requirements first:

![Contents of the requirements.txt file](img/image7.png)
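The file contents appear only in the screenshot above; as a reference, a plausible minimal `requirements.txt` for this demo (assumed, not copied from the screenshot) lists the libraries that `app.py` imports beyond Gradio itself, since the Space's Gradio SDK is provided automatically:

```txt
transformers
torch
```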
After creating the "requirements.txt" file, the next step is to create an "app.py" file. To do this, copy the code below and paste it into the new file.

```python
import gradio as gr
from transformers import pipeline

# load the image-to-text pipeline with the BLIP architecture
pipe = pipeline("image-to-text",
                model="Salesforce/blip-image-captioning-base")

# this function receives the input image, calls the pipeline,
# and extracts the generated text from the output
def launch(input):
    out = pipe(input)
    return out[0]['generated_text']

# Gradio interface: the input is an image and the output is text
iface = gr.Interface(launch,
                     inputs=gr.Image(type='pil'),
                     outputs="text")

# if you want to share a public link, set share=True
iface.launch()
```
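The pipeline returns a list of dictionaries, one per generated caption, which is why `launch` indexes into `out[0]`. A minimal sketch of that extraction step, using a made-up sample output for illustration:

```python
# the image-to-text pipeline returns a list of dicts,
# e.g. [{'generated_text': 'a dog running on the beach'}]
def extract_caption(out):
    # keep only the text of the first (and usually only) result
    return out[0]['generated_text']

# made-up sample output, only for illustration
sample = [{'generated_text': 'a dog running on the beach'}]
print(extract_caption(sample))  # a dog running on the beach
```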
Once you commit your files and click on the App tab, the Space will automatically load the required files and build the interface. When the build finishes, you will see the screen below, which serves as our interface for interacting with the app.

![The Gradio app interface](img/image8.png){width="400"}

Time to test our solution!

![Testing the app with a sample image](img/image9.png){width="800"}
![Generated caption for the sample image](img/image10.png){width="800"}
At the bottom of your app screen, you can click "Use via API" to see sample code for calling your model through an API.

![The "Use via API" panel](img/image11.png)

To run the program locally, copy the code snippet onto your machine and execute the file. Remember to specify the path to the input image and to call the API using the "/predict" endpoint.
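As a sketch of what such a call can look like with the `gradio_client` package (the Space id and image path below are placeholders, not the actual snippet from the screenshot):

```python
from gradio_client import Client, handle_file

# placeholder Space id -- replace with your own "username/space-name";
# for a private Space, also pass hf_token="hf_..." to Client(...)
client = Client("your-username/your-space")

# call the /predict endpoint with a local image file
result = client.predict(handle_file("path/to/image.jpg"),
                        api_name="/predict")
print(result)
```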
If you wish to keep your API private, you must pass your Hugging Face token to make the call.

## **References**

This tutorial is based on the DeepLearning.AI course "Open Source Models with Hugging Face."

docs/blog/posts/img/image10.png (7.56 KB)
docs/blog/posts/img/image11.png (145 KB)
docs/blog/posts/img/image4.png (226 KB)
docs/blog/posts/img/image5.png (125 KB)
docs/blog/posts/img/image6.png (53 KB)
docs/blog/posts/img/image7.png (31.9 KB)
docs/blog/posts/img/image8.png (129 KB)
docs/blog/posts/img/image9.png (454 KB)
