Commit db28e0a (2 parents: 6127f80 + 740c1a8)

10 files changed (+2433 −136 lines)

examples/cookbooks/LLaMA3_2_11B_Vision_Model.ipynb

Lines changed: 635 additions & 0 deletions
Large diffs are not rendered by default.

examples/cookbooks/MistralTechAgent.ipynb

Lines changed: 0 additions & 1 deletion
@@ -1,5 +1,4 @@
 {
-
 "cells": [
 {
 "cell_type": "markdown",

examples/cookbooks/Mistral_v0_3_Conversational_Demo.ipynb

Lines changed: 0 additions & 134 deletions
This file was deleted.
examples/cookbooks/Phi_3_5_Mini_Conversational.ipynb

Lines changed: 120 additions & 0 deletions

@@ -0,0 +1,120 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "468831db",
   "metadata": {
    "id": "468831db"
   },
   "source": [
    "# Phi-3.5 Mini Conversational Example"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ecb80fe2",
   "metadata": {
    "id": "ecb80fe2"
   },
   "source": [
    "**Description:**\n",
    "\n",
    "Demonstrates lightweight inference using Phi-3.5 Mini, suitable for smaller hardware and educational use cases."
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Phi_3_5_Mini_Conversational.ipynb)\n"
   ],
   "metadata": {
    "id": "S1erg7-JP6n3"
   },
   "id": "S1erg7-JP6n3"
  },
  {
   "cell_type": "markdown",
   "id": "0822161c",
   "metadata": {
    "id": "0822161c"
   },
   "source": [
    "**Dependencies**\n",
    "\n",
    "```python\n",
    "!pip install transformers accelerate\n",
    "!pip install torch\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0df5580b",
   "metadata": {
    "id": "0df5580b"
   },
   "source": [
    "**Tools Used**\n",
    "\n",
    "- HuggingFace Transformers\n",
    "- PyTorch"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d6df43d",
   "metadata": {
    "id": "9d6df43d"
   },
   "source": [
    "**YAML Prompt**\n",
    "\n",
    "```yaml\n",
    "system: You are a helpful assistant.\n",
    "user: Explain what an AI model is.\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e775593",
   "metadata": {
    "id": "0e775593"
   },
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer, AutoModelForCausalLM\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"microsoft/Phi-3.5-mini-instruct\")\n",
    "model = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3.5-mini-instruct\")\n",
    "\n",
    "inputs = tokenizer(\"What is an AI model?\", return_tensors=\"pt\")\n",
    "outputs = model.generate(**inputs, max_new_tokens=40)\n",
    "print(tokenizer.decode(outputs[0], skip_special_tokens=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "187fe354",
   "metadata": {
    "id": "187fe354"
   },
   "source": [
    "**Output**\n",
    "\n",
    "The model gives a basic explanation of what AI models do.\n",
    "\n",
    "What is an AI model?\n",
    "An AI model is a computer program that is trained on large amounts of data to perform tasks that normally require human intelligence.\n"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
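The notebook's YAML prompt defines both a system and a user turn, but the code cell passes only the raw question string to the tokenizer, so the system instruction is never used. One way to honor both turns is to convert the YAML into the list-of-messages format that `tokenizer.apply_chat_template` expects. A minimal sketch of that conversion — the `yaml_to_messages` helper is illustrative, not part of the notebook:

```python
# Convert a flat "role: content" YAML prompt into the chat-messages
# format used by transformers' apply_chat_template.
def yaml_to_messages(yaml_text: str) -> list:
    messages = []
    for line in yaml_text.strip().splitlines():
        role, _, content = line.partition(":")
        messages.append({"role": role.strip(), "content": content.strip()})
    return messages

prompt = """system: You are a helpful assistant.
user: Explain what an AI model is."""

messages = yaml_to_messages(prompt)
print(messages)
# With the model downloaded, the messages could then be fed to the
# tokenizer (requires network access, so it is left commented out):
# inputs = tokenizer.apply_chat_template(messages, return_tensors="pt")
```

This keeps the system instruction in play, which instruct-tuned models such as Phi-3.5-mini-instruct are trained to respect.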
examples/cookbooks/Phi_3_Medium_Conversational.ipynb

Lines changed: 122 additions & 0 deletions

@@ -0,0 +1,122 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ddb8fd1c",
   "metadata": {
    "id": "ddb8fd1c"
   },
   "source": [
    "# Phi-3 Medium Conversational Inference"
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/DhivyaBharathy-web/PraisonAI/blob/main/examples/cookbooks/Phi_3_Medium_Conversational.ipynb)\n"
   ],
   "metadata": {
    "id": "uMIvNFtYQDKO"
   },
   "id": "uMIvNFtYQDKO"
  },
  {
   "cell_type": "markdown",
   "id": "71a9292d",
   "metadata": {
    "id": "71a9292d"
   },
   "source": [
    "**Description:**\n",
    "\n",
    "Run a conversational inference using the Phi-3 Medium model with an efficient pipeline. The notebook illustrates basic loading, prompting, and response generation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9023e3f2",
   "metadata": {
    "id": "9023e3f2"
   },
   "source": [
    "**Dependencies**\n",
    "\n",
    "```python\n",
    "!pip install transformers accelerate\n",
    "!pip install torch torchvision\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8ed8c345",
   "metadata": {
    "id": "8ed8c345"
   },
   "source": [
    "**Tools Used**\n",
    "\n",
    "- HuggingFace Transformers\n",
    "- Accelerate\n",
    "- PyTorch"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "68cb4bd6",
   "metadata": {
    "id": "68cb4bd6"
   },
   "source": [
    "**YAML Prompt**\n",
    "\n",
    "```yaml\n",
    "system: You are a helpful assistant.\n",
    "user: What is the capital of France?\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "be5d0fde",
   "metadata": {
    "id": "be5d0fde"
   },
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer, AutoModelForCausalLM\n",
    "import torch\n",
    "\n",
    "model = AutoModelForCausalLM.from_pretrained(\"microsoft/Phi-3-medium-4k-instruct\")\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"microsoft/Phi-3-medium-4k-instruct\")\n",
    "\n",
    "prompt = \"What is the capital of France?\"\n",
    "inputs = tokenizer(prompt, return_tensors=\"pt\")\n",
    "outputs = model.generate(**inputs, max_new_tokens=20)\n",
    "print(tokenizer.decode(outputs[0], skip_special_tokens=True))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ac8c793",
   "metadata": {
    "id": "4ac8c793"
   },
   "source": [
    "**Output**\n",
    "\n",
    "This example shows the model answering a simple geography question.\n",
    "\n",
    "What is the capital of France? The capital of France is Paris.\n"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
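As the sample output shows ("What is the capital of France? The capital of France is Paris."), `model.generate` returns the prompt tokens followed by the continuation, so decoding `outputs[0]` echoes the question back. A common post-processing step is to drop the echoed prompt before displaying the answer. A minimal string-level sketch — the `strip_prompt` helper is illustrative and assumes the decoded text begins with the prompt verbatim:

```python
def strip_prompt(decoded: str, prompt: str) -> str:
    """Return only the model's continuation, dropping the echoed prompt."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

decoded = "What is the capital of France? The capital of France is Paris."
print(strip_prompt(decoded, "What is the capital of France?"))
# → The capital of France is Paris.
```

A token-level alternative is to decode only the generated slice, e.g. `outputs[0][inputs["input_ids"].shape[1]:]`, which avoids any ambiguity about how the tokenizer rendered the prompt text.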
