
Commit d2b1232

Update Blog “build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis”
1 parent d321f13 commit d2b1232

12 files changed (+55, -3 lines changed)

content/blog/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis.md

Lines changed: 55 additions & 3 deletions
@@ -82,7 +82,7 @@ HPE MLIS is accessed by clicking on 'HPE MLIS' tile in *Tools & Frameworks / Dat

![](/img/mlis.jpg)

- To deploy a pre-packaged LLM(Meta/Llama3-8b-instruct) in HPE MLIS, Add 'Registry', 'Packaged models' and 'Deployments'.
+ To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, add a 'Registry', add a 'Packaged Model', and create a 'Deployment'.

@@ -92,9 +92,61 @@ Add a new registry of type 'NGC', which can be used to access pre-packaged LLMs.

![](/img/mlis-registry.jpg)

- After deployment, access FlowiseAI via the configured endpoint (e.g., `https://chatbot.ingress.pcai0104.ld7.hpecolo.net`). Log in with your admin credentials.

- ### 2. Build Your Chatbot Flow
+
+ ### 2. Add a 'Packaged Model'
+
+ Create a new packaged model by clicking the 'Add new model' tab and filling in the details as shown in the screenshots below.
+
+ ![](/img/package-model-1.jpg)
+
+ Choose the 'Registry' created in the previous step, and select 'meta/llama-3.1-8b-instruct' under 'NGC Supported Models'.
+
+ ![](/img/package-model-2.jpg)
+
+ Set the resources required by the model, either by choosing a built-in 'Resource Template' or 'Custom', as shown below.
+
+ ![](/img/package-model-3.jpg)
+
+ ![](/img/package-model-4.jpg)
+
+ The newly created packaged model appears in the UI.
+
+ ![](/img/package-model-final.jpg)
+
+ ### 3. Create a 'Deployment'
+
+ Using the 'Packaged Model' created in the previous step, create a new deployment by clicking 'Create new deployment'.
+
+ ![](/img/deployment-1.jpg)
+
+ Give a name to the 'Deployment' and choose the 'Packaged Model' created in the previous step.
+
+ ![](/img/deployment-2.jpg)
+
+ ![](/img/deployment-3.jpg)
+
+ Set 'Auto scaling' as required. In this example, we have used the 'fixed-1' template.
+
+ ![](/img/deployment-4.jpg)
+
+ ![](/img/deployment-5.jpg)
+
+ The LLM is now deployed and can be accessed using the 'Endpoint' and its corresponding 'API Token'.
+
+ ![](/img/deployment-6.jpg)
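
As a quick smoke test, you can call the new deployment directly over HTTP. The sketch below assumes the deployment exposes an OpenAI-compatible `/v1/chat/completions` route (typical for NIM-packaged models such as meta/llama-3.1-8b-instruct); the endpoint URL and API token are placeholders for the values shown in the MLIS UI.

```python
import requests

# Placeholders: copy the real values from the MLIS 'Deployment' details page.
MLIS_ENDPOINT = "https://<your-deployment-endpoint>"
API_TOKEN = "<your-mlis-api-token>"

response = requests.post(
    f"{MLIS_ENDPOINT}/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta/llama-3.1-8b-instruct",
        "messages": [{"role": "user", "content": "Hello! Introduce yourself in one sentence."}],
        "max_tokens": 128,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```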

Use FlowiseAI’s drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.
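
The LLM node in FlowiseAI takes essentially the same three values used above: the MLIS endpoint as the base URL, the API token as the key, and the model name. Here is a minimal sketch of that wiring, using the OpenAI-compatible Python client as a stand-in for what the Flowise node does under the hood; all values are placeholders.

```python
from openai import OpenAI

# These map to the fields typically set on an OpenAI-compatible LLM node in Flowise:
# base path / endpoint, API key, and model name. All values are placeholders.
client = OpenAI(
    base_url="https://<your-deployment-endpoint>/v1",  # MLIS 'Endpoint' + /v1
    api_key="<your-mlis-api-token>",                   # MLIS 'API Token'
)

reply = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "What will this chatbot help users with?"}],
)
print(reply.choices[0].message.content)
```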

static/img/deployment-1.jpg (27.1 KB)
static/img/deployment-2.jpg (24.5 KB)
static/img/deployment-3.jpg (24.7 KB)
static/img/deployment-4.jpg (32.4 KB)
static/img/deployment-5.jpg (32.2 KB)
static/img/deployment-6.jpg (37.9 KB)
static/img/package-model-1.jpg (27.1 KB)
static/img/package-model-2.jpg (39 KB)
static/img/package-model-3.jpg (43.6 KB)
