
Commit 5bd565a

Update Blog “build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis”
1 parent 908646e commit 5bd565a

File tree: 1 file changed (+3, -11 lines)


content/blog/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis.md

Lines changed: 3 additions & 11 deletions
@@ -17,7 +17,7 @@ This blog post walks you through deploying Flowise on HPE PCAI to build a modern
 
 ## HPE Private Cloud AI
 
-[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA NIM* LLMs, along with a powerful suite of AI tools and frameworks for *Data Engineering**Analytics*, and *Data Science*.
+[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA Inference Microservices (NIM)* LLMs, along with a powerful suite of AI tools and frameworks for *Data Engineering**Analytics*, and *Data Science*.
 
 HPE Machine Learning Inference Software is a user-friendly solution designed to simplify and control the deployment, management, and monitoring of machine learning (ML) models, including LLMs, at any scale.
 
@@ -87,18 +87,14 @@ HPE MLIS is accessed by clicking on 'HPE MLIS' tile in *Tools & Frameworks / Dat
 
 ![](/img/mlis.jpg)
 
-To deploy a pre-packaged LLM(Meta/Llama3-8b-instruct) in HPE MLIS, Add 'Registry', 'Packaged models' and create 'Deployments'.
-
-
+To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, Add 'Registry', 'Packaged models' and create 'Deployments'.
 
 ### 1. Add 'Registry'
 
-Add a new registry of type 'NGC', which can be used to access pre-packaged LLMs.
+Add a new registry of type 'NVIDIA GPU Cloud' (NGC), which can be used to access pre-packaged LLMs.
 
 ![](/img/mlis-registry.jpg)
 
-
-
 ### 2. Add 'Packaged Model'
 
 Create a new Packaged Model by clicking 'Add new model' tab, and fill-in the details as shown in screen shots.
@@ -119,8 +115,6 @@ Newly created packaged model appears in the UI.
 
 ![](/img/package-model-final.jpg)
 
-
-
 ### 3. Create 'Deployment'
 
 Using the 'packaged Model' created in previous step, create a new deployment by clicking on 'Create new deployment'
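Once the deployment reports Ready, a quick sanity check is to call its inference endpoint directly. Below is a minimal sketch, assuming the NIM-packaged Llama3-8b-instruct exposes an OpenAI-compatible API (typical for NVIDIA NIM containers); the base URL, API key, and model identifier are placeholders to be copied from the deployment details in HPE MLIS, not values given in this post.

```python
# Hedged sketch: query the Llama3-8b-instruct deployment through an
# OpenAI-compatible endpoint. URL, API key, and model name are placeholders;
# copy the real values from the MLIS deployment details page.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-mlis-deployment-endpoint>/v1",  # MLIS deployment 'Endpoint'
    api_key="<your-mlis-api-key>",                          # token issued by MLIS
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",  # model name as exposed by the NIM container (assumed)
    messages=[{"role": "user", "content": "What is HPE Private Cloud AI?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```

The same three values (endpoint URL, API key, model name) are what the Flowise LLM node needs in the next section.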
@@ -143,8 +137,6 @@ The LLM is now deployed and can be accessed using the 'Endpoint', and correspond
 
 ![](/img/deployment-6.jpg)
 
-
-
 ## Create AI Chatbot in Flowise
 
 Use Flowise's drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.
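For the LLM node, an OpenAI-compatible chat model node (for example a ChatOpenAI-style node in Flowise) can point at the MLIS endpoint as its base URL, using the MLIS API key and deployed model name. Once the chatflow is saved, it can also be called from code through Flowise's prediction REST API. A minimal sketch follows; the host, chatflow ID, and API key are placeholders, and the 'text' response field reflects Flowise's usual response shape rather than anything specified in this post.

```python
# Hedged sketch: call a saved Flowise chatflow over its REST prediction API.
# Host, chatflow ID, and API key are placeholders; Flowise shows the exact
# endpoint in the chatflow's API/embed dialog.
import requests

FLOWISE_URL = "https://<your-flowise-host>/api/v1/prediction/<chatflow-id>"
headers = {"Authorization": "Bearer <your-flowise-api-key>"}  # omit if the chatflow is unsecured

payload = {"question": "Summarize what HPE MLIS does."}
resp = requests.post(FLOWISE_URL, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
print(resp.json().get("text"))  # the answer is typically returned under the "text" field
```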
