content/blog/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis.md (3 additions, 11 deletions)
@@ -17,7 +17,7 @@ This blog post walks you through deploying Flowise on HPE PCAI to build a modern
## HPE Private Cloud AI

-[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA NIM* LLMs, along with a powerful suite of AI tools and frameworks for *Data Engineering*, *Analytics*, and *Data Science*.
+[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA Inference Microservices (NIM)* LLMs, along with a powerful suite of AI tools and frameworks for *Data Engineering*, *Analytics*, and *Data Science*.

HPE Machine Learning Inference Software (HPE MLIS) is a user-friendly solution designed to simplify and control the deployment, management, and monitoring of machine learning (ML) models, including LLMs, at any scale.
@@ -87,18 +87,14 @@ HPE MLIS is accessed by clicking on 'HPE MLIS' tile in *Tools & Frameworks / Dat

-To deploy a pre-packaged LLM(Meta/Llama3-8b-instruct) in HPE MLIS, Add 'Registry', 'Packaged models' and create 'Deployments'.
-
-
+To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, add a 'Registry', add a 'Packaged model', and create a 'Deployment'.

### 1. Add 'Registry'

-Add a new registry of type 'NGC', which can be used to access pre-packaged LLMs.
+Add a new registry of type 'NVIDIA GPU Cloud' (NGC), which can be used to access pre-packaged LLMs.

-
-
### 2. Add 'Packaged Model'

Create a new packaged model by clicking the 'Add new model' tab, and fill in the details as shown in the screenshots.
@@ -119,8 +115,6 @@ Newly created packaged model appears in the UI.

-
-
### 3. Create 'Deployment'

Using the 'Packaged model' created in the previous step, create a new deployment by clicking on 'Create new deployment'.
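Once the deployment is live, MLIS shows an inference 'Endpoint' URL and an API key for it. NIM-packaged models such as Meta/Llama3-8b-instruct are typically served behind an OpenAI-compatible chat-completions API, so a quick smoke test from Python might look like the sketch below. This is a minimal sketch, not part of the original post: the endpoint URL, API key, and model name are placeholders you must replace with the values MLIS displays for your deployment.

```python
import json
import urllib.request

# Placeholder values -- substitute the 'Endpoint' URL and API key
# shown by HPE MLIS for your own deployment.
MLIS_ENDPOINT = "https://my-llama-deployment.example.com"
API_KEY = "your-mlis-api-key"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the MLIS endpoint."""
    payload = {
        "model": "meta/llama3-8b-instruct",  # assumed model name; check your deployment
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        url=f"{MLIS_ENDPOINT}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

if __name__ == "__main__":
    # The network call only runs when executed against a live endpoint.
    with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
```

If the endpoint answers, the deployment, API key, and model name are all wired up correctly; any HTTP 401/404 points at the key or URL rather than the model itself.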
@@ -143,8 +137,6 @@ The LLM is now deployed and can be accessed using the 'Endpoint', and correspond

-
-
## Create AI Chatbot in Flowise

Use Flowise's drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.
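Once the chatflow is saved, Flowise also exposes it through its REST prediction API (`POST /api/v1/prediction/{chatflowId}`), so the chatbot can be called from outside the UI. A minimal sketch, assuming a Flowise instance at a placeholder base URL and a hypothetical chatflow ID:

```python
import json
import urllib.request

# Placeholders -- substitute your Flowise base URL and the chatflow ID
# shown in the Flowise UI for your saved chatflow.
FLOWISE_URL = "https://flowise.example.com"
CHATFLOW_ID = "00000000-0000-0000-0000-000000000000"

def build_prediction_request(question: str) -> urllib.request.Request:
    """Build a request for Flowise's prediction API for one chatflow."""
    return urllib.request.Request(
        url=f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # The network call only runs against a live Flowise instance.
    with urllib.request.urlopen(build_prediction_request("What is HPE PCAI?")) as resp:
        print(json.load(resp).get("text"))
```

This keeps the LLM behind MLIS and the conversation logic in Flowise, while any external application only needs the single prediction URL.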