content/blog/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis.md
55 additions & 3 deletions

@@ -82,7 +82,7 @@ HPE MLIS is accessed by clicking on 'HPE MLIS' tile in *Tools & Frameworks / Dat

-To deploy a pre-packaged LLM(Meta/Llama3-8b-instruct) in HPE MLIS, Add 'Registry', 'Packaged models' and 'Deployments'.
+To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, add a 'Registry', add 'Packaged models', and create 'Deployments'.

@@ -92,9 +92,61 @@ Add a new registry of type 'NGC', which can be used to access pre-packaged LLMs.

-After deployment, access FlowiseAI via the configured endpoint (e.g., `https://chatbot.ingress.pcai0104.ld7.hpecolo.net`). Log in with your admin credentials.
-### 2. Build Your Chatbot Flow
+
+### 2. Add 'Packaged Model'
+
+Create a new packaged model by clicking the 'Add new model' tab and filling in the details as shown in the screenshots.
+
+Choose the 'Registry' created in the previous step, and select 'meta/llama-3.1-8b-instruct' for 'NGC Supported Models'.
+
+Set the resources required for the model by choosing either a built-in 'Resource Template' or 'Custom', as shown below.
+
+The newly created packaged model appears in the UI.
+
+### 3. Create 'Deployment'
+
+Using the 'Packaged Model' created in the previous step, create a new deployment by clicking 'Create new deployment'.
+
+Give the 'Deployment' a name and choose the 'Packaged Model' created in the previous step.
+
+Set 'Auto scaling' as required. In this example, we have used the 'fixed-1' template.
+
+The LLM is now deployed and can be accessed using the 'Endpoint' and the corresponding 'API Token'.
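
At this point the deployed model can be exercised directly, before it is wired into Flowise. The snippet below is a minimal sketch, assuming the deployment exposes an OpenAI-compatible chat completions API (as NIM-packaged Llama models typically do); the base URL and API key are placeholders for the 'Endpoint' and 'API Token' values shown in MLIS.

```python
# Minimal sketch: call the MLIS-deployed LLM directly.
# Assumes an OpenAI-compatible API; replace the placeholders below with the
# 'Endpoint' and 'API Token' values shown in the MLIS deployment view.
from openai import OpenAI

client = OpenAI(
    base_url="https://<mlis-deployment-endpoint>/v1",  # placeholder: MLIS 'Endpoint'
    api_key="<mlis-api-token>",                        # placeholder: MLIS 'API Token'
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```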
Use FlowiseAI’s drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.
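
In Flowise, this typically means pointing an OpenAI-compatible chat model node at the MLIS 'Endpoint' and 'API Token' from the previous step. Once the chatflow is saved, it can also be queried from outside the UI; the sketch below assumes Flowise's standard prediction REST endpoint, and the URL, chatflow ID, and API key are placeholders.

```python
# Minimal sketch: query a saved Flowise chatflow over its prediction REST API.
# The URL, chatflow ID, and API key are placeholders; the Authorization header
# is only required if the chatflow is protected with an API key.
import requests

FLOWISE_URL = "https://<your-flowise-endpoint>"   # placeholder
CHATFLOW_ID = "<your-chatflow-id>"                # placeholder
API_KEY = "<your-flowise-api-key>"                # placeholder (optional)

def ask(question: str) -> dict:
    response = requests.post(
        f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"question": question},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

print(ask("What can this chatbot help me with?"))
```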