---
title: Build your first AI Chatbot on 'HPE Private Cloud AI' using Flowise and HPE MLIS
date: 2025-07-11T13:38:06.049Z
author: Santosh Nagaraj
authorimage: /img/santosh-picture-192.jpg
disable: false
---

In today’s AI-driven landscape, conversational interfaces are transforming how organizations interact with users and automate workflows. Building a secure, scalable, and customizable chatbot solution requires robust infrastructure and flexible AI tooling. HPE Private Cloud AI (PCAI) provides a powerful platform for deploying and managing AI workloads, while Flowise and HPE MLIS (Machine Learning Inference Software) offer the tools to rapidly build, deploy, and manage chatbots powered by large language models (LLMs).

This blog post walks you through deploying Flowise on HPE PCAI to build a modern chatbot solution. By leveraging these technologies, organizations can accelerate chatbot development, ensure data privacy, and maintain full control over their AI lifecycle.

## HPE Private Cloud AI

[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate large language models (LLMs) to efficiently hosting and deploying them. Beyond these core functions, HPE PCAI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA NIM* LLMs, along with a powerful suite of AI tools and frameworks for *Data Engineering*, *Analytics*, and *Data Science*.

HPE Machine Learning Inference Software (HPE MLIS) is a user-friendly solution designed to simplify and control the deployment, management, and monitoring of machine learning (ML) models, including LLMs, at any scale.

HPE PCAI includes pre-integrated NVIDIA NIM LLMs, a suite of AI tools (including HPE MLIS), and a flexible *Import Framework* that enables organizations to deploy their own applications or third-party solutions like FlowiseAI.

![](/img/importframework.jpg)
## What is Flowise?

[Flowise](https://flowiseai.com/) is an open-source generative AI development platform for building AI agents and LLM workflows. It provides a visual interface for designing conversational flows, integrating data sources, and connecting to various LLM endpoints. Flowise offers modular building blocks for assembling agentic systems, from simple compositional workflows to autonomous agents.

## Deploying Flowise on HPE PCAI via *Import Framework*
### 1. Prepare the Helm Charts

Obtain the Helm chart for Flowise v5.1.1 from [artifacthub.io](https://artifacthub.io/packages/helm/cowboysysop/flowise). The following changes to the chart are needed to deploy it on HPE PCAI.

Add the following YAML manifest files to the *templates/ezua/* directory (a sketch of the *VirtualService* manifest follows this list):

* *virtualService.yaml*: Defines an Istio *VirtualService* to configure routing rules for incoming requests.
* *kyverno.yaml*: A Kyverno *ClusterPolicy* that automatically adds required labels to the deployment.
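
For illustration, here is a minimal sketch of what *virtualService.yaml* could look like. It assumes a `flowise.fullname` template helper, the default Flowise service port 3000, and the *ezua* values shown later in this post; the actual template in the repository referenced below may differ.

```yaml
# Illustrative sketch of templates/ezua/virtualService.yaml.
# The "flowise.fullname" helper and the port number are assumptions; adjust them
# to match the chart you are packaging.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {{ include "flowise.fullname" . }}
  namespace: {{ .Release.Namespace }}
spec:
  hosts:
    - "{{ .Values.ezua.virtualService.endpoint }}"
  gateways:
    - "{{ .Values.ezua.virtualService.istioGateway }}"
  http:
    - route:
        - destination:
            host: {{ include "flowise.fullname" . }}
            port:
              number: 3000
```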

Update the *values.yaml* file:

* Set resource requests and limits (sample overrides follow the *ezua* snippet below).
* Update the PVC size.
* Add the following *ezua* section to configure the *Istio Gateway* and expose the endpoint.

```yaml
ezua:
  virtualService:
    endpoint: "flowise.${DOMAIN_NAME}"
    istioGateway: "istio-system/ezaf-gateway"
```
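
For the resource and storage items above, the overrides in *values.yaml* might look like the sketch below; the exact key names (`resources`, `persistence.size`) depend on the chart version, so verify them against the chart's default values.

```yaml
# Illustrative values.yaml overrides; key names are assumptions based on common
# chart conventions and should be checked against the Flowise chart defaults.
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi

persistence:
  size: 10Gi
```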

Refer to the 'Import Framework' documentation for the [prerequisites](https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie18hen_us&page=ManageClusters/importing-applications.html).

These updates are implemented in the revised *Flowise* Helm chart, which is available in the GitHub repository [ai-solution-eng/frameworks](https://github.com/ai-solution-eng/frameworks/tree/main/flowise). With these customizations, *Flowise* can be deployed on HPE PCAI using the *Import Framework*.
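
If you prefer to prepare the chart yourself rather than use the prepackaged one, the workflow could look roughly like the following sketch. The Helm repository URL and local paths are assumptions; verify them before use.

```bash
# Illustrative: fetch the upstream chart, add the customizations, and package it
# for upload through the Import Framework.
helm repo add cowboysysop https://cowboysysop.github.io/charts/
helm repo update
helm pull cowboysysop/flowise --version 5.1.1 --untar

# ...add templates/ezua/virtualService.yaml and templates/ezua/kyverno.yaml,
# then edit values.yaml as described above...

helm package flowise/
```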

### 2. Deploy Flowise via Import Framework

Use the Import Framework in HPE PCAI to deploy Flowise.

![](/img/flowise-deploy-1.jpg)

![](/img/flowise-deploy-2.jpg)

![](/img/flowise-deploy-3.jpg)

![](/img/flowise-deploy-4.jpg)

### 3. Access Flowise UI via its Endpoint

After deployment, Flowise will appear as a tile under the *Tools & Frameworks / Data Engineering* tab.

![](/img/flowsie-deployed.jpg)

Click the *Open* button on the *Flowise* tile, or click the *Endpoint* URL, to launch the Flowise login page. Set up the credentials and log in.

![](/img/flowise-home-7-10-2025.jpg)

- - -

## Deploy an LLM in HPE MLIS

### 1. Access the FlowiseAI UI
After deployment, access FlowiseAI via the configured endpoint (e.g., `https://chatbot.ingress.pcai0104.ld7.hpecolo.net`). Log in with your admin credentials.
### 2. Build Your Chatbot Flow
Use FlowiseAI’s drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.

* **Add Data Sources:** Connect to internal databases or APIs as needed.
* **Configure LLM Node:** Set the endpoint to your deployed MLIS service (see the sketch after this list).
* **Test the Flow:** Use the built-in chat preview to validate responses.
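
Before wiring the LLM node, it can help to confirm that the MLIS endpoint responds. The sketch below assumes the deployed model exposes an OpenAI-compatible chat completions API; the URL, token, and model name are placeholders for values from your MLIS deployment.

```bash
# Illustrative request against an MLIS inference endpoint (placeholders throughout).
curl -s https://<your-mlis-endpoint>/v1/chat/completions \
  -H "Authorization: Bearer <your-mlis-api-token>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<deployed-model-name>",
        "messages": [{"role": "user", "content": "Hello! What can you do?"}]
      }'
```
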
### 3. Secure and Govern Access
Leverage HPE PCAI’s RBAC and network policies to restrict access to the chatbot and underlying data sources. Use MLIS monitoring features to track model usage and performance.
- - -
## Pushing Custom Chatbot Images (Optional)
If you customize FlowiseAI or build your own chatbot container, push the image to your local Harbor registry:
```bash
docker build -t harbor.ingress.pcai0104.ld7.hpecolo.net/demo/flowiseai-chatbot:v1 .
docker push harbor.ingress.pcai0104.ld7.hpecolo.net/demo/flowiseai-chatbot:v1
```

Update your Helm chart to use the new image, for example:
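
A minimal sketch of the corresponding override, assuming the chart exposes standard `image.repository` and `image.tag` keys:

```yaml
# Illustrative image override in values.yaml; key names are assumptions based on
# common chart conventions.
image:
  repository: harbor.ingress.pcai0104.ld7.hpecolo.net/demo/flowiseai-chatbot
  tag: v1
```
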
- - -
## Deploying the Chatbot Application

With FlowiseAI and MLIS configured, deploy your chatbot application to HPE PCAI using the Import Framework. The chatbot will be accessible via its endpoint, ready to serve users across your organization; a sample request is shown below.
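
For example, once a chatflow is published, it can be called over HTTP through Flowise's Prediction API; the endpoint, chatflow ID, and API key below are placeholders for values from your environment.

```bash
# Illustrative call to a deployed Flowise chatflow via the Prediction API.
curl -s https://flowise.<your-domain>/api/v1/prediction/<chatflow-id> \
  -H "Authorization: Bearer <flowise-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"question": "What can you help me with?"}'
```
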
Monitor usage and performance through the FlowiseAI and MLIS dashboards. Use PCAI’s audit logs for compliance and troubleshooting.
- - -
## Conclusion
By combining FlowiseAI’s intuitive chatbot builder with HPE MLIS’s robust model management, HPE Private Cloud AI empowers organizations to rapidly develop, deploy, and govern conversational AI solutions. This integrated approach ensures data privacy, operational control, and scalability for enterprise chatbot deployments.

Stay tuned to the HPE Developer Community blog for more guides and best practices on leveraging HPE PCAI for your AI initiatives.

