---
title: Build your first AI Chatbot on HPE Private Cloud AI using Flowise and HPE MLIS
date: 2025-07-11T13:38:06.049Z
author: Santosh Nagaraj
authorimage: /img/santosh-picture-192.jpg
disable: false
tags:
  - HPE Private Cloud AI
  - Chatbot
  - hpe-private-cloud-ai
  - HPE MLIS
---
In today’s AI-driven landscape, conversational interfaces are transforming how organizations interact with users and automate workflows. Building a secure, scalable, and customizable chatbot solution requires robust infrastructure and flexible AI tooling. HPE Private Cloud AI provides a powerful platform for deploying and managing AI workloads, while Flowise and HPE Machine Learning Inference Software offer the tools to rapidly build, deploy, and manage chatbots powered by large language models (LLMs).

This blog post walks you through deploying FlowiseAI on HPE PCAI to build a modern chatbot solution. By leveraging these technologies, organizations can accelerate chatbot development, ensure data privacy, and maintain full control over their AI lifecycle.

## HPE Private Cloud AI

[HPE Private Cloud AI (HPE PCAI)](https://developer.hpe.com/platform/hpe-private-cloud-ai/home/) offers a comprehensive, turnkey AI solution designed to address key enterprise challenges, from selecting the appropriate LLMs to efficiently hosting and deploying them. Beyond these core functions, HPE Private Cloud AI empowers organizations to take full control of their AI adoption journey by offering a curated set of pre-integrated *NVIDIA Inference Microservices (NIM)* LLMs, along with a powerful suite of AI tools and frameworks for data engineering, analytics, and data science.

HPE Machine Learning Inference Software (HPE MLIS) is a user-friendly solution designed to simplify and control the deployment, management, and monitoring of machine learning (ML) models, including LLMs, at any scale.

HPE Private Cloud AI combines pre-integrated NVIDIA NIM LLMs, a suite of AI tools (including HPE Machine Learning Inference Software), and a flexible *Import Framework* that enables organizations to deploy their own applications or third-party solutions, like FlowiseAI.

## What is Flowise?

[Flowise](https://flowiseai.com/) is an open-source generative AI development platform for building AI agents and LLM workflows. It provides a visual interface for designing conversational flows, integrating data sources, and connecting to various LLM endpoints. Flowise offers modular building blocks that let you build agentic systems, from simple compositional workflows to autonomous agents.

## Deploying Flowise via the import framework

### 1. Prepare the Helm charts

Obtain the Helm chart for Flowise v5.1.1 from [artifacthub.io](https://artifacthub.io/packages/helm/cowboysysop/flowise). The following changes to the Helm chart are needed to deploy it on HPE Private Cloud AI.

Add the following YAML manifest files to the *templates/ezua/* directory:

* *virtualService.yaml*: Defines an Istio *VirtualService* to configure routing rules for incoming requests.
* *kyverno.yaml*: A Kyverno *ClusterPolicy* that automatically adds the required labels to the deployment.

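For reference, the *VirtualService* manifest might look like the following sketch. This is a hypothetical illustration that follows the Istio VirtualService schema; the service name helper and port (Flowise defaults to 3000) are assumptions, so check the actual charts in the GitHub repository mentioned later in this post.

```yaml
# Hypothetical sketch of templates/ezua/virtualService.yaml.
# Routes traffic arriving at the configured hostname through the
# platform's Istio gateway to the Flowise service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {{ include "flowise.fullname" . }}
spec:
  gateways:
    - {{ .Values.ezua.virtualService.istioGateway }}
  hosts:
    - {{ .Values.ezua.virtualService.endpoint | quote }}
  http:
    - route:
        - destination:
            host: {{ include "flowise.fullname" . }}
            port:
              number: 3000
```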
Make the following updates to the *values.yaml* file:

* Set the resource requests/limits.
* Update the PVC size.
* Add the following *'ezua'* section to configure the *Istio Gateway* and expose the endpoint.

```yaml
ezua:
  virtualService:
    endpoint: "flowise.${DOMAIN_NAME}"
    istioGateway: "istio-system/ezaf-gateway"
```
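The resource and PVC updates might look like the following hypothetical overrides. The exact key names and sizes depend on the chart version and your environment, so verify them against the chart's own *values.yaml*:

```yaml
# Hypothetical values.yaml overrides (adjust to your environment):
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "2"
    memory: 4Gi
persistence:
  size: 20Gi
```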

Here's the [reference document](https://support.hpe.com/hpesc/public/docDisplay?docId=a00aie18hen_us&page=ManageClusters/importing-applications.html) for the import framework prerequisites.

These updates are implemented in the revised Flowise Helm charts available in the GitHub repository [ai-solution-eng/frameworks](https://github.com/ai-solution-eng/frameworks/tree/main/flowise). With these customizations, *Flowise* can now be deployed on HPE Private Cloud AI using the *Import Framework*.

### 2. Deploy Flowise via the import framework

Use the import framework in HPE Private Cloud AI to upload the customized Helm chart and deploy Flowise.

### 3. Access the Flowise UI via its endpoint

After deployment, Flowise appears as a tile under the *Tools & Frameworks / Data Engineering* tab.

Click the *Open* button on the *Flowise* tile, or click the *Endpoint* URL, to launch the Flowise login page. Set up the credentials and log in.

- - -

## Deploy an LLM in HPE MLIS

Access HPE MLIS by clicking the *HPE MLIS* tile in the *Tools & Frameworks / Data Engineering* tab.

To deploy a pre-packaged LLM (meta/llama-3.1-8b-instruct) in HPE MLIS, you need to know how to add a registry, add a packaged model, and create deployments.

### 1. Adding a registry

You'll first want to add a new registry called "NGC", which refers to NVIDIA GPU Cloud. This can be used to access pre-packaged LLMs.

### 2. Adding a packaged model

Create a new packaged model by clicking the *Add New Model* tab. Fill in the details as shown in the screenshots below.

Choose the registry created in the previous step and select 'meta/llama-3.1-8b-instruct' under *NGC Supported Models*.

Set the resources required for the model by choosing either a built-in template or "custom" in the *Resource Template* section.

The newly created packaged model appears in the UI.

### 3. Creating deployments

Using the packaged model created in the previous step, create a new deployment by clicking *Create new deployment*.

Give the deployment a name and choose the packaged model created in the previous step.

Set auto scaling as required. In this example, we have used the 'fixed-1' template.

The LLM is now deployed and can be accessed using the endpoint and its corresponding API token.

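As an illustration, a client call against the deployed model might look like the following sketch. The endpoint URL, API token, and model name are placeholders for the values shown in HPE MLIS, and the sketch assumes the deployment exposes the OpenAI-compatible `/v1/chat/completions` route that NVIDIA NIM containers typically serve:

```python
import json
import urllib.request


def build_request(endpoint, api_token, model, prompt):
    """Build an OpenAI-compatible chat-completions request for the deployment."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{endpoint}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def ask(endpoint, api_token, model, prompt):
    """Send the prompt to the deployed LLM and return the assistant's reply."""
    with urllib.request.urlopen(build_request(endpoint, api_token, model, prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Example with placeholder values:
# print(ask("https://<deployment-endpoint>", "<API-token>",
#           "meta/llama-3.1-8b-instruct", "Hello!"))
```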
## Create an AI chatbot in Flowise

Use Flowise's drag-and-drop interface to design your chatbot’s conversational flow. Integrate with HPE MLIS by adding an LLM node and configuring it to use the MLIS inference endpoint.

* **Add New Chatflow:**

Save the chatflow using the name "AI Chatbot" and add the following nodes, making the connections shown in the screenshot.

* **Chat Models (Chat NVIDIA NIM):** Set the deployment 'Endpoint' from HPE MLIS as the 'Base Path', enter the corresponding 'Model Name', and use the 'API Key' from HPE MLIS for the 'Connect Credential'.
* **Memory (Buffer Window Memory):** Set an appropriate 'Size'.
* **Chains (Conversation Chain):** Connect the 'Chat NVIDIA NIM' and 'Buffer Window Memory' nodes as shown.

Your new AI chatbot is now ready! You can quickly test it by clicking the chat icon in the top-right corner of the screen.

### Accessing the AI chatbot from external applications

Flowise provides an API endpoint for the chatbot, with multiple ways of integrating it into your applications. You can also explore the many configurations available to enhance the chatbot.

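For example, an external application can call the chatbot through Flowise's prediction REST API (`POST /api/v1/prediction/{chatflow-id}`). The sketch below assumes placeholder values for the Flowise URL and chatflow ID, which you can copy from the Flowise UI:

```python
import json
import urllib.request


def build_prediction_request(flowise_url, chatflow_id, question, api_key=None):
    """Build a request for Flowise's prediction API for the given chatflow."""
    headers = {"Content-Type": "application/json"}
    if api_key:  # only needed when the chatflow is protected with an API key
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{flowise_url}/api/v1/prediction/{chatflow_id}",
        data=json.dumps({"question": question}).encode("utf-8"),
        headers=headers,
        method="POST",
    )


def ask_chatbot(flowise_url, chatflow_id, question, api_key=None):
    """Send a question to the chatbot and return its textual reply."""
    req = build_prediction_request(flowise_url, chatflow_id, question, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("text", "")


# Example with placeholder values:
# print(ask_chatbot("https://flowise.<your-domain>", "<chatflow-id>", "Hello!"))
```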
## Conclusion

By combining Flowise’s intuitive chatbot builder with HPE MLIS’s robust model management, HPE Private Cloud AI empowers organizations to rapidly develop, deploy, and govern conversational AI solutions. This integrated approach ensures data privacy, operational control, and scalability for enterprise chatbot deployments.

Stay tuned to the [HPE Developer Community blog](https://developer.hpe.com/blog/) for more guides and best practices on leveraging HPE Private Cloud AI for your AI initiatives.