Commit 7c9a46f

Update Blog “build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis”
1 parent a0bd97f commit 7c9a46f

File tree

1 file changed: +5 −5 lines changed


content/blog/build-your-first-ai-chatbot-on-hpe-private-cloud-ai-using-flowise-and-hpe-mlis.md

Lines changed: 5 additions & 5 deletions

```diff
@@ -88,19 +88,19 @@ HPE MLIS is accessed by clicking on *HPE MLIS* tile in *Tools & Frameworks / Data
 
 To deploy a pre-packaged LLM (Meta/Llama3-8b-instruct) in HPE MLIS, you need to know how to add a registry, a packaged model, and how to create deployments.
 
-### 1. Add 'Registry'
+### 1. Adding a registry
 
-Add a new registry of type 'NVIDIA GPU Cloud' (NGC), which can be used to access pre-packaged LLMs.
+You'll first want to add a new registry called "NGC", which refers to NVIDIA GPU Cloud. This can be used to access pre-packaged LLMs.
 
 ![](/img/mlis-registry.jpg)
 
-### 2. Add 'Packaged Model'
+### 2. Adding a packaged model
 
-Create a new Packaged Model by clicking 'Add new model' tab, and fill-in the details as shown in screen shots.
+Create a new packaged model by clicking the *Add New Model* tab. Fill in the details as shown in the screenshots below.
 
 ![](/img/package-model-1.jpg)
 
-Choose the 'Registry' created in the previous step, and select 'meta/llama-3.1-8b-instruct' for 'NGC Supported Models'
+Choose the registry created in the previous step and select 'meta/llama-3.1-8b-instruct' under *NGC Supported Models*.
 
 ![](/img/package-model-2.jpg)
 
```
0 commit comments
