ai-data/managed-inference/how-to/create-deployment.mdx
3 additions & 3 deletions
@@ -27,10 +27,10 @@ dates:
     - Specify the GPU Instance type to be used with your deployment.
 4. Enter a **name** for the deployment, and optional tags.
 5. Configure the **network connectivity** settings for the deployment:
-    - Enable **Private Network** for secure communication and restricted availability within Private Networks. Choose an existing Private Network from the drop-down list, or create a new one.
-    - Enable **Public Network** to access resources via the public internet. Token protection is enabled by default.
+    - Attach to a **Private Network** for secure communication and restricted availability. Choose an existing Private Network from the drop-down list, or create a new one.
+    - Set up **Public connectivity** to access resources via the public internet. Authentication by API key is enabled by default.
 <Message type="important">
-  - Enabling both private and public networks will result in two distinct endpoints (public and private) for your deployment.
+  - Enabling both private and public connectivity will result in two distinct endpoints (public and private) for your deployment.
   - Deployments must have at least one endpoint, either public or private.
 </Message>
 6. Click **Deploy model** to launch the deployment process. Once the model is ready, it will be listed among your deployments.
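
The "Authentication by API key is enabled by default" wording for public connectivity can be illustrated with a minimal sketch of how a client would authenticate against a deployment's public endpoint. The endpoint URL and key below are placeholders, not values from this change:

```python
import urllib.request

# Placeholder values -- substitute your deployment's public endpoint URL
# and your own API key. Neither value comes from the documentation above.
ENDPOINT = "https://example-deployment.example.com/v1/chat/completions"
API_KEY = "SCW_SECRET_KEY_PLACEHOLDER"

# Public endpoints expect the API key as a Bearer token in the
# Authorization header of each request.
request = urllib.request.Request(
    ENDPOINT,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(request.get_header("Authorization"))
```

This only constructs the authenticated request; sending it (for example with `urllib.request.urlopen`) would reach the deployment only once it is running and publicly exposed.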