
Commit aeb6e60

bene2k1 and RoRoJ authored
Apply suggestions from code review
Co-authored-by: Rowena Jones <[email protected]>
1 parent d2dbd3c commit aeb6e60

File tree

1 file changed, +1 −1 lines changed

pages/managed-inference/how-to/create-deployment.mdx

Lines changed: 1 addition & 1 deletion
@@ -29,7 +29,7 @@ dates:
   - Choose the geographical **region** for the deployment.
   - For custom models: Choose the model quantization.
     <Message type="tip">
-      Each model comes with a default quantization. Select lower bits quantization to improve performance and enable model to run on smaller GPU Nodes, while potentially reducing precision.
+      Each model comes with a default quantization. Select lower bits quantization to improve performance and enable the model to run on smaller GPU nodes, while potentially reducing precision.
     </Message>
   - Specify the GPU Instance type to be used with your deployment.
 4. Enter a **name** for the deployment, and optional tags.
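The tip edited in this diff can be illustrated with back-of-the-envelope arithmetic: a model's weight footprint scales linearly with its quantization bit-width, which is why lower-bit quantization lets the same model fit on smaller GPU nodes. A minimal sketch (the 7B parameter count and bit-widths are illustrative assumptions, not tied to any particular deployment):

```python
def weight_memory_gib(num_params: float, bits_per_weight: int) -> float:
    """Approximate GPU memory needed for model weights alone
    (ignores activations, KV cache, and runtime overhead)."""
    bytes_total = num_params * bits_per_weight / 8
    return bytes_total / (1024 ** 3)

# Illustrative 7B-parameter model at common quantization levels.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gib(7e9, bits):.1f} GiB")
# → 16-bit: ~13.0 GiB
# → 8-bit: ~6.5 GiB
# → 4-bit: ~3.3 GiB
```

Halving the bit-width halves the weight footprint, at the cost of the potential precision loss the tip mentions.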
