This repository was archived by the owner on May 20, 2025. It is now read-only.
docs/guides/python/serverless-llama.mdx (10 additions & 0 deletions)
@@ -235,6 +235,16 @@ config:
       ephemeral-storage: 1024
 ```

+<Note>
+  Nitric defaults aim to keep you within your free-tier limits. In this
+  example, we recommend increasing the memory and ephemeral storage values to
+  allow the llama model to load correctly, so running this sample project will
+  likely incur more costs than a Nitric guide using the defaults.
+
+  You are responsible for staying within the free-tier limits and for any
+  costs associated with deployment.
+</Note>
+
 Since we'll use Nitric's default Pulumi AWS Provider, make sure you're set up to deploy using that provider. You can find more information on how to set up the AWS provider in the [Nitric AWS Provider documentation](/providers/pulumi/aws).
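For reference, the memory and ephemeral storage settings the new note refers to live in the guide's Nitric stack file. The sketch below shows roughly what an increased configuration could look like; the file name, provider version, region, and the `memory` value are illustrative assumptions rather than values from the guide, and only `ephemeral-storage: 1024` comes from the diff above.

```yaml
# nitric.dev.yaml (hypothetical name) - AWS stack file for this guide.
# Only ephemeral-storage: 1024 is taken from the diff; the rest is assumed.
provider: nitric/aws@1.1.0
region: us-east-1
config:
  default:
    lambda:
      # Raise memory above the free-tier-friendly default so the
      # llama model can be loaded by the function.
      memory: 6144
      # Ephemeral storage (MB) for holding the model file on disk.
      ephemeral-storage: 1024
```

With a stack file like this in place, `nitric up` deploys the service using those Lambda settings; larger values help the model load reliably but move the deployment further from the free tier.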